This pull request corrects the URL links in the Async API documentation
to align with the updated project layout. The links had not been updated
when the layout changed.
It is not obvious what the error is when you cannot index. This PR adds
the ability to log the first error's reason, to help the user diagnose
the issue.
Also added some more documentation for when you want to use the
vectorstore with an embedding model deployed in Elasticsearch.
Credit: @elastic and @phoey1
- Description: Added the capability to handle structured data from
Google Enterprise Search.
- Issue: The retriever failed when the underlying search engine was
integrated with structured data.
- Dependencies: google-api-core
- Tag maintainer: @jarokaz
- Twitter handle: anifort
---------
Co-authored-by: Christos Aniftos <aniftos@google.com>
Co-authored-by: Holt Skinner <13262395+holtskinner@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
# Description
This PR introduces a new toolkit for interacting with the AINetwork
blockchain. The toolkit provides a set of tools for performing various
operations on the AINetwork blockchain, such as transferring AIN,
reading and writing values to the blockchain database, managing apps,
and setting rules and owners.
# Dependencies
[ain-py](https://github.com/ainblockchain/ain-py) >= 1.0.2
# Misc
The example notebook
(langchain/docs/extras/integrations/toolkits/ainetwork.ipynb) is in the
PR
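For a quick sense of the usage, here is a minimal sketch based on the example notebook; the `AINetworkToolkit` import path and its `network` argument are assumptions here, not the confirmed API:
```py
# Hypothetical sketch: drive the AINetwork tools from an agent.
# The toolkit import path and the network flag are assumptions.
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits.ainetwork.toolkit import AINetworkToolkit
from langchain.chat_models import ChatOpenAI

toolkit = AINetworkToolkit(network="testnet")  # "testnet" is an assumed flag
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=ChatOpenAI(temperature=0),
    agent=AgentType.OPENAI_FUNCTIONS,
)
agent.run("Check the AIN balance of my address.")
```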
---------
Co-authored-by: kriii <kriii@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Link an example of deploying a LangChain app to an AzureML
online endpoint to the deployments documentation page.
Co-authored-by: Vanessa Arndorfer <vaarndor@microsoft.com>
### Description
Polars is a DataFrame interface on top of an OLAP query engine
implemented in Rust.
Polars reads data faster than pandas, so I'm looking forward to seeing
it added as a document loader.
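As a rough sketch of how this could look, assuming the loader mirrors the existing pandas `DataFrameLoader` (the class name and `page_content_column` argument are assumptions):
```py
# Sketch: load rows of a Polars DataFrame as LangChain documents.
# The class name and arguments mirror the pandas DataFrameLoader.
import polars as pl
from langchain.document_loaders import PolarsDataFrameLoader

df = pl.DataFrame({"text": ["hello", "world"], "source": ["a.txt", "b.txt"]})
loader = PolarsDataFrameLoader(df, page_content_column="text")
docs = loader.load()  # one Document per row; other columns become metadata
```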
### Dependencies
polars (https://pola-rs.github.io/polars-book/user-guide/)
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Add PromptGuard integration
-------
There are two approaches to integrating PromptGuard with a LangChain
application:
1. `PromptGuardLLMWrapper`
2. functions that can be used in LangChain expressions.
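A hypothetical sketch of approach 1 (the wrapper name comes from this PR, but the import path and constructor keyword are assumptions):
```py
# Hypothetical sketch of the PromptGuardLLMWrapper approach; the import
# path and the llm= keyword are assumptions, not the confirmed API.
from langchain.llms import OpenAI
from langchain.llms import PromptGuardLLMWrapper  # assumed location

llm = PromptGuardLLMWrapper(llm=OpenAI())  # wraps the underlying LLM
print(llm("Write a short greeting."))
```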
-----
- Dependencies
`promptguard` Python package, which is a runtime requirement if you'd
like to try out the demo.
- @baskaryan @hwchase17 Thanks for the ideas and suggestions along the
development process.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added graph_memgraph_qa.ipynb, which shows how to use LLMs
to provide a natural language interface to a Memgraph database using the
[MemgraphGraph](https://github.com/langchain-ai/langchain/pull/8591)
class.
- Dependencies: given that the notebook utilizes the MemgraphGraph
class, it relies on both this class and several Python packages that are
installed in the notebook using pip (langchain, openai, neo4j,
gqlalchemy). The notebook is dependent on having a functional Memgraph
instance running, as it requires this instance to establish a
connection.
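In outline, the notebook's flow looks roughly like this (a sketch; the connection details are placeholders for a local Memgraph instance):
```py
# Sketch of the natural-language-to-Cypher flow from the notebook;
# connection details are placeholders for a local Memgraph instance.
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import MemgraphGraph

graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("How many nodes are in the database?")  # example natural-language question
```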
- Improved docs
- Improved performance in multiple ways through batching, threading,
etc.
- fixed error message
- Added support for metadata filtering during similarity search.
@baskaryan PTAL
[Epsilla](https://github.com/epsilla-cloud/vectordb) is an open-source
vector database that leverages advanced academic parallel graph
traversal techniques for vector indexing.
This PR adds a basic integration with
[pyepsilla](https://github.com/epsilla-cloud/epsilla-python-client) (the
Epsilla vectordb Python client) as a vectorstore.
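A minimal sketch of the vectorstore in use (the client setup and keyword arguments are assumptions based on the pyepsilla client):
```py
# Sketch: use Epsilla as a LangChain vectorstore via the pyepsilla client.
# The connection setup and keyword arguments are assumptions.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Epsilla
from pyepsilla import vectordb

client = vectordb.Client()  # assumes a locally running Epsilla instance
texts = ["foo", "bar", "baz"]
store = Epsilla.from_texts(texts, OpenAIEmbeddings(), client=client)
print(store.similarity_search("foo", k=1))
```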
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Support [ERNIE
Embedding-V1](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu),
which is part of the ERNIE ecosystem
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan
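A minimal sketch of the intended usage (the class name and credential keywords are assumptions based on the usual `Embeddings` interface):
```py
# Sketch: ERNIE Embedding-V1 behind the standard Embeddings interface.
# The class name and credential kwargs are assumptions.
from langchain.embeddings import ErnieEmbeddings

embeddings = ErnieEmbeddings(
    ernie_client_id="YOUR_CLIENT_ID",
    ernie_client_secret="YOUR_CLIENT_SECRET",
)
query_vector = embeddings.embed_query("hello")
doc_vectors = embeddings.embed_documents(["hello", "world"])
```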
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Updating documentation to add AmazonTextractPDFLoader, per
a
[comment](https://github.com/langchain-ai/langchain/pull/8661#issuecomment-1666572992)
from [baskaryan](https://github.com/baskaryan).
Adds one notebook and instructions to
modules/data_connection/document_loaders/pdf.mdx.
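For reference, a minimal sketch of the loader (the file path is a placeholder, and AWS credentials are assumed to be configured in the environment):
```py
# Sketch: OCR a document through Amazon Textract and load it as documents.
# The file path is a placeholder; AWS credentials must be configured.
from langchain.document_loaders import AmazonTextractPDFLoader

loader = AmazonTextractPDFLoader("example_data/sample.pdf")
documents = loader.load()  # local paths and S3 URIs are supported
```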
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Wrote a Notion document on how LangChain executes agents, method by
method, in the codebase.
It can be helpful for developers who have just started working with the
LangChain codebase.
The current Colab URL returns a 404, since there is no `chatbots`
directory under `use_cases`.
### Summary
Fixes a bug from #7850 where post-processing functions in Unstructured
loaders were not applied. Adds an assertion to the test to verify the
post-processing function was applied, and also updates the explanation
in the example notebook.
Fix spelling errors in the text: 'Therefore' and 'Retrying'.
I want to stress that your feedback is invaluable to us and is genuinely
cherished.
With gratitude,
@baskaryan @hwchase17
Removed extra "the" in the sentence about the chicken crossing the road
in fallbacks.ipynb. The sentence now reads correctly: "Why did the
chicken cross the road?" This resolves the grammatical error and
improves the overall quality of the content.
@baskaryan , @hinthornw , @hwchase17
I want to extend my heartfelt gratitude to the creator for masterfully
crafting this remarkable application. 🙌 I am truly impressed by the
meticulous attention to grammar and spelling in the documentation, which
undoubtedly contributes to a polished and seamless reader experience.
As always, your feedback holds immense value and is greatly appreciated.
@baskaryan , @hwchase17
I want to convey my deep appreciation to the creator for their expert
craftsmanship in developing this exceptional application. 👏 The
remarkable dedication to upholding impeccable grammar and spelling in
the documentation significantly enhances the polished and seamless
experience for readers.
I want to stress that your feedback is invaluable to us and is genuinely
cherished.
With gratitude,
@baskaryan, @hwchase17
In this commit, I have made a modification to the term "Langchain" to
correctly reflect the project's name as "LangChain". This change ensures
consistency and accuracy throughout the codebase and documentation.
@baskaryan , @hwchase17
Refined the example in router.ipynb by addressing a minor typographical
error. The typo "rins" has been corrected to "rains" in the code snippet
that demonstrates the usage of the MultiPromptChain. This change ensures
accuracy and consistency in the provided code example.
This improvement enhances the readability and correctness of the
notebook, making it easier for users to understand and follow the
demonstration. The commit aims to maintain the quality and accuracy of
the content within the repository.
Thank you for your attention to detail, and please review the change at
your convenience.
@baskaryan , @hwchase17
- Description: Fix a minor variable naming inconsistency in a code
snippet in the docs
- Issue: N/A
- Dependencies: none
- Tag maintainer: N/A
- Twitter handle: N/A
- Description: Added improvements to the Nebula LLM: auto-retry and
support for more generation parameters. A conversation is no longer
required to be passed into the LLM object. Examples are updated.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @baskaryan
- Twitter handle: symbldotai
---------
Co-authored-by: toshishjawale <toshish@symbl.ai>
Update documentation and URLs for the LangChain Context integration.
We've moved from getcontext.ai to context.ai \o/
Thanks in advance for the review!
Now that the ElasticsearchStore VectorStore is merged, I've added
support for the self-query retriever.
I've also added a notebook to demonstrate the capability, along with
unit tests. A minimal sketch follows the checklist below.
**Credit**
@elastic and @phoey1 on twitter.
Todo:
- [x] Connection options (cloud, localhost url, es_connection) support
- [x] Logging support
- [x] Customisable field support
- [x] Distance Similarity support
- [x] Metadata support
- [x] Metadata Filter support
- [x] Retrieval Strategies
- [x] Approx
- [x] Approx with Hybrid
- [x] Exact
- [x] Custom
- [x] ELSER (excluding hybrid as we are working on RRF support)
- [x] integration tests
- [x] Documentation
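For context, a minimal sketch of the self-query retriever on top of `ElasticsearchStore` (the movie-themed metadata schema and sample texts are illustrative placeholders):
```py
# Sketch: self-query retrieval over ElasticsearchStore; the metadata
# schema and sample texts are illustrative placeholders.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.vectorstores import ElasticsearchStore

vectorstore = ElasticsearchStore.from_texts(
    ["A sci-fi epic about space travel", "A lighthearted romantic comedy"],
    OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="movies",
    metadatas=[
        {"year": 1999, "genre": "science fiction"},
        {"year": 2003, "genre": "comedy"},
    ],
)
metadata_field_info = [
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
]
retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",
    metadata_field_info,
)
docs = retriever.get_relevant_documents("sci-fi movies from the 1990s")
```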
👋 this is a contribution to improve the Elasticsearch integration with
LangChain. It's loosely based on the changes that are in master, but
with some notable changes:
## Package name & design improvements
The import name is now `ElasticsearchStore`, to aid discoverability of
the VectorStore.
```py
## Before
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch, ElasticKnnSearch
## Now
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```
## Retrieval Strategy support
Before we had a number of classes, depending on the strategy you wanted.
`ElasticKnnSearch` for approx, `ElasticVectorSearch` for exact / brute
force.
With `ElasticsearchStore` we have retrieval strategies:
### Approx Example
The default strategy. The vast majority of developers using
Elasticsearch will infer the embeddings outside of Elasticsearch. Uses
the kNN functionality of `_search`.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index"
)
output = docsearch.similarity_search("foo", k=1)
```
### Approx, with hybrid
For developers who want to search using both the embedding and a BM25
text match. It's simple to enable.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True)
)
output = docsearch.similarity_search("foo", k=1)
```
### Approx, with `query_model_id`
For developers who want to infer within Elasticsearch, using a model
loaded on an ML node.
This relies on the developer setting up the pipeline and index if they
wish to embed the text in Elasticsearch. There is an example of this in
the tests.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(
        query_model_id="sentence-transformers__all-minilm-l6-v2"
    ),
)
output = docsearch.similarity_search("foo", k=1)
```
### I want to provide my own custom Elasticsearch Query
You might want more control over the query, to perform multi-phase
retrieval such as LTR, or to linearly boost on document parameters like
recency or geo-distance. You can do this with `custom_query_fn`.
```py
def my_custom_query(query_body: dict, query: str) -> dict:
    return {"query": {"match": {"text": {"query": "bar"}}}}

texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts, FakeEmbeddings(), **elasticsearch_connection, index_name=index_name
)
docsearch.similarity_search("foo", k=1, custom_query=my_custom_query)
```
### Exact Example
For developers who have a small dataset in Elasticsearch and don't want
the cost of indexing the dims, accepting the tradeoff of higher cost at
query time. Uses `script_score`.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ExactRetrievalStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```
### ELSER Example
Elastic provides its own sparse vector model called ELSER. With these
changes, it's really easy to use: the vector store creates a pipeline
and index that are set up for ELSER. All the developer needs to do is
configure, ingest, and query via LangChain tooling.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.SparseVectorStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```
## Architecture
In the future, we can introduce new strategies, allowing us to evolve
the index / query strategy without breaking backwards compatibility.
## Credit
On release, could you credit @elastic and @phoey1 please? Thank you!
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Adds [DeepSparse](https://github.com/neuralmagic/deepsparse) as an LLM
backend. DeepSparse supports running various open-source sparsified
models hosted on [SparseZoo](https://sparsezoo.neuralmagic.com/) for
performance gains on CPUs.
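A minimal sketch of the wrapper in use (the SparseZoo model stub below is an illustrative placeholder):
```py
# Sketch: run a sparsified text-generation model on CPU via DeepSparse.
# The SparseZoo model stub is an illustrative placeholder.
from langchain.llms import DeepSparse

llm = DeepSparse(model="zoo:some/sparsezoo-text-generation-stub")
print(llm("def fibonacci(n):"))
```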
Twitter handles: @mgoin_ @neuralmagic
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
# Added SmartGPT workflow by providing SmartLLM wrapper around LLMs
Edit:
As @hwchase17 suggested, this should be a chain, not an LLM. I have
adapted the PR.
It is used like this:
```
from langchain.prompts import PromptTemplate
from langchain.chains import SmartLLMChain
from langchain.chat_models import ChatOpenAI
hard_question = "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?"
prompt = PromptTemplate.from_template(hard_question)
llm = ChatOpenAI(model_name="gpt-4")
chain = SmartLLMChain(llm=llm, prompt=prompt, verbose=True)
chain.run({})
```
Original text:
Added SmartLLM wrapper around LLMs to allow for the SmartGPT workflow
(as in https://youtu.be/wVzuvf9D9BU). SmartLLM can be used wherever an
LLM can be used, e.g.:
```
smart_llm = SmartLLM(llm=OpenAI())
smart_llm("What would be a good company name for a company that makes colorful socks?")
```
or
```
smart_llm = SmartLLM(llm=OpenAI())
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=smart_llm, prompt=prompt)
chain.run("colorful socks")
```
SmartGPT consists of 3 steps:
1. Ideate - generate n possible solutions ("ideas") to user prompt
2. Critique - find flaws in every idea & select best one
3. Resolve - improve upon best idea & return it
Fixes #4463
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
- @hwchase17
- @agola11
Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This commit adds a LangChain utility that allows for the real-time
retrieval of cryptocurrency exchange prices. With it, users can easily
access up-to-date pricing information by calling
`.run(from_currency, to_currency)`. This new feature provides a
convenient way to stay informed on the latest exchange rates and make
informed decisions when trading crypto.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Adds the ArcGISLoader class to
`langchain.document_loaders`
- Allows users to load data from ArcGIS Online, Portal, and similar
- Users can authenticate with `arcgis.gis.GIS` or retrieve public data
anonymously
- Uses the `arcgis.features.FeatureLayer` class to retrieve the data
- Defines the most relevant keywords arguments and accepts `**kwargs`
- Dependencies: Using this class requires `arcgis` and, optionally,
`bs4.BeautifulSoup`.
Tagging maintainers:
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
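A minimal sketch of the loader in use (the feature layer URL is a placeholder for any public ArcGIS FeatureLayer):
```py
# Sketch: load features from a public ArcGIS FeatureLayer as documents.
# The layer URL is a placeholder.
from langchain.document_loaders import ArcGISLoader

layer_url = "https://example.com/arcgis/rest/services/SomeService/FeatureServer/0"
loader = ArcGISLoader(layer_url)  # public layers need no authentication
docs = loader.load()
```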
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Updated interactive walkthrough link in index.md to resolve 404 error.
Also, expressing deep gratitude to LangChain library developers for
their exceptional efforts 🥇 .
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added a new use case category called "Web Scraping", and
a tutorial on scraping websites using the OpenAI Functions extraction
chain, to the docs.
- Tag maintainer: @baskaryan @hwchase17
- Twitter handle: https://www.linkedin.com/in/haiphunghiem/ (I'm mostly
on LinkedIn)
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
This PR introduces [Label Studio](https://labelstud.io/) integration
with LangChain via `LabelStudioCallbackHandler`:
- sending data to the Label Studio instance
- labeling dataset for supervised LLM finetuning
- rating model responses
- tracking and displaying chat history
- support for custom data labeling workflow
### Example
```
from langchain.callbacks import LabelStudioCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage, SystemMessage

chat_llm = ChatOpenAI(callbacks=[LabelStudioCallbackHandler(mode="chat")])
chat_llm([
    SystemMessage(content="Always use emojis in your responses."),
    HumanMessage(content="Hey AI, how's your day going?"),
    AIMessage(content="🤖 I don't have feelings, but I'm running smoothly! How can I help you today?"),
    HumanMessage(content="I'm feeling a bit down. Any advice?"),
    AIMessage(content="🤗 I'm sorry to hear that. Remember, it's okay to seek help or talk to someone if you need to. 💬"),
    HumanMessage(content="Can you tell me a joke to lighten the mood?"),
    AIMessage(content="Of course! 🎭 Why did the scarecrow win an award? Because he was outstanding in his field! 🌾"),
    HumanMessage(content="Haha, that was a good one! Thanks for cheering me up."),
    AIMessage(content="Always here to help! 😊 If you need anything else, just let me know."),
    HumanMessage(content="Will do! By the way, can you recommend a good movie?"),
])
```
<img width="906" alt="image"
src="https://github.com/langchain-ai/langchain/assets/6087484/0a1cf559-0bd3-4250-ad96-6e71dbb1d2f3">
### Dependencies
- [label-studio](https://pypi.org/project/label-studio/)
- [label-studio-sdk](https://pypi.org/project/label-studio-sdk/)
https://twitter.com/labelstudiohq
---------
Co-authored-by: nik <nik@heartex.net>
As of the recent PR at #9043, after some testing we've realised that the
default values were not being used for `api_key` and `api_url`. Besides
that, the default for `api_key` was set to `argilla.apikey`, but since
the default values are intended for people using the Argilla Quickstart
(easy to run and setup), the defaults should be instead `owner.apikey`
if using Argilla 1.11.0 or higher, or `admin.apikey` if using a lower
version of Argilla.
Additionally, we've removed the f-string replacements from the
docstrings.
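For reference, the Quickstart-friendly defaults described above look roughly like this when instantiating the handler (the dataset name is a placeholder):
```py
# Sketch: the Argilla Quickstart defaults described above.
# The dataset name is a placeholder.
from langchain.callbacks import ArgillaCallbackHandler

handler = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url="http://localhost:6900",  # Quickstart default
    api_key="owner.apikey",  # Argilla >= 1.11.0; "admin.apikey" on older versions
)
```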
---------
Co-authored-by: Gabriel Martin <gabriel@argilla.io>
The second section looked like a copy/paste of the first section and
didn't include the specific embedding model mentioned in the example, so
I added it for clarity.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The table creation commands in these examples do not match what the
recently updated functions in these examples expect.
This change updates the type in the table creation command.
Issue number for my report of the doc problem: #7446
@rlancemartin and @eyurtsev I believe this is your area
Twitter: @j1philli
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description**: [BagelDB](bageldb.ai) is a collaborative vector
database. Integrated the bageldb PyPI package with langchain, with
related tests and code.
- **Issue**: Not applicable.
- **Dependencies**: `betabageldb` PyPI package.
- **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
- **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
We ran `make format`, `make lint` and `make test` locally.
We followed the contribution guidelines thoroughly:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
---------
Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods to
improve embeddings generation for large documents. The implementation
uses asyncio tasks and gather to achieve concurrency, as there is no
async Bedrock API in boto3.
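Conceptually, the concurrency pattern is the standard run-the-blocking-call-in-an-executor approach, sketched below with illustrative names (not the exact implementation):
```py
# Sketch of the asyncio pattern described above: boto3 has no async
# Bedrock client, so each blocking embedding call runs in an executor
# and the per-text tasks are gathered. Names are illustrative.
import asyncio
from typing import Callable, List

async def aembed_query(embed_query: Callable[[str], List[float]], text: str) -> List[float]:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, embed_query, text)

async def aembed_documents(
    embed_query: Callable[[str], List[float]], texts: List[str]
) -> List[List[float]]:
    return await asyncio.gather(*(aembed_query(embed_query, t) for t in texts))
```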
### Maintainers
@agola11
@aarora79
### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
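A minimal sketch of the integration in use (with the server already running on its default port, the constructor defaults are assumed to point at it):
```py
# Sketch: call a locally running Titan Takeoff server (default port 8000).
# The constructor defaults are assumed to target the local server.
from langchain.llms import TitanTakeoff

llm = TitanTakeoff()
print(llm("What is the capital of France?"))
```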
#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.
- [x] Make Lint
- [x] Make Format
- [x] Make Test
#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
Expressing gratitude to the creator for crafting this remarkable
application. 🙌 I would like to enhance the grammar and spelling in the
documentation for a polished reader experience.
Your feedback is valuable, as always.
@baskaryan , @hwchase17 , @eyurtsev
This PR adds the ability to temporarily cache or persistently store
embeddings.
A notebook has been included showing how to set up the cache and how to
use it with a vectorstore.
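In outline, the flow is (a sketch; the cache directory is a placeholder):
```py
# Sketch: wrap an embedder so repeated texts hit a local byte-store cache
# instead of re-calling the model. The cache directory is a placeholder.
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore
from langchain.vectorstores import FAISS

underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding_cache/")
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)
# Re-indexing the same texts now reuses the cached vectors.
db = FAISS.from_texts(["hello", "world"], cached_embedder)
```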
- Description: Improvements to the Grobid loader documentation: fixed
typos and suggested using the Docker image instead of installing Grobid
locally (the documentation was also limited to Mac, while Docker allows
running on any platform)
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @whitenoise
This pull request aims to ensure that the `OpenAICallbackHandler` can
properly calculate the total cost for Azure OpenAI chat models. The
following changes have resolved this issue:
- The `model_name` has been added to the ChatResult llm_output. Without
this, the default values of `gpt-35-turbo` were applied. This was
causing the total cost for Azure OpenAI's GPT-4 to be significantly
inaccurate.
- A new parameter `model_version` has been added to `AzureChatOpenAI`.
Azure does not include the model version in the response. With the
addition of `model_name`, this is not a significant issue for GPT-4
models, but it's an issue for GPT-3.5-Turbo. Version 0301 (default) of
GPT-3.5-Turbo on Azure has a flat rate of 0.002 per 1k tokens for both
prompt and completion. However, version 0613 introduced a split in
pricing for prompt and completion tokens.
- The `OpenAICallbackHandler` implementation has been updated with the
proper model names, versions, and cost per 1k tokens.
Unit tests have been added to ensure the functionality works as
expected; the Azure ChatOpenAI notebook has been updated with examples.
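For illustration, cost tracking now works along these lines (the deployment name is a placeholder; `model_version` is the new parameter):
```py
# Sketch: pass the Azure model version so the callback prices tokens
# correctly. The deployment name is a placeholder.
from langchain.callbacks import get_openai_callback
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", model_version="0613")
with get_openai_callback() as cb:
    llm.predict("Tell me a joke")
print(cb.total_cost)  # reflects 0613's split prompt/completion pricing
```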
Maintainers: @hwchase17, @baskaryan
Twitter handle: @jjczopek
---------
Co-authored-by: Jerzy Czopek <jerzy.czopek@avanade.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Instructions for integration with Log10: an [open
source](https://github.com/log10-io/log10) proxiless LLM data management
and application development platform that lets you log, debug and tag
your LangChain calls
- Tag maintainer: @baskaryan
- Twitter handle: @log10io @coffeephoenix
Several examples showing the integration included
[here](https://github.com/log10-io/log10/tree/main/examples/logging) and
in the PR
Description: Adds Rockset as a chat history store
Dependencies: no changes
Tag maintainer: @hwchase17
This PR passes linting and testing.
I added a test for the integration and an example notebook showing its
use.
This PR adds 8 new loaders:
* `AirbyteCDKLoader`: this reader can wrap and run all Python-based
Airbyte source connectors.
* Separate loaders for the most commonly used APIs:
* `AirbyteGongLoader`
* `AirbyteHubspotLoader`
* `AirbyteSalesforceLoader`
* `AirbyteShopifyLoader`
* `AirbyteStripeLoader`
* `AirbyteTypeformLoader`
* `AirbyteZendeskSupportLoader`
## Documentation and getting started
I added the basic shape of the config to the notebooks. This increases
the maintenance effort a bit, but I think it's worth it to make sure
people can get started quickly with these important connectors. This is
also why I linked the spec and the documentation page in the readme as
these two contain all the information needed to configure a source
correctly (e.g. it won't suggest using OAuth if that's avoidable, even
if the connector supports it).
## Document generation
The "documents" produced by these loaders won't have a text part
(instead, all the record fields are put into the metadata). If text is
required by the use case, the caller needs to apply a custom
transformation suitable for their use case.
## Incremental sync
All loaders support incremental syncs if the underlying streams support
it. By storing the `last_state` from the reader instance away and
passing it in when loading, it will only load updated records.
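In sketch form, using the Stripe loader as an example (the config contents and stream name are placeholders; see the linked spec for real values):
```py
# Sketch: incremental sync with an Airbyte loader. The config contents
# and stream name are placeholders.
from langchain.document_loaders.airbyte import AirbyteStripeLoader

config = {"client_secret": "...", "start_date": "2023-01-01T00:00:00Z"}
loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()

saved_state = loader.last_state  # persist this between runs
incremental = AirbyteStripeLoader(config=config, stream_name="invoices", state=saved_state)
new_docs = incremental.load()  # only records updated since the last run
```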
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added a filter to the query methods in
VectorStoreIndexWrapper for filtering by metadata (i.e. via
search_kwargs)
- Tag maintainer: @rlancemartin, @eyurtsev
Updated the doc snippet on this topic as well. It took me a long while
to figure out how to filter the vectorstore by filename, so this might
help someone else out.
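In sketch form, the metadata filtering looks something like this (the `retriever_kwargs` plumbing is my reading of the change, and the filter key is vectorstore-specific; the file name is a placeholder):
```py
# Sketch: filter an index query by document metadata (e.g. source file).
# The retriever_kwargs plumbing reflects my reading of this change; the
# filter key is vectorstore-specific and the file name is a placeholder.
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders([TextLoader("notes.txt")])
answer = index.query(
    "What do my notes say?",
    retriever_kwargs={"search_kwargs": {"filter": {"source": "notes.txt"}}},
)
```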
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: I have added an example showing how to pass a custom
template to ConversationalRetrievalChain. Instead of
CONDENSE_QUESTION_PROMPT, we can pass any prompt via the
condense_question_prompt argument (a minimal sketch follows below). Look
in Use cases -> QA over Documents -> How to -> Store and reference chat
history,
- Issue: #8864,
- Dependencies: NA,
- Tag maintainer: @hinthornw,
- Twitter handle:
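A minimal sketch of the example (the vectorstore setup is illustrative; any existing store works):
```py
# Sketch: supply a custom prompt via condense_question_prompt instead of
# the default CONDENSE_QUESTION_PROMPT. The vectorstore is illustrative.
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

custom_template = """Given the chat history and a follow-up question,
rephrase the follow-up into a standalone question.

Chat History:
{chat_history}
Follow-up question: {question}
Standalone question:"""

vectorstore = FAISS.from_texts(["some document text"], OpenAIEmbeddings())
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore.as_retriever(),
    condense_question_prompt=PromptTemplate.from_template(custom_template),
)
```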
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876
This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes a bug with `Bearer` for the API key
Thanks again in advance for your help!
cc: @hwchase17, @baskaryan
---------
Co-authored-by: dvonthenen <david.vonthenen@gmail.com>