Update API reference documentation. This PR picks up a number of missing classes and applies selective formatting based on the class / object type.
Description: Adding support for [Amazon
Textract](https://aws.amazon.com/textract/) as a PDF document loader
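A brief usage sketch, assuming the loader is exposed as `AmazonTextractPDFLoader` and accepts a local path or an S3 URI (AWS credentials with Textract access are required; see the integration docs for the exact interface):
```
from langchain.document_loaders import AmazonTextractPDFLoader

# Works against a local file or an s3:// URI; Textract performs the parsing.
loader = AmazonTextractPDFLoader("my-scanned-document.pdf")
documents = loader.load()
print(documents[0].page_content[:200])
```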
---------
Co-authored-by: schadem <45048633+schadem@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Resolves occasional JSON parsing error when some predictions are passed
through a `MultiPromptChain`.
Makes [this
modification](https://github.com/langchain-ai/langchain/issues/5163#issuecomment-1652220401)
to `multi_prompt_prompt.py`, which is much cleaner than the other
community-reported solution of appending an entire example object.
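For context, a minimal sketch of the kind of usage affected, where the router's JSON output gets parsed (assumes an OpenAI key is configured; the prompt contents are illustrative):
```
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering physics questions",
        "prompt_template": "You are a physicist. Answer concisely: {input}",
    },
    {
        "name": "history",
        "description": "Good for answering history questions",
        "prompt_template": "You are a historian. Answer concisely: {input}",
    },
]

# The router LLM emits JSON naming the destination chain; the cleaner
# multi_prompt_prompt.py template keeps that JSON reliably parseable.
chain = MultiPromptChain.from_prompts(OpenAI(temperature=0), prompt_infos)
print(chain.run("Why is the sky blue?"))
```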
@hwchase17, @baskaryan
cc: @SimasJan
- Description: Added a missing word and rearranged a sentence in the
documentation of Self Query Retrievers.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @baskaryan
- Twitter handle: NA
Thanks for your time.
llamacpp params (per their own code) are unstable, so instead of
constantly adding/deleting them, this adds a `model_kwargs` parameter that
allows arbitrary additional kwargs to be passed through.
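A rough sketch of how the new parameter might be used, assuming `model_kwargs` is forwarded to the underlying llama-cpp-python model; the model path and the extra kwarg shown are purely illustrative:
```
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # illustrative local model path
    n_ctx=2048,
    # Anything llama-cpp-python understands can ride along here without
    # LangChain needing a dedicated field for it.
    model_kwargs={"rope_freq_scale": 0.5},
)
print(llm("Say hello in one word."))
```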
cc @jsjolund and @zacps re #8599 and #8704
There is already a `loads()` function which takes a JSON string and
loads it using the Reviver. But in the callbacks system, there is a
`serialized` object that is passed in, and that object is already a
deserialized, JSON-compatible object. This allows you to call
`load(serialized)` and bypass the intermediate JSON encoding.
I found one other place in the code that benefited from this
short-circuiting (string_run_evaluator.py) so I fixed that too.
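A small sketch of the difference, assuming the new `load()` lives alongside `loads()` in `langchain.load.load`:
```
from langchain.load.dump import dumpd, dumps
from langchain.load.load import load, loads
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")

# Existing path: serialize to a JSON string, then parse it and revive.
restored_from_str = loads(dumps(prompt))

# New path: callbacks already hand us a JSON-compatible dict (`serialized`),
# so it can be revived directly, skipping the intermediate JSON encoding.
restored_from_dict = load(dumpd(prompt))
```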
Tagging @baskaryan for general/utility stuff.
---------
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Description: Add ScaNN vectorstore to langchain.
ScaNN is an open-source, high-performance vector similarity library
optimized for AVX2-enabled CPUs.
https://github.com/google-research/google-research/tree/master/scann
- Dependencies: scann
Python notebook to illustrate the usage:
docs/extras/integrations/vectorstores/scann.ipynb
Integration test:
libs/langchain/tests/integration_tests/vectorstores/test_scann.py
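A brief usage sketch, assuming the vectorstore is exposed as `langchain.vectorstores.ScaNN` and that the `scann` package is installed (the embedding model shown is just an example):
```
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import ScaNN

embeddings = HuggingFaceEmbeddings()  # any Embeddings implementation works
db = ScaNN.from_texts(
    ["ScaNN is fast on AVX2 CPUs", "FAISS is another similarity library"],
    embedding=embeddings,
)
docs = db.similarity_search("fast similarity search", k=1)
print(docs[0].page_content)
```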
@rlancemartin, @eyurtsev for review.
Thanks!
This PR updates `_load_reduce_documents_chain` to handle both
`reduce_documents_chain` and `combine_documents_chain` config keys.
Please review @hwchase17, @baskaryan
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
# What
- This adds a filter option to the sklearn vector store functions
- Description: Add a filter option to the sklearn vector store functions (usage sketch below).
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
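A hedged sketch of the new option, assuming the filter is a metadata dict matched against each stored document's metadata (see the PR for the exact semantics):
```
from langchain.embeddings.fake import FakeEmbeddings
from langchain.vectorstores import SKLearnVectorStore

store = SKLearnVectorStore.from_texts(
    ["a note about cats", "a note about dogs"],
    embedding=FakeEmbeddings(size=16),
    metadatas=[{"topic": "cats"}, {"topic": "dogs"}],
)
# Only documents whose metadata matches the filter are considered.
results = store.similarity_search("pets", k=2, filter={"topic": "dogs"})
```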
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
This adds save_local and load_local to tfidf_vectorizer, and docs in
tfidf_retriever, so that the vectorizer can be reused (see the sketch below).
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
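A rough sketch of the new round-trip, assuming `save_local`/`load_local` take a folder path like other LangChain retrievers (exact signatures per this PR):
```
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "foo bar"])
retriever.save_local("tfidf_index")          # persists the fitted vectorizer

reloaded = TFIDFRetriever.load_local("tfidf_index")
docs = reloaded.get_relevant_documents("foo")
```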
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Removing the score_threshold parameter of faiss
`_similarity_search_with_relevance_scores`, as the thresholding is
implemented in the `similarity_search_with_relevance_scores` method
which calls it.
Since `_similarity_search_with_relevance_scores` is supposed to be a private
method of faiss.py, it will never receive the score_threshold parameter
anyway: it is popped in the parent `similarity_search_with_relevance_scores` method.
@baskaryan @hwchase17
Just a tiny change to use `list.append(...)` and `list.extend(...)`
instead of `list += [...]` so that no unnecessary temporary lists are
created (small example below).
Since it's a tiny miscellaneous thing, I guess @baskaryan is the
maintainer to tag?
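For illustration, the pattern being swapped in (a minimal, self-contained example):
```
items: list = []

items.append("a")           # instead of: items += ["a"]  (builds a throwaway list)
items.extend(["b", "c"])    # instead of: items += ["b", "c"]
```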
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Simple retriever that applies an LLM between the user input and the
query passed to the retriever.
It can be used to pre-process the user input in any way.
The default prompt:
```
from langchain.prompts import PromptTemplate

DEFAULT_QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an assistant tasked with taking a natural language query from a user
and converting it into a query for a vectorstore. In this process, you strip out
information that is not relevant for the retrieval task. Here is the user query: {question} """,
)
```
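A hypothetical usage sketch (names and wiring are illustrative, not necessarily the PR's exact API): run the user input through an LLMChain built from the prompt above and hand the rewritten query to the wrapped retriever.
```
from langchain.chains import LLMChain
from langchain.llms import OpenAI

llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=DEFAULT_QUERY_PROMPT)
cleaned_query = llm_chain.predict(
    question="hi! could you maybe find me some docs about solar panels? thanks!!"
)
# `cleaned_query` is then passed to the wrapped retriever's
# get_relevant_documents() in place of the raw user input.
```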
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description:
- Provides a new attribute in the AmazonKendraRetriever which processes
a ResultItem and returns a string that will be used as page_content;
- The excerpt metadata is not changed; it is kept as retrieved, but it is
cleaned when composing the page_content;
- Refactors the AmazonKendraRetriever to improve code reusability;
- Issue: #7787
- Tag maintainer: @3coins @baskaryan
- Twitter handle: wilsonleao
**Why?**
Some use cases need to adjust the page_content by dynamically combining
the ResultItem attributes depending on the context of the item.
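A hypothetical sketch of how such an attribute could be used (the attribute name and the ResultItem accessors here are illustrative; see the PR for the actual interface):
```
from langchain.retrievers import AmazonKendraRetriever

def title_plus_excerpt(item) -> str:
    # Combine whichever ResultItem attributes the use case needs.
    return f"{item.title}\n\n{item.excerpt}"

retriever = AmazonKendraRetriever(
    index_id="<your-kendra-index-id>",
    page_content_formatter=title_plus_excerpt,  # illustrative attribute name
)
```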
#7854
Added the ability to use the `separator` as a regex or a simple character.
Fixed a bug where `start_index` was incorrectly counting from -1.
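A short sketch, assuming the regex behavior is toggled by an `is_separator_regex`-style flag on the character splitters (see the PR for the exact parameter name):
```
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator=r"\n{2,}",     # now usable as a regular expression
    is_separator_regex=True,
    chunk_size=100,
    chunk_overlap=0,
    add_start_index=True,    # start_index metadata now counts from 0, not -1
)
docs = splitter.create_documents(["first paragraph\n\n\nsecond paragraph"])
```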
Who can review?
@eyurtsev
@hwchase17
@mmz-001
When using AzureChatOpenAI, the openai_api_type defaults to "azure". The
utils' get_from_dict_or_env() function triggered by the root validator
then does not pick up a user-provided OPENAI_API_TYPE environment
variable, so other values like "azure_ad" are replaced with "azure".
This prevents the use of token-based auth.
By removing the "default" value, environment variables can be pulled at
runtime for openai_api_type, which enables the other api_types that are
expected to work (see the sketch below).
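A sketch of the behavior this enables: with no hard-coded default, the value of OPENAI_API_TYPE (e.g. "azure_ad") is picked up at runtime (the deployment and endpoint values below are illustrative):
```
import os
from langchain.chat_models import AzureChatOpenAI

os.environ["OPENAI_API_TYPE"] = "azure_ad"           # token-based auth
os.environ["OPENAI_API_KEY"] = "<azure-ad-access-token>"

chat = AzureChatOpenAI(
    deployment_name="my-gpt-35-deployment",
    openai_api_base="https://my-resource.openai.azure.com/",
    openai_api_version="2023-05-15",
)
```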
This fixes #6650
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: updates to Vectara documentation with more details on how
to get started.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @vectara, @ofermend
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This lets you pass callbacks when you create the summarize chain:
```
summarize = load_summarize_chain(llm, chain_type="map_reduce", callbacks=[my_callbacks])
summary = summarize(documents)
```
See #5572 for a similar surgical fix.
tagging @hwchase17 for callbacks work
This is another case, similar to #5572 and #7565, where callbacks are
getting dropped during construction of the chains.
tagging @hwchase17 and @agola11 for callbacks propagation
Description: Added serialization and deserialization methods to the FAISS
vectorstore. There is already a save_local method, but it saves to the
local disk; I wanted the vectorstore in a format that can be pushed to a
SQL database's blob field (a usage sketch follows). I have used this in
my own work.
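A hedged sketch of the intended round-trip, assuming the new methods follow a serialize_to_bytes / deserialize_from_bytes shape (names may differ in the merged code; the faiss package is required):
```
from langchain.embeddings.fake import FakeEmbeddings
from langchain.vectorstores import FAISS

embeddings = FakeEmbeddings(size=16)
db = FAISS.from_texts(["alpha", "beta"], embedding=embeddings)

blob = db.serialize_to_bytes()                 # store this in a SQL blob column
restored = FAISS.deserialize_from_bytes(serialized=blob, embeddings=embeddings)
```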
@rlancemartin, @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
It fails currently because the event loop is already running.
The `retry` decorator already infers an `AsyncRetrying` handler for
coroutines (see [tenacity
line](aa6f8f0a24/tenacity/__init__.py (L535)))
However before_sleep always gets called synchronously (see [tenacity
line](aa6f8f0a24/tenacity/__init__.py (L338))).
Instead, check for a running loop and use it if it exists. Of course,
it's running an async method synchronously, which is not _nice_. Given
how important LLMs are, it may make sense to have a task list or
something, but I'd want to chat with @nfcampos on where that would live.
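A minimal sketch of the pattern described above (not the exact code in the PR): reuse the already-running loop when there is one, and fall back to asyncio.run() otherwise.
```
import asyncio

def fire_coroutine(coro) -> None:
    """Run a coroutine from sync code, reusing the current loop if it's running."""
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        loop = None
    if loop is not None:
        # before_sleep is invoked synchronously while a loop is active, so
        # schedule the coroutine on that loop instead of calling asyncio.run().
        loop.create_task(coro)
    else:
        asyncio.run(coro)
```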
This PR also fixes the unit tests to check that the handler is called and
to make sure the async test is actually run (it looks like it had just
been getting skipped). It would have failed prior to the proposed fixes
but passes now.
- Description: added a document loader for a list of RSS feeds or OPML.
It iterates through the list and uses NewsURLLoader to load each
article (usage sketch after this list).
- Issue: N/A
- Dependencies: feedparser, listparser
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @ruze
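A brief usage sketch, assuming the loader is exposed as `RSSFeedLoader` and accepts a list of feed URLs or an OPML document (see the example notebook for the exact interface; `feedparser` and `listparser` must be installed):
```
from langchain.document_loaders import RSSFeedLoader

loader = RSSFeedLoader(urls=["https://news.ycombinator.com/rss"])
docs = loader.load()
print(docs[0].metadata)
```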
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Solves #8644
This embedding model outputs identical random embedding vectors when the
input texts are identical, which is useful in unit tests.
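A hedged usage sketch, assuming the class is exposed as `DeterministicFakeEmbedding` with a `size` field (see the PR for the exact name and module):
```
from langchain.embeddings.fake import DeterministicFakeEmbedding

emb = DeterministicFakeEmbedding(size=8)
assert emb.embed_query("hello") == emb.embed_query("hello")   # identical inputs
assert emb.embed_query("hello") != emb.embed_query("world")   # different inputs
```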
@baskaryan