**Description:**
Updated documentation for the DeepLake init method.
The exec_option docs especially needed improvement, but I also did a
general cleanup while I was looking at it.
**Issue:** n/a
**Dependencies:** None
---------
Co-authored-by: Nathan Voxland <nathan@voxland.net>
- **Description:** In order to override the bool value of
"fetch_schema_from_transport" in the GraphQLAPIWrapper, a
"fetch_schema_from_transport" value needed to be added to the
"_EXTRA_OPTIONAL_TOOLS" dictionary in load_tools under the "graphql" key.
The parameter "fetch_schema_from_transport" must also be passed to the
GraphQLAPIWrapper so the value can be read when creating the client.
Passing it as an optional parameter is probably best to avoid
breaking changes. This change is necessary to support GraphQL instances
that do not support fetching the schema, such as TigerGraph (a usage
sketch follows this entry). More info here:
[TigerGraph GraphQL Schema
Docs](https://docs.tigergraph.com/graphql/current/schema)
- **Threads handle:** @zacharytoliver
---------
Co-authored-by: Zachary Toliver <zt10191991@hotmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
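A minimal usage sketch under these assumptions (the endpoint URL is a placeholder):
```python
from langchain.agents import load_tools

# Disable schema fetching for servers (e.g. TigerGraph) that don't support it.
tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://example.com/graphql",  # placeholder endpoint
    fetch_schema_from_transport=False,
)
```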
- Description: Add the missing chunk parameter to _stream/_astream for
some chat models, making all chat models behave consistently (a sketch
of the pattern follows this list).
- Issue: N/A
- Dependencies: N/A
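A minimal sketch of the streaming pattern being made consistent; `_emit` is an illustrative helper, not part of the library:
```python
from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk

def _emit(token: str, run_manager=None):
    # Wrap the token in a chunk and forward it to the callback manager;
    # the `chunk=` keyword is the parameter this change adds consistently.
    chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
    if run_manager:
        run_manager.on_llm_new_token(token, chunk=chunk)
    return chunk
```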
**Description:** Here is a minimal example to illustrate behavior:
```python
from langchain_core.runnables import RunnableLambda

def my_function(*args, **kwargs):
    return 3 + kwargs.get("n", 0)

runnable = RunnableLambda(my_function).bind(n=1)

assert 4 == runnable.invoke({})
assert [4] == list(runnable.stream({}))

# The `await` lines assume an async context (e.g. a notebook or coroutine).
assert 4 == await runnable.ainvoke({})
assert [4] == [item async for item in runnable.astream({})]
```
Here, `runnable.invoke({})` and `runnable.stream({})` work fine, but
`runnable.ainvoke({})` raises
```
TypeError: RunnableLambda._ainvoke.<locals>.func() got an unexpected keyword argument 'n'
```
and similarly for `runnable.astream({})`:
```
TypeError: RunnableLambda._atransform.<locals>.func() got an unexpected keyword argument 'n'
```
Here we assume that this behavior is undesired and attempt to fix it.
**Issue:** https://github.com/langchain-ai/langchain/issues/17241,
https://github.com/langchain-ai/langchain/discussions/16446
In this pull request, we introduce the add_images method to the
SingleStoreDB vector store class, expanding its capabilities to handle
multi-modal embeddings seamlessly. This method facilitates the
incorporation of image data into the vector store by associating each
image's URI with corresponding document content, metadata, and either
pre-generated embeddings or embeddings computed using the embed_image
method of the provided embedding object.
The change includes integration tests validating the behavior of
add_images. Additionally, we provide a notebook showcasing the usage of
this new method. (An illustrative usage sketch follows this entry.)
---------
Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
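A minimal sketch of the new method, assuming an embedding object exposing embed_image(); the names and signature here are illustrative, not the exact API:
```python
from langchain_community.vectorstores import SingleStoreDB

clip_embeddings = ...  # any embeddings object exposing embed_image()

docsearch = SingleStoreDB(embedding=clip_embeddings, table_name="images")
docsearch.add_images(
    uris=["./img/cat.png", "./img/dog.png"],  # image URIs to ingest
    metadatas=[{"label": "cat"}, {"label": "dog"}],
)
```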
- **Description:**
The existing `RedisCache` implementation lacks proper handling for redis
client failures, such as `ConnectionRefusedError`, leading to subsequent
failures in pipeline components like LLM calls. This pull request aims
to improve error handling for redis client issues, ensuring more robust
and graceful handling of such errors (a setup sketch follows this
entry).
- **Issue:** Fixes #16866
- **Dependencies:** No new dependency
- **Twitter handle:** N/A
Co-authored-by: snsten <>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
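A minimal setup sketch, assuming the standard RedisCache wiring; with this change, a client failure such as `ConnectionRefusedError` should degrade gracefully instead of propagating into the LLM call:
```python
from redis import Redis
from langchain.globals import set_llm_cache
from langchain_community.cache import RedisCache

# If this Redis instance is unreachable, the cache lookup now fails
# gracefully rather than breaking the downstream LLM call.
set_llm_cache(RedisCache(redis_=Redis(host="localhost", port=6379)))
```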
Sent to LangSmith
Description:
In this PR, I am adding a PolygonTickerNews Tool, which can be used to
get the latest news for a given ticker / stock (a usage sketch follows
below).
Twitter handle: [@virattt](https://twitter.com/virattt)
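A usage sketch under assumed names; the exact import paths may differ:
```python
from langchain_community.tools.polygon import PolygonTickerNews
from langchain_community.utilities.polygon import PolygonAPIWrapper

api = PolygonAPIWrapper()  # expects POLYGON_API_KEY in the environment
ticker_news = PolygonTickerNews(api_wrapper=api)
print(ticker_news.run("AAPL"))  # latest news items for the ticker
```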
**Description**: CogniSwitch focuses on making GenAI usage more
reliable. It abstracts out the complexity & decision making required for
tuning processing, storage & retrieval. Using simple APIs, documents /
URLs can be processed into a Knowledge Graph that can then be used to
answer questions.
**Dependencies**: No dependencies. Just network calls & API key required
**Tag maintainer**: @hwchase17
**Twitter handle**: https://github.com/CogniSwitch
**Documentation**: Please check
`docs/docs/integrations/toolkits/cogniswitch.ipynb`
**Tests**: The usual tool & toolkits tests using `test_imports.py`
PR has passed linting and testing before this submission.
---------
Co-authored-by: Saicharan Sridhara <145636106+saiCogniswitch@users.noreply.github.com>
## Amazon Personalize support on Langchain
This PR is a successor to this PR -
https://github.com/langchain-ai/langchain/pull/13216
This PR introduces an integration with [Amazon
Personalize](https://aws.amazon.com/personalize/) to help you retrieve
recommendations and use them in your natural language applications. This
integration provides two new components:
1. An `AmazonPersonalize` client, which provides a wrapper around the
Amazon Personalize API.
2. An `AmazonPersonalizeChain`, which provides a chain to pull in
recommendations using the client and then generate the response in
natural language.
We have added this to langchain_experimental since there was feedback
from the previous PR about having this support in experimental rather
than the core or community extensions.
Here is some sample code to explain the usage.
```python
from langchain_experimental.recommenders import AmazonPersonalize
from langchain_experimental.recommenders import AmazonPersonalizeChain
from langchain.llms.bedrock import Bedrock

recommender_arn = "<insert_arn>"

client = AmazonPersonalize(
    credentials_profile_name="default",
    region_name="us-west-2",
    recommender_arn=recommender_arn,
)
bedrock_llm = Bedrock(
    model_id="anthropic.claude-v2",
    region_name="us-west-2",
)
chain = AmazonPersonalizeChain.from_llm(
    llm=bedrock_llm,
    client=client,
)
response = chain({"user_id": "1"})
```
Reviewer: @3coins
Hi, I'm from the LanceDB team.
Improves the LanceDB integration by making it easier to use - you are no
longer required to create tables manually and pass them in the
constructor, although doing so is still supported for backward
compatibility.
Bug fix - pandas was being used even though it's not a dependency of
LanceDB or langchain.
PS - this issue was raised a few months ago but lost traction. It is a
feature improvement for our users; kindly review it, thanks! (A sketch
of the simplified usage follows below.)
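A minimal sketch, assuming the post-change behavior where the store creates its table internally (the embedding class and texts are illustrative):
```python
from langchain_community.vectorstores import LanceDB
from langchain_openai import OpenAIEmbeddings

# No manually created lancedb table is needed; the store creates one.
# Passing an existing table via the constructor remains supported.
docsearch = LanceDB.from_texts(["hello", "world"], OpenAIEmbeddings())
print(docsearch.similarity_search("hello", k=1))
```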
- OpenLLM was using an outdated method to get the final text output from
the openllm client invocation, which was raising an error; corrected
that.
- OpenLLM's `_identifying_params` was getting the openllm client
configuration using outdated attributes, which was raising an error.
- Updated the docstring for OpenLLM.
- Added a timeout parameter to be passed to the underlying openllm
client (see the sketch after this list).
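A minimal sketch of the new timeout parameter, assuming a locally served model (the server URL is illustrative):
```python
from langchain_community.llms import OpenLLM

# `timeout` is forwarded to the underlying openllm client.
llm = OpenLLM(server_url="http://localhost:3000", timeout=60)
```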
Another PR will be done for the langchain-astradb package.
Note: for future PRs, development will happen in the partner package only. This one is just to align with the rest of the components in the community package, and it fixes a bunch of issues.
- **Description:** adds an `exclude` parameter to the DirectoryLoader
class, based on similar behavior in GenericLoader (a usage sketch
follows this entry)
- **Issue:** discussed in
https://github.com/langchain-ai/langchain/discussions/9059 and I think
in some other issues that I cannot find at the moment 🙇
- **Dependencies:** None
- **Twitter handle:** don't have one, sorry! Just https://github.com/nejch
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
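A usage sketch of the new parameter; glob-style patterns are assumed, mirroring GenericLoader:
```python
from langchain_community.document_loaders import DirectoryLoader

loader = DirectoryLoader(
    "docs/",
    glob="**/*.md",
    exclude=["**/README.md"],  # patterns to skip, new in this change
)
docs = loader.load()
```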
- **Description:** Addresses the bugs described in the linked issue,
where an import was erroneously removed and the rename of a keyword
argument was
missed when migrating from beta --> stable of the azure-search-documents
package
- **Issue:** https://github.com/langchain-ai/langchain/issues/17598
- **Dependencies:** N/A
- **Twitter handle:** N/A
- **Description:** This fixes an issue when working with RecordManager.
RecordManager was generating new hashes on documents because `add_texts`
was modifying the metadata directly. Additionally, some tests were moved
to unit tests since that was a more appropriate home. (An illustrative
sketch of the fix pattern follows this entry.)
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** `@_morgan_adams_`
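An illustrative pattern for the fix, not the exact diff: copy the metadata before mutating so the caller's dict, and hence the RecordManager hash, stays stable. The internal field name here is hypothetical:
```python
def add_texts_without_mutation(texts, metadatas):
    """Illustrative pattern: never mutate caller-owned metadata dicts."""
    for text, metadata in zip(texts, metadatas):
        metadata = dict(metadata)  # shallow copy keeps the caller's dict stable
        metadata["vector_id"] = hash(text)  # hypothetical internal field
        yield text, metadata
```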
The current issue:
Most of the deprecation descriptions are duplicated. For example:
`[Deprecated] Chat Agent.[Deprecated] Chat Agent.` for the [ChatAgent
class](https://api.python.langchain.com/en/latest/langchain_api_reference.html#classes)
description.
NOTE: I've tested it only with the new unit test! I cannot build the API
Reference locally :(
**Description:** This PR introduces a new "Astra DB" Partner Package.
So far only the vector store class is _duplicated_ there; all others
will follow once this is validated and established.
Along with the move to the separate package, incidentally, the class
name will change: `AstraDB` => `AstraDBVectorStore` (see the import
sketch after this entry).
The strategy has been to duplicate the module (with prospected removal
from community at LangChain 0.2). Until then, the code will be kept in
sync with minimal, known differences: there is a makefile target to
automate drift control, and for convenience with this check, the
community package has a class `AstraDBVectorStore` aliased to `AstraDB`
at the end of the module.
With this PR several bugfixes and improvement come to the vector store,
as well as a reshuffling of the doc pages/notebooks (Astra and
Cassandra) to align with the move to a separate package.
**Dependencies:** A brand new pyproject.toml in the new package, no
changes otherwise.
**Twitter handle:** `@rsprrs`
---------
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
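A minimal sketch of the import change described above:
```python
# Community package (kept in sync until removal at LangChain 0.2):
from langchain_community.vectorstores import AstraDB

# New partner package, with the renamed class:
from langchain_astradb import AstraDBVectorStore
```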
- **Description:** Updates to the Kuzu API had broken this
functionality. This change resolves those issues and adds a new test to
demonstrate the updates.
- **Issue:** #11874
- **Dependencies:** No new dependencies
- **Twitter handle:** @amirk08
Test results:
```
tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_no_params PASSED [ 33%]
tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_params PASSED [ 66%]
tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_refresh_schema PASSED [100%]
=================================================== slowest 5 durations ===================================================
0.53s call tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_refresh_schema
0.34s call tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_no_params
0.28s call tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_params
0.03s teardown tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_refresh_schema
0.02s teardown tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_params
==================================================== 3 passed in 1.27s ====================================================
```
- **Description:** Allow a bool value to be passed for
fetch_schema_from_transport, since not all GraphQL instances support
this feature, such as TigerGraph.
- **Threads handle:** @zacharytoliver
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description:** Resolves a problem in
`langchain_community/document_loaders/pebblo.py` with `import pwd`.
`pwd` is not available on Windows, so the import was moved into a
try/except block (sketch below).
- **Issue:** #17514
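An illustrative version of the guarded import described above:
```python
try:
    import pwd  # POSIX-only module; absent on Windows
except ImportError:
    pwd = None  # callers must handle the None case on Windows
```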
This PR adds support for NVIDIA NeMo embeddings (issue #16095).
---------
Co-authored-by: Praveen Nakshatrala <pnakshatrala@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Noticed and fixed a few typos in the SmartLLMChain default ideation and
critique prompts
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Adds an optional name param to our base message to support passing names
into LLMs (see the sketch below).
OpenAI now supports a name on any message except tool messages
(system, ai, user/human).
https://github.com/langchain-ai/langchain/issues/17525
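A minimal sketch of the new optional field (the values are illustrative):
```python
from langchain_core.messages import HumanMessage

# `name` is optional and defaults to None on the base message.
msg = HumanMessage(content="hello", name="alice")
```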
### Example Code
```python
from langchain_community.document_loaders.athena import AthenaLoader

database_name = "database"
s3_output_path = "s3://bucket-no-prefix"
query = """SELECT
  CAST(extract(hour FROM current_timestamp) AS INTEGER) AS current_hour,
  CAST(extract(minute FROM current_timestamp) AS INTEGER) AS current_minute,
  CAST(extract(second FROM current_timestamp) AS INTEGER) AS current_second;
"""
profile_name = "AdministratorAccess"

loader = AthenaLoader(
    query=query,
    database=database_name,
    s3_output_uri=s3_output_path,
    profile_name=profile_name,
)

documents = loader.load()
print(documents)
```
### Error Message and Stack Trace (if applicable)
NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject
operation: The specified key does not exist
### Description
The Athena Loader errors when the result S3 bucket URI has no prefix.
The loader call then fails with a "NoSuchKey: An error occurred
(NoSuchKey) when calling the GetObject operation: The specified key does
not exist." error.
If s3_output_path contains a prefix like:
```python
s3_output_path = "s3://bucket-with-prefix/prefix"
```
Execution works without an error.
## Suggested solution
Modify:
```python
key = "/".join(tokens[1:]) + "/" + query_execution_id + ".csv"
```
to
```python
key = "/".join(tokens[1:]) + ("/" if tokens[1:] else "") + query_execution_id + ".csv"
```
9e8a3fc4ff/libs/community/langchain_community/document_loaders/athena.py (L128)
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Fri Sep 15 13:41:30 PDT
2023; root:xnu-8796.141.3.700.8~1/RELEASE_ARM64_T8103
> Python Version: 3.9.9 (main, Jan 9 2023, 11:42:03)
[Clang 14.0.0 (clang-1400.0.29.102)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
1. Integrate with
[`Yuan2.0`](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md).
2. Update `langchain.llms`.
3. Add a new doc for the [Yuan2.0
integration](docs/docs/integrations/llms/yuan2.ipynb). (A usage sketch
follows this entry.)
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
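A hypothetical usage sketch for the new integration; the class name, `infer_api` parameter, and endpoint are assumptions based on the notebook:
```python
from langchain_community.llms import Yuan2

# Assumed local inference endpoint for a served Yuan2.0 model.
llm = Yuan2(infer_api="http://127.0.0.1:8000/yuan", max_tokens=512)
print(llm.invoke("What is the capital of France?"))
```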
If the SQLAlchemyMd5Cache is shared among multiple processes, it is
possible to encounter a race condition during the cache update.
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- **Description:** Support filtering databases in the use case where
devs do not want to query ALL entries within a DB.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** I don't have Twitter but feel free to tag my
GitHub!
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This pull request introduces support for various Approximate Nearest
Neighbor (ANN) vector index algorithms in the VectorStore class,
starting from version 8.5 of SingleStore DB. Leveraging this enhancement
enables users to harness the power of vector indexing, significantly
boosting search speed, particularly when handling large sets of vectors
(an illustrative sketch follows below).
---------
Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
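An illustrative sketch, assuming an option to enable the ANN index at creation time; the `use_vector_index` flag and embedding class are assumptions:
```python
from langchain_community.vectorstores import SingleStoreDB
from langchain_openai import OpenAIEmbeddings

docsearch = SingleStoreDB.from_texts(
    ["hello", "world"],
    OpenAIEmbeddings(),
    use_vector_index=True,  # assumed flag; requires SingleStore DB >= 8.5
)
```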
- **Description:**
1. Added _clear_edges()_ and _get_number_of_nodes()_ functions to the
NetworkxEntityGraph class.
2. Added the above two functions to the graph_networkx_qa.ipynb
documentation. (A usage sketch follows this list.)
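A minimal usage sketch of the two new helpers (the triple values are illustrative):
```python
from langchain_community.graphs import NetworkxEntityGraph
from langchain_community.graphs.networkx_graph import KnowledgeTriple

graph = NetworkxEntityGraph()
graph.add_triple(KnowledgeTriple("langchain", "is a", "framework"))

graph.clear_edges()                  # new: remove all edges, keep nodes
print(graph.get_number_of_nodes())   # new: count nodes in the graph
```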
- **Description:** The callback manager can't catch chain input or
output validation errors because `prepare_input` and `prepare_output`
are not part of the try/raise logic; this PR fixes that logic.
- **Issue:** #15954
- **Description:** Fixes a type annotation issue in the definition of
BedrockBase. The issue was that the annotation for the `config`
attribute includes a ForwardRef to `botocore.client.Config`, which is
only imported when `TYPE_CHECKING`. This can cause pydantic to raise an
error like `pydantic.errors.ConfigError: field "config" not yet prepared
so type is still a ForwardRef, ...`. (An illustration of the failure
mode follows this entry.)
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** `@__nat_n__`
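A minimal illustration of the failure mode and one way around it; this is a sketch, not the actual Bedrock code:
```python
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel

if TYPE_CHECKING:
    from botocore.client import Config  # only exists for type checkers

class Example(BaseModel):
    # Annotating `config: "Config"` would leave a ForwardRef that pydantic
    # cannot resolve at runtime, raising the "field 'config' not yet
    # prepared" ConfigError; a runtime-safe annotation avoids this.
    config: Any = None
```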
- **Description:**
Fix: Use a shallow copy for schema manipulation in
get_format_instructions.
Prevents side effects on the original schema object by using a
dictionary comprehension for a safer and more controlled manipulation of
schema key-value pairs, enhancing code reliability. (An illustrative
pattern follows this entry.)
- **Issue:** #17161
- **Dependencies:** None
- **Twitter handle:** None
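An illustrative version of the pattern described, not the exact diff; the filtered keys are assumptions:
```python
def reduce_schema(schema: dict) -> dict:
    # Build a new dict instead of mutating the caller's schema in place.
    return {k: v for k, v in schema.items() if k not in ("title", "type")}

original = {"title": "T", "type": "object", "properties": {}}
reduced = reduce_schema(original)
assert "title" in original  # the original schema is untouched
```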
Users can provide an Elasticsearch connection with custom headers. This
PR makes sure these headers are preserved when adding the langchain
user-agent header (an illustrative sketch follows below).
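A minimal sketch of a client carrying custom headers; with this change they should survive the user-agent injection (the header name is illustrative):
```python
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "http://localhost:9200",
    headers={"X-My-Header": "1"},  # custom header preserved by the store
)
```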