**Description:**
This commit adds a vector store for the Postgres-based vector database
(`TimescaleVector`).
[Timescale Vector](https://www.timescale.com/ai) is PostgreSQL++ for AI
applications. It enables you to efficiently store and query billions of
vector embeddings in `PostgreSQL`:
- Enhances `pgvector` with faster and more accurate similarity search on
1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based
partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and
relational data.
Timescale Vector scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata,
vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade
features like streaming backups and replication, high availability, and
row-level security.
- Enables a worry-free experience with enterprise-grade security and
compliance.
Timescale Vector is available on Timescale, the cloud PostgreSQL
platform. (There is no self-hosted version at this time.) LangChain
users get a 90-day free trial for Timescale Vector.
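A minimal usage sketch follows; the service URL and collection name are placeholders, and the `from_texts` signature is assumed here to mirror other Postgres-backed vector stores:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import TimescaleVector

# Hypothetical service URL from the Timescale cloud console.
SERVICE_URL = "postgres://tsdbadmin:password@host.tsdb.cloud.timescale.com:5432/tsdb"

store = TimescaleVector.from_texts(
    texts=["hello world", "hello timescale"],
    embedding=OpenAIEmbeddings(),
    collection_name="my_collection",
    service_url=SERVICE_URL,
)
docs = store.similarity_search("hello", k=1)
```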
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Avthar Sewrathan <avthar@timescale.com>
- **Description:** This PR implements a new LLM API to
https://gradient.ai
- **Issue:** Feature request for LLM #10745
- **Dependencies**: No additional dependencies are introduced.
- **Tag maintainer:** I am opening this PR for visibility, once ready
for review I'll tag.
- ```make format && make lint && make test``` passes.
- Added an `integration` test and a `mock unit` test.
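A rough usage sketch of the new LLM; the class and parameter names are assumptions based on this PR, and the credentials shown inline are placeholders that would normally come from the environment:
```python
from langchain.llms import GradientLLM

llm = GradientLLM(
    model="my-base-model-id",              # hypothetical Gradient model id
    gradient_workspace_id="my-workspace",  # placeholder credentials
    gradient_access_token="my-token",
)
print(llm("What is Gradient AI?"))
```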
Co-authored-by: michaelfeil <me@michaelfeil.eu>
Co-authored-by: Bagatur <baskaryan@gmail.com>
We are introducing the Python integration for the Javelin AI Gateway
(www.getjavelin.io). Javelin is an enterprise-scale, fast LLM router and
gateway. Could you please review and let us know if there is anything
missing?
The Javelin AI Gateway wraps Embedding, Chat, and Completion LLMs, using
`javelin_sdk` under the covers (`pip install javelin_sdk`).
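A rough sketch of how the completion wrapper might be used; class and parameter names are assumptions based on this description, and the gateway URI and route are placeholders for your Javelin deployment:
```python
from langchain.llms import JavelinAIGateway

llm = JavelinAIGateway(
    gateway_uri="http://localhost:8000",  # placeholder Javelin gateway endpoint
    route="my-completions-route",         # placeholder route configured in Javelin
)
print(llm("Tell me a joke."))
```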
Author: Sharath Rajasekar, Twitter: @sharathr, @javelinai
Thanks!!
### Description
- Add support for streaming with `Bedrock` LLM and `BedrockChat` Chat
Model.
- Bedrock as of now supports streaming for the `anthropic.claude-*` and
`amazon.titan-*` models only, hence support for those has been built.
- Also increased the default `max_tokens_to_sample` for the Bedrock
`anthropic` model provider from `50` to `256` to keep in line with the
`Anthropic` defaults.
- Added examples for streaming responses to the Bedrock example
notebooks.
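A minimal sketch of the new streaming flow, assuming AWS credentials for Bedrock are already configured in the environment:
```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="anthropic.claude-v2",  # streaming supported for anthropic.* and amazon.titan-*
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Write a haiku about rivers.")  # tokens are printed as they arrive
```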
**_NOTE:_** This PR fixes the issues mentioned in #9897 and makes that
PR redundant.
- **Description:** Fixes `QianfanEndpoint` bugs with `SystemMessage`s:
when a `SystemMessage` is passed in the messages to
`chat_models.QianfanEndpoint`, a `TypeError` is raised.
- **Issue:** #10643
- **Dependencies:**
- **Tag maintainer:** @baskaryan
- **Twitter handle:** no
This PR addresses a limitation of Azure OpenAI embeddings, which can
handle at most 16 texts per batch. This can be solved by setting
`chunk_size=16`. However, I'd love to have this automated, so the user is
not forced to figure out where the issue comes from and how to solve it.
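For reference, the manual workaround looks roughly like this (the deployment name is a placeholder, and Azure endpoint/key configuration via environment variables is assumed); this PR makes the cap automatic:
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="my-azure-deployment",  # hypothetical Azure deployment name
    chunk_size=16,                     # stay within Azure's 16-texts-per-batch limit
)
vectors = embeddings.embed_documents(["text one", "text two"])
```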
Closes #4575.
@baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description:** Makes it possible to filter with substrings in
`similarity_search_with_score`, for example: `filter={'user_id':
{'substring': 'user'}}`
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description:**
Changed the return value of `YouTubeSearchTool`:
1. The returned YouTube video links now carry the prefix
"https://www.youtube.com", so the tool returns the exact links to the
videos.
2. The return type was updated from `string` to `list`, which is better
suited for further processing.
**Issue:**
Fixes #10742
**Dependencies:**
None
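A short sketch of the new behavior (the query is arbitrary):
```python
from langchain.tools import YouTubeSearchTool

tool = YouTubeSearchTool()
# The result is now a list of complete video URLs
# (e.g. "https://www.youtube.com/watch?v=..."), not a single string.
results = tool.run("lex fridman")
print(results)
```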
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description:** This PR adds HTTP PUT support for the LangChain OpenAPI
agent toolkit by leveraging the existing structure and the HTTP PUT
request wrapper. The PUT method is almost identical to HTTP POST, but it
is expected to be idempotent and is therefore stricter than POST, which
is not idempotent. Some APIs may prefer PUT over POST, which was
unfortunately not yet supported by the toolkit.
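A rough sketch of using the PUT tool directly; the input format is assumed here to mirror the existing POST tool, i.e. a JSON string with `url` and `data` keys:
```python
from langchain.requests import TextRequestsWrapper
from langchain.tools.requests.tool import RequestsPutTool

put_tool = RequestsPutTool(requests_wrapper=TextRequestsWrapper())
print(put_tool.run('{"url": "https://httpbin.org/put", "data": {"x": 1}}'))
```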
### Description
Implements synthetic data generation with the fields and preferences
given by the user. Adds showcase notebook.
A corresponding prompt was proposed for langchain-hub.
### Example
```python
output = chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}})
print(output)
# {'fields': {'colors': ['blue', 'yellow']},
#  'preferences': {'style': 'Make it in a style of a weather forecast.'},
#  'text': "Good morning! Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}
```
### Twitter handle
@deepsense_ai @matt_wosinski
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description:** upgrade the `dataclasses_json` dependency to its latest
version ([no real breaking
change](https://github.com/lidatong/dataclasses-json/releases/tag/v0.6.0)
if used correctly), while still allowing previous versions so as not to
break other users' setups
**Issue:** I need to use the latest version of that dependency in my
project, but `langchain` prevents it.
Note: it looks like running `poetry lock --no-update` made some changes
to the lockfile, as it was the first time it was run on the
`macosx_11_0_arm64` architecture 🤷
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description**
Adds a new output parser that enables LLM output in XML format. It seems
particularly useful together with Claude models.
Addresses [issue
9820](https://github.com/langchain-ai/langchain/issues/9820).
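A minimal sketch of the parser in isolation; the `tags` argument and the shape of the parsed output are assumptions based on this description:
```python
from langchain.output_parsers import XMLOutputParser

parser = XMLOutputParser(tags=["movies", "actor", "film"])  # `tags` assumed optional
print(parser.get_format_instructions())  # instructions to include in the prompt
result = parser.parse("<movies><actor>Tom Hanks</actor></movies>")
print(result)
```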
**Twitter handle**
@deepsense_ai @matt_wosinski
Usage sample:
```python
endpoint_url = "<your ChatGLM API URL>"
ChatGLM_llm = ChatGLM(
    endpoint_url=endpoint_url,
    api_key="<your ChatGLM API key>",
)
print(ChatGLM_llm("hello"))
```
```python
model = ChatChatGLM(
    chatglm_api_key="api_key",
    chatglm_api_base="api_base_url",
    model_name="model_name",
)
chain = LLMChain(llm=model)
```
Description: The ChatGLM call has been adapted.
Issue: The ChatGLM call has been adapted.
Dependencies: Needs the Python packages `zhipuai` and `aiostream`.
Tag maintainer: @baskaryan
Twitter handle: None
I removed the compatibility test for pydantic version 2, because pydantic
v2 cannot pickle classmethods, but `BaseModel` uses `@root_validator`,
which is a classmethod decorator.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
If a metadata field is returned in the results, the previous behavior is
unchanged. If no metadata field exists in the results, the metadata is
expanded to include any fields returned outside of the content field.
There's precedent for this as well; see the retriever:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/azure_cognitive_search.py#L96C46-L96C46
Issue:
#9765 - Ameliorates hard-coding in cases where you have already indexed
to Cognitive Search without a metadata field and instead placed metadata
in separate fields.
@hwchase17
## Description
This PR updates the `NeptuneGraph` class to start using the boto API for
connecting to the Neptune service. With boto integration, the graph
class now supports authenticating requests using Sigv4; this is
encapsulated with the boto API, and users only have to ensure they have
the correct AWS credentials set up in their workspace to work with the
graph class.
This PR also introduces a conditional prompt that uses a simpler prompt
when using the `Anthropic` model provider. A simpler prompt seemed to
work better for generating Cypher queries in our testing.
**Note**: This version will require boto3 version 1.28.38 or greater to
work.
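A minimal connection sketch; the cluster endpoint is a placeholder, and boto3 resolves AWS credentials and SigV4 signing from the environment:
```python
from langchain.graphs import NeptuneGraph

graph = NeptuneGraph(
    host="my-cluster.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com",  # placeholder
    port=8182,
)
print(graph.get_schema)  # schema assumed to be exposed as a property
```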
**Description:**
This commit enriches the `WeaviateHybridSearchRetriever` class by
introducing a new parameter, `hybrid_search_kwargs`, within the
`_get_relevant_documents` method. This parameter accommodates arbitrary
keyword arguments (`**kwargs`) which can be channeled to the inherited
public method, `get_relevant_documents`, originating from the
`BaseRetriever` class.
This modification facilitates more intricate querying capabilities,
allowing users to convey supplementary arguments to the `.with_hybrid()`
method. This expansion not only makes it possible to perform a more
nuanced search targeting specific properties but also grants the ability
to boost the weight of searched properties, to carry out a search with a
custom vector, and to apply the Fusion ranking method. The documentation
has been updated accordingly to delineate these new possibilities in
detail.
In light of the layered approach in which this search operates,
initiating with `query.get()` and then transitioning to
`.with_hybrid()`, several advantageous opportunities are unlocked for
the hybrid component that were previously unattainable.
Here’s a representative example showcasing a query structure that was
formerly unfeasible:
[Specific Properties
Only](https://weaviate.io/developers/weaviate/search/hybrid#selected-properties-only)
"The example below illustrates a BM25 search targeting the keyword
'food' exclusively within the 'question' property, integrated with
vector search results corresponding to 'food'."
```python
response = (
    client.query
    .get("JeopardyQuestion", ["question", "answer"])
    .with_hybrid(
        query="food",
        properties=["question"],  # Will now be possible moving forward
        alpha=0.25,
    )
    .with_limit(3)
    .do()
)
```
This functionality is now accessible through my alterations, by
conveying `hybrid_search_kwargs={"properties": ["question", "answer"]}`
as an argument to
`WeaviateHybridSearchRetriever.get_relevant_documents()`. For example:
```python
import os

from weaviate import Client

from langchain.retrievers import WeaviateHybridSearchRetriever

client = Client(
    url=os.getenv("WEAVIATE_CLIENT_URL"),
    additional_headers={
        "X-OpenAI-Api-Key": os.getenv("OPENAI_API_KEY"),
        "Authorization": f"Bearer {os.getenv('WEAVIATE_API_KEY')}",
    },
)

index_name = "Document"
text_key = "content"
attributes = ["title", "summary", "header", "url"]

retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name=index_name,
    text_key=text_key,
    attributes=attributes,
)

# Warning: to utilize properties in this way, each property used must also
# be in the list `attributes + [text_key]`.
hybrid_search_kwargs = {"properties": ["summary^2", "content"]}
query_text = "Some Query Text"

relevant_docs = retriever.get_relevant_documents(
    query=query_text,
    hybrid_search_kwargs=hybrid_search_kwargs,
)
```
In my experience working with the `weaviate-client` library, I have
found that these supplementary options stand as vital tools for
refining/fine-tuning searches, notably within multifaceted datasets. As a
final note, this implementation supports both backward and forward
(within reason) compatibility. It accommodates any future additional
parameters Weaviate may add to `.with_hybrid()`, without necessitating
further alterations.
**Additional Documentation:**
For a more comprehensive understanding and to explore a myriad of useful
options that are now accessible, please refer to the Weaviate
documentation:
- [Fusion Ranking
Method](https://weaviate.io/developers/weaviate/search/hybrid#fusion-ranking-method)
- [Selected Properties
Only](https://weaviate.io/developers/weaviate/search/hybrid#selected-properties-only)
- [Weight Boost Searched
Properties](https://weaviate.io/developers/weaviate/search/hybrid#weight-boost-searched-properties)
- [With a Custom
Vector](https://weaviate.io/developers/weaviate/search/hybrid#with-a-custom-vector)
**Tag Maintainer:**
@hwchase17 - I have tagged you based on your frequent contributions to
the pertinent file, `/retrievers/weaviate_hybrid_search.py`. My
apologies if this was not the appropriate choice.
Thank you for considering my contribution, I look forward to your
feedback, and to future collaboration.
I was trying to use web loaders on some Spanish documentation (e.g.
[this site](https://www.fromdoppler.com/es/mailing-tendencias/)), but the
auto-encoding introduced in
https://github.com/langchain-ai/langchain/pull/3602 detected the encoding
as "MacRoman" instead of the (correct) "UTF-8".
To address this, I've added the ability to disable the auto-encoding, as
well as the ability to explicitly tell the loader what encoding to use.
- **Description:** Makes auto-setting the encoding optional in
`WebBaseLoader`, and introduces an `encoding` option to explicitly set
it.
- **Dependencies:** N/A
- **Tag maintainer:** @hwchase17
- **Twitter handle:** @czue
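A sketch of the new options; the `autoset_encoding` parameter name is my reading of "makes auto-setting the encoding optional" and should be treated as an assumption:
```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    "https://www.fromdoppler.com/es/mailing-tendencias/",
    encoding="utf-8",        # explicitly set the encoding...
    autoset_encoding=False,  # ...and disable auto-detection (name assumed)
)
docs = loader.load()
```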
**Description:**
Pinecone hybrid search is currently limited to the default namespace;
there is no option for the user to provide a namespace to partition an
index, which is one of Pinecone's most important features.
**Resource:**
https://docs.pinecone.io/docs/namespaces
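A sketch of what namespace support would look like on the retriever; the `namespace` parameter is the addition this PR motivates, and the Pinecone setup values are placeholders:
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever
from pinecone_text.sparse import BM25Encoder

pinecone.init(api_key="...", environment="us-east-1-aws")  # placeholders
index = pinecone.Index("my-hybrid-index")

retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),
    sparse_encoder=BM25Encoder.default(),
    index=index,
    namespace="my-namespace",  # partition queries to a single namespace
)
docs = retriever.get_relevant_documents("a query scoped to one namespace")
```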
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- **Description:** Updates the URL in the Context callback docstrings and
updates the metadata key the Context `CallbackHandler` uses to send model
names.
- **Issue:** The URL in `ContextCallbackHandler` is out of date. Model
data sent to Context should be under the "model" key, not
"llm_model". This allows Context to do more sophisticated analysis.
- **Dependencies:** None
Tagging @agamble.
- This PR adds `llm_kwargs` to the initialization of Xinference LLMs
(integrated in #8171).
- With this enhancement, users can provide `generate_configs` not only
when calling the LLMs for generation but also during initialization.
This allows users to include custom configurations when utilizing
LangChain features like `LLMChain`.
- It also fixes some formatting issues in the docstrings.
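A rough sketch; the server URL and model uid are placeholders, and extra keyword arguments at init are assumed to be captured as `llm_kwargs` and forwarded as generate configs:
```python
from langchain.chains import LLMChain
from langchain.llms import Xinference
from langchain.prompts import PromptTemplate

llm = Xinference(
    server_url="http://0.0.0.0:9997",
    model_uid="my-model-uid",  # placeholder uid from `xinference launch`
    temperature=0.7,           # now accepted at init and used for generation
    max_tokens=128,
)
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Q: {q}\nA:"))
print(chain.run(q="What is Xinference?"))
```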
Hello @hwchase17
**Issue**:
The `WebResearchRetriever` class accepts only
`RecursiveCharacterTextSplitter`, but never uses anything specific to
this class. I propose changing the type to `TextSplitter`; then the
linter can accept all subtypes.
- Tools invoked in async methods would not work due to a missing `await`.
- `RunnableSequence.stream()` was creating an extra root run by mistake,
and it can be simplified thanks to the existence of a default
implementation for `.transform()`.
**Description:** Renamed the `database` argument in
`SQLDatabaseSequentialChain.from_llm()` to `db`.
I realize it's tiny and a bit of a nitpick, but for consistency with
`SQLDatabaseChain` (and all the others, actually) I thought it should be
renamed. It also tripped me up while working with it today.
✔️ `make format`, `make lint`, and `make test` pass locally.
This PR is a documentation fix.
Description:
* fixes imports in the code samples in the docstrings of
`create_openai_fn_chain` and `create_structured_output_chain`
* fixes imports in
`docs/extras/modules/chains/how_to/openai_functions.ipynb`
* removes unused imports from the notebook
Issues:
* the docstrings use `from pydantic_v1 import BaseModel, Field` which
this PR changes to `from langchain.pydantic_v1 import BaseModel, Field`
* importing `pydantic` instead of `langchain.pydantic_v1` leads to
errors later in the notebook
- Description: Added support for Ollama embeddings.
- Dependencies: N/A
- Twitter handle: @herrjemand
cc https://github.com/jmorganca/ollama/issues/436
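A minimal sketch, assuming a local Ollama server on the default port with the model already pulled:
```python
from langchain.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama2")
vector = embeddings.embed_query("Hello Ollama!")
```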
Hello,
this PR improves caching coverage for the two Cassandra-related
caches (i.e. exact-match and semantic alike) by switching to the more
general `dumps`/`loads` serdes utilities.
This enables cache usage within e.g. `ChatOpenAI` contexts (which need
to store lists of `ChatGeneration` instead of `Generation`s), which was
not possible as long as the cache classes were relying on the legacy
`_dump_generations_to_json` and `_load_generations_from_json`.
Additionally, a slightly different init signature is introduced for the
cache objects:
- named parameters required for init, to pave the way for easier changes
in the future connect-to-db flow (and tests adjusted accordingly)
- added a `skip_provisioning` optional passthrough parameter for use
cases where the user knows the underlying DB table, etc. already exist.
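A sketch of the new init signature; the contact point and keyspace are placeholders:
```python
import langchain
from cassandra.cluster import Cluster
from langchain.cache import CassandraCache

session = Cluster(["127.0.0.1"]).connect()  # placeholder contact point
langchain.llm_cache = CassandraCache(
    session=session,        # named parameters are now required
    keyspace="my_keyspace",
)
```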
Thank you for a review!
Adding support for the Neo4j vector index hybrid search option. In Neo4j,
you can achieve hybrid search by using a combination of vector and
fulltext indexes.
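A minimal sketch of the new option; connection details are placeholders:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Neo4jVector

store = Neo4jVector.from_texts(
    ["hybrid search mixes vector and keyword scores"],
    OpenAIEmbeddings(),
    url="bolt://localhost:7687",  # placeholder connection details
    username="neo4j",
    password="password",
    search_type="hybrid",  # combine the vector index with a fulltext index
)
docs = store.similarity_search("keyword or vector?")
```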
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description:
* Baidu AI Cloud's [Qianfan
Platform](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html) is an
all-in-one platform for large model development and service deployment,
catering to enterprise developers in China. Qianfan Platform offers a
wide range of resources, including the Wenxin Yiyan model (ERNIE-Bot)
and various third-party open-source models.
- Issue: none
- Dependencies:
* qianfan
- Tag maintainer: @baskaryan
- Twitter handle:
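A rough usage sketch; the class name and credential variable names are assumptions based on this integration, and the keys come from the Qianfan console:
```python
import os

from langchain.chat_models import QianfanChatEndpoint

os.environ["QIANFAN_AK"] = "your-api-key"     # placeholder credentials
os.environ["QIANFAN_SK"] = "your-secret-key"

chat = QianfanChatEndpoint(model="ERNIE-Bot")
print(chat.predict("hello"))
```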
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
`langchain.agents.openai_functions[_multi]_agent._parse_ai_message()`
incorrectly extracts the AI message content, so the LLM response
("thoughts") is lost and can't be logged or processed by callbacks.
This PR fixes retrieval of the function-call message content.
- Description: Sets up a `file_headers` param for accessing PDF file URLs
- Tag maintainer: @hwchase17
✅ `make format`, `make lint`, `make test`
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>