Enviroment -> Environment
- **Description:** A Document Loader for MongoDB
- **Issue:** n/a
- **Dependencies:** Motor, the async driver for MongoDB
- **Tag maintainer:** n/a
- **Twitter handle:** pigpenblue
Note that an initial MongoDB document loader was created 4 months ago,
but the [PR](https://github.com/langchain-ai/langchain/pull/4285) was
never pulled in. @leo-gan had commented on that PR, but given it is
extremely far behind the master branch and a ton has changed in
Langchain since then (including repo name and structure), I rewrote the
branch and issued a new PR with the expectation that the old one can be
closed.
Please reference that old PR for comments/context, but it can be closed
in favor of this one. Thanks!
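For reference, here is a minimal usage sketch of the loader; the class and parameter names below are assumptions for illustration and may differ from the final implementation:

```python
from langchain.document_loaders.mongodb import MongodbLoader  # assumed import path

loader = MongodbLoader(
    connection_string="mongodb://localhost:27017/",
    db_name="sample_restaurants",
    collection_name="restaurants",
)
docs = loader.load()  # Motor executes the underlying query asynchronously
print(docs[0].page_content)
```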
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Based on customers' requests for native LangChain integration, SearchApi
is ready to invest in the AI and LLM space, especially in open-source
development.
- This is our initial PR, and we want to improve it later based on
customers' and LangChain users' feedback. Most likely, changes will
affect how the final results string is built.
- We are creating similar native integrations in Python and JavaScript.
- The next plan is to integrate into Java, Ruby, Go, and others.
- Feel free to assign @SebastjanPrachovskij as a main reviewer for any
SearchApi-related searches. We will be glad to help and support
langchain development.
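A minimal usage sketch, assuming the wrapper is exposed as `SearchApiAPIWrapper` and reads its key from the `SEARCHAPI_API_KEY` environment variable:

```python
import os

from langchain.utilities import SearchApiAPIWrapper

os.environ["SEARCHAPI_API_KEY"] = "your-api-key"
search = SearchApiAPIWrapper()
print(search.run("What is the capital of Lithuania?"))  # returns the final results string
```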
## Description
Expanded the upper bound for `networkx` dependency to allow installation
of latest stable version. Tested the included sample notebook with
version 3.1, and all steps ran successfully.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description:** Bedrock updated boto service name to
"bedrock-runtime" for the InvokeModel and InvokeModelWithResponseStream
APIs. This update also includes new model identifiers for Titan text,
embedding and Anthropic.
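For illustration, a sketch of calling the renamed service directly with boto3; the model identifier below is only an example:

```python
import json

import boto3

# "bedrock-runtime" is the updated service name for the invocation APIs
client = boto3.client("bedrock-runtime", region_name="us-east-1")
response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example of a newer Titan text identifier
    body=json.dumps({"inputText": "Hello, Titan!"}),
)
print(json.loads(response["body"].read()))
```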
Co-authored-by: Mani Kumar Adari <maniadar@amazon.com>
The new Fireworks and FireworksChat implementations are awesome! They
were added in this PR: https://github.com/langchain-ai/langchain/pull/11117.
Thank you @ZixinYang!
However, I think stop words were not plumbed correctly. I've made some
simple changes to do that, and also updated the notebook to be a bit
clearer about what's needed to use both new models.
---------
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
The intermediate steps example in the docs shows how to retrieve and
display the intermediate steps.
But the intermediate steps object is of type AgentAction, which cannot
be passed to json.dumps (it raises an error).
I replaced it with LangChain's dumps function (from langchain.load.dump
import dumps), which is the preferred way to do so.
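For example, a small sketch of the preferred serialization:

```python
from langchain.load.dump import dumps
from langchain.schema import AgentAction

# json.dumps raises a TypeError on AgentAction; dumps handles LangChain objects
step = AgentAction(tool="search", tool_input="weather in SF", log="...")
print(dumps(step, pretty=True))
```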
Description
* Refactor Fireworks within Langchain LLMs.
* Remove FireworksChat within Langchain LLMs.
* Add ChatFireworks (which uses chat completion api) to Langchain chat
models.
* Users have to install `fireworks-ai` and register an API key to use
the API.
Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin @baskaryan
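A minimal usage sketch; the model name and parameter names here are assumptions, so see the notebook for the canonical example:

```python
import os

from langchain.chat_models.fireworks import ChatFireworks
from langchain.schema import HumanMessage

os.environ["FIREWORKS_API_KEY"] = "your-api-key"  # register at fireworks.ai
chat = ChatFireworks(model="accounts/fireworks/models/llama-v2-13b-chat")
print(chat([HumanMessage(content="Hello!")]))
```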
This enables bulk args like `chunk_size` to be passed down from the
ingest methods (`from_texts`, `from_documents`) to the bulk API.
This helps alleviate issues where bulk importing a large amount of
documents into Elasticsearch was resulting in a timeout.
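For illustration, a sketch of how this could look from the caller's side (the `bulk_kwargs` name is an assumption):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores.elasticsearch import ElasticsearchStore

docs = [Document(page_content=f"document {i}") for i in range(10_000)]
db = ElasticsearchStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="my-index",
    # smaller chunks per bulk request help avoid timeouts on large ingests
    bulk_kwargs={"chunk_size": 50},
)
```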
Contribution Shoutout
- @elastic
- [x] Updated Integration tests
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Fixed navbar:
- renamed several files, so ToC is sorted correctly
- made ToC items consistent: formatted several titles
- added several links
- reformatted several docs to a consistent format
- renamed several files (removed `_example` suffix)
- added renamed files to the `docs/docs_skeleton/vercel.json`
Sometimes you don't want the LLM to be aware of the whole graph schema,
and want it to ignore parts of the graph when it is constructing Cypher
statements.
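A rough sketch of the idea; the `exclude_types` parameter name is an assumption here:

```python
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    graph=graph,
    exclude_types=["Genre"],  # schema elements the LLM should not see
)
chain.run("Which actors played in Top Gun?")
```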
- **Description**: Adding retrievers for [kay.ai](https://kay.ai) and
SEC filings powered by Kay and Cybersyn. Kay provides context as a
service: it's an API built for RAG.
- **Issue**: N/A
- **Dependencies**: Just added a dep to the
[kay](https://pypi.org/project/kay/) package
- **Tag maintainer**: @baskaryan @hwchase17 Discussed in slack
- **Twitter handle:** [@vishalrohra_](https://twitter.com/vishalrohra_)
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The huggingface pipeline in langchain (used for locally hosted models)
does not support batching. If you send in a batch of prompts, it just
processes them serially using the base implementation of _generate:
https://github.com/docugami/langchain/blob/master/libs/langchain/langchain/llms/base.py#L1004C2-L1004C29
This PR adds support for batching in this pipeline, so that GPUs can be
fully saturated. I updated the accompanying notebook to show GPU batch
inference.
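A sketch of batched inference, assuming the new option is exposed as `batch_size` on `from_model_id`:

```python
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device=0,      # run on the first GPU
    batch_size=4,  # prompts processed per forward pass instead of serially
)
prompts = ["Question 1: ...", "Question 2: ...", "Question 3: ...", "Question 4: ..."]
result = llm.generate(prompts)  # now saturates the GPU in batches
```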
---------
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
This PR aims at showcasing how to use vLLM's OpenAI-compatible chat API.
### Context
LangChain already supports vLLM and its OpenAI-compatible `Completion`
API. However, the `ChatCompletion` API was not aligned with OpenAI and
for this reason I've waited for this
[PR](https://github.com/vllm-project/vllm/pull/852) to be merged before
adding this notebook to langchain.
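In short, the notebook points LangChain's OpenAI chat client at a running vLLM server, along these lines:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# vLLM serves an OpenAI-compatible API; the key is unused but required by the client
chat = ChatOpenAI(
    model="meta-llama/Llama-2-7b-chat-hf",
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
)
print(chat([HumanMessage(content="Hello!")]))
```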
LLMRails Embedding Integration
This PR provides integration with LLMRails. Implemented here are:
- langchain/embeddings/llm_rails.py
- docs/extras/integrations/text_embedding/llm_rails.ipynb
Hi @hwchase17, after adding our vector store integration to langchain
with confirmation from you and @baskaryan, we now want to add our
embedding integration.
---------
Co-authored-by: Anar Aliyev <aaliyev@mgmt.cloudnet.services>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Adds support for gradient.ai's embedding model.
This will remain a Draft, as the code will likely be refactored to use
the `pip install gradientai` Python SDK.
- chat vertex async
- vertex stream
- vertex full generation info
- vertex use server-side stopping
- model garden async
- update docs for all the above
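A quick sketch of the new chat streaming, assuming Google Cloud credentials are configured:

```python
from langchain.chat_models import ChatVertexAI

chat = ChatVertexAI()
# chunks arrive incrementally instead of waiting for the full completion
for chunk in chat.stream("Write a haiku about the ocean."):
    print(chunk.content, end="", flush=True)
```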
In a follow-up will add:
- [ ] chat vertex full generation info
- [ ] chat vertex retries
- [ ] scheduled tests
This adds a section on usage of `CassandraCache` and
`CassandraSemanticCache` to the doc notebook about caching LLMs, as
suggested in [this
comment](https://github.com/langchain-ai/langchain/pull/9772/#issuecomment-1710544100)
on a previous merged PR.
I also spotted what looks like a mismatch between different executions
and propose a fix (line 98).
Being the result of several runs, the cell execution numbers are
scrambled somewhat, so I volunteer to refine this PR by (manually)
re-numbering the cells to restore the appearance of a single, smooth
run (for the sake of orderly execution :)
**Description:**
This commit adds a vector store for the Postgres-based vector database
(`TimescaleVector`).
[Timescale Vector](https://www.timescale.com/ai) is PostgreSQL++ for AI
applications. It enables you to efficiently store and query billions of
vector embeddings in `PostgreSQL`:
- Enhances `pgvector` with faster and more accurate similarity search on
1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based
partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and
relational data.
Timescale Vector scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata,
vector embeddings, and time-series data in a single database.
- Benefits from the rock-solid PostgreSQL foundation, with
enterprise-grade features like streaming backups and replication, high
availability, and row-level security.
- Enables a worry-free experience with enterprise-grade security and
compliance.
Timescale Vector is available on Timescale, the cloud PostgreSQL
platform. (There is no self-hosted version at this time.) LangChain
users get a 90-day free trial for Timescale Vector.
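A minimal usage sketch; parameter names such as `service_url` and `collection_name` are assumptions for illustration:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores.timescalevector import TimescaleVector

SERVICE_URL = "postgres://user:password@host:5432/tsdb"  # from the Timescale console
docs = [Document(page_content="TimescaleDB is PostgreSQL for time series.")]
db = TimescaleVector.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    service_url=SERVICE_URL,
    collection_name="my_documents",
)
print(db.similarity_search("What is TimescaleDB?", k=1))
```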
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Avthar Sewrathan <avthar@timescale.com>
- **Description:** This PR implements a new LLM API to
https://gradient.ai
- **Issue:** Feature request for LLM #10745
- **Dependencies**: No additional dependencies are introduced.
- **Tag maintainer:** I am opening this PR for visibility; once it's
ready for review, I'll tag.
- `make format && make lint && make test` is passing.
- Added an `integration` test and a `mock unit` test.
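A hypothetical usage sketch; the class, parameter, and environment variable names are all assumptions pending the final API:

```python
import os

from langchain.llms import GradientLLM  # assumed import path

os.environ["GRADIENT_ACCESS_TOKEN"] = "your-token"
os.environ["GRADIENT_WORKSPACE_ID"] = "your-workspace-id"
llm = GradientLLM(model="your-model-id")  # hypothetical constructor
print(llm("What is https://gradient.ai good for?"))
```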
Co-authored-by: michaelfeil <me@michaelfeil.eu>
Co-authored-by: Bagatur <baskaryan@gmail.com>
We are introducing the Python integration to Javelin AI Gateway
(www.getjavelin.io). Javelin is an enterprise-scale, fast LLM router and
gateway. Could you please review and let us know if anything is missing?
Javelin AI Gateway wraps Embedding, Chat, and Completion LLMs. It uses
javelin_sdk under the covers (`pip install javelin_sdk`).
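A brief sketch of what usage could look like; the constructor argument names are assumptions:

```python
from langchain.llms import JavelinAIGateway  # assumed import path

llm = JavelinAIGateway(
    gateway_uri="http://localhost:8000",  # your Javelin gateway
    route="my-completion-route",          # a route configured in the gateway
)
print(llm("What is the capital of France?"))
```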
Author: Sharath Rajasekar, Twitter: @sharathr, @javelinai
Thanks!!
### Description
- Add support for streaming with `Bedrock` LLM and `BedrockChat` Chat
Model.
- Bedrock as of now supports streaming for the `anthropic.claude-*` and
`amazon.titan-*` models only, hence support for those has been built.
- Also increased the default `max_tokens_to_sample` for the Bedrock
`anthropic` model provider to `256` from `50` to keep in line with the
`Anthropic` defaults.
- Added examples for streaming responses to the bedrock example
notebooks.
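A short sketch of streaming with one of the supported providers:

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="anthropic.claude-v2",  # streaming works for anthropic.claude-* and amazon.titan-*
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Tell me a short story about a brave llama.")
```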
**_NOTE:_** This PR fixes the issues mentioned in #9897 and makes that
PR redundant.
- **Description:** Fixes a QianfanEndpoint bug with `SystemMessage`s:
when a `SystemMessage` is included in the messages passed to
`chat_models.QianfanEndpoint`, a `TypeError` is raised.
- **Issue:** #10643
- **Dependencies:**
- **Tag maintainer:** @baskaryan
- **Twitter handle:** no
### Description
Implements synthetic data generation with the fields and preferences
given by the user. Adds a showcase notebook.
A corresponding prompt was proposed for langchain-hub.
### Example
```python
output = chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}})
print(output)
# {'fields': {'colors': ['blue', 'yellow']},
#  'preferences': {'style': 'Make it in a style of a weather forecast.'},
#  'text': "Good morning! Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}
```
### Twitter handle
@deepsense_ai @matt_wosinski
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description**
Adds a new output parser, this time enabling LLM output in XML format.
It seems particularly useful with Claude models.
Addresses [issue
9820](https://github.com/langchain-ai/langchain/issues/9820).
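A minimal sketch, assuming the parser is exposed as `XMLOutputParser` with an optional `tags` argument:

```python
from langchain.output_parsers import XMLOutputParser

parser = XMLOutputParser(tags=["movies", "actor", "film"])
print(parser.get_format_instructions())  # include these instructions in the prompt
result = parser.parse("<movies><actor>Tom Hanks</actor></movies>")
print(result)
```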
**Twitter handle**
@deepsense_ai @matt_wosinski
- **Description:** Added integration instructions for Remembrall.
- **Tag maintainer:** @hwchase17
- **Twitter handle:** @raunakdoesdev
Fun fact, this project originated at the Modal Hackathon in NYC where it
won the Best LLM App prize sponsored by Langchain. Thanks for your
support 🦜
~~Because we can't pass extra parameters into a prompt, we have to
prepend a function before the runnable calls in the branch and it's a
bit less elegant than I'd like.~~
All good now that #10765 has landed!
@eyurtsev @hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- This PR adds `llm_kwargs` to the initialization of Xinference LLMs
(integrated in #8171).
- With this enhancement, users can provide `generate_configs` not only
when calling the LLMs for generation but also during initialization.
This allows users to include custom configurations when utilizing
LangChain features like LLMChain.
- It also fixes some formatting issues in the docstrings.
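A sketch of the enhancement; exactly how extra kwargs are collected into `llm_kwargs` is an assumption here:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",
    model_uid="my-model-uid",  # uid returned when launching the model
    # generation options supplied at init instead of on every generate() call
    temperature=0.7,
    max_tokens=256,
)
```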
This PR is a documentation fix.
Description:
* fixes imports in the code samples in the docstrings of
`create_openai_fn_chain` and `create_structured_output_chain`
* fixes imports in
`docs/extras/modules/chains/how_to/openai_functions.ipynb`
* removes unused imports from the notebook
Issues:
* the docstrings use `from pydantic_v1 import BaseModel, Field` which
this PR changes to `from langchain.pydantic_v1 import BaseModel, Field`
* importing `pydantic` instead of `langchain.pydantic_v1` leads to
errors later in the notebook
Description: This PR changes the import section of the
`PydanticOutputParser` notebook.
* Import from `langchain.pydantic_v1` instead of `pydantic`
* Remove unused imports
Issue: running the notebook as written, when pydantic v2 is installed,
results in the following:
```python
PydanticDeprecatedSince20: Pydantic V1 style `@validator` validators are deprecated. You should migrate to Pydantic V2 style `@field_validator` validators, see the migration guide for more details. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.3/migration/
```
[...]
```python
PydanticUserError: The `field` and `config` parameters are not available in Pydantic V2, please use the `info` parameter instead.
For further information visit https://errors.pydantic.dev/2.3/u/validator-field-config-info
```
**Description:**
I've added a new use-case to the Web scraping docs. I also fixed some
typos in the existing text.
---------
Co-authored-by: davidjohnbarton <41335923+davidjohnbarton@users.noreply.github.com>
- Description: Added support for Ollama embeddings
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: N/A
- Twitter handle: @herrjemand
cc https://github.com/jmorganca/ollama/issues/436
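A minimal usage sketch, assuming a local Ollama server on the default port:

```python
from langchain.embeddings import OllamaEmbeddings

emb = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")
query_vector = emb.embed_query("What is the meaning of life?")
doc_vectors = emb.embed_documents(["Document one", "Document two"])
```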
Adding support for Neo4j vector index hybrid search option. In Neo4j,
you can achieve hybrid search by using a combination of vector and
fulltext indexes.
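A short sketch; the `search_type="hybrid"` option name is an assumption for illustration:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import Neo4jVector

docs = [Document(page_content="Neo4j supports vector and fulltext indexes.")]
db = Neo4jVector.from_documents(
    docs,
    OpenAIEmbeddings(),
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    search_type="hybrid",  # combine vector similarity with fulltext matching
)
```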
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description:
* Baidu AI Cloud's [Qianfan
Platform](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html) is an
all-in-one platform for large model development and service deployment,
catering to enterprise developers in China. Qianfan Platform offers a
wide range of resources, including the Wenxin Yiyan model (ERNIE-Bot)
and various third-party open-source models.
- Issue: none
- Dependencies:
* qianfan
- Tag maintainer: @baskaryan
- Twitter handle:
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The [`self-querying`
navbar](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
has `self-querying` repeated in each menu item. I've simplified it to be
more readable:
- removed `self-querying` from the title of each page;
- added descriptions to the vector stores;
- added a description and link to the Integration Card
(`integrations/providers`) of the vector stores where they were missing.
This PR addresses a few minor issues with the Cassandra vector store
implementation and extends the store to support Metadata search.
Thanks to the latest cassIO library (>=0.1.0), metadata filtering is
available in the store.
Further,
- the "relevance" score is prevented from being flipped in the [0,1]
interval, thus ensuring that 1 corresponds to the closest vector (this
is related to how the underlying cassIO class returns the cosine
difference);
- bumped the cassIO package version both in the notebooks and the
pyproject.toml;
- adjusted the textfile location for the vector-store example after the
reshuffling of the Langchain repo dir structure;
- added demonstration of metadata filtering in the Cassandra vector
store notebook (see the sketch after this list);
- better docstring for the Cassandra vector store class;
- fixed test flakiness and removed offending out-of-place escape chars
from a test module docstring;
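The metadata filtering demonstrated in the notebook looks roughly like this (session/keyspace setup not shown):

```python
from langchain.vectorstores import Cassandra

vstore = Cassandra(
    embedding=embeddings,  # an existing Embeddings instance
    session=session,       # an open cassandra-driver session
    keyspace=keyspace,
    table_name="my_table",
)
results = vstore.similarity_search(
    "What did the president say?",
    k=3,
    filter={"source": "state_of_the_union.txt"},  # metadata filter via cassIO >= 0.1.0
)
```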
To my knowledge all relevant tests pass and mypy+black+ruff don't
complain. (mypy gives unrelated errors in other modules, which clearly
don't depend on the content of this PR).
Thank you!
Stefano
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
* More clarity around how geometry is handled. Not returned by default;
when returned, stored in metadata. This is because it's usually a waste
of tokens, but it should be accessible if needed.
* User can supply layer description to avoid errors when layer
properties are inaccessible due to passthrough access.
* Enhanced testing
* Updated notebook
---------
Co-authored-by: Connor Sutton <connor.sutton@swca.com>
Co-authored-by: connorsutton <135151649+connorsutton@users.noreply.github.com>