There are several pages in `integrations/providers/more` that belong to the Google and AWS `integrations/providers` pages.
- moved content of these pages into the Google and AWS
`integrations/providers` pages
- removed these individual pages
- **Description:** Doc corrections and a fix for a notebook rendering issue on GitHub
- **Issue:** N/A
- **Dependencies:** N/A
- **Tag maintainer:** @baskaryan
- **Twitter handle:** `@isaacchung1217`
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
### Description
Renamed several repository links from `hwchase17` to `langchain-ai`.
### Why
I discovered that the README file in the devcontainer contains the old
repository name, so I took the opportunity to update the old name in all
files within the repository, excluding those that do not require changes.
### Dependencies
none
### Tag maintainer
@baskaryan
### Twitter handle
[kzk_maeda](https://twitter.com/kzk_maeda)
- updated titles and descriptions of the `integrations/memory` notebooks
to a consistent and concise format;
- removed
`docs/extras/integrations/memory/motorhead_memory_managed.ipynb` file as
a duplicate of the
`docs/extras/integrations/memory/motorhead_memory.ipynb`;
- added `integrations/providers` Integration Cards for `dynamodb`,
`motorhead`.
- updated `integrations/providers/redis.mdx` with links
- renamed several notebooks; updated `vercel.json` to reroute new names.
Based on customers' requests for a native LangChain integration,
SearchApi is ready to invest in the AI and LLM space, especially in
open-source development.
- This is our initial PR, and we want to improve it later based on
feedback from customers and LangChain users. Most likely the changes
will affect how the final results string is built.
- We are creating similar native integration in Python and JavaScript.
- The next plan is to integrate into Java, Ruby, Go, and others.
- Feel free to assign @SebastjanPrachovskij as a main reviewer for any
SearchApi-related searches. We will be glad to help and support
langchain development.
This enables bulk args like `chunk_size` to be passed down from the
ingest methods (`from_texts`, `from_documents`) to the bulk API.
This helps alleviate issues where bulk importing a large amount of
documents into Elasticsearch was resulting in a timeout.
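A minimal sketch of the intent, assuming the ingest methods forward a `bulk_kwargs` dict to the Elasticsearch bulk helper (the `bulk_kwargs` parameter name and the placeholder embeddings are assumptions for illustration):

```python
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores.elasticsearch import ElasticsearchStore

texts = ["some long document"] * 10_000

# Assumption: bulk_kwargs is forwarded to the Elasticsearch bulk API, so a
# smaller chunk_size keeps each bulk request small enough to avoid timeouts.
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(size=1536),
    es_url="http://localhost:9200",
    index_name="sample-index",
    bulk_kwargs={"chunk_size": 50},
)
```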
Contribution Shoutout
- @elastic
- [x] Updated Integration tests
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
We are introducing the Python integration for Javelin AI Gateway
(www.getjavelin.io). Javelin is an enterprise-scale, fast LLM router &
gateway. Could you please review and let us know if there is anything
missing?
Javelin AI Gateway wraps Embedding, Chat and Completion LLMs. Uses
javelin_sdk under the covers (pip install javelin_sdk).
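A rough usage sketch; the class name `JavelinAIGateway`, its import path, and the `gateway_uri`/`route` parameters are assumptions for illustration rather than the confirmed API:

```python
# Hypothetical sketch: names below are assumptions, not the confirmed API.
from langchain.llms import JavelinAIGateway

llm = JavelinAIGateway(
    gateway_uri="http://localhost:8000",   # assumed local Javelin gateway endpoint
    route="my-completion-route",           # assumed route configured in the gateway
)
print(llm("What is the capital of France?"))
```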
Author: Sharath Rajasekar, Twitter: @sharathr, @javelinai
Thanks!!
- This PR adds `llm_kwargs` to the initialization of Xinference LLMs
(integrated in #8171 ).
- With this enhancement, users can provide `generate_configs` not only
when calling the LLMs for generation but also during initialization.
This allows users to include custom configurations when utilizing
LangChain features like LLMChain (see the sketch below).
- It also fixes some formatting issues in the docstrings.
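A minimal sketch of the new initialization path, assuming the existing `server_url`/`model_uid` parameters and that `llm_kwargs` accepts the same keys as `generate_configs`:

```python
from langchain.llms import Xinference

# Assumption: llm_kwargs supplied at init is applied to every subsequent
# generation call, e.g. when the LLM is used inside an LLMChain.
llm = Xinference(
    server_url="http://0.0.0.0:9997",
    model_uid="my-model-uid",                            # placeholder UID
    llm_kwargs={"temperature": 0.3, "max_tokens": 256},
)
```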
The [`self-quering`
navbar](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
has `self-quering` repeated in each menu item. I've simplified it to be
more readable:
- removed `self-quering` from the title of each page;
- added descriptions to the vector stores;
- added a description and link to the Integration Card
(`integrations/providers`) of the vector stores where they were missing.
_Thank you to the LangChain team for the great project and in advance
for your review. Let me know if I can provide any other additional
information or do things differently in the future to make your lives
easier 🙏 _
@hwchase17 please let me know if you're not the right person to review 😄
This PR enables LangChain to access the Konko API via the chat_models
API wrapper.
Konko API is a fully managed API designed to help application
developers:
1. Select the right LLM(s) for their application
2. Prototype with various open-source and proprietary LLMs
3. Move to production in line with their security, privacy, throughput,
and latency SLAs without infrastructure set-up or administration, using
Konko AI's SOC 2 compliant infrastructure
_Note on integration tests:_
We added 14 integration tests. They will all fail unless you export the
right API keys. 13 will pass with a KONKO_API_KEY provided and the other
one will pass with an OPENAI_API_KEY provided. When both are provided,
all 14 integration tests pass. If you would like to test this yourself,
please let me know and I can provide some temporary keys.
### Installation and Setup
1. **First, you'll need an API key**
2. **Install Konko AI's Python SDK**
   1. Enable a Python 3.8+ environment
   2. `pip install konko`
3. **Set API Keys**

   **Option 1:** Set Environment Variables

   You can set environment variables for
   1. KONKO_API_KEY (Required)
   2. OPENAI_API_KEY (Optional)

   In your current shell session, use the export command:
   `export KONKO_API_KEY={your_KONKO_API_KEY_here}`
   `export OPENAI_API_KEY={your_OPENAI_API_KEY_here} # Optional`

   Alternatively, you can add the above lines directly to your shell
   startup script (such as .bashrc or .bash_profile for Bash and .zshrc
   for Zsh) to have them set automatically every time a new shell session
   starts.
**Option 2:** Set API Keys Programmatically
If you prefer to set your API keys directly within your Python script or
Jupyter notebook, you can use the following commands:
```python
import konko

konko.set_api_key('your_KONKO_API_KEY_here')
konko.set_openai_api_key('your_OPENAI_API_KEY_here')  # Optional
```
### Calling a model
Find a model on the [Konko Introduction
page](https://docs.konko.ai/docs#available-models).
For example, for this [LLama 2
model](https://docs.konko.ai/docs/meta-llama-2-13b-chat) the model id
would be: `"meta-llama/Llama-2-13b-chat-hf"`
Another way to find the list of models running on the Konko instance is
through this
[endpoint](https://docs.konko.ai/reference/listmodels).
From here, we can initialize our model:
```python
from langchain.chat_models import ChatKonko

chat_instance = ChatKonko(max_tokens=10, model='meta-llama/Llama-2-13b-chat-hf')
```
And run it:
```python
from langchain.schema import HumanMessage

msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```
Various improvements to the Model I/O section of the documentation
- Changed "Chat Model" to "chat model" in a few spots for internal
consistency
- Minor spelling & grammar fixes to improve readability & comprehension
This fixes the example import line in the general "cassandra" doc page
mdx file (it was erroneously a copy of the chat message history import
statement found below).
Hi there!
I'm excited to open this PR to add support for using 'Tencent Cloud
VectorDB' as a vector store.
Tencent Cloud VectorDB is a fully-managed, self-developed,
enterprise-level distributed database service designed for storing,
retrieving, and analyzing multi-dimensional vector data. The database
supports multiple index types and similarity calculation methods, with a
single index supporting vector scales up to 1 billion and capable of
handling millions of QPS with millisecond-level query latency. Tencent
Cloud VectorDB not only provides external knowledge bases for large
models to improve their accuracy, but also has wide applications in AI
fields such as recommendation systems, NLP services, computer vision,
and intelligent customer service.
The PR includes:
Implementation of Vectorstore.
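A rough usage sketch; the class name `TencentVectorDB`, the `ConnectionParams` helper, and the parameter names below are assumptions based on typical LangChain vector store conventions, not confirmed by this PR text:

```python
# Hypothetical sketch: class and parameter names are assumptions for illustration.
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import TencentVectorDB
from langchain.vectorstores.tencentvectordb import ConnectionParams

conn_params = ConnectionParams(
    url="http://10.0.X.X",    # assumed instance endpoint
    key="your_api_key",
    username="root",
    timeout=20,
)
vector_db = TencentVectorDB.from_texts(
    ["foo", "bar", "baz"],
    FakeEmbeddings(size=128),
    connection_params=conn_params,
)
docs = vector_db.similarity_search("foo", k=1)
```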
I have read your [contributing
guidelines](72b7d76d79/.github/CONTRIBUTING.md).
And I have passed the tests below:
- `make format`
- `make lint`
- `make coverage`
- `make test`
* Added links to the AI Network
* Made the title consistent with the other toolkits
* Added `integrations/providers/` integration card page
* **No changes** in the example code!
- Fixed a broken link in `integrations/providers/infino.mdx`
- Fixed a title in the `integrations/callbacks/infino.ipynb` example
- Updated text formatting in this example.
Improve internal consistency in LangChain documentation
- Change occurrences of eg and eg. to e.g.
- Fix headers containing unnecessary capital letters.
- Change instances of "few shot" to "few-shot".
- Add periods to end of sentences where missing.
- Minor spelling and grammar fixes.
- Improved docs
- Improved performance in multiple ways through batching, threading,
etc.
- Fixed error message
- Added support for metadata filtering during similarity search.
@baskaryan PTAL
[Epsilla](https://github.com/epsilla-cloud/vectordb) vectordb is an
open-source vector database that leverages advanced academic parallel
graph traversal techniques for vector indexing.
This PR adds basic integration with
[pyepsilla](https://github.com/epsilla-cloud/epsilla-python-client) (the
Epsilla vectordb Python client) as a vectorstore.
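A rough usage sketch; the vector store class name `Epsilla` and its keyword arguments are assumptions for illustration, with `pyepsilla` providing the client as described above:

```python
# Hypothetical sketch: names below are assumptions, not the confirmed API.
from pyepsilla import vectordb

from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import Epsilla

client = vectordb.Client()   # assumes a local Epsilla instance is running
store = Epsilla.from_texts(
    ["foo", "bar", "baz"],
    FakeEmbeddings(size=128),
    client=client,
    collection_name="langchain_demo",   # assumed parameter name
)
print(store.similarity_search("foo", k=1))
```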
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added improvements to the Nebula LLM to perform auto-retry;
more generation parameters are supported. A conversation is no longer
required to be passed into the LLM object. Examples are updated.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @baskaryan
- Twitter handle: symbldotai
---------
Co-authored-by: toshishjawale <toshish@symbl.ai>
Todo:
- [x] Connection options (cloud, localhost url, es_connection) support
- [x] Logging support
- [x] Customisable field support
- [x] Distance Similarity support
- [x] Metadata support
- [x] Metadata Filter support
- [x] Retrieval Strategies
- [x] Approx
- [x] Approx with Hybrid
- [x] Exact
- [x] Custom
- [x] ELSER (excluding hybrid as we are working on RRF support)
- [x] integration tests
- [x] Documentation
👋 This is a contribution to improve the Elasticsearch integration with
LangChain. It's based loosely on the changes that are in master, but with
some notable changes:
## Package name & design improvements
The import name is now `ElasticsearchStore`, to aid discoverability of
the VectorStore.
```py
## Before
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch, ElasticKnnSearch
## Now
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```
## Retrieval Strategy support
Before, we had a number of classes depending on the strategy you wanted:
`ElasticKnnSearch` for approx, `ElasticVectorSearch` for exact / brute
force.
With `ElasticsearchStore` we have retrieval strategies:
### Approx Example
The default strategy for the vast majority of developers who use
Elasticsearch will be inferring the embeddings from outside of
Elasticsearch. Uses the kNN functionality of `_search`.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
texts,
FakeEmbeddings(),
es_url="http://localhost:9200",
index_name="sample-index"
)
output = docsearch.similarity_search("foo", k=1)
```
### Approx, with hybrid
Developers who want to search using both the embedding and the text BM25
match. It's simple to enable.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True)
)
output = docsearch.similarity_search("foo", k=1)
```
### Approx, with `query_model_id`
Developers who want to infer within Elasticsearch, using the model
loaded in the ML node.
This relies on the developer to set up the pipeline and index if they
wish to embed the text in Elasticsearch. There is an example of this in
the tests.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(
        query_model_id="sentence-transformers__all-minilm-l6-v2"
    ),
)
output = docsearch.similarity_search("foo", k=1)
```
### I want to provide my own custom Elasticsearch Query
You might want to have more control over the query, to perform
multi-phase retrieval such as LTR, linearly boosting on document
parameters like recently updated or geo-distance. You can do this with
`custom_query_fn`
```py
def my_custom_query(query_body: dict, query: str) -> dict:
    return {"query": {"match": {"text": {"query": "bar"}}}}

texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts, FakeEmbeddings(), **elasticsearch_connection, index_name=index_name
)
docsearch.similarity_search("foo", k=1, custom_query=my_custom_query)
```
### Exact Example
Developers who have a small dataset in Elasticsearch and don't want the
cost of indexing the dims, trading that off against cost at query time.
Uses `script_score`.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ExactRetrievalStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```
### ELSER Example
Elastic provides its own sparse vector model called ELSER. With these
changes, it's really easy to use. The vector store creates a pipeline and
an index that's set up for ELSER. All the developer needs to do is
configure, ingest, and query via LangChain tooling.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.SparseVectorStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```
## Architecture
In the future, we can introduce new strategies without breaking backwards
compatibility as we evolve the index / query strategy.
## Credit
On release, could you credit @elastic and @phoey1 please? Thank you!
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>