docs: `providers` update 7 (#18620)

Added missing providers. Added missing integrations. Formatted pages to a consistent form. Fixed outdated imports.
Leonid Ganeline 4 months ago committed by GitHub
parent 1f50274df7
commit 7c8c4e5743

@@ -1,9 +1,10 @@
# Cassandra
> [Apache Cassandra®](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.
> Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html).
The integrations outlined on this page can be used with `Cassandra` as well as other CQL-compatible databases,
i.e. those using the `Cassandra Query Language` protocol.
### Setup

@@ -23,32 +23,31 @@ Elastic Cloud is a managed Elasticsearch service. Sign up for a [free trial](http
### Install Client
```bash
pip install langchain-elasticsearch
```
## Embedding models
See a [usage example](/docs/integrations/text_embedding/elasticsearch).
```python
from langchain_elasticsearch.embeddings import ElasticsearchEmbeddings
```
## Vector store
See a [usage example](/docs/integrations/vectorstores/elasticsearch).
```python
from langchain_elasticsearch.vectorstores import ElasticsearchStore
```
## Memory
See a [usage example](/docs/integrations/memory/elasticsearch_chat_message_history).
```python
from langchain_elasticsearch.chat_history import ElasticsearchChatMessageHistory
```
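A minimal end-to-end sketch of the vector store, assuming a local Elasticsearch instance at `http://localhost:9200` and an OpenAI API key in the environment; the documents and the index name `test-basic` are placeholders:
```python
from langchain_core.documents import Document
from langchain_elasticsearch.vectorstores import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings

# Placeholder documents; in practice these come from a loader/splitter.
docs = [Document(page_content="Elasticsearch is a distributed search engine.")]

db = ElasticsearchStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="test-basic",
)
db.client.indices.refresh(index="test-basic")
results = db.similarity_search("What is Elasticsearch?")
```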

@@ -48,12 +48,18 @@ set_llm_cache(MomentoCache(cache_client, cache_name))
Momento can be used as a distributed memory store for LLMs.
### Chat Message History Memory
See [this notebook](/docs/integrations/memory/momento_chat_message_history) for a walkthrough of how to use Momento as a memory store for chat message history.
```python
from langchain.memory import MomentoChatMessageHistory
```
## Vector Store
Momento Vector Index (MVI) can be used as a vector store.
See [this notebook](/docs/integrations/vectorstores/momento_vector_index) for a walkthrough of how to use MVI as a vector store.
```python
from langchain_community.vectorstores import MomentoVectorIndex
```
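A short sketch of the chat-history class, assuming a Momento API key exposed via the environment variable the Momento SDK expects, and a pre-created cache; the session id and cache name are placeholders:
```python
from datetime import timedelta
from langchain.memory import MomentoChatMessageHistory

# Builds the Momento client from the API key in the environment.
history = MomentoChatMessageHistory.from_client_params(
    "my-session",       # placeholder session id
    "langchain",        # placeholder cache name
    timedelta(days=1),  # TTL for stored messages
)
history.add_user_message("hi!")
```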

@@ -1,37 +1,31 @@
# Neo4j
>What is `Neo4j`?
>- Neo4j is an `open-source database management system` that specializes in graph database technology.
>- Neo4j allows you to represent and store data in nodes and edges, making it ideal for handling connected data and relationships.
>- Neo4j provides a `Cypher Query Language`, making it easy to interact with and query your graph data.
>- With Neo4j, you can achieve high-performance `graph traversals and queries`, suitable for production-level systems.
>Get started with Neo4j by visiting [their website](https://neo4j.com/).
## Installation and Setup
- Install the Python SDK with `pip install neo4j`
## VectorStore
The Neo4j vector index is used as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain_community.vectorstores import Neo4jVector
```
See a [usage example](/docs/integrations/vectorstores/neo4jvector)
## GraphCypherQAChain
There exists a wrapper around Neo4j graph database that allows you to generate Cypher statements based on the user input
and use them to retrieve relevant information from the database.
@@ -41,9 +35,9 @@ from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
```
See a [usage example](/docs/use_cases/graph/graph_cypher_qa)
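A minimal sketch of wiring the chain to a graph, assuming a local Neo4j instance with placeholder credentials; the `ChatOpenAI` model is an illustrative choice:
```python
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI

# Placeholder connection details for a local Neo4j instance.
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

# The LLM generates Cypher from the question; the chain runs it and answers.
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who acted in Top Gun?")
```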
## Constructing a knowledge graph from text
Text data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications.
Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.
@@ -55,4 +49,12 @@ from langchain_community.graphs import Neo4jGraph
from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
```
See a [usage example](/docs/use_cases/graph/diffbot_graphtransformer)
## Memory
See a [usage example](/docs/integrations/memory/neo4j_chat_message_history).
```python
from langchain.memory import Neo4jChatMessageHistory
```
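A short sketch, again with placeholder connection details and session id:
```python
from langchain.memory import Neo4jChatMessageHistory

history = Neo4jChatMessageHistory(
    url="bolt://localhost:7687",  # placeholder Neo4j connection details
    username="neo4j",
    password="password",
    session_id="session_id_1",    # placeholder session id
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
```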

@@ -35,18 +35,18 @@ And to start it again:
docker start langchain-redis
```
## Wrappers
### Connections
We need a redis url connection string to connect to the database, supporting either a stand-alone Redis server
or a High-Availability setup with Replication and Redis Sentinels.
#### Redis Standalone connection url
For a standalone `Redis` server, the official redis connection url formats can be used as described in the Python redis module's
`from_url()` method [Redis.from_url](https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url).
Example: `redis_url = "redis://:secret-pass@localhost:6379/0"`
#### Redis Sentinel connection url
For [Redis sentinel setups](https://redis.io/docs/management/sentinel/), the connection scheme is "redis+sentinel".
This is an unofficial extension to the official IANA registered protocol schemes as long as there is no connection url
@@ -61,20 +61,19 @@ The service-name is the redis server monitoring group name as configured within
The current url format limits the connection string to one sentinel host only (no list can be given) and
both Redis server and sentinel must have the same password set (if used).
#### Redis Cluster connection url
Redis Cluster is currently not supported for the methods requiring a "redis_url" parameter.
The only way to use a Redis Cluster is with LangChain classes accepting a preconfigured Redis client like `RedisCache`
(example below).
## Cache
The Cache wrapper allows for [Redis](https://redis.io) to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.
### Standard Cache
The standard cache is the bread and butter of Redis use in production for both [open-source](https://redis.io) and [enterprise](https://redis.com) users globally.
To import this cache:
```python
from langchain.cache import RedisCache
```
@@ -88,10 +87,9 @@ redis_client = redis.Redis.from_url(...)
set_llm_cache(RedisCache(redis_client))
```
### Semantic Cache
Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it uses Redis as both a cache and a vectorstore.
To import this cache:
```python
from langchain.cache import RedisSemanticCache
```
@@ -112,27 +110,29 @@ set_llm_cache(RedisSemanticCache(
))
```
## VectorStore
The vectorstore wrapper turns Redis into a low-latency [vector database](https://redis.com/solutions/use-cases/vector-database/) for semantic search or LLM content retrieval.
To import this vectorstore:
```python
from langchain_community.vectorstores import Redis
```
For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis).
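A minimal sketch, assuming a local Redis and an OpenAI API key in the environment; the URL, texts, and index name are placeholders:
```python
from langchain_community.vectorstores import Redis
from langchain_openai import OpenAIEmbeddings

# Index a few texts, then run a semantic search.
rds = Redis.from_texts(
    ["Redis is an in-memory data store."],
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="docs",
)
results = rds.similarity_search("What is Redis?")
```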
## Retriever
The Redis vector store retriever wrapper generalizes the vectorstore class to perform
low-latency document retrieval. To create the retriever, simply
call `.as_retriever()` on the base vectorstore class.
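For example, a brief sketch under the same placeholder assumptions as above:
```python
from langchain_community.vectorstores import Redis
from langchain_openai import OpenAIEmbeddings

# Connect to an existing index (placeholder names) and wrap it as a retriever.
rds = Redis(
    redis_url="redis://localhost:6379",
    index_name="docs",
    embedding=OpenAIEmbeddings(),
)
retriever = rds.as_retriever(search_kwargs={"k": 4})
docs = retriever.get_relevant_documents("my query")
```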
## Memory
Redis can be used to persist LLM conversations.
### Vector Store Retriever Memory
For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory).
### Chat Message History Memory
For a detailed example of using Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history).
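A minimal sketch with a placeholder URL and session id:
```python
from langchain_community.chat_message_histories import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="my-session",         # placeholder session id
    url="redis://localhost:6379/0",  # placeholder Redis URL
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
```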

@@ -0,0 +1,15 @@
# Remembrall
>[Remembrall](https://remembrall.dev/) is a platform that gives a language model
> long-term memory, retrieval augmented generation, and complete observability.
## Installation and Setup
To get started, [sign in with GitHub on the Remembrall platform](https://remembrall.dev/login)
and copy your [API key from the settings page](https://remembrall.dev/dashboard/settings).
## Memory
See a [usage example](/docs/integrations/memory/remembrall).

@@ -18,3 +18,11 @@ See a [usage example](/docs/integrations/vectorstores/singlestoredb).
```python
from langchain_community.vectorstores import SingleStoreDB
```
## Memory
See a [usage example](/docs/integrations/memory/singlestoredb_chat_message_history).
```python
from langchain.memory import SingleStoreDBChatMessageHistory
```
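A minimal sketch, assuming a reachable SingleStoreDB instance; the connection string and session id are placeholders:
```python
from langchain.memory import SingleStoreDBChatMessageHistory

history = SingleStoreDBChatMessageHistory(
    session_id="my-session",             # placeholder session id
    host="root:pass@localhost:3306/db",  # placeholder connection string
)
history.add_user_message("hi!")
```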

@@ -0,0 +1,31 @@
# SQLite
>[SQLite](https://en.wikipedia.org/wiki/SQLite) is a database engine written in the
> C programming language. It is not a standalone app; rather, it is a library that
> software developers embed in their apps. As such, it belongs to the family of
> embedded databases. It is the most widely deployed database engine, as it is
> used by several of the top web browsers, operating systems, mobile phones, and other embedded systems.
## Installation and Setup
We need to install the `SQLAlchemy` Python package.
```bash
pip install SQLAlchemy
```
## Vector Store
See a [usage example](/docs/integrations/vectorstores/sqlitevss).
```python
from langchain_community.vectorstores import SQLiteVSS
```
## Memory
See a [usage example](/docs/integrations/memory/sqlite).
```python
from langchain_community.chat_message_histories import SQLChatMessageHistory
```
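A minimal sketch; the database path and session id are placeholders:
```python
from langchain_community.chat_message_histories import SQLChatMessageHistory

# Messages are persisted to a local SQLite file via SQLAlchemy.
history = SQLChatMessageHistory(
    session_id="my-session",
    connection_string="sqlite:///sqlite.db",
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")
```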

@@ -13,10 +13,18 @@ pip install streamlit
```
## Memory
See a [usage example](/docs/integrations/memory/streamlit_chat_message_history).
```python
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
```
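A brief sketch; it must run inside a Streamlit app, and the session-state key is a placeholder:
```python
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

# Messages live in st.session_state under the given key.
history = StreamlitChatMessageHistory(key="chat_messages")
history.add_user_message("hi!")
```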
## Callbacks
See a [usage example](/docs/integrations/callbacks/streamlit).
```python
from langchain_community.callbacks import StreamlitCallbackHandler
```
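A brief sketch of rendering an agent's thoughts into a Streamlit container; how the handler is passed via `callbacks` depends on your chain or agent:
```python
import streamlit as st
from langchain_community.callbacks import StreamlitCallbackHandler

# Thoughts and tool calls are streamed into this container as they happen.
st_callback = StreamlitCallbackHandler(st.container())
# e.g. agent.run(prompt, callbacks=[st_callback])
```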

@@ -0,0 +1,21 @@
# TiDB
> [TiDB](https://github.com/pingcap/tidb) is an open-source, cloud-native,
> distributed, MySQL-compatible database for elastic scale and real-time analytics.
## Installation and Setup
We need to install the `sqlalchemy` Python package:
```bash
pip install sqlalchemy
```
## Memory
See a [usage example](/docs/integrations/memory/tidb_chat_message_history).
```python
from langchain_community.chat_message_histories import TiDBChatMessageHistory
```
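A minimal sketch; the connection string (e.g. for TiDB Serverless) and session id are placeholders:
```python
from langchain_community.chat_message_histories import TiDBChatMessageHistory

history = TiDBChatMessageHistory(
    session_id="my-session",  # placeholder session id
    connection_string="mysql+pymysql://user:password@host:4000/db?ssl_verify_cert=true",
)
history.add_user_message("hi!")
```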

@@ -36,7 +36,11 @@ langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
```
### Memory
Upstash Redis can be used to persist LLM conversations.
See a [usage example](/docs/integrations/memory/upstash_redis_chat_message_history).
```python
from langchain_community.chat_message_histories import (
    UpstashRedisChatMessageHistory,
)
```
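A minimal sketch; the REST URL and token come from the Upstash console and are placeholders here:
```python
from langchain_community.chat_message_histories import (
    UpstashRedisChatMessageHistory,
)

history = UpstashRedisChatMessageHistory(
    url="https://<UPSTASH_REDIS_REST_URL>",  # placeholder
    token="<UPSTASH_REDIS_REST_TOKEN>",      # placeholder
    session_id="my-session",
    ttl=3600,  # optional expiry in seconds
)
history.add_user_message("hi!")
```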

@@ -26,3 +26,11 @@ See a [usage example](/docs/integrations/vectorstores/xata).
from langchain_community.vectorstores import XataVectorStore
```
### Memory
See a [usage example](/docs/integrations/memory/xata_chat_message_history).
```python
from langchain_community.chat_message_histories import XataChatMessageHistory
```
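A minimal sketch; the API key, database URL, and table name are placeholders:
```python
from langchain_community.chat_message_histories import XataChatMessageHistory

history = XataChatMessageHistory(
    session_id="my-session",
    api_key="<XATA_API_KEY>",
    db_url="https://<workspace>.xata.sh/db/<database>",
    table_name="memory",
)
history.add_user_message("hi!")
```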

@@ -1,20 +1,21 @@
# Zep
>[Zep](http://www.getzep.com) is an open source platform for productionizing LLM apps. Go from a prototype
built in LangChain or LlamaIndex, or a custom app, to production in minutes without
rewriting code.
>Key Features:
>
>- **Fast!** Zep operates independently of your chat loop, ensuring a snappy user experience.
>- **Chat History Memory, Archival, and Enrichment**: populate your prompts with relevant chat history, summaries, named entities, intent data, and more.
>- **Vector Search over Chat History and Documents**: automatic embedding of documents, chat histories, and summaries. Use Zep's similarity or native MMR Re-ranked search to find the most relevant.
>- **Manage Users and their Chat Sessions**: Users and their Chat Sessions are first-class citizens in Zep, allowing you to manage user interactions with your bots or agents easily.
>- **Records Retention and Privacy Compliance**: comply with corporate and regulatory mandates for records retention while ensuring compliance with privacy regulations such as CCPA and GDPR. Fulfill *Right To Be Forgotten* requests with a single API call.
>Zep project: [https://github.com/getzep/zep](https://github.com/getzep/zep)
>
>Docs: [https://docs.getzep.com/](https://docs.getzep.com/)
## Installation and Setup
@@ -26,7 +27,7 @@ Docs: [https://docs.getzep.com/](https://docs.getzep.com/)
pip install zep_python
```
## Memory
Zep's [Memory API](https://docs.getzep.com/sdk/chat_history/) persists your app's chat history and metadata to a Session, enriches the memory, automatically generates summaries, and enables vector similarity search over historical chat messages and summaries.
@@ -43,7 +44,7 @@ from langchain.memory import ZepMemory
See a [RAG app example here](/docs/integrations/memory/zep_memory).
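A minimal sketch, assuming a locally running Zep server; the URL, API key, and session id are placeholders:
```python
from langchain.memory import ZepMemory

memory = ZepMemory(
    session_id="user-session-id",  # placeholder session identifier
    url="http://localhost:8000",   # placeholder Zep server URL
    api_key="<your_api_key>",      # placeholder; omit if auth is disabled
    memory_key="chat_history",
)
```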
## Retriever
Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve messages from a Zep Session and use them to construct your prompt.
@@ -54,10 +55,10 @@ Zep's Memory Retriever supports both similarity search and [Maximum Marginal Rel
See a [usage example](/docs/integrations/retrievers/zep_memorystore).
```python
from langchain_community.retrievers import ZepRetriever
```
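A brief sketch under the same placeholder assumptions as above:
```python
from langchain_community.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-session-id",  # placeholder session identifier
    url="http://localhost:8000",   # placeholder Zep server URL
    top_k=5,
)
docs = retriever.get_relevant_documents("deep learning")
```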
## Vector store
Zep's [Document VectorStore API](https://docs.getzep.com/sdk/documents/) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand
distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest.
@@ -66,7 +67,7 @@ Zep supports both similarity search and [Maximum Marginal Relevance (MMR) rerank
MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other.
```python
from langchain_community.vectorstores import ZepVectorStore
```
See a [usage example](/docs/integrations/vectorstores/zep).