docs `ecosystem/integrations` update 4 (#5590)

# docs `ecosystem/integrations` update 4

Added missing integrations. Fixed inconsistencies.

## Who can review?

@hwchase17 
@dev2049

@ -0,0 +1,18 @@
# Annoy
> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.
## Installation and Setup
```bash
pip install annoy
```
## Vectorstore
See a [usage example](../modules/indexes/vectorstores/examples/annoy.ipynb).
```python
from langchain.vectorstores import Annoy
```
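A minimal sketch of building and querying an Annoy index directly from texts. The embedding model here is an assumption; any LangChain `Embeddings` implementation works.
```python
from langchain.embeddings import HuggingFaceEmbeddings  # assumes sentence-transformers is installed
from langchain.vectorstores import Annoy

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Build a read-only Annoy index from a handful of texts, then query it.
vectorstore = Annoy.from_texts(
    ["pizza is great", "I love salad", "my car", "a dog"],
    embeddings,
)
docs = vectorstore.similarity_search("food", k=2)
```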

@ -26,3 +26,11 @@ See a [usage example](../modules/indexes/document_loaders/examples/arxiv.ipynb).
```python
from langchain.document_loaders import ArxivLoader
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/arxiv.ipynb).
```python
from langchain.retrievers import ArxivRetriever
```
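For instance, the retriever can fetch and parse papers matching a query or an arXiv ID (a minimal sketch; the query is a placeholder):
```python
from langchain.retrievers import ArxivRetriever

# load_max_docs caps how many matching papers are downloaded and parsed.
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query="quantum computing error correction")
print(docs[0].metadata)  # title, authors, publication date, etc.
```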

@ -0,0 +1,24 @@
# Azure Cognitive Search
>[Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
>Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
>- A search engine for full text search over a search index containing user-owned content
>- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
>- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
>- Programmability through REST APIs and client libraries in Azure SDKs
>- Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
## Installation and Setup
See [set up instructions](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/azure_cognitive_search.ipynb).
```python
from langchain.retrievers import AzureCognitiveSearchRetriever
```
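A minimal sketch: the service name, index name, and API key are read from environment variables. The variable names below are assumptions, so verify them against the linked notebook.
```python
import os

from langchain.retrievers import AzureCognitiveSearchRetriever

# Assumed environment variables; the values are placeholders.
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<my-search-service>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<my-index>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<my-api-key>"

# content_key is the index field that holds the document text.
retriever = AzureCognitiveSearchRetriever(content_key="content")
docs = retriever.get_relevant_documents("what is langchain?")
```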

@ -0,0 +1,23 @@
# Cassandra
>[Cassandra](https://en.wikipedia.org/wiki/Apache_Cassandra) is a free and open-source, distributed, wide-column
> store, NoSQL database management system designed to handle large amounts of data across many commodity servers,
> providing high availability with no single point of failure. `Cassandra` offers support for clusters spanning
> multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
> `Cassandra` was designed to combine `Amazon's Dynamo` distributed storage and replication
> techniques with `Google's Bigtable` data and storage engine model.
## Installation and Setup
```bash
pip install cassandra-driver
```
## Memory
See a [usage example](../modules/memory/examples/cassandra_chat_message_history.ipynb).
```python
from langchain.memory import CassandraChatMessageHistory
```
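A minimal sketch of storing chat messages in a local Cassandra cluster. The constructor arguments below (contact points, credentials, keyspace, and table name) are illustrative assumptions; check the linked notebook for the exact signature.
```python
from langchain.memory import CassandraChatMessageHistory

# Illustrative connection settings for a local Cassandra instance.
message_history = CassandraChatMessageHistory(
    contact_points=["localhost"],
    session_id="my-session",
    port=9042,
    username="cassandra",
    password="cassandra",
    keyspace_name="chat_history",
    table_name="message_store",
)

message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
print(message_history.messages)
```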

@ -1,20 +1,29 @@
# Chroma
>[Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.
## Installation and Setup
```bash
pip install chromadb
```
## VectorStore
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
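For example, an in-memory Chroma collection can be built directly from texts and queried. `OpenAIEmbeddings` is just one embedding choice and assumes `OPENAI_API_KEY` is set.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Build an ephemeral Chroma collection and run a similarity search against it.
db = Chroma.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    OpenAIEmbeddings(),
)
docs = db.similarity_search("Where did Harrison work?", k=1)
```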
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/chroma_self_query.ipynb).
```python
from langchain.retrievers import SelfQueryRetriever
```

@ -1,25 +1,38 @@
# Cohere
>[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models
> that help companies improve human-machine interactions.
## Installation and Setup
Install the Python SDK:
```bash
pip install cohere
```
Get a [Cohere API key](https://dashboard.cohere.ai/) and set it as an environment variable (`COHERE_API_KEY`).
## LLM
See a [usage example](../modules/models/llms/integrations/cohere.ipynb).
```python
from langchain.llms import Cohere
```
## Text Embedding Model
There exists a Cohere embedding model, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/cohere.ipynb)
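A short sketch of embedding a query and a couple of documents (assumes `COHERE_API_KEY` is set):
```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings()  # reads COHERE_API_KEY from the environment

query_vector = embeddings.embed_query("What is the capital of France?")
doc_vectors = embeddings.embed_documents(
    ["Paris is the capital of France.", "Berlin is in Germany."]
)
```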
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/cohere-reranker.ipynb).
```python
from langchain.retrievers.document_compressors import CohereRerank
```
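`CohereRerank` is a document compressor, so it is typically wrapped in a `ContextualCompressionRetriever` around an existing base retriever. A minimal sketch; the FAISS vectorstore and sample texts are assumptions used only to have something to rerank.
```python
from langchain.embeddings import CohereEmbeddings
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
from langchain.vectorstores import FAISS  # assumes faiss-cpu is installed

texts = [
    "Cohere is based in Toronto.",
    "Bears like to eat honey.",
    "LangChain provides retrievers.",
]
base_retriever = FAISS.from_texts(texts, CohereEmbeddings()).as_retriever()

# The reranker reorders the base retriever's results by relevance to the query.
retriever = ContextualCompressionRetriever(
    base_compressor=CohereRerank(), base_retriever=base_retriever
)
docs = retriever.get_relevant_documents("Where is Cohere headquartered?")
```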

@ -1,25 +1,17 @@
# Databerry
>[Databerry](https://databerry.ai) is an [open source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models.
## Installation and Setup
We need to sign up for Databerry, create a datastore, add some data, and get the datastore API endpoint URL.
We need the [API Key](https://docs.databerry.ai/api-reference/authentication).
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/databerry.ipynb).
```python
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://api.databerry.ai/query/clg1xg2h80000l708dymr0fxc",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
docs = retriever.get_relevant_documents("What's Databerry?")
```

@ -0,0 +1,24 @@
# Elasticsearch
>[Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine.
> It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free
> JSON documents.
## Installation and Setup
```bash
pip install elasticsearch
```
## Retriever
>In information retrieval, [Okapi BM25](https://en.wikipedia.org/wiki/Okapi_BM25) (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
>The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
See a [usage example](../modules/indexes/retrievers/examples/elastic_search_bm25.ipynb).
```python
from langchain.retrievers import ElasticSearchBM25Retriever
```
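A minimal sketch, assuming an Elasticsearch instance is reachable locally; `create` builds the index and `add_texts` populates it:
```python
from langchain.retrievers import ElasticSearchBM25Retriever

# Assumes Elasticsearch is running at this URL; the index name is arbitrary.
retriever = ElasticSearchBM25Retriever.create("http://localhost:9200", "langchain-index")

retriever.add_texts(["foo", "foo bar", "hello world"])
docs = retriever.get_relevant_documents("foo")
```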

@ -1,20 +1,21 @@
# Momento
>[Momento Cache](https://docs.momentohq.com/) is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero
> capability, and blazing-fast performance.
> With Momento Cache, you grab the SDK, you get an end point, input a few lines into your code, and you're off and running.
This page covers how to use the [Momento](https://gomomento.com) ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Momento wrappers.
## Installation and Setup
- Sign up for a free account [here](https://docs.momentohq.com/getting-started) and get an auth token
- Install the Momento Python SDK with `pip install momento`
## Cache
The Cache wrapper allows [Momento](https://gomomento.com) to be used as a serverless, distributed, low-latency cache for LLM prompts and responses.
#### Standard Cache
The standard cache is the go-to use case for [Momento](https://gomomento.com) users in any environment.
@ -44,10 +45,10 @@ cache_name = "langchain"
langchain.llm_cache = MomentoCache(cache_client, cache_name)
```
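A fuller sketch of wiring the cache up. The Momento client construction below follows the Momento Python SDK at the time, so treat the exact configuration calls as assumptions and see the Momento docs for current usage.
```python
from datetime import timedelta

import langchain
import momento
from langchain.cache import MomentoCache

# Assumes MOMENTO_AUTH_TOKEN is set in the environment.
cache_client = momento.CacheClient(
    momento.Configurations.Laptop.v1(),
    momento.CredentialProvider.from_environment_variable("MOMENTO_AUTH_TOKEN"),
    default_ttl=timedelta(days=1),
)
langchain.llm_cache = MomentoCache(cache_client, cache_name="langchain")
```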
## Memory
Momento can be used as a distributed memory store for LLMs.
### Chat Message History Memory
See [this notebook](../modules/memory/examples/momento_chat_message_history.ipynb) for a walkthrough of how to use Momento as a memory store for chat message history.

@ -71,3 +71,11 @@ See a [usage example](../modules/indexes/document_loaders/examples/chatgpt_loade
```python
from langchain.document_loaders.chatgpt import ChatGPTLoader
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/chatgpt-plugin.ipynb).
```python
from langchain.retrievers import ChatGPTPluginRetriever
```
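A minimal sketch, assuming a `chatgpt-retrieval-plugin` server is running locally; the URL and bearer token are placeholders:
```python
from langchain.retrievers import ChatGPTPluginRetriever

# Points at a locally running ChatGPT retrieval plugin server.
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
docs = retriever.get_relevant_documents("alice's phone number")
```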

@ -4,17 +4,19 @@ This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
## Installation and Setup
Install the Python SDK:
```bash
pip install pinecone-client
```
## Vectorstore
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Pinecone
```
For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](../modules/indexes/vectorstores/examples/pinecone.ipynb)
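A minimal sketch of pushing texts into an existing Pinecone index and querying it. The API key, environment, and index name are placeholders, and `OpenAIEmbeddings` is just one embedding choice.
```python
import pinecone

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Placeholder credentials; the index must already exist with a matching dimension.
pinecone.init(api_key="<PINECONE_API_KEY>", environment="<PINECONE_ENV>")

docsearch = Pinecone.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    index_name="langchain-demo",
)
docs = docsearch.similarity_search("Where did Harrison work?")
```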

@ -0,0 +1,17 @@
# Roam
>[Roam](https://roamresearch.com/) is a note-taking tool for networked thought, designed to create a personal knowledge base.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/roam.ipynb).
```python
from langchain.document_loaders import RoamLoader
```
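The loader is pointed at a local Roam export directory (the path below is a placeholder):
```python
from langchain.document_loaders import RoamLoader

# Path to an unzipped Roam export (Markdown/JSON files).
loader = RoamLoader("Roam_DB")
docs = loader.load()
```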

@ -0,0 +1,17 @@
# Slack
>[Slack](https://slack.com/) is an instant messaging program.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/slack.ipynb).
```python
from langchain.document_loaders import SlackDirectoryLoader
```
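The loader reads an exported Slack workspace zip file. The path and workspace URL below are placeholders; the URL is optional and only used to build message links.
```python
from langchain.document_loaders import SlackDirectoryLoader

# Path to a Slack export zip and (optionally) the workspace URL.
loader = SlackDirectoryLoader("slack_export.zip", "https://my-workspace.slack.com")
docs = loader.load()
```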

@ -0,0 +1,20 @@
# spaCy
>[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
## Installation and Setup
```bash
pip install spacy
```
## Text Splitter
See a [usage example](../modules/indexes/text_splitters/examples/spacy.ipynb).
```python
from langchain.text_splitter import SpacyTextSplitter
```
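A minimal sketch of splitting text on spaCy sentence boundaries (assumes the `en_core_web_sm` pipeline has been downloaded with `python -m spacy download en_core_web_sm`):
```python
from langchain.text_splitter import SpacyTextSplitter

# Splits on spaCy-detected sentence boundaries, packing sentences up to chunk_size characters.
text_splitter = SpacyTextSplitter(chunk_size=1000)

long_text = "LangChain provides text splitters. " * 100
chunks = text_splitter.split_text(long_text)
```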

@ -0,0 +1,15 @@
# Spreedly
>[Spreedly](https://docs.spreedly.com/) is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at `Spreedly`, allowing you to independently store a card and then pass that card to different end points based on your business requirements.
## Installation and Setup
See [setup instructions](../modules/indexes/document_loaders/examples/spreedly.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/spreedly.ipynb).
```python
from langchain.document_loaders import SpreedlyLoader
```

@ -0,0 +1,16 @@
# Stripe
>[Stripe](https://stripe.com/en-ca) is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
## Installation and Setup
See [setup instructions](../modules/indexes/document_loaders/examples/stripe.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/stripe.ipynb).
```python
from langchain.document_loaders import StripeLoader
```

@ -0,0 +1,17 @@
# Telegram
>[Telegram Messenger](https://web.telegram.org/a/) is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.
## Installation and Setup
See [setup instructions](../modules/indexes/document_loaders/examples/telegram.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/telegram.ipynb).
```python
from langchain.document_loaders import TelegramChatFileLoader
from langchain.document_loaders import TelegramChatApiLoader
```
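`TelegramChatFileLoader` reads a chat history exported from the Telegram Desktop app as JSON, while `TelegramChatApiLoader` pulls a chat through the Telegram API. A minimal sketch of the file-based loader (the path is a placeholder):
```python
from langchain.document_loaders import TelegramChatFileLoader

# Path to a JSON export produced by Telegram Desktop ("Export chat history").
loader = TelegramChatFileLoader("telegram_export.json")
docs = loader.load()
```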

@ -0,0 +1,16 @@
# 2Markdown
>[2markdown](https://2markdown.com/) is a service that transforms website content into structured markdown files.
## Installation and Setup
We need the `API key`. See [instructions on how to get it](https://2markdown.com/login).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/tomarkdown.ipynb).
```python
from langchain.document_loaders import ToMarkdownLoader
```
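A minimal sketch; the idea is to pass the page URL and the 2markdown API key, but the exact constructor arguments are an assumption, so check the linked notebook.
```python
from langchain.document_loaders import ToMarkdownLoader

# Assumed keyword arguments: the target URL and the 2markdown API key.
loader = ToMarkdownLoader(
    url="https://python.langchain.com/en/latest/",
    api_key="<2MARKDOWN_API_KEY>",
)
docs = loader.load()
```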

@ -0,0 +1,22 @@
# Trello
>[Trello](https://www.atlassian.com/software/trello) is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.
>The TrelloLoader allows us to load cards from a `Trello` board.
## Installation and Setup
```bash
pip install py-trello beautifulsoup4
```
See [setup instructions](../modules/indexes/document_loaders/examples/trello.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/trello.ipynb).
```python
from langchain.document_loaders import TrelloLoader
```
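A minimal sketch using the `from_credentials` helper; the board name, API key, and token are placeholders (see the linked notebook for how to obtain them):
```python
from langchain.document_loaders import TrelloLoader

# Loads one document per card from the named board.
loader = TrelloLoader.from_credentials(
    "My Board",
    api_key="<TRELLO_API_KEY>",
    token="<TRELLO_TOKEN>",
)
docs = loader.load()
```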

@ -0,0 +1,21 @@
# Twitter
>[Twitter](https://twitter.com/) is an online social media and social networking service.
## Installation and Setup
```bash
pip install tweepy
```
We must initialize the loader with a `Twitter API` token and provide the Twitter `username` whose tweets we want to load.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/twitter.ipynb).
```python
from langchain.document_loaders import TwitterTweetLoader
```
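A minimal sketch using an OAuth2 bearer token (placeholder) to pull recent tweets from an account:
```python
from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="<BEARER_TOKEN>",
    twitter_users=["elonmusk"],
    number_tweets=50,  # tweets per user
)
docs = loader.load()
```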

@ -0,0 +1,21 @@
# Vespa
>[Vespa](https://vespa.ai/) is a fully featured search engine and vector database.
> It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
## Installation and Setup
```bash
pip install pyvespa
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/vespa.ipynb).
```python
from langchain.retrievers import VespaRetriever
```

@ -0,0 +1,21 @@
# Weather
>[OpenWeatherMap](https://openweathermap.org/) is a weather data and forecast service provider.
## Installation and Setup
```bash
pip install pyowm
```
We must set up the `OpenWeatherMap API token`.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/weather.ipynb).
```python
from langchain.document_loaders import WeatherDataLoader
```
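A minimal sketch using the `from_params` helper; the place names and API key are placeholders:
```python
from langchain.document_loaders import WeatherDataLoader

loader = WeatherDataLoader.from_params(
    ["chennai", "vellore"],
    openweathermap_api_key="<OPENWEATHERMAP_API_KEY>",
)
docs = loader.load()
```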

@ -0,0 +1,18 @@
# WhatsApp
>[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/whatsapp_chat.ipynb).
```python
from langchain.document_loaders import WhatsAppChatLoader
```
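The loader reads a chat history exported from WhatsApp as a text file (the path is a placeholder):
```python
from langchain.document_loaders import WhatsAppChatLoader

# Path to a chat exported via WhatsApp's "Export chat" feature.
loader = WhatsAppChatLoader("whatsapp_chat.txt")
docs = loader.load()
```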

@ -0,0 +1,28 @@
# Wikipedia
>[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.
## Installation and Setup
```bash
pip install wikipedia
```
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/wikipedia.ipynb).
```python
from langchain.document_loaders import WikipediaLoader
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/wikipedia.ipynb).
```python
from langchain.retrievers import WikipediaRetriever
```
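A minimal sketch of both components: the loader pulls a bounded number of articles for a query, and the retriever wraps the same search behind the standard retriever interface.
```python
from langchain.document_loaders import WikipediaLoader
from langchain.retrievers import WikipediaRetriever

# Load up to two articles matching the query as Documents.
docs = WikipediaLoader(query="Alan Turing", load_max_docs=2).load()

# Or use the retriever interface directly.
retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query="Alan Turing")
```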

@ -0,0 +1,22 @@
# YouTube
>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by Google.
> We download the `YouTube` transcripts and video information.
## Installation and Setup
```bash
pip install youtube-transcript-api
pip install pytube
```
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/youtube_transcript.ipynb).
```python
from langchain.document_loaders import YoutubeLoader
from langchain.document_loaders import GoogleApiYoutubeLoader
```
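`YoutubeLoader` works from a video URL and can optionally attach video metadata (which is where `pytube` comes in); `GoogleApiYoutubeLoader` instead goes through the YouTube Data API. A minimal sketch of the first loader (the URL is a placeholder):
```python
from langchain.document_loaders import YoutubeLoader

# add_video_info=True uses pytube to fetch title, author, etc.
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=QsYGlZkevEg",
    add_video_info=True,
)
docs = loader.load()
```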

@ -0,0 +1,28 @@
# Zep
>[Zep](https://docs.getzep.com/) is a long-term memory store for LLM applications.
>`Zep` stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.
>- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
>- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
>- Vector search over memories, with messages automatically embedded on creation.
>- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
>- Python and JavaScript SDKs.
`Zep` project: [https://github.com/getzep/zep](https://github.com/getzep/zep)
## Installation and Setup
```bash
pip install zep_python
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/zep_memorystore.ipynb).
```python
from langchain.retrievers import ZepRetriever
```

@ -1,19 +1,20 @@
# Zilliz
>[Zilliz Cloud](https://zilliz.com/doc/quick_start) is a fully managed cloud service for `LF AI Milvus®`.
`Zilliz` uses the `Milvus` integration.
## Installation and Setup
Install the Python SDK:
```bash
pip install pymilvus
```
## Vectorstore
A wrapper around Zilliz indexes allows you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Milvus
```
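A minimal sketch of connecting to a Zilliz Cloud cluster. The URI and credentials are placeholders, and `OpenAIEmbeddings` is just one embedding choice.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# connection_args point at a Zilliz Cloud cluster; the values are placeholders.
vector_db = Milvus.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    connection_args={
        "uri": "https://<cluster-id>.api.<region>.zillizcloud.com",
        "user": "<user>",
        "password": "<password>",
        "secure": True,
    },
)
docs = vector_db.similarity_search("Where did Harrison work?")
```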

@ -54,7 +54,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3e64cac2",
"metadata": {},
@ -117,7 +116,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
"version": "3.10.6"
}
},
"nbformat": 4,

@ -171,7 +171,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.6"
},
"vscode": {
"interpreter": {

@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### WhatsApp Chat\n",
"# WhatsApp Chat\n",
"\n",
">[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.\n",
"\n",

@ -5,7 +5,16 @@
"id": "1edb9e6b",
"metadata": {},
"source": [
"# Azure Cognitive Search Retriever\n",
"# Azure Cognitive Search\n",
"\n",
">[Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n",
"\n",
">Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:\n",
">- A search engine for full text search over a search index containing user-owned content\n",
">- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation\n",
">- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more\n",
">- Programmability through REST APIs and client libraries in Azure SDKs\n",
">- Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)\n",
"\n",
"This notebook shows how to use Azure Cognitive Search (ACS) within LangChain."
]
@ -120,7 +129,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.6"
}
},
"nbformat": 4,

@ -2,21 +2,15 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Zep Memory\n",
"# Zep\n",
"\n",
"## Retriever Example\n",
"\n",
"This notebook demonstrates how to search historical chat message histories using the [Zep Long-term Memory Store](https://getzep.github.io/).\n",
">[Zep](https://docs.getzep.com/) - A long-term memory store for LLM applications.\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to the Zep memory store.\n",
"2. Vector search over the conversation history.\n",
"More on `Zep`:\n",
"\n",
"More on Zep:\n",
"\n",
"Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.\n",
"`Zep` stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.\n",
"\n",
"Key Features:\n",
"\n",
@ -28,15 +22,37 @@
"\n",
"Zep's Go Extractor model is easily extensible, with a simple, clean interface available to build new enrichment functionality, such as summarizers, entity extractors, embedders, and more.\n",
"\n",
"Zep project: [https://github.com/getzep/zep](https://github.com/getzep/zep)\n"
],
"metadata": {
"collapsed": false
}
"`Zep` project: [https://github.com/getzep/zep](https://github.com/getzep/zep)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retriever Example\n",
"\n",
"This notebook demonstrates how to search historical chat message histories using the [Zep Long-term Memory Store](https://getzep.github.io/).\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to the Zep memory store.\n",
"2. Vector search over the conversation history.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:27.863217Z",
"start_time": "2023-05-25T15:03:25.690273Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.memory.chat_message_histories import ZepChatMessageHistory\n",
@ -45,29 +61,30 @@
"\n",
"# Set this to your Zep server URL\n",
"ZEP_API_URL = \"http://localhost:8000\""
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:27.863217Z",
"start_time": "2023-05-25T15:03:25.690273Z"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and add a chat message history to the memory store\n",
"\n",
"**NOTE:** Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A `session_id` is required when instantiating the Retriever."
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:29.118416Z",
"start_time": "2023-05-25T15:03:29.022464Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"session_id = str(uuid4()) # This is a unique identifier for the user/session\n",
@ -77,18 +94,21 @@
" session_id=session_id,\n",
" url=ZEP_API_URL,\n",
")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:29.118416Z",
"start_time": "2023-05-25T15:03:29.022464Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:30.271181Z",
"start_time": "2023-05-25T15:03:30.180442Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.\n",
@ -157,35 +177,42 @@
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:30.271181Z",
"start_time": "2023-05-25T15:03:30.180442Z"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Zep Retriever to vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory. Embedding happens automatically.\n",
"\n",
"NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated."
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:32.979155Z",
"start_time": "2023-05-25T15:03:32.590310Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": "[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),\n Document(page_content='Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),\n Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]"
"text/plain": [
"[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n",
" Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n",
" Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),\n",
" Document(page_content='Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),\n",
" Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]"
]
},
"execution_count": 4,
"metadata": {},
@ -202,31 +229,38 @@
")\n",
"\n",
"await zep_retriever.aget_relevant_documents(\"Who wrote Parable of the Sower?\")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:32.979155Z",
"start_time": "2023-05-25T15:03:32.590310Z"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the Zep sync API to retrieve results:"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:34.713354Z",
"start_time": "2023-05-25T15:03:34.577974Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": "[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),\n Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),\n Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]"
"text/plain": [
"[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),\n",
" Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),\n",
" Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n",
" Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n",
" Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]"
]
},
"execution_count": 5,
"metadata": {},
@ -235,48 +269,44 @@
],
"source": [
"zep_retriever.get_relevant_documents(\"Who wrote Parable of the Sower?\")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:34.713354Z",
"start_time": "2023-05-25T15:03:34.577974Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": 5,
"outputs": [],
"source": [],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-18T20:09:21.298710Z",
"start_time": "2023-05-18T20:09:21.297169Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
}
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 4
}
