docs: chains & memory fixes (#9895)

Various improvements to the Chains & Memory sections of the
documentation including formatting, spelling, and grammar fixes to
improve readability.
seamusp 1 year ago committed by GitHub
parent 4dc47bd3ac
commit abd8681341

@ -3,7 +3,7 @@ sidebar_position: 2
---
# Documents
These are the core chains for working with Documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These chains all implement a common interface:

@ -3,10 +3,10 @@ sidebar_position: 1
---
# Refine
The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
The Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context.
The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain.
There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
![refine_diagram](/img/refine.jpg)
![refine_diagram](/img/refine.jpg)
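For orientation, a minimal sketch of a refine-style summarization chain, assuming the classic `load_summarize_chain` helper and an OpenAI chat model (`docs` here is any list of `Document`s):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI

# chain_type="refine" loops over the documents, refining the answer one doc at a time
chain = load_summarize_chain(ChatOpenAI(temperature=0), chain_type="refine")
summary = chain.run(docs)  # docs: a list of Documents, e.g. from a text splitter
```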

@ -1,11 +1,11 @@
# LLM
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.
An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.
## Get started
import Example from "@snippets/modules/chains/foundational/llm_chain.mdx"
<Example/>
<Example/>
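As a minimal sketch of the pattern described above, assuming an OpenAI LLM (the imported snippet covers the full walkthrough):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# The chain formats the template with the input keys, then passes the string to the LLM
prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
llm_chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(llm_chain.run("colorful socks"))
```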

@ -4,7 +4,7 @@
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario.. There are two types of sequential chains:
In this notebook we will walk through some examples for how to do this, using sequential chains. Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:
- `SimpleSequentialChain`: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
- `SequentialChain`: A more general form of sequential chains, allowing for multiple inputs/outputs.
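A minimal sketch of the simple case, assuming two single-input `LLMChain`s (the chain definitions here are illustrative):

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)
synopsis_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a synopsis for a play titled {title}."))
review_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a review of this synopsis:\n\n{synopsis}"))

# The single output of synopsis_chain becomes the single input of review_chain
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall_chain.run("Tragedy at sunset on the beach")
```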

@ -30,4 +30,4 @@ Chains allow us to combine multiple components together to create a single, cohe
import GetStarted from "@snippets/modules/chains/get_started.mdx"
<GetStarted/>
<GetStarted/>

@ -8,10 +8,10 @@ Head to [Integrations](/docs/integrations/memory/) for documentation on built-in
:::
One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all.
This is a super lightweight wrapper which provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
You may want to use this class directly if you are managing memory outside of a chain.
import GetStarted from "@snippets/modules/memory/chat_messages/get_started.mdx"
<GetStarted/>
<GetStarted/>

@ -1,6 +1,6 @@
# Conversation Buffer Window
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large.
Let's first explore the basic functionality of this type of memory.
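As a minimal sketch of the sliding window (assuming the classic API), with `k=1` only the most recent interaction is kept:

```python
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
print(memory.load_memory_variables({}))  # only the last k=1 interaction remains
```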

@ -1,6 +1,6 @@
# Entity
Entity Memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
Entity memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
Let's first walk through using this functionality.
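A minimal sketch, assuming `ConversationEntityMemory` with an OpenAI LLM doing the extraction:

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory

memory = ConversationEntityMemory(llm=OpenAI(temperature=0))
_input = {"input": "Deven & Sam are working on a hackathon project"}
memory.load_memory_variables(_input)  # extracts entities mentioned in the input
memory.save_context(_input, {"output": "That sounds like a great project!"})
print(memory.entity_store.store)  # accumulated facts, keyed by entity
```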

@ -1,7 +1,7 @@
---
sidebar_position: 2
---
# Memory Types
# Memory types
There are many different types of memory.
Each has its own parameters, its own return types, and is useful in different scenarios.

@ -1,6 +1,6 @@
# Backed by a Vector Store
`VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most "salient" docs every time it is called.
`VectorStoreRetrieverMemory` stores memories in a vector store and queries the top-K most "salient" docs every time it is called.
This differs from most of the other Memory classes in that it doesn't explicitly track the order of interactions.
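A minimal sketch, assuming an already-initialized vector store (e.g. FAISS) bound to `vectorstore`:

```python
from langchain.memory import VectorStoreRetrieverMemory

# k controls how many "salient" snippets are fetched per query
retriever = vectorstore.as_retriever(search_kwargs=dict(k=2))
memory = VectorStoreRetrieverMemory(retriever=retriever)
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "noted"})
print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"])
```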

@ -11,8 +11,8 @@
"\n",
"Router chains are made up of two components:\n",
"\n",
"- The RouterChain itself (responsible for selecting the next chain to call)\n",
"- destination_chains: chains that the router chain can route to\n",
"- The `RouterChain` itself (responsible for selecting the next chain to call)\n",
"- `destination_chains`: chains that the router chain can route to\n",
"\n",
"\n",
"In this notebook, we will focus on the different types of routing chains. We will show these routing chains used in a `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt."
@ -241,7 +241,7 @@
"source": [
"## EmbeddingRouterChain\n",
"\n",
"The EmbeddingRouterChain uses embeddings and similarity to route between destination chains."
"The `EmbeddingRouterChain` uses embeddings and similarity to route between destination chains."
]
},
{
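A minimal sketch of the embedding-based router, assuming `EmbeddingRouterChain.from_names_and_descriptions` with Chroma and Cohere embeddings as in the notebook:

```python
from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Chroma

names_and_descriptions = [
    ("physics", ["for questions about physics"]),
    ("math", ["for questions about math"]),
]
# Embeds the input and routes to the destination with the most similar description
router_chain = EmbeddingRouterChain.from_names_and_descriptions(
    names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"]
)
```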

@ -9,7 +9,7 @@
"\n",
"This notebook showcases using a generic transformation chain.\n",
"\n",
"As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those."
"As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an `LLMChain` to summarize those."
]
},
{
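A minimal sketch of that transformation chain (the filtering function is illustrative):

```python
from langchain.chains import TransformChain

def transform_func(inputs: dict) -> dict:
    text = inputs["text"]
    # Keep only the first 3 paragraphs before handing off to the LLMChain
    return {"output_text": "\n\n".join(text.split("\n\n")[:3])}

transform_chain = TransformChain(
    input_variables=["text"], output_variables=["output_text"], transform=transform_func
)
```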

@ -36,7 +36,7 @@
"metadata": {},
"source": [
"## Getting structured outputs\n",
"We can take advantage of OpenAI functions to try and force the model to return a particular kind of structured output. We'll use the `create_structured_output_chain` to create our chain, which takes the desired structured output either as a Pydantic class or as JsonSchema.\n",
"We can take advantage of OpenAI functions to try and force the model to return a particular kind of structured output. We'll use `create_structured_output_chain` to create our chain, which takes the desired structured output either as a Pydantic class or as JsonSchema.\n",
"\n",
"See here for relevant [reference docs](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.base.create_structured_output_chain.html)."
]
@ -178,7 +178,7 @@
"source": [
"### Using JsonSchema\n",
"\n",
"We can also pass in JsonSchema instead of Pydantic classes to specify the desired structure. When we do this, our chain will output json corresponding to the properties described in the JsonSchema, instead of a Pydantic class."
"We can also pass in JsonSchema instead of Pydantic classes to specify the desired structure. When we do this, our chain will output JSON corresponding to the properties described in the JsonSchema, instead of a Pydantic class."
]
},
{
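A minimal sketch of the Pydantic variant, assuming a functions-capable OpenAI chat model:

```python
from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

class Person(BaseModel):
    """Identifying information about a person."""
    name: str = Field(..., description="The person's name")
    age: int = Field(..., description="The person's age")

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [("system", "Extract information from the user input."), ("human", "{input}")]
)
chain = create_structured_output_chain(Person, llm, prompt)
chain.run("Sally is 13")  # -> Person(name='Sally', age=13)
```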
@ -409,7 +409,7 @@
"id": "403ea5dd",
"metadata": {},
"source": [
"If we pass in multiple Python functions or OpenAI functions, then the returned output will be of the form\n",
"If we pass in multiple Python functions or OpenAI functions, then the returned output will be of the form:\n",
"```python\n",
"{\"name\": \"<<function_name>>\", \"arguments\": {<<function_arguments>>}}\n",
"```"
@ -471,7 +471,7 @@
"id": "5f93686b",
"metadata": {},
"source": [
"## Other Chains using OpenAI functions\n",
"## Other Chains using OpenAI functions\n",
"\n",
"There are a number of more specific chains that use OpenAI functions.\n",
"- [Extraction](/docs/modules/chains/additional/extraction): very similar to structured output chain, intended for information/entity extraction specifically.\n",

@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Serialization\n",
"This notebook covers how to serialize chains to and from disk. The serialization format we use is json or yaml. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.\n"
"This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.\n"
]
},
{
@ -15,7 +15,7 @@
"metadata": {},
"source": [
"## Saving a chain to disk\n",
"First, let's go over how to save a chain to disk. This can be done with the `.save` method, and specifying a file path with a json or yaml extension."
"First, let's go over how to save a chain to disk. This can be done with the `.save` method, and specifying a file path with a `.json` or `.yaml` extension."
]
},
{
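A minimal sketch of the round trip, assuming an existing `llm_chain` and the classic `load_chain` helper:

```python
from langchain.chains import load_chain

llm_chain.save("llm_chain.json")      # or a .yaml path
chain = load_chain("llm_chain.json")  # reconstructs the chain from disk
```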
@ -49,7 +49,7 @@
"id": "ea82665d",
"metadata": {},
"source": [
"Let's now take a look at what's inside this saved file"
"Let's now take a look at what's inside this saved file:"
]
},
{
@ -167,7 +167,7 @@
"metadata": {},
"source": [
"## Saving components separately\n",
"In the above example, we can see that the prompt and llm configuration information is saved in the same json as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify `llm_path` instead of the `llm` component, and `prompt_path` instead of the `prompt` component."
"In the above example, we can see that the prompt and LLM configuration information is saved in the same JSON as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify `llm_path` instead of the `llm` component, and `prompt_path` instead of the `prompt` component."
]
},
{
@ -296,7 +296,7 @@
"id": "662731c0",
"metadata": {},
"source": [
"We can then load it in the same way"
"We can then load it in the same way:"
]
},
{

@ -9,7 +9,8 @@
"source": [
"# Memory in LLMChain\n",
"\n",
"This notebook goes over how to use the Memory class with an LLMChain. \n",
"This notebook goes over how to use the Memory class with an `LLMChain`. \n",
"\n",
"We will add the [ConversationBufferMemory](https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html#langchain.memory.buffer.ConversationBufferMemory) class, although this can be any memory class."
]
@ -34,7 +35,7 @@
"id": "4b066ced",
"metadata": {},
"source": [
"The most important step is setting up the prompt correctly. In the below prompt, we have two input keys: one for the actual input, another for the input from the Memory class. Importantly, we make sure the keys in the PromptTemplate and the ConversationBufferMemory match up (`chat_history`)."
"The most important step is setting up the prompt correctly. In the below prompt, we have two input keys: one for the actual input, another for the input from the Memory class. Importantly, we make sure the keys in the `PromptTemplate` and the `ConversationBufferMemory` match up (`chat_history`)."
]
},
{
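A minimal sketch of the aligned keys, assuming an OpenAI LLM (note that `chat_history` appears in both the template and the memory):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
Chatbot:"""

prompt = PromptTemplate(input_variables=["chat_history", "human_input"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history")  # key matches the template
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)
llm_chain.predict(human_input="Hi there my friend")
```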
@ -162,7 +163,7 @@
"id": "33978824-0048-4e75-9431-1b2c02c169b0",
"metadata": {},
"source": [
"## Adding Memory to a Chat Model-based LLMChain\n",
"## Adding Memory to a chat model-based `LLMChain`\n",
"\n",
"The above works for completion-style `LLM`s, but if you are using a chat model, you will likely get better performance using structured chat messages. Below is an example."
]
@ -188,9 +189,9 @@
"source": [
"We will use the [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html) class to set up the chat prompt.\n",
"\n",
"The [from_messages](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html#langchain.prompts.chat.ChatPromptTemplate.from_messages) method creates a ChatPromptTemplate from a list of messages (e.g., SystemMessage, HumanMessage, AIMessage, ChatMessage, etc.) or message templates, such as the [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html#langchain.prompts.chat.MessagesPlaceholder) below.\n",
"The [from_messages](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html#langchain.prompts.chat.ChatPromptTemplate.from_messages) method creates a `ChatPromptTemplate` from a list of messages (e.g., `SystemMessage`, `HumanMessage`, `AIMessage`, `ChatMessage`, etc.) or message templates, such as the [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html#langchain.prompts.chat.MessagesPlaceholder) below.\n",
"\n",
"The configuration below makes it so the memory will be injected to the middle of the chat prompt, in the \"chat_history\" key, and the user's inputs will be added in a human/user message to the end of the chat prompt."
"The configuration below makes it so the memory will be injected to the middle of the chat prompt, in the `chat_history` key, and the user's inputs will be added in a human/user message to the end of the chat prompt."
]
},
{
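A minimal sketch of the chat-model variant, assuming the classic prompt classes (the `MessagesPlaceholder` marks where memory is injected):

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain.schema import SystemMessage

prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content="You are a chatbot having a conversation with a human."),
    MessagesPlaceholder(variable_name="chat_history"),  # memory is injected here
    HumanMessagePromptTemplate.from_template("{human_input}"),
])
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI()
```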

@ -14,8 +14,8 @@
"\n",
"In order to add a memory to an agent we are going to the the following steps:\n",
"\n",
"1. We are going to create an LLMChain with memory.\n",
"2. We are going to use that LLMChain to create a custom Agent.\n",
"1. We are going to create an `LLMChain` with memory.\n",
"2. We are going to use that `LLMChain` to create a custom Agent.\n",
"\n",
"For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the `ConversationBufferMemory` class."
]
@ -55,7 +55,7 @@
"id": "4ad2e708",
"metadata": {},
"source": [
"Notice the usage of the `chat_history` variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory."
"Notice the usage of the `chat_history` variable in the `PromptTemplate`, which matches up with the dynamic key name in the `ConversationBufferMemory`."
]
},
{
@ -86,7 +86,7 @@
"id": "0021675b",
"metadata": {},
"source": [
"We can now construct the LLMChain, with the Memory object, and then create the agent."
"We can now construct the `LLMChain`, with the Memory object, and then create the agent."
]
},
{
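A minimal sketch of the two steps, assuming `prompt` and `tools` are set up as in the surrounding cells:

```python
from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)  # step 1: LLMChain with memory-aware prompt
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)         # step 2: custom agent from that chain
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, memory=memory, verbose=True
)
```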

@ -63,7 +63,7 @@
"id": "4ad2e708",
"metadata": {},
"source": [
"Notice the usage of the `chat_history` variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory."
"Notice the usage of the `chat_history` variable in the `PromptTemplate`, which matches up with the dynamic key name in the `ConversationBufferMemory`."
]
},
{
@ -93,7 +93,7 @@
"id": "6d60bbd5",
"metadata": {},
"source": [
"Now we can create the ChatMessageHistory backed by the database."
"Now we can create the `RedisChatMessageHistory` backed by the database."
]
},
{
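A minimal sketch, assuming a Redis instance on localhost (the `session_id` is illustrative):

```python
from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory

message_history = RedisChatMessageHistory(
    url="redis://localhost:6379/0", ttl=600, session_id="my-session"
)
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=message_history  # persisted in Redis
)
```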
@ -117,7 +117,7 @@
"id": "0021675b",
"metadata": {},
"source": [
"We can now construct the LLMChain, with the Memory object, and then create the agent."
"We can now construct the `LLMChain`, with the Memory object, and then create the agent."
]
},
{

@ -30,7 +30,7 @@
"id": "fe3cd3e9",
"metadata": {},
"source": [
"## AI Prefix\n",
"## AI prefix\n",
"\n",
"The first way to do so is by changing the AI prefix in the conversation summary. By default, this is set to \"AI\", but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that in the example below."
]
@ -238,7 +238,7 @@
"id": "0517ccf8",
"metadata": {},
"source": [
"## Human Prefix\n",
"## Human prefix\n",
"\n",
"The next way to do so is by changing the Human prefix in the conversation summary. By default, this is set to \"Human\", but you can set this to be anything you want. Note that if you change this, you should also change the prompt used in the chain to reflect this naming change. Let's walk through an example of that in the example below."
]
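A minimal sketch of both knobs (remember to update the prompt template to match):

```python
from langchain.memory import ConversationBufferMemory

# The prompt used with these memories should say "AI Assistant:" / "Friend:" accordingly
memory = ConversationBufferMemory(ai_prefix="AI Assistant")
memory = ConversationBufferMemory(human_prefix="Friend")
```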

@ -36,11 +36,11 @@
"id": "9489e5e1",
"metadata": {},
"source": [
"In this example, we will write a custom memory class that uses spacy to extract entities and save information about them in a simple hash table. Then, during the conversation, we will look at the input text, extract any entities, and put any information about them into the context.\n",
"In this example, we will write a custom memory class that uses spaCy to extract entities and save information about them in a simple hash table. Then, during the conversation, we will look at the input text, extract any entities, and put any information about them into the context.\n",
"\n",
"* Please note that this implementation is pretty simple and brittle and probably not useful in a production setting. Its purpose is to showcase that you can add custom memory implementations.\n",
"\n",
"For this, we will need spacy."
"For this, we will need spaCy."
]
},
{
@ -91,7 +91,7 @@
"\n",
" def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n",
" \"\"\"Load the memory variables, in this case the entity key.\"\"\"\n",
" # Get the input text and run through spacy\n",
" # Get the input text and run through spaCy\n",
" doc = nlp(inputs[list(inputs.keys())[0]])\n",
" # Extract known information about entities, if they exist.\n",
" entities = [\n",
@ -102,7 +102,7 @@
"\n",
" def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n",
" \"\"\"Save context from this conversation to buffer.\"\"\"\n",
" # Get the input text and run through spacy\n",
" # Get the input text and run through spaCy\n",
" text = inputs[list(inputs.keys())[0]]\n",
" doc = nlp(text)\n",
" # For each entity that was mentioned, save this information to the dictionary.\n",
@ -119,7 +119,7 @@
"id": "429ba264",
"metadata": {},
"source": [
"We now define a prompt that takes in information about entities as well as user input"
"We now define a prompt that takes in information about entities as well as user input."
]
},
{
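A rough sketch of such a prompt (variable names are illustrative; the custom memory class above supplies the entity information):

```python
from langchain.prompts import PromptTemplate

template = """The following is a friendly conversation between a human and an AI.

Relevant entity information:
{entities}

Conversation:
Human: {input}
AI:"""
prompt = PromptTemplate(input_variables=["entities", "input"], template=template)
```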

@ -109,7 +109,7 @@
"id": "dc956b0e",
"metadata": {},
"source": [
"We can also more modularly get current entities from a new message (will use previous messages as context.)"
"We can also more modularly get current entities from a new message (will use previous messages as context)."
]
},
{
@ -138,7 +138,7 @@
"id": "e8749134",
"metadata": {},
"source": [
"We can also more modularly get knowledge triplets from a new message (will use previous messages as context.)"
"We can also more modularly get knowledge triplets from a new message (will use previous messages as context)."
]
},
{
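A minimal sketch of those modular calls, assuming `ConversationKGMemory` with an OpenAI LLM:

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})
memory.get_current_entities("what do you know about sam?")  # entities, with history as context
memory.get_knowledge_triplets("her favorite color is red")  # (subject, predicate, object) triples
```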

@ -10,7 +10,7 @@
"`ConversationSummaryBufferMemory` combines the two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. \n",
"It uses token length rather than number of interactions to determine when to flush interactions.\n",
"\n",
"Let's first walk through how to use the utilities"
"Let's first walk through how to use the utilities."
]
},
{
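A minimal sketch, assuming an OpenAI LLM for the summarization step:

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
# Turns that overflow the token limit are folded into a running summary
memory.load_memory_variables({})
```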

@ -9,7 +9,7 @@
"\n",
"`ConversationTokenBufferMemory` keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.\n",
"\n",
"Let's first walk through how to use the utilities"
"Let's first walk through how to use the utilities."
]
},
{
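A minimal sketch, assuming an OpenAI LLM for token counting:

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=30)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})  # interactions beyond the token limit are dropped
```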

@ -19,7 +19,7 @@ llm_chain("colorful socks")
</CodeOutputBlock>
## Additional ways of running LLM Chain
## Additional ways of running `LLMChain`
Aside from the `__call__` and `run` methods shared by all `Chain` objects, `LLMChain` offers a few more ways of calling the chain logic:
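For instance, `predict` takes the chain inputs as keyword arguments rather than a dict:

```python
llm_chain.predict(product="colorful socks")
```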
@ -139,7 +139,7 @@ llm_chain.predict_and_parse()
## Initialize from string
You can also construct an LLMChain from a string template directly.
You can also construct an `LLMChain` from a string template directly.
```python

@ -89,7 +89,7 @@ print(review)
## Sequential Chain
Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs.
Of particular importance is how we name the input/output variable names. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have worry about that because we have multiple inputs.
Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about that because we have multiple inputs.
```python
@ -158,7 +158,7 @@ overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian Engla
### Memory in Sequential Chains
Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using `SimpleMemory` is a convenient way to manage this and clean up your chains.
For example, using the previous playwright SequentialChain, lets say you wanted to include some context about date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as `input_variables`, or we can add a `SimpleMemory` to the chain to manage this context:
For example, using the previous playwright `SequentialChain`, let's say you wanted to include some context about the date, time, and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as `input_variables`, or we can add a `SimpleMemory` to the chain to manage this context:
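A minimal sketch of attaching `SimpleMemory`, assuming the `synopsis_chain` and `review_chain` from this walkthrough plus an illustrative `social_chain`:

```python
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

overall_chain = SequentialChain(
    memory=SimpleMemory(memories={"time": "December 25th, 8pm PST",
                                  "location": "Theater in the Park"}),
    chains=[synopsis_chain, review_chain, social_chain],
    input_variables=["era", "title"],
    output_variables=["social_post_text"],
    verbose=True,
)  # the memory keys are available to every step without being threaded through
```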

@ -1,5 +1,5 @@
Let's take a look at how to use ConversationBufferMemory in chains.
ConversationBufferMemory is an extremely simple form of memory that just keeps a list of chat messages in a buffer
Let's take a look at how to use `ConversationBufferMemory` in chains.
`ConversationBufferMemory` is an extremely simple form of memory that just keeps a list of chat messages in a buffer
and passes those into the prompt template.
```python
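# A minimal sketch (assuming the classic API): the buffer memory slots into an
# LLMChain whose prompt declares a matching {chat_history} variable.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "You are a nice chatbot.\n\n{chat_history}\nHuman: {question}\nChatbot:"
)
memory = ConversationBufferMemory(memory_key="chat_history")
conversation = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)
```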
@ -16,7 +16,7 @@ Each individual memory type may very well have its own parameters and concepts t
### What variables get returned from memory
Before going into the chain, various variables are read from memory.
This have specific names which need to align with the variables the chain expects.
These have specific names which need to align with the variables the chain expects.
You can see what these variables are by calling `memory.load_memory_variables({})`.
Note that the empty dictionary that we pass in is just a placeholder for real variables.
If the memory type you are using is dependent upon the input variables, you may need to pass some in.
@ -34,7 +34,7 @@ memory.load_memory_variables({})
</CodeOutputBlock>
In this case, you can see that `load_memory_variables` returns a single key, `history`.
This means that your chain (and likely your prompt) should expect and input named `history`.
This means that your chain (and likely your prompt) should expect an input named `history`.
You can usually control this variable through parameters on the memory class.
For example, if you want the memory variables to be returned in the key `chat_history` you can do:
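A minimal sketch, assuming the buffer memory used above:

```python
memory = ConversationBufferMemory(memory_key="chat_history")
memory.load_memory_variables({})  # -> {'chat_history': '...'}
```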
@ -51,12 +51,12 @@ memory.chat_memory.add_ai_message("whats up?")
</CodeOutputBlock>
The parameter name to control these keys may vary per memory type, but it's important to understand that (1) this is controllable, (2) how to control it.
The parameter name to control these keys may vary per memory type, but it's important to understand that (1) this is controllable, and (2) how to control it.
### Whether memory is a string or a list of messages
One of the most common types of memory involves returning a list of chat messages.
These can either be returned as a single string, all concatenated together (useful when they will be passed in LLMs)
These can either be returned as a single string, all concatenated together (useful when they will be passed into LLMs)
or a list of ChatMessages (useful when passed into ChatModels).
By default, they are returned as a single string.
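A minimal sketch of the toggle, assuming the buffer memory from above:

```python
memory = ConversationBufferMemory()                      # history as one string
memory = ConversationBufferMemory(return_messages=True)  # history as ChatMessages
```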
@ -81,13 +81,13 @@ memory.chat_memory.add_ai_message("whats up?")
Oftentimes chains take in or return multiple input/output keys.
In these cases, how can we know which keys we want to save to the chat message history?
This is generally controllable by `input_key` and `output_key` parameters on the memory types.
These default to None - and if there is only one input/output key it is known to just use that.
However, if there are multiple input/output keys then you MUST specify the name of which one to use
These default to `None`; if there is only one input/output key, the memory knows to just use that one.
However, if there are multiple input/output keys then you MUST specify the name of which one to use.
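A minimal sketch (the key names here are hypothetical):

```python
# Explicitly pick which keys to record when a chain has several
memory = ConversationBufferMemory(input_key="question", output_key="answer")
```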
### End to end example
Finally, let's take a look at using this in a chain.
We'll use an LLMChain, and show working with both an LLM and a ChatModel.
We'll use an `LLMChain`, and show working with both an LLM and a ChatModel.
#### Using an LLM

@ -153,5 +153,3 @@ conversation.predict(input="Tell me about yourself.")
```
</CodeOutputBlock>
And that's it for getting started! There are plenty of different types of memory; check out our examples to see them all.

@ -62,7 +62,7 @@ memory.predict_new_summary(messages, previous_summary)
## Initializing with messages/existing summary
If you have messages outside this class, you can easily initialize the class with ChatMessageHistory. During loading, a summary will be calculated.
If you have messages outside this class, you can easily initialize the class with `ChatMessageHistory`. During loading, a summary will be calculated.
```python
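# A minimal sketch (assuming the classic API): build the summary memory from an
# existing ChatMessageHistory; the summary is computed while loading.
from langchain.llms import OpenAI
from langchain.memory import ChatMessageHistory, ConversationSummaryMemory

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")
memory = ConversationSummaryMemory.from_messages(
    llm=OpenAI(temperature=0), chat_memory=history, return_messages=True
)
```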

@ -7,9 +7,9 @@ from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
```
### Initialize your VectorStore
### Initialize your vector store
Depending on the store you choose, this step may look different. Consult the relevant VectorStore documentation for more details.
Depending on the store you choose, this step may look different. Consult the relevant vector store documentation for more details.
```python
@ -25,9 +25,9 @@ embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
```
### Create your the VectorStoreRetrieverMemory
### Create your `VectorStoreRetrieverMemory`
The memory object is instantiated from any VectorStoreRetriever.
The memory object is instantiated from any vector store retriever.
```python
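# A minimal sketch (assuming the `vectorstore` initialized above):
from langchain.memory import VectorStoreRetrieverMemory

retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)
```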
