mirror of
https://github.com/hwchase17/langchain
synced 2024-11-06 03:20:49 +00:00
docs: memory types menu (#9949)

The [Memory Types](https://python.langchain.com/docs/modules/memory/types/) menu is clogged with unnecessary wording. I've made it more concise by simplifying the titles of the example notebooks. As a result, the menu is shorter and easier to comprehend.
This commit is contained in:
commit
f7cc125cac
@@ -1,4 +1,4 @@
-# Conversation buffer memory
+# Conversation Buffer
 
 This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing of messages and then extracts the messages in a variable.
 
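The buffer idea above can be sketched in plain Python. This is a hypothetical `BufferMemory` class for illustration only, not LangChain's actual implementation:

```python
# Minimal sketch of buffer memory (hypothetical class): store every
# message verbatim and extract the whole history into one variable.
class BufferMemory:
    def __init__(self):
        self.messages = []  # (role, text) pairs, oldest first

    def save_context(self, human_input, ai_output):
        # Append one human/AI exchange to the buffer.
        self.messages.append(("Human", human_input))
        self.messages.append(("AI", ai_output))

    def load_memory_variables(self):
        # Extract the stored messages into a single "history" variable.
        history = "\n".join(f"{role}: {text}" for role, text in self.messages)
        return {"history": history}


memory = BufferMemory()
memory.save_context("hi", "hello")
print(memory.load_memory_variables()["history"])  # → Human: hi\nAI: hello
```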
@@ -1,4 +1,4 @@
-# Conversation buffer window memory
+# Conversation Buffer Window
 
 `ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large
 
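The sliding-window behaviour can be sketched with a bounded deque. Again a hypothetical class, not the library's code:

```python
# Sketch of window memory (hypothetical): keep only the last k
# human/AI interactions so the buffer stays bounded.
from collections import deque


class BufferWindowMemory:
    def __init__(self, k=2):
        # deque with maxlen=k drops the oldest interaction automatically
        self.interactions = deque(maxlen=k)

    def save_context(self, human_input, ai_output):
        self.interactions.append((human_input, ai_output))

    def load_memory_variables(self):
        lines = []
        for human, ai in self.interactions:
            lines.append(f"Human: {human}")
            lines.append(f"AI: {ai}")
        return {"history": "\n".join(lines)}


memory = BufferWindowMemory(k=1)
memory.save_context("hi", "hello")
memory.save_context("how are you?", "fine")
# With k=1 only the most recent interaction survives the window.
```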
@@ -1,4 +1,4 @@
-# Entity memory
+# Entity
 
 Entity Memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
 
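The per-entity bookkeeping can be sketched as a dictionary keyed by entity name. In the real design an LLM does the extraction and merging; this hypothetical class just appends facts:

```python
# Sketch of entity memory (hypothetical): accumulate facts per entity.
# The real version uses an LLM to extract entities and fold new facts
# into a running summary; here we simply collect them.
class EntityMemory:
    def __init__(self):
        self.store = {}  # entity name -> list of accumulated facts

    def update(self, entity, fact):
        # Stand-in for the LLM merge step: append the new fact.
        self.store.setdefault(entity, []).append(fact)

    def facts_about(self, entity):
        return self.store.get(entity, [])


memory = EntityMemory()
memory.update("Sam", "is working on a hackathon project")
memory.update("Sam", "is friends with Deven")
```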
@@ -4,5 +4,5 @@ sidebar_position: 2
 # Memory Types
 
 There are many different types of memory.
-Each have their own parameters, their own return types, and are useful in different scenarios.
+Each has their own parameters, their own return types, and is useful in different scenarios.
 Please see their individual page for more detail on each one.
@@ -1,4 +1,4 @@
-# Conversation summary memory
+# Conversation Summary
 Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation over time. This can be useful for condensing information from the conversation over time.
 Conversation summary memory summarizes the conversation as it happens and stores the current summary in memory. This memory can then be used to inject the summary of the conversation so far into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.
 
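The summarize-as-you-go loop can be sketched with a stand-in summarizer. The `naive_summarize` function below is a placeholder for the LLM call, and the class is hypothetical:

```python
# Sketch of summary memory (hypothetical): fold each new exchange into a
# running summary and store only the summary, never the raw transcript.
def naive_summarize(summary, human, ai):
    # Placeholder for the LLM summarization call.
    addition = f"The human said '{human}' and the AI replied '{ai}'."
    return (summary + " " + addition).strip()


class SummaryMemory:
    def __init__(self, summarizer=naive_summarize):
        self.summarizer = summarizer
        self.summary = ""  # current summary of the whole conversation

    def save_context(self, human, ai):
        # Re-summarize as the conversation happens; only the summary
        # (one string, cheap in tokens) is ever injected into a prompt.
        self.summary = self.summarizer(self.summary, human, ai)


memory = SummaryMemory()
memory.save_context("hi", "hello")
memory.save_context("bye", "see you later")
```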
@@ -1,4 +1,4 @@
-# Vector store-backed memory
+# Backed by a Vector Store
 
 `VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most "salient" docs every time it is called.
 
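The top-K salience lookup can be sketched without a vector database by ranking stored texts with cosine similarity over bag-of-words counts. Hypothetical class; a real setup would use embeddings and a vector store:

```python
# Sketch of vector-backed memory (hypothetical): embed each memory as a
# bag-of-words vector and return the top-k most similar ("salient") docs.
import math
from collections import Counter


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class VectorMemory:
    def __init__(self, k=1):
        self.k = k
        self.docs = []  # (text, word-count vector) pairs

    def save(self, text):
        self.docs.append((text, Counter(text.lower().split())))

    def query(self, text):
        # Rank all stored memories by similarity to the query.
        q = Counter(text.lower().split())
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[: self.k]]


memory = VectorMemory(k=1)
memory.save("my favorite food is pizza")
memory.save("the car is blue")
```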
@@ -5,11 +5,17 @@
    "id": "44c9933a",
    "metadata": {},
    "source": [
-    "# Conversation Knowledge Graph Memory\n",
+    "# Conversation Knowledge Graph\n",
     "\n",
-    "This type of memory uses a knowledge graph to recreate memory.\n",
-    "\n",
-    "Let's first walk through how to use the utilities"
+    "This type of memory uses a knowledge graph to recreate memory.\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0c798006-ca04-4de3-83eb-cf167fb2bd01",
+   "metadata": {},
+   "source": [
+    "## Using memory with LLM"
    ]
   },
   {
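The knowledge-graph idea in this notebook can be sketched as a store of (subject, relation, object) triples that recreates what is known about an entity on demand. Hypothetical class, not LangChain's `ConversationKGMemory`:

```python
# Sketch of knowledge-graph memory (hypothetical): store triples and
# recreate memory about a subject as simple sentences.
class KGMemory:
    def __init__(self):
        self.triples = []  # (subject, relation, object)

    def add_triple(self, subject, relation, obj):
        # In the real design an LLM extracts triples from the conversation.
        self.triples.append((subject, relation, obj))

    def about(self, subject):
        # Recreate memory for one entity from its stored triples.
        return [f"{s} {r} {o}" for s, r, o in self.triples if s == subject]


kg = KGMemory()
kg.add_triple("Sam", "is a", "friend")
kg.add_triple("Sam", "lives in", "Berlin")
```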
@@ -162,6 +168,7 @@
    "metadata": {},
    "source": [
     "## Using in a chain\n",
+    "\n",
     "Let's now use this in a chain!"
    ]
   },
@@ -348,7 +355,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.12"
   }
  },
 "nbformat": 4,
@@ -5,13 +5,22 @@
    "id": "ff4be5f3",
    "metadata": {},
    "source": [
-    "# ConversationSummaryBufferMemory\n",
+    "# Conversation Summary Buffer\n",
     "\n",
-    "`ConversationSummaryBufferMemory` combines the last two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.\n",
+    "`ConversationSummaryBufferMemory` combines the two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. \n",
+    "It uses token length rather than number of interactions to determine when to flush interactions.\n",
     "\n",
     "Let's first walk through how to use the utilities"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "0309636e-a530-4d2a-ba07-0916ea18bb20",
+   "metadata": {},
+   "source": [
+    "## Using memory with LLM"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 1,
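The summary-buffer combination described in this notebook can be sketched as a verbatim buffer plus a running summary, with flushing driven by a token budget. Hypothetical class; the whitespace token count and the string-building "summary" stand in for a real tokenizer and an LLM call:

```python
# Sketch of summary-buffer memory (hypothetical): keep recent exchanges
# verbatim; once over the token budget, fold the oldest ones into a
# running summary instead of discarding them.
class SummaryBufferMemory:
    def __init__(self, max_tokens=10):
        self.max_tokens = max_tokens
        self.buffer = []   # recent (human, ai) exchanges, verbatim
        self.summary = ""  # condensed form of flushed exchanges

    @staticmethod
    def _tokens(text):
        # Crude whitespace count stands in for a real tokenizer.
        return len(text.split())

    def _buffer_tokens(self):
        return sum(self._tokens(h) + self._tokens(a) for h, a in self.buffer)

    def save_context(self, human, ai):
        self.buffer.append((human, ai))
        # Flush by token length, not by number of interactions.
        while self._buffer_tokens() > self.max_tokens and len(self.buffer) > 1:
            h, a = self.buffer.pop(0)
            addition = f"Human said '{h}'; AI said '{a}'."
            self.summary = (self.summary + " " + addition).strip()


memory = SummaryBufferMemory(max_tokens=4)
memory.save_context("hi there", "hello friend")
memory.save_context("what now", "we talk")
# The first exchange is compiled into the summary; the last stays verbatim.
```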
@@ -320,7 +329,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.12"
   }
  },
 "nbformat": 4,
@@ -5,13 +5,21 @@
    "id": "ff4be5f3",
    "metadata": {},
    "source": [
-    "# ConversationTokenBufferMemory\n",
+    "# Conversation Token Buffer\n",
     "\n",
     "`ConversationTokenBufferMemory` keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.\n",
     "\n",
     "Let's first walk through how to use the utilities"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "0e528ef0-7b04-4a4a-8ff2-493c02027e83",
+   "metadata": {},
+   "source": [
+    "## Using memory with LLM"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 1,
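The token-buffer idea differs from the window buffer only in its eviction rule, and can be sketched the same way. Hypothetical class, with whitespace splitting standing in for a real tokenizer:

```python
# Sketch of token-buffer memory (hypothetical): like the window buffer,
# but old interactions are dropped by token length, not by count.
class TokenBufferMemory:
    def __init__(self, max_tokens=10):
        self.max_tokens = max_tokens
        self.buffer = []  # (human, ai) exchanges, oldest first

    @staticmethod
    def _tokens(text):
        # Whitespace split as a stand-in for a real tokenizer.
        return len(text.split())

    def save_context(self, human, ai):
        self.buffer.append((human, ai))
        # Drop the oldest exchanges until the buffer fits the budget;
        # unlike the summary buffer, flushed exchanges are simply lost.
        while (sum(self._tokens(h) + self._tokens(a) for h, a in self.buffer)
               > self.max_tokens and len(self.buffer) > 1):
            self.buffer.pop(0)


memory = TokenBufferMemory(max_tokens=4)
memory.save_context("hi there", "hello friend")
memory.save_context("what now", "we talk")
# Only the most recent exchange fits the 4-token budget.
```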
@@ -286,7 +294,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.12"
   }
  },
 "nbformat": 4,