diff --git a/docs/docs_skeleton/docs/modules/memory/types/buffer.mdx b/docs/docs_skeleton/docs/modules/memory/types/buffer.mdx
index 51ef409d75..d417b63174 100644
--- a/docs/docs_skeleton/docs/modules/memory/types/buffer.mdx
+++ b/docs/docs_skeleton/docs/modules/memory/types/buffer.mdx
@@ -1,4 +1,4 @@
-# Conversation buffer memory
+# Conversation Buffer
 
 This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing of messages and then extracts the messages in a variable.
 
diff --git a/docs/docs_skeleton/docs/modules/memory/types/buffer_window.mdx b/docs/docs_skeleton/docs/modules/memory/types/buffer_window.mdx
index fab7ed42ba..465e918b0a 100644
--- a/docs/docs_skeleton/docs/modules/memory/types/buffer_window.mdx
+++ b/docs/docs_skeleton/docs/modules/memory/types/buffer_window.mdx
@@ -1,4 +1,4 @@
-# Conversation buffer window memory
+# Conversation Buffer Window
 
 `ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large
 
diff --git a/docs/docs_skeleton/docs/modules/memory/types/entity_summary_memory.mdx b/docs/docs_skeleton/docs/modules/memory/types/entity_summary_memory.mdx
index 5387cf575a..e3dc63b6dd 100644
--- a/docs/docs_skeleton/docs/modules/memory/types/entity_summary_memory.mdx
+++ b/docs/docs_skeleton/docs/modules/memory/types/entity_summary_memory.mdx
@@ -1,4 +1,4 @@
-# Entity memory
+# Entity
 
 Entity Memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge about that entity over time (also using an LLM).
 
diff --git a/docs/docs_skeleton/docs/modules/memory/types/index.mdx b/docs/docs_skeleton/docs/modules/memory/types/index.mdx
index c9f29673f6..5caaa4e3f7 100644
--- a/docs/docs_skeleton/docs/modules/memory/types/index.mdx
+++ b/docs/docs_skeleton/docs/modules/memory/types/index.mdx
@@ -4,5 +4,5 @@ sidebar_position: 2
 # Memory Types
 
 There are many different types of memory.
-Each have their own parameters, their own return types, and are useful in different scenarios.
+Each has its own parameters, its own return types, and is useful in different scenarios.
 Please see their individual page for more detail on each one.
diff --git a/docs/docs_skeleton/docs/modules/memory/types/summary.mdx b/docs/docs_skeleton/docs/modules/memory/types/summary.mdx
index 330bd2b59b..7d39b44e2b 100644
--- a/docs/docs_skeleton/docs/modules/memory/types/summary.mdx
+++ b/docs/docs_skeleton/docs/modules/memory/types/summary.mdx
@@ -1,4 +1,4 @@
-# Conversation summary memory
+# Conversation Summary
 
 Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation over time. This can be useful for condensing information from the conversation over time. Conversation summary memory summarizes the conversation as it happens and stores the current summary in memory. This memory can then be used to inject the summary of the conversation so far into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.
 
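The sliding-window behaviour that `buffer_window.mdx` describes above is easiest to see in code. Below is a minimal sketch, assuming the `langchain.memory` import path these docs use; the printed history string is indicative of the format, not an exact transcript.

```python
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k=1 interaction in the returned history.
memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})

# Only the most recent exchange survives the window.
print(memory.load_memory_variables({}))
# e.g. {'history': 'Human: not much you\nAI: not much'}
```

Note that the window is applied when the history is loaded, so older turns are still stored on the memory object; they are simply not injected into the prompt.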
diff --git a/docs/docs_skeleton/docs/modules/memory/types/vectorstore_retriever_memory.mdx b/docs/docs_skeleton/docs/modules/memory/types/vectorstore_retriever_memory.mdx
index 6f71e624b3..76c05cd95d 100644
--- a/docs/docs_skeleton/docs/modules/memory/types/vectorstore_retriever_memory.mdx
+++ b/docs/docs_skeleton/docs/modules/memory/types/vectorstore_retriever_memory.mdx
@@ -1,4 +1,4 @@
-# Vector store-backed memory
+# Backed by a Vector Store
 
 `VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most "salient" docs every time it is called.
 
diff --git a/docs/extras/modules/memory/types/kg.ipynb b/docs/extras/modules/memory/types/kg.ipynb
index 3c0f45d076..1d8c27ba20 100644
--- a/docs/extras/modules/memory/types/kg.ipynb
+++ b/docs/extras/modules/memory/types/kg.ipynb
@@ -5,11 +5,17 @@
    "id": "44c9933a",
    "metadata": {},
    "source": [
-    "# Conversation Knowledge Graph Memory\n",
+    "# Conversation Knowledge Graph\n",
     "\n",
-    "This type of memory uses a knowledge graph to recreate memory.\n",
-    "\n",
-    "Let's first walk through how to use the utilities"
+    "This type of memory uses a knowledge graph to recreate memory.\n"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "0c798006-ca04-4de3-83eb-cf167fb2bd01",
+   "metadata": {},
+   "source": [
+    "## Using memory with LLM"
+   ]
+  },
   {
@@ -162,6 +168,7 @@
    "metadata": {},
    "source": [
     "## Using in a chain\n",
+    "\n",
     "Let's now use this in a chain!"
    ]
   },
@@ -348,7 +355,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.12"
   }
  },
  "nbformat": 4,
diff --git a/docs/extras/modules/memory/types/summary_buffer.ipynb b/docs/extras/modules/memory/types/summary_buffer.ipynb
index 570361e080..b27122905e 100644
--- a/docs/extras/modules/memory/types/summary_buffer.ipynb
+++ b/docs/extras/modules/memory/types/summary_buffer.ipynb
@@ -5,13 +5,22 @@
    "id": "ff4be5f3",
    "metadata": {},
    "source": [
-    "# ConversationSummaryBufferMemory\n",
+    "# Conversation Summary Buffer\n",
     "\n",
-    "`ConversationSummaryBufferMemory` combines the last two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.\n",
+    "`ConversationSummaryBufferMemory` combines the two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. \n",
\n", + "It uses token length rather than number of interactions to determine when to flush interactions.\n", "\n", "Let's first walk through how to use the utilities" ] }, + { + "cell_type": "markdown", + "id": "0309636e-a530-4d2a-ba07-0916ea18bb20", + "metadata": {}, + "source": [ + "## Using memory with LLM" + ] + }, { "cell_type": "code", "execution_count": 1, @@ -320,7 +329,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/modules/memory/types/token_buffer.ipynb b/docs/extras/modules/memory/types/token_buffer.ipynb index ba26ef79ca..73c39b4c84 100644 --- a/docs/extras/modules/memory/types/token_buffer.ipynb +++ b/docs/extras/modules/memory/types/token_buffer.ipynb @@ -5,13 +5,21 @@ "id": "ff4be5f3", "metadata": {}, "source": [ - "# ConversationTokenBufferMemory\n", + "# Conversation Token Buffer\n", "\n", "`ConversationTokenBufferMemory` keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.\n", "\n", "Let's first walk through how to use the utilities" ] }, + { + "cell_type": "markdown", + "id": "0e528ef0-7b04-4a4a-8ff2-493c02027e83", + "metadata": {}, + "source": [ + "## Using memory with LLM" + ] + }, { "cell_type": "code", "execution_count": 1, @@ -286,7 +294,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4,