diff --git a/docs/examples/memory.rst b/docs/examples/memory.rst
index ddab6c7b..ecce7c88 100644
--- a/docs/examples/memory.rst
+++ b/docs/examples/memory.rst
@@ -3,7 +3,9 @@ Memory
 
 The examples here all highlight how to use memory in different ways.
 
-`Adding Memory `_: How to add a memory component to any chain.
+`Adding Memory `_: How to add a memory component to any single input chain.
+
+`Adding Memory to Multi-Input Chain `_: How to add a memory component to any multiple input chain.
 
 `Conversational Memory Types `_: An overview of the different types of conversation memory you can load and use with a conversation-like chain.
 
diff --git a/docs/examples/memory/conversational_memory.ipynb b/docs/examples/memory/conversational_memory.ipynb
index 154ba982..249fa6e4 100644
--- a/docs/examples/memory/conversational_memory.ipynb
+++ b/docs/examples/memory/conversational_memory.ipynb
@@ -312,7 +312,7 @@
    "id": "6eecf9d9",
    "metadata": {},
    "source": [
-    "# ConversationBufferWindowMemory\n",
+    "### ConversationBufferWindowMemory\n",
     "\n",
     "`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large\n",
     "\n",
@@ -504,7 +504,7 @@
    "id": "a6d2569f",
    "metadata": {},
    "source": [
-    "# ConversationSummaryBufferMemory\n",
+    "### ConversationSummaryBufferMemory\n",
     "\n",
     "`ConversationSummaryBufferMemory` combines the last two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.\n",
     "\n",
diff --git a/pyproject.toml b/pyproject.toml
index 5d833d34..07e1ad26 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "langchain"
-version = "0.0.49"
+version = "0.0.50"
 description = "Building applications with LLMs through composability"
 authors = []
 license = "MIT"
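The `ConversationBufferWindowMemory` description above (keep only the last K interactions so the buffer stays bounded) can be sketched as follows. This is a toy illustration of the sliding-window idea, not the langchain API; the class and method names (`BufferWindowMemory`, `save_context`, `load_buffer`) are made up for the example.

```python
class BufferWindowMemory:
    """Toy sketch of a sliding-window conversation buffer.

    Keeps only the last k human/AI interaction pairs when rendering
    the conversation history, so the buffer cannot grow without bound.
    Illustrative only -- not the langchain implementation.
    """

    def __init__(self, k=2):
        self.k = k
        self.interactions = []  # list of (human, ai) pairs, oldest first

    def save_context(self, human, ai):
        # Record every interaction; trimming happens at read time.
        self.interactions.append((human, ai))

    def load_buffer(self):
        # Only the last k interactions are rendered into the prompt.
        recent = self.interactions[-self.k:]
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in recent)


memory = BufferWindowMemory(k=1)
memory.save_context("Hi!", "Hello! How can I help?")
memory.save_context("What is LangChain?", "A library for composing LLM apps.")
print(memory.load_buffer())  # only the most recent interaction survives
```

With `k=1`, the first interaction is silently dropped from the rendered history; larger `k` trades prompt length for more context.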
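The `ConversationSummaryBufferMemory` description above (flush by token length, folding old interactions into a summary rather than discarding them) can be sketched like this. Everything here is illustrative: the class name, the word-count stand-in for token length, and the `summarize` stub (which would be an LLM call in practice) are all assumptions, not the langchain API.

```python
class SummaryBufferMemory:
    """Toy sketch of a summary-buffer memory.

    Recent interactions are kept verbatim; when the buffer exceeds
    max_tokens (approximated here by word count), the oldest interactions
    are folded into a running summary instead of being discarded.
    Illustrative only -- not the langchain implementation.
    """

    def __init__(self, max_tokens=10):
        self.max_tokens = max_tokens
        self.summary = ""
        self.buffer = []  # recent (human, ai) pairs kept verbatim

    def _token_len(self):
        # Crude proxy for token length: total word count of the buffer.
        return sum(len((h + " " + a).split()) for h, a in self.buffer)

    def summarize(self, summary, interactions):
        # Stand-in for an LLM summarization call.
        flushed = "; ".join(f"human asked '{h}'" for h, _ in interactions)
        return (summary + " " + flushed).strip()

    def save_context(self, human, ai):
        self.buffer.append((human, ai))
        flushed = []
        # Flush oldest interactions until the verbatim buffer fits the budget.
        while self._token_len() > self.max_tokens and len(self.buffer) > 1:
            flushed.append(self.buffer.pop(0))
        if flushed:
            self.summary = self.summarize(self.summary, flushed)
```

Unlike the pure window approach, nothing is lost outright: old turns survive in compressed form in `self.summary`, while the newest turns stay verbatim in `self.buffer`.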