{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Memory management\n",
"\n",
"A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:\n",
"\n",
"- Simply stuffing previous messages into a chat model prompt.\n",
"- The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.\n",
"- More complex modifications like synthesizing summaries for long running conversations.\n",
"\n",
"We'll go into more detail on a few techniques below!\n",
"\n",
"## Setup\n",
"\n",
"You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's also set up a chat model that we'll use for the below examples."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Message passing\n",
"\n",
"The simplest form of memory is simply passing chat history messages into a chain. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said \"J\\'adore la programmation,\" which means \"I love programming\" in French.')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" ),\n",
" AIMessage(content=\"J'adore la programmation.\"),\n",
" HumanMessage(content=\"What did you just say?\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that by passing the previous conversation into a chain, it can use it as context to answer questions. This is the basic concept underpinning chatbot memory - the rest of the guide will demonstrate convenient techniques for passing or reformatting messages.\n",
"\n",
"## Chat history\n",
"\n",
"It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](/docs/modules/memory/chat_messages/) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n",
"\n",
"Here's an example of the API:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='Translate this sentence from English to French: I love programming.'),\n",
" AIMessage(content=\"J'adore la programmation.\")]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.memory import ChatMessageHistory\n",
"\n",
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\n",
" \"Translate this sentence from English to French: I love programming.\"\n",
")\n",
"\n",
"demo_ephemeral_chat_history.add_ai_message(\"J'adore la programmation.\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use it directly to store conversation turns for our chain:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"input1 = \"Translate this sentence from English to French: I love programming.\"\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(input1)\n",
"\n",
"response = chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" }\n",
")\n",
"\n",
"demo_ephemeral_chat_history.add_ai_message(response)\n",
"\n",
"input2 = \"What did I just ask you?\"\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(input2)\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Automatic history management\n",
"\n",
"The previous examples pass messages to the chain explicitly. This is a completely acceptable approach, but it does require external management of new messages. LangChain also includes an wrapper for LCEL chains that can handle this process automatically called `RunnableWithMessageHistory`.\n",
"\n",
"To show how it works, let's slightly modify the above prompt to take a final `input` variable that populates a `HumanMessage` template after the chat history. This means that we will expect a `chat_history` parameter that contains all messages BEFORE the current messages instead of all messages:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" We'll pass the latest input to the conversation here and let the `RunnableWithMessageHistory` class wrap our chain and do the work of appending that `input` variable to the chat history.\n",
" \n",
" Next, let's declare our wrapped chain:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"demo_ephemeral_chat_history_for_chain = ChatMessageHistory()\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history_for_chain,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This class takes a few parameters in addition to the chain that we want to wrap:\n",
"\n",
"- A factory function that returns a message history for a given session id. This allows your chain to handle multiple users at once by loading different messages for different conversations.\n",
"- An `input_messages_key` that specifies which part of the input should be tracked and stored in the chat history. In this example, we want to track the string passed in as `input`.\n",
"- A `history_messages_key` that specifies what the previous messages should be injected into the prompt as. Our prompt has a `MessagesPlaceholder` named `chat_history`, so we specify this property to match.\n",
"- (For chains with multiple outputs) an `output_messages_key` which specifies which output to store as history. This is the inverse of `input_messages_key`.\n",
"\n",
"We can invoke this new chain as normal, with an additional `configurable` field that specifies the particular `session_id` to pass to the factory function. This is unused for the demo, but in real-world chains, you'll want to return a chat history corresponding to the passed session:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" in French is \"J\\'adore la programmation.\"')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_message_history.invoke(\n",
" {\"input\": \"Translate this sentence from English to French: I love programming.\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_message_history.invoke(\n",
" {\"input\": \"What did I just ask you?\"}, {\"configurable\": {\"session_id\": \"unused\"}}\n",
")"
]
},
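{
"cell_type": "markdown",
"metadata": {},
"source": [
"In a real-world chain, you'd want each conversation to load its own history. Below is a minimal sketch of a session-aware factory, assuming in-memory storage; the names `chat_histories`, `get_session_history`, and `chain_with_per_session_history` are illustrative, and a production app would typically swap the dict for one of the persistent integrations linked above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: one ChatMessageHistory per session id,\n",
"# stored in a plain dict (non-persistent)\n",
"chat_histories = {}\n",
"\n",
"\n",
"def get_session_history(session_id):\n",
"    # Create a fresh history the first time a session id is seen\n",
"    if session_id not in chat_histories:\n",
"        chat_histories[session_id] = ChatMessageHistory()\n",
"    return chat_histories[session_id]\n",
"\n",
"\n",
"chain_with_per_session_history = RunnableWithMessageHistory(\n",
"    chain,\n",
"    get_session_history,\n",
"    input_messages_key=\"input\",\n",
"    history_messages_key=\"chat_history\",\n",
")"
]
},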
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Modifying chat history\n",
"\n",
"Modifying stored chat messages can help your chatbot handle a variety of situations. Here are some examples:\n",
"\n",
"### Trimming messages\n",
"\n",
"LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is to only load and store the most recent `n` messages. Let's use an example history with some preloaded messages:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n",
" AIMessage(content='Hello!'),\n",
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!')]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"Hey there! I'm Nemo.\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Hello!\")\n",
"demo_ephemeral_chat_history.add_user_message(\"How are you today?\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Fine thanks!\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's use this message history with the `RunnableWithMessageHistory` chain we declared above:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Nemo.')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"chain_with_message_history.invoke(\n",
" {\"input\": \"What's my name?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see the chain remembers the preloaded name.\n",
"\n",
"But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the `clear` method to remove messages and re-add them to the history. We don't have to, but let's put this method at the front of our chain to ensure it's always called:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"def trim_messages(chain_input):\n",
" stored_messages = demo_ephemeral_chat_history.messages\n",
" if len(stored_messages) <= 2:\n",
" return False\n",
"\n",
" demo_ephemeral_chat_history.clear()\n",
"\n",
" for message in stored_messages[-2:]:\n",
" demo_ephemeral_chat_history.add_message(message)\n",
"\n",
" return True\n",
"\n",
"\n",
"chain_with_trimming = (\n",
" RunnablePassthrough.assign(messages_trimmed=trim_messages)\n",
" | chain_with_message_history\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's call this new chain and check the messages afterwards:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
" {\"input\": \"Where does P. Sherman live?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.'),\n",
" HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we can see that our history has removed the two oldest messages while still adding the most recent conversation at the end. The next time the chain is called, `trim_messages` will be called again, and only the two most recent messages will be passed to the model. In this case, this means that the model will forget the name we gave it the next time we invoke it:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
" {\"input\": \"What is my name?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\"),\n",
" HumanMessage(content='What is my name?'),\n",
" AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Summary memory\n",
"\n",
"We can use this same pattern in other ways too. For example, we could use an additional LLM call to generate a summary of the conversation before calling our chain. Let's recreate our chat history and chatbot chain:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n",
" AIMessage(content='Hello!'),\n",
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!')]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"Hey there! I'm Nemo.\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Hello!\")\n",
"demo_ephemeral_chat_history.add_user_message(\"How are you today?\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Fine thanks!\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll slightly modify the prompt to make the LLM aware that will receive a condensed summary instead of a chat history:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now, let's create a function that will distill previous interactions into a summary. We can add this one to the front of the chain too:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"def summarize_messages(chain_input):\n",
" stored_messages = demo_ephemeral_chat_history.messages\n",
" if len(stored_messages) == 0:\n",
" return False\n",
" summarization_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\n",
" \"user\",\n",
" \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n",
" ),\n",
" ]\n",
" )\n",
" summarization_chain = summarization_prompt | chat\n",
"\n",
" summary_message = summarization_chain.invoke({\"chat_history\": stored_messages})\n",
"\n",
" demo_ephemeral_chat_history.clear()\n",
"\n",
" demo_ephemeral_chat_history.add_message(summary_message)\n",
"\n",
" return True\n",
"\n",
"\n",
"chain_with_summarization = (\n",
" RunnablePassthrough.assign(messages_summarized=summarize_messages)\n",
" | chain_with_message_history\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see if it remembers the name we gave it:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='You introduced yourself as Nemo. How can I assist you today, Nemo?')"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_summarization.invoke(\n",
" {\"input\": \"What did I say my name was?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content='The conversation is between Nemo and an AI. Nemo introduces himself and the AI responds with a greeting. Nemo then asks the AI how it is doing, and the AI responds that it is fine.'),\n",
" HumanMessage(content='What did I say my name was?'),\n",
" AIMessage(content='You introduced yourself as Nemo. How can I assist you today, Nemo?')]"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that invoking the chain again will generate another summary generated from the initial summary plus new messages and so on. You could also design a hybrid approach where a certain number of messages are retained in chat history while others are summarized."
]
}
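,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's a minimal sketch of that hybrid idea, reusing the ephemeral history from above; the names `summarize_older_messages`, `keep_last`, and `chain_with_hybrid_memory` are illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def summarize_older_messages(chain_input, keep_last=2):\n",
"    stored_messages = demo_ephemeral_chat_history.messages\n",
"    if len(stored_messages) <= keep_last:\n",
"        return False\n",
"\n",
"    # Split the history: older turns get summarized, recent ones are kept\n",
"    older, recent = stored_messages[:-keep_last], stored_messages[-keep_last:]\n",
"\n",
"    summarization_prompt = ChatPromptTemplate.from_messages(\n",
"        [\n",
"            MessagesPlaceholder(variable_name=\"chat_history\"),\n",
"            (\n",
"                \"user\",\n",
"                \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n",
"            ),\n",
"        ]\n",
"    )\n",
"    summary_message = (summarization_prompt | chat).invoke({\"chat_history\": older})\n",
"\n",
"    # Rebuild the history: one summary message followed by the recent turns\n",
"    demo_ephemeral_chat_history.clear()\n",
"    demo_ephemeral_chat_history.add_message(summary_message)\n",
"    for message in recent:\n",
"        demo_ephemeral_chat_history.add_message(message)\n",
"\n",
"    return True\n",
"\n",
"\n",
"chain_with_hybrid_memory = (\n",
"    RunnablePassthrough.assign(messages_summarized=summarize_older_messages)\n",
"    | chain_with_message_history\n",
")"
]
}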
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}