{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ff4be5f3",
   "metadata": {},
   "source": [
    "# ConversationSummaryBufferMemory\n",
    "\n",
    "`ConversationSummaryBufferMemory` combines the previous two ideas: it keeps a buffer of recent interactions in memory, but rather than completely discarding older interactions, it compiles them into a summary and uses both. Unlike the previous implementation, it uses token length rather than the number of interactions to determine when to flush interactions.\n",
    "\n",
    "Let's first walk through how to use the utilities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "da3384db",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.memory import ConversationSummaryBufferMemory\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "llm = OpenAI()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e00d4938",
   "metadata": {},
   "outputs": [],
   "source": [
    "memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)\n",
    "memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
    "memory.save_context({\"input\": \"not much you\"}, {\"output\": \"not much\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "2fe28a28",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'history': 'System: \\nThe human says \"hi\", and the AI responds with \"whats up\".\\nHuman: not much you\\nAI: not much'}"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "memory.load_memory_variables({})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf57b97a",
   "metadata": {},
   "source": [
    "We can also get the history as a list of messages (this is useful if you are using this with a chat model)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "3422a3a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "memory = ConversationSummaryBufferMemory(\n",
    "    llm=llm, max_token_limit=10, return_messages=True\n",
    ")\n",
    "memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
    "memory.save_context({\"input\": \"not much you\"}, {\"output\": \"not much\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1dcaaee",
   "metadata": {},
   "source": [
    "We can also use the `predict_new_summary` method directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "fd7d7d6b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\nThe human and AI state that they are not doing much.'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "messages = memory.chat_memory.messages\n",
    "previous_summary = \"\"\n",
    "memory.predict_new_summary(messages, previous_summary)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6d2569f",
   "metadata": {},
   "source": [
    "## Using in a chain\n",
    "\n",
    "Let's walk through an example, again setting `verbose=True` so we can see the prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "ebd68c10",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "\n",
      "Human: Hi, what's up?\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\" Hi there! I'm doing great. I'm learning about the latest advances in artificial intelligence. What about you?\""
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.chains import ConversationChain\n",
    "\n",
    "conversation_with_summary = ConversationChain(\n",
    "    llm=llm,\n",
    "    # We set a very low max_token_limit for the purposes of testing.\n",
    "    memory=ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40),\n",
    "    verbose=True,\n",
    ")\n",
    "conversation_with_summary.predict(input=\"Hi, what's up?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "86207a61",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "Human: Hi, what's up?\n",
      "AI: Hi there! I'm doing great. I'm spending some time learning about the latest developments in AI technology. How about you?\n",
      "Human: Just working on writing some documentation!\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "' That sounds like a great use of your time. Do you have experience with writing documentation?'"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "conversation_with_summary.predict(input=\"Just working on writing some documentation!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "76a0ab39",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "System: \n",
      "The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology.\n",
      "Human: Just working on writing some documentation!\n",
      "AI: That sounds like a great use of your time. Do you have experience with writing documentation?\n",
      "Human: For LangChain! Have you heard of it?\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\" No, I haven't heard of LangChain. Can you tell me more about it?\""
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# We can see here that there is a summary of the conversation and then some previous interactions\n",
    "conversation_with_summary.predict(input=\"For LangChain! Have you heard of it?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "8c669db1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
      "\n",
      "Current conversation:\n",
      "System: \n",
      "The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology. The human then mentioned they were writing documentation, to which the AI responded that it sounded like a great use of their time and asked if they had experience with writing documentation.\n",
      "Human: For LangChain! Have you heard of it?\n",
      "AI: No, I haven't heard of LangChain. Can you tell me more about it?\n",
      "Human: Haha nope, although a lot of people confuse it for that\n",
      "AI:\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "' Oh, okay. What is LangChain?'"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# We can see here that the summary and the buffer are updated\n",
    "conversation_with_summary.predict(\n",
    "    input=\"Haha nope, although a lot of people confuse it for that\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c09a239",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}