Harrison/memory refactor (#1478)

moves memory to own module, factors out common stuff
Harrison Chase 1 year ago committed by GitHub
parent df6865cd52
commit 7bec461782

@@ -308,7 +308,7 @@
],
"source": [
"from langchain.chains import SequentialChain\n",
"from langchain.chains.base import SimpleMemory\n",
"from langchain.memory import SimpleMemory\n",
"\n",
"llm = OpenAI(temperature=.7)\n",
"template = \"\"\"You are a social media manager for a theater company. Given the title of play, the era it is set in, the date,time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.\n",

@@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9a9350a6",
"metadata": {},
"source": [
"# Memory\n",
"This notebook goes over how to use Memory with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "110935ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import (\n",
" ChatPromptTemplate, \n",
" MessagesPlaceholder, \n",
" SystemMessagePromptTemplate, \n",
" HumanMessagePromptTemplate\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "161b6629",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages([\n",
" SystemMessagePromptTemplate.from_template(\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" HumanMessagePromptTemplate.from_template(\"{input}\")\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4976fbda",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "12a0bea6",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "f6edcd6a",
"metadata": {},
"source": [
"We can now initialize the memory. Note that we set `return_messages=True` To denote that this should return a list of messages when appropriate"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f55bea38",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)"
]
},
{
"cell_type": "markdown",
"id": "737e8c78",
"metadata": {},
"source": [
"We can now use this in the rest of the chain."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "80152db7",
"metadata": {},
"outputs": [],
"source": [
"conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ac68e766",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello! How can I assist you today?'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "babb33d0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"I'm doing well! Just having a conversation with an AI.\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "36f8a1dc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Tell me about yourself.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "79fb460b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
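The `return_messages=True` flag introduced in this new notebook is the main switch between string-style and message-style memory. A small standalone sketch of what it changes (the sample exchange is made up):

```python
from langchain.memory import ConversationBufferMemory

string_memory = ConversationBufferMemory()                       # default: history as one string
message_memory = ConversationBufferMemory(return_messages=True)  # history as message objects

for memory in (string_memory, message_memory):
    memory.save_context({"input": "Hi there!"},
                        {"output": "Hello! How can I assist you today?"})

print(string_memory.load_memory_variables({}))
# {'history': 'Human: Hi there!\nAI: Hello! How can I assist you today?'}
print(message_memory.load_memory_variables({}))
# {'history': [HumanMessage(content='Hi there!', ...), AIMessage(content='Hello! ...', ...)]}
```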

@@ -2,18 +2,23 @@ Memory
==========================
By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently.
meaning that they treat each incoming query independently (as are the underlying LLMs and chat models).
In some applications (chatbots being a GREAT example) it is highly important
to remember previous interactions, both at a short-term and at a long-term level.
The concept of “Memory” exists to do exactly that.
LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.
The following sections of documentation are provided:
- `Getting Started <./memory/getting_started.html>`_: An overview of how to get started with different types of memory.
- `Key Concepts <./memory/key_concepts.html>`_: A conceptual guide going over the various concepts related to memory.
- `How-To Guides <./memory/how_to_guides.html>`_: A collection of how-to guides. These highlight how to work with different types of memory, as well as how to customize memory.
- `How-To Guides <./memory/how_to_guides.html>`_: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
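A compact sketch of the two forms described above, using only classes that appear elsewhere in this commit (the conversation content is invented):

```python
# Form 1: standalone utilities for managing chat messages.
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)  # [HumanMessage(...), AIMessage(...)]

# Form 2: the same idea packaged so a chain reads and writes it automatically.
from langchain import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)
conversation.predict(input="Hi there!")  # the exchange is saved into memory
```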

@@ -17,7 +17,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain import OpenAI, LLMChain, PromptTemplate"
]
},
@@ -167,7 +167,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -72,7 +72,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 1,
"id": "d3dc4ed5",
"metadata": {},
"outputs": [],
@@ -80,7 +80,7 @@
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory"
"from langchain.memory import ConversationBufferMemory"
]
},
{
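This file is the notebook on adding memory to `load_qa_chain`. The pattern is the `LLMChain` one plus an `input_key`, so the memory knows which chain input is the user's turn. A rough sketch, assuming a list `docs` of retrieved `Document` objects already exists:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory

template = """You are a chatbot having a conversation with a human.

Given the following extracted parts of a long document and a question, create a final answer.

{context}

{chat_history}
Human: {human_input}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input", "context"],
    template=template,
)
# input_key tells the memory which chain input is the human message;
# load_qa_chain also receives input_documents, which should not be recorded.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff",
                      memory=memory, prompt=prompt)
chain({"input_documents": docs,  # docs is assumed to be defined above
       "human_input": "What did the president say about Justice Breyer?"},
      return_only_outputs=True)
```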

@@ -22,13 +22,13 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 1,
"id": "8db95912",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain import OpenAI, LLMChain\n",
"from langchain.utilities import GoogleSearchAPIWrapper"
]
@@ -316,7 +316,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -14,7 +14,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "a99acd89",
"metadata": {},
"outputs": [
@@ -24,9 +24,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -36,20 +36,19 @@
"\n",
"\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"```\n",
"$ pwd\n",
"/\n",
"/home/user\n",
"```\n"
]
}
],
"source": [
"from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate\n",
"from langchain.chains.conversation.memory import ConversationalBufferWindowMemory\n",
"from langchain.memory import ConversationBufferWindowMemory\n",
"\n",
"\n",
"template = \"\"\"Assistant is a large language model trained by OpenAI.\n",
@@ -74,7 +73,7 @@
" llm=OpenAI(temperature=0), \n",
" prompt=prompt, \n",
" verbose=True, \n",
" memory=ConversationalBufferWindowMemory(k=2),\n",
" memory=ConversationBufferWindowMemory(k=2),\n",
")\n",
"\n",
"output = chatgpt_chain.predict(human_input=\"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\")\n",
@@ -93,9 +92,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -110,9 +109,9 @@
"/\n",
"```\n",
"Human: ls ~\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"```\n",
"$ ls ~\n",
@@ -138,9 +137,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -161,9 +160,9 @@
"Desktop Documents Downloads Music Pictures Public Templates Videos\n",
"```\n",
"Human: cd ~\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
" \n",
"```\n",
"$ cd ~\n",
@@ -190,9 +189,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -214,9 +213,9 @@
"/home/user\n",
"```\n",
"Human: {Please make a file jokes.txt inside and put some jokes inside}\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -245,9 +244,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -272,9 +271,9 @@
"$ echo \"Why did the scarecrow win the Nobel Prize? Because he was outstanding in his field!\" >> jokes.txt\n",
"```\n",
"Human: echo -e \"x=lambda y:y*5+3;print('Result:' + str(x(6)))\" > run.py && python3 run.py\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -304,9 +303,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -332,9 +331,9 @@
"Result: 33\n",
"```\n",
"Human: echo -e \"print(list(filter(lambda x: all(x%d for d in range(2,x)),range(2,3**10)))[:10])\" > run.py && python3 run.py\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -362,9 +361,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -391,9 +390,9 @@
"Human: echo -e \"echo 'Hello from Docker\" > entrypoint.sh && echo -e \"FROM ubuntu:20.04\n",
"COPY entrypoint.sh entrypoint.sh\n",
"ENTRYPOINT [\"/bin/sh\",\"entrypoint.sh\"]\">Dockerfile && docker build . -t my_docker_image && docker run -t my_docker_image\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -426,9 +425,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -459,9 +458,9 @@
"Hello from Docker\n",
"```\n",
"Human: nvidia-smi\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -502,9 +501,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -548,9 +547,9 @@
"|=============================================================================|\n",
"\n",
"Human: ping bbc.com\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -584,9 +583,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -630,9 +629,9 @@
"round-trip min/avg/max/stddev = 14.945/14.945/14.945/0.000 ms\n",
"```\n",
"Human: curl -fsSL \"https://api.github.com/repos/pytorch/pytorch/releases/latest\" | jq -r '.tag_name' | sed 's/[^0-9\\.\\-]*//g'\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -659,9 +658,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -691,9 +690,9 @@
"1.8.1\n",
"```\n",
"Human: lynx https://www.deepmind.com/careers\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -726,9 +725,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -757,9 +756,9 @@
"Explore our current openings and apply today. We look forward to hearing from you.\n",
"```\n",
"Human: curl https://chat.openai.com/chat\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
" \n",
"\n",
"```\n",
@@ -799,9 +798,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -843,9 +842,9 @@
"</html>\n",
"```\n",
"Human: curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"What is artificial intelligence?\"}' https://chat.openai.com/chat\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
"\n",
"\n",
"```\n",
@@ -875,9 +874,9 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mAssistant is a large language model trained by OpenAI.\n",
"\u001b[32;1m\u001b[1;3mAssistant is a large language model trained by OpenAI.\n",
"\n",
"Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
@@ -916,9 +915,9 @@
"}\n",
"```\n",
"Human: curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\n",
"Assistant:\u001B[0m\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
" \n",
"\n",
"```\n",
@@ -961,7 +960,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -20,7 +20,7 @@
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain import OpenAI\n",
"from langchain.utilities import GoogleSearchAPIWrapper\n",
"from langchain.agents import initialize_agent"
@@ -76,12 +76,12 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: Do I need to use a tool? No\n",
"AI: Hi Bob, nice to meet you! How can I help you today?\u001B[0m\n",
"AI: Hi Bob, nice to meet you! How can I help you today?\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -111,12 +111,12 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: Do I need to use a tool? No\n",
"AI: Your name is Bob!\u001B[0m\n",
"AI: Your name is Bob!\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -146,12 +146,12 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: Do I need to use a tool? No\n",
"AI: If you like Thai food, some great dinner options this week could include Thai green curry, Pad Thai, or a Thai-style stir-fry. You could also try making a Thai-style soup or salad. Enjoy!\u001B[0m\n",
"AI: If you like Thai food, some great dinner options this week could include Thai green curry, Pad Thai, or a Thai-style stir-fry. You could also try making a Thai-style soup or salad. Enjoy!\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -181,16 +181,16 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: Do I need to use a tool? Yes\n",
"Action: Current Search\n",
"Action Input: Who won the World Cup in 1978\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mThe Cup was won by the host nation, Argentina, who defeated the Netherlands 31 in the final, after extra time. The final was held at River Plate's home stadium ... Amid Argentina's celebrations, there was sympathy for the Netherlands, runners-up for the second tournament running, following a 3-1 final defeat at the Estadio ... The match was won by the Argentine squad in extra time by a score of 31. Mario Kempes, who finished as the tournament's top scorer, was named the man of the ... May 21, 2022 ... Argentina won the World Cup for the first time in their history, beating Netherlands 3-1 in the final. This edition of the World Cup was full of ... The adidas Golden Ball is presented to the best player at each FIFA World Cup finals. Those who finish as runners-up in the vote receive the adidas Silver ... Holders West Germany failed to beat Holland and Italy and were eliminated when Berti Vogts' own goal gave Austria a 3-2 victory. Holland thrashed the Austrians ... Jun 14, 2018 ... On a clear afternoon on 1 June 1978 at the revamped El Monumental stadium in Buenos Aires' Belgrano barrio, several hundred children in white ... Dec 15, 2022 ... The tournament couldn't have gone better for the ruling junta. Argentina went on to win the championship, defeating the Netherlands, 3-1, in the ... Nov 9, 2022 ... Host: Argentina Teams: 16. Format: Group stage, second round, third-place playoff, final. Matches: 38. Goals: 102. Winner: Argentina Feb 19, 2009 ... Argentina sealed their first World Cup win on home soil when they defeated the Netherlands in an exciting final that went to extra-time. For the ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m Do I need to use a tool? No\n",
"AI: The last letter in your name is 'b'. Argentina won the World Cup in 1978.\u001B[0m\n",
"Action Input: Who won the World Cup in 1978\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe Cup was won by the host nation, Argentina, who defeated the Netherlands 31 in the final, after extra time. The final was held at River Plate's home stadium ... Amid Argentina's celebrations, there was sympathy for the Netherlands, runners-up for the second tournament running, following a 3-1 final defeat at the Estadio ... The match was won by the Argentine squad in extra time by a score of 31. Mario Kempes, who finished as the tournament's top scorer, was named the man of the ... May 21, 2022 ... Argentina won the World Cup for the first time in their history, beating Netherlands 3-1 in the final. This edition of the World Cup was full of ... The adidas Golden Ball is presented to the best player at each FIFA World Cup finals. Those who finish as runners-up in the vote receive the adidas Silver ... Holders West Germany failed to beat Holland and Italy and were eliminated when Berti Vogts' own goal gave Austria a 3-2 victory. Holland thrashed the Austrians ... Jun 14, 2018 ... On a clear afternoon on 1 June 1978 at the revamped El Monumental stadium in Buenos Aires' Belgrano barrio, several hundred children in white ... Dec 15, 2022 ... The tournament couldn't have gone better for the ruling junta. Argentina went on to win the championship, defeating the Netherlands, 3-1, in the ... Nov 9, 2022 ... Host: Argentina Teams: 16. Format: Group stage, second round, third-place playoff, final. Matches: 38. Goals: 102. Winner: Argentina Feb 19, 2009 ... Argentina sealed their first World Cup win on home soil when they defeated the Netherlands in an exciting final that went to extra-time. For the ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Do I need to use a tool? No\n",
"AI: The last letter in your name is 'b'. Argentina won the World Cup in 1978.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -220,16 +220,16 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: Do I need to use a tool? Yes\n",
"Action: Current Search\n",
"Action Input: Current temperature in Pomfret\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mA mixture of rain and snow showers. High 39F. Winds NNW at 5 to 10 mph. Chance of precip 50%. Snow accumulations less than one inch. Pomfret, CT Weather Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days. Pomfret Center Weather Forecasts. ... Pomfret Center, CT Weather Conditionsstar_ratehome ... Tomorrow's temperature is forecast to be COOLER than today. It is 46 degrees fahrenheit, or 8 degrees celsius and feels like 46 degrees fahrenheit. The barometric pressure is 29.78 - measured by inch of mercury units - ... Pomfret Weather Forecasts. ... Pomfret, MD Weather Conditionsstar_ratehome ... Tomorrow's temperature is forecast to be MUCH COOLER than today. Additional Headlines. En Español · Share |. Current conditions at ... Pomfret CT. Tonight ... Past Weather Information · Interactive Forecast Map. Pomfret MD detailed current weather report for 20675 in Charles county, Maryland. ... Pomfret, MD weather condition is Mostly Cloudy and 43°F. Mostly Cloudy. Hazardous Weather Conditions. Hazardous Weather Outlook · En Español · Share |. Current conditions at ... South Pomfret VT. Tonight. Pomfret Center, CT Weather. Current Report for Thu Jan 5 2023. As of 2:00 PM EST. 5-Day Forecast | Road Conditions. 45°F 7°c. Feels Like 44°F. Pomfret Center CT. Today. Today: Areas of fog before 9am. Otherwise, cloudy, with a ... Otherwise, cloudy, with a temperature falling to around 33 by 5pm.\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m Do I need to use a tool? No\n",
"AI: The current temperature in Pomfret is 45°F (7°C) and it feels like 44°F.\u001B[0m\n",
"Action Input: Current temperature in Pomfret\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mA mixture of rain and snow showers. High 39F. Winds NNW at 5 to 10 mph. Chance of precip 50%. Snow accumulations less than one inch. Pomfret, CT Weather Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days. Pomfret Center Weather Forecasts. ... Pomfret Center, CT Weather Conditionsstar_ratehome ... Tomorrow's temperature is forecast to be COOLER than today. It is 46 degrees fahrenheit, or 8 degrees celsius and feels like 46 degrees fahrenheit. The barometric pressure is 29.78 - measured by inch of mercury units - ... Pomfret Weather Forecasts. ... Pomfret, MD Weather Conditionsstar_ratehome ... Tomorrow's temperature is forecast to be MUCH COOLER than today. Additional Headlines. En Español · Share |. Current conditions at ... Pomfret CT. Tonight ... Past Weather Information · Interactive Forecast Map. Pomfret MD detailed current weather report for 20675 in Charles county, Maryland. ... Pomfret, MD weather condition is Mostly Cloudy and 43°F. Mostly Cloudy. Hazardous Weather Conditions. Hazardous Weather Outlook · En Español · Share |. Current conditions at ... South Pomfret VT. Tonight. Pomfret Center, CT Weather. Current Report for Thu Jan 5 2023. As of 2:00 PM EST. 5-Day Forecast | Road Conditions. 45°F 7°c. Feels Like 44°F. Pomfret Center CT. Today. Today: Areas of fog before 9am. Otherwise, cloudy, with a ... Otherwise, cloudy, with a temperature falling to around 33 by 5pm.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Do I need to use a tool? No\n",
"AI: The current temperature in Pomfret is 45°F (7°C) and it feels like 44°F.\u001b[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -272,7 +272,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -19,7 +19,7 @@
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationChain\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"\n",
"llm = OpenAI(temperature=0)"
@@ -379,7 +379,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -25,7 +25,7 @@
"outputs": [],
"source": [
"from langchain import OpenAI, ConversationChain\n",
"from langchain.chains.base import Memory\n",
"from langchain.schema import BaseMemory\n",
"from pydantic import BaseModel\n",
"from typing import List, Dict, Any"
]
@@ -71,7 +71,7 @@
"metadata": {},
"outputs": [],
"source": [
"class SpacyEntityMemory(Memory, BaseModel):\n",
"class SpacyEntityMemory(BaseMemory, BaseModel):\n",
" \"\"\"Memory class for storing information about entities.\"\"\"\n",
"\n",
" # Define dictionary to store information about entities.\n",
@@ -290,7 +290,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -11,7 +11,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 1,
"id": "7d7de430",
"metadata": {},
"outputs": [],
@@ -19,7 +19,8 @@
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import ConversationChain\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory, ConversationSummaryMemory, CombinedMemory\n",
"from langchain.memory import ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory\n",
"\n",
"\n",
"conv_memory = ConversationBufferMemory(\n",
" memory_key=\"chat_history_lines\",\n",
@@ -159,7 +160,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -7,570 +7,212 @@
"source": [
"# Getting Started\n",
"\n",
"This notebook walks through the different types of memory you can use with the `ConversationChain`."
]
},
{
"cell_type": "markdown",
"id": "d051c1da",
"metadata": {},
"source": [
"## ConversationBufferMemory (default)\n",
"By default, the `ConversationChain` uses `ConversationBufferMemory`: a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain (setting `verbose=True` so we can see the prompt)."
"This notebook walks through how LangChain thinks about memory. \n",
"\n",
"Memory involves keeping a concept of state around throughout a user's interactions with an language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.\n",
"\n",
"In general, for each type of memory there are two ways to understanding using memory. These are the standalone functions which extract information from a sequence of messages, and then there is the way you can use this type of memory in a chain. \n",
"\n",
"Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.\n",
"\n",
"In this notebook, we will walk through the simplest form of memory: \"buffer\" memory, which just involves keeping a buffer of all prior messages. We will show how to use the modular utility functions here, then show how it can be used in a chain (both returning a string as well as a list of messages).\n",
"\n",
"## ChatMessageHistory\n",
"One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class. This is a super lightweight wrapper which exposes convienence methods for saving Human messages, AI messages, and then fetching them all. \n",
"\n",
"You may want to use this class directly if you are managing memory outside of a chain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "54301321",
"id": "356e7c73",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationChain\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"conversation = ConversationChain(\n",
" llm=llm, \n",
" verbose=True, \n",
" memory=ConversationBufferMemory()\n",
")"
"from langchain.memory import ChatMessageHistory"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "ae046bff",
"id": "54f90bd7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi there!\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! It's nice to meet you. How can I help you today?\""
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"conversation.predict(input=\"Hi there!\")"
"history = ChatMessageHistory()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d8e2a6ff",
"id": "3884ff81",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi there!\n",
"AI: Hi there! It's nice to meet you. How can I help you today?\n",
"Human: I'm doing well! Just having a conversation with an AI.\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"conversation.predict(input=\"I'm doing well! Just having a conversation with an AI.\")"
"history.add_user_message(\"hi!\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "15eda316",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi there!\n",
"AI: Hi there! It's nice to meet you. How can I help you today?\n",
"Human: I'm doing well! Just having a conversation with an AI.\n",
"AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\n",
"Human: Tell me about yourself.\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers.\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Tell me about yourself.\")"
]
},
{
"cell_type": "markdown",
"id": "4fad9448",
"metadata": {},
"source": [
"## ConversationSummaryMemory\n",
"Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation over time. This can be useful for condensing information from the conversation over time.\n",
"\n",
"Let's walk through an example, again setting `verbose=True` so we can see the prompt."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f60a2fe8",
"id": "b48d5844",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.conversation.memory import ConversationSummaryMemory"
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b7274f2c",
"execution_count": 5,
"id": "495dcff0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi, what's up?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\""
"[HumanMessage(content='hi!', additional_kwargs={}),\n",
" AIMessage(content='whats up?', additional_kwargs={})]"
]
},
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary = ConversationChain(\n",
" llm=llm, \n",
" memory=ConversationSummaryMemory(llm=OpenAI()),\n",
" verbose=True\n",
")\n",
"conversation_with_summary.predict(input=\"Hi, what's up?\")"
"history.messages"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a6b6b88f",
"cell_type": "markdown",
"id": "46196aa3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"The human greets the AI and the AI responds, saying it is doing well and is currently helping a customer with a technical issue.\n",
"Human: Tell me more about it!\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Sure! The customer is having trouble with their computer not connecting to the internet. I'm helping them troubleshoot the issue and figure out what the problem is. So far, we've tried resetting the router and checking the network settings, but the issue still persists. We're currently looking into other possible causes.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"Tell me more about it!\")"
"## ConversationBufferMemory\n",
"\n",
"We now show how to use this simple concept in a chain. We first showcase `ConversationBufferMemory` which is just a wrapper around ChatMessageHistory that extracts the messages in a variable.\n",
"\n",
"We can first extract it as a string."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "dad869fe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"\n",
"The human greets the AI and the AI responds, saying it is doing well and is currently helping a customer with a technical issue. The customer is having trouble with their computer not connecting to the internet, and the AI is helping them troubleshoot the issue by resetting the router and checking the network settings. They are still looking into other possible causes.\n",
"Human: Very cool -- what is the scope of the project?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"' The scope of the project is to help the customer troubleshoot the issue with their computer not connecting to the internet. We are currently resetting the router and checking the network settings, and we are looking into other possible causes.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"Very cool -- what is the scope of the project?\")"
]
},
{
"cell_type": "markdown",
"id": "6eecf9d9",
"execution_count": 7,
"id": "3bac84f3",
"metadata": {},
"outputs": [],
"source": [
"## ConversationBufferWindowMemory\n",
"\n",
"`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large\n",
"\n",
"Let's walk through an example, again setting `verbose=True` so we can see the prompt."
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "2dac7769",
"execution_count": 10,
"id": "cef35e7f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.conversation.memory import ConversationBufferWindowMemory"
"memory = ConversationBufferMemory()\n",
"memory.chat_memory.add_user_message(\"hi!\")\n",
"memory.chat_memory.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "0b9da4cd",
"execution_count": 12,
"id": "2c9b39af",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi, what's up?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\""
"{'history': 'Human: hi!\\nAI: whats up?'}"
]
},
"execution_count": 10,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary = ConversationChain(\n",
" llm=llm, \n",
" # We set a low k=2, to only keep the last 2 interactions in memory\n",
" memory=ConversationBufferWindowMemory(k=2), \n",
" verbose=True\n",
")\n",
"conversation_with_summary.predict(input=\"Hi, what's up?\")"
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "90f73431",
"cell_type": "markdown",
"id": "567f7c16",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi, what's up?\n",
"AI: Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\n",
"Human: What's their issues?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" The customer is having trouble connecting to their Wi-Fi network. I'm helping them troubleshoot the issue and get them connected.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"What's their issues?\")"
"We can also get the history as a list of messages"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "cbb499e7",
"execution_count": 13,
"id": "a481a415",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi, what's up?\n",
"AI: Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\n",
"Human: What's their issues?\n",
"AI: The customer is having trouble connecting to their Wi-Fi network. I'm helping them troubleshoot the issue and get them connected.\n",
"Human: Is it going well?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Yes, it's going well so far. We've already identified the problem and are now working on a solution.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"conversation_with_summary.predict(input=\"Is it going well?\")"
"memory = ConversationBufferMemory(return_messages=True)\n",
"memory.chat_memory.add_user_message(\"hi!\")\n",
"memory.chat_memory.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "0d209cfe",
"execution_count": 14,
"id": "86a56348",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: What's their issues?\n",
"AI: The customer is having trouble connecting to their Wi-Fi network. I'm helping them troubleshoot the issue and get them connected.\n",
"Human: Is it going well?\n",
"AI: Yes, it's going well so far. We've already identified the problem and are now working on a solution.\n",
"Human: What's the solution?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" The solution is to reset the router and reconfigure the settings. We're currently in the process of doing that.\""
"{'history': [HumanMessage(content='hi!', additional_kwargs={}),\n",
" AIMessage(content='whats up?', additional_kwargs={})]}"
]
},
"execution_count": 13,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Notice here that the first interaction does not appear.\n",
"conversation_with_summary.predict(input=\"What's the solution?\")"
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "a6d2569f",
"metadata": {},
"source": [
"## ConversationSummaryBufferMemory\n",
"\n",
"`ConversationSummaryBufferMemory` combines the last two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.\n",
"\n",
"Let's walk through an example, again setting `verbose=True` so we can see the prompt."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e583a661",
"id": "d051c1da",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.conversation.memory import ConversationSummaryBufferMemory"
"Finally, let's take a look at using this in a chain (setting `verbose=True` so we can see the prompt)."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "ebd68c10",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi, what's up?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\""
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary = ConversationChain(\n",
"    llm=llm, \n",
"    # We set a very low max_token_limit for the purposes of testing.\n",
"    memory=ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40),\n",
"    verbose=True\n",
")\n",
"conversation_with_summary.predict(input=\"Hi, what's up?\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "54301321",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationChain\n",
"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"conversation = ConversationChain(\n",
"    llm=llm, \n",
"    verbose=True, \n",
"    memory=ConversationBufferMemory()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "86207a61",
"id": "ae046bff",
"metadata": {},
"outputs": [
{
@ -579,23 +221,22 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi, what's up?\n",
"AI: Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\n",
"Human: Just working on writing some documentation!\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
"Human: Hi there!\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' That sounds like a lot of work. What kind of documentation are you writing?'"
"\" Hi there! It's nice to meet you. How can I help you today?\""
]
},
"execution_count": 16,
@ -604,13 +245,13 @@
}
],
"source": [
"conversation_with_summary.predict(input=\"Just working on writing some documentation!\")"
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "76a0ab39",
"id": "d8e2a6ff",
"metadata": {},
"outputs": [
{
@ -619,25 +260,23 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi there!\n",
"AI: Hi there! It's nice to meet you. How can I help you today?\n",
"Human: I'm doing well! Just having a conversation with an AI.\n",
"AI:\u001b[0m\n",
"\n",
"The human asked the AI what it was up to, and the AI responded that it was helping a customer with a technical issue.\n",
"Human: Just working on writing some documentation!\n",
"AI: That sounds like a lot of work. What kind of documentation are you writing?\n",
"Human: For LangChain! Have you heard of it?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Yes, I have heard of LangChain. It is a blockchain-based language learning platform. Can you tell me more about the documentation you are writing?'"
"\" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\""
]
},
"execution_count": 17,
@ -646,14 +285,13 @@
}
],
"source": [
"# We can see here that there is a summary of the conversation and then some previous interactions\n",
"conversation_with_summary.predict(input=\"For LangChain! Have you heard of it?\")"
"conversation.predict(input=\"I'm doing well! Just having a conversation with an AI.\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "8c669db1",
"id": "15eda316",
"metadata": {},
"outputs": [
{
@ -662,24 +300,25 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi there!\n",
"AI: Hi there! It's nice to meet you. How can I help you today?\n",
"Human: I'm doing well! Just having a conversation with an AI.\n",
"AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\n",
"Human: Tell me about yourself.\n",
"AI:\u001b[0m\n",
"\n",
"The human asked the AI what it was up to, and the AI responded that it was helping a customer with a technical issue. The human then mentioned they were writing documentation for LangChain, a blockchain-based language learning platform, and the AI revealed they had heard of it and asked the human to tell them more about the documentation they were writing.\n",
"\n",
"Human: Haha nope, although a lot of people confuse it for that\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished ConversationChain chain.\u001B[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Oh, I see. So, what kind of documentation are you writing for LangChain?'"
"\" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers.\""
]
},
"execution_count": 18,
@ -688,194 +327,21 @@
}
],
"source": [
"# We can see here that the summary and the buffer are updated\n",
"conversation_with_summary.predict(input=\"Haha nope, although a lot of people confuse it for that\")"
"conversation.predict(input=\"Tell me about yourself.\")"
]
},
{
"cell_type": "markdown",
"id": "44c9933a",
"metadata": {},
"source": [
"## Conversation Knowledge Graph Memory\n",
"\n",
"This type of memory uses a knowledge graph to recreate memory."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "f71f40ba",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.conversation.memory import ConversationKGMemory"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b462baf1",
"id": "bd0146c2",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"template = \"\"\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"{history}\n",
"\n",
"Conversation:\n",
"Human: {input}\n",
"AI:\"\"\"\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"], template=template\n",
")\n",
"conversation_with_kg = ConversationChain(\n",
" llm=llm, \n",
" verbose=True, \n",
" prompt=prompt,\n",
" memory=ConversationKGMemory(llm=llm)\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "97efaf38",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"\n",
"\n",
"Conversation:\n",
"Human: Hi, what's up?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently in the process of learning about the world around me. I'm learning about different cultures, languages, and customs. It's really fascinating! How about you?\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_kg.predict(input=\"Hi, what's up?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "55b5bcad",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"\n",
"\n",
"Conversation:\n",
"Human: My name is James and I'm helping Will. He's an engineer.\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi James, it's nice to meet you. I'm an AI and I understand you're helping Will, the engineer. What kind of engineering does he do?\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_kg.predict(input=\"My name is James and I'm helping Will. He's an engineer.\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9981e219",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new ConversationChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"On Will: Will is an engineer.\n",
"\n",
"Conversation:\n",
"Human: What do you know about Will?\n",
"AI:\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"' Will is an engineer.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_kg.predict(input=\"What do you know about Will?\")"
"And that's it for the getting started guide! There are plenty of different types of memory; check out our examples to see them all."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c09a239",
"id": "447c138d",
"metadata": {},
"outputs": [],
"source": []

@ -1,15 +1,24 @@
How-To Guides
=============
The first set of examples all highlight different types of memory.
`Entity Memory <./types/entity_summary_memory.html>`_: How to use a type of memory that organizes information by entity.
.. toctree::
:maxdepth: 1
:glob:
:hidden:
./types/*
The examples here all highlight how to use memory in different ways.
`Adding Memory <./examples/adding_memory.html>`_: How to add a memory component to any single input chain.
`ChatGPT Clone <./examples/chatgpt_clone.html>`_: How to recreate ChatGPT with LangChain prompting + memory components.
`Entity Memory <./examples/entity_summary_memory.html>`_: How to use a type of memory that organizes information by entity.
`Adding Memory to Multi-Input Chain <./examples/adding_memory_chain_multiple_inputs.html>`_: How to add a memory component to any multiple input chain.
`Conversational Memory Customization <./examples/conversational_customization.html>`_: How to customize existing conversation memory components.

@ -16,4 +16,4 @@ There are a few different ways to accomplish this:
A more complex form of memory is remembering information about specific entities in the conversation.
This is a more direct and organized way of remembering information over time.
Putting it in a more structured form also has the benefit of allowing easy inspection of what is known about specific entities.
For a guide on how to use this type of memory, see [this notebook](./examples/entity_summary_memory.ipynb).
For a guide on how to use this type of memory, see [this notebook](types/entity_summary_memory.ipynb).

@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "46196aa3",
"metadata": {},
"source": [
"## ConversationBufferMemory\n",
"\n",
"This notebook shows how to use `ConversationBufferMemory`. This memory allows for storing of messages and then extracts the messages in a variable.\n",
"\n",
"We can first extract it as a string."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3bac84f3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cef35e7f",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory()\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2c9b39af",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'Human: hi\\nAI: whats up'}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
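{
"cell_type": "markdown",
"id": "f3a9c2d1",
"metadata": {},
"source": [
"Each call to `save_context` appends another Human/AI exchange to the buffer, so the history string grows with the conversation. As a quick sketch (the commented value is what we would expect, not captured output):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7d4e8a2",
"metadata": {},
"outputs": [],
"source": [
"memory.save_context({\"input\": \"how are you\"}, {\"output\": \"pretty good\"})\n",
"# Expected: {'history': 'Human: hi\\nAI: whats up\\nHuman: how are you\\nAI: pretty good'}\n",
"memory.load_memory_variables({})"
]
},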
{
"cell_type": "markdown",
"id": "567f7c16",
"metadata": {},
"source": [
"We can also get the history as a list of messages (this is useful if you are using this with a chat model)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a481a415",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "86a56348",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='hi', additional_kwargs={}),\n",
" AIMessage(content='whats up', additional_kwargs={})]}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "d051c1da",
"metadata": {},
"source": [
"Finally, let's take a look at using this in a chain (setting `verbose=True` so we can see the prompt)."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "54301321",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationChain\n",
"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"conversation = ConversationChain(\n",
" llm=llm, \n",
" verbose=True, \n",
" memory=ConversationBufferMemory()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ae046bff",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi there!\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! It's nice to meet you. How can I help you today?\""
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "d8e2a6ff",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi there!\n",
"AI: Hi there! It's nice to meet you. How can I help you today?\n",
"Human: I'm doing well! Just having a conversation with an AI.\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\""
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"I'm doing well! Just having a conversation with an AI.\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "15eda316",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi there!\n",
"AI: Hi there! It's nice to meet you. How can I help you today?\n",
"Human: I'm doing well! Just having a conversation with an AI.\n",
"AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\n",
"Human: Tell me about yourself.\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers.\""
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Tell me about yourself.\")"
]
},
{
"cell_type": "markdown",
"id": "bd0146c2",
"metadata": {},
"source": [
"And that's it for the getting started! There are plenty of different types of memory, check out our examples to see them all"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "447c138d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,310 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "716c8cab",
"metadata": {},
"source": [
"## ConversationBufferWindowMemory\n",
"\n",
"`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time. It only uses the last K interactions. This can be useful for keeping a sliding window of the most recent interactions, so the buffer does not get too large\n",
"\n",
"Let's first explore the basic functionality of this type of memory."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "dc9d10b0",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferWindowMemory"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2dac7769",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferWindowMemory( k=1)\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"ouput\": \"not much\"})"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "40664f10",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'Human: not much you\\nAI: not much'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
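{
"cell_type": "markdown",
"id": "c2f7a1d9",
"metadata": {},
"source": [
"Conceptually, the window simply keeps the last `k` exchanges and drops everything older. A minimal plain-Python sketch of the idea (an illustration only, not LangChain's actual implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9e3b5c70",
"metadata": {},
"outputs": [],
"source": [
"# A toy sliding window over (human, ai) exchanges.\n",
"k = 1\n",
"exchanges = [(\"hi\", \"whats up\"), (\"not much you\", \"not much\")]\n",
"# Only the most recent k exchanges survive.\n",
"exchanges[-k:]  # -> [('not much you', 'not much')]"
]
},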
{
"cell_type": "markdown",
"id": "b1932b49",
"metadata": {},
"source": [
"We can also get the history as a list of messages (this is useful if you are using this with a chat model)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5fd077d5",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferWindowMemory( k=1, return_messages=True)\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"ouput\": \"not much\"})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b94b750f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='not much you', additional_kwargs={}),\n",
" AIMessage(content='not much', additional_kwargs={})]}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "ac59a682",
"metadata": {},
"source": [
"Let's walk through an example, again setting `verbose=True` so we can see the prompt."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0b9da4cd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi, what's up?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationChain\n",
"conversation_with_summary = ConversationChain(\n",
" llm=OpenAI(temperature=0), \n",
" # We set a low k=2, to only keep the last 2 interactions in memory\n",
" memory=ConversationBufferWindowMemory(k=2), \n",
" verbose=True\n",
")\n",
"conversation_with_summary.predict(input=\"Hi, what's up?\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "90f73431",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi, what's up?\n",
"AI: Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\n",
"Human: What's their issues?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" The customer is having trouble connecting to their Wi-Fi network. I'm helping them troubleshoot the issue and get them connected.\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"What's their issues?\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "cbb499e7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi, what's up?\n",
"AI: Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\n",
"Human: What's their issues?\n",
"AI: The customer is having trouble connecting to their Wi-Fi network. I'm helping them troubleshoot the issue and get them connected.\n",
"Human: Is it going well?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Yes, it's going well so far. We've already identified the problem and are now working on a solution.\""
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"Is it going well?\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "0d209cfe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: What's their issues?\n",
"AI: The customer is having trouble connecting to their Wi-Fi network. I'm helping them troubleshoot the issue and get them connected.\n",
"Human: Is it going well?\n",
"AI: Yes, it's going well so far. We've already identified the problem and are now working on a solution.\n",
"Human: What's the solution?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" The solution is to reset the router and reconfigure the settings. We're currently in the process of doing that.\""
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Notice here that the first interaction does not appear.\n",
"conversation_with_summary.predict(input=\"What's the solution?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c09a239",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -6,31 +6,129 @@
"metadata": {},
"source": [
"# Entity Memory\n",
"This notebook shows how to work with a memory module that remembers things about specific entities. It extracts information on entities (using LLMs) and builds up its knowledge about that entity over time (also using LLMs)."
"This notebook shows how to work with a memory module that remembers things about specific entities. It extracts information on entities (using LLMs) and builds up its knowledge about that entity over time (also using LLMs).\n",
"\n",
"Let's first walk through using this functionality."
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 1,
"id": "7b165e71",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationEntityMemory\n",
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "25354d39",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationEntityMemory(llm=llm)\n",
"_input = {\"input\": \"Deven & Sam are working on a hackathon project\"}\n",
"memory.load_memory_variables(_input)\n",
"memory.save_context(\n",
" _input,\n",
" {\"ouput\": \" That sounds like a great project! What kind of project are they working on?\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "71c75295",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'Human: Deven & Sam are working on a hackathon project\\nAI: That sounds like a great project! What kind of project are they working on?',\n",
" 'entities': {'Sam': 'Sam is working on a hackathon project with Deven.'}}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({\"input\": 'who is Sam'})"
]
},
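{
"cell_type": "markdown",
"id": "4a8d1f6e",
"metadata": {},
"source": [
"Note that the `entities` context depends on the input you pass in: the memory extracts entities from the new input and only returns what it knows about those. For example, asking about Deven instead should surface Deven's summary (the call below is a sketch; its output is omitted):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b2c9e47",
"metadata": {},
"outputs": [],
"source": [
"# Expected to return Deven's summary in the 'entities' dict instead of Sam's.\n",
"memory.load_memory_variables({\"input\": 'who is Deven'})"
]
},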
{
"cell_type": "code",
"execution_count": 4,
"id": "bb8a7943",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationEntityMemory(llm=llm, return_messages=True)\n",
"_input = {\"input\": \"Deven & Sam are working on a hackathon project\"}\n",
"memory.load_memory_variables(_input)\n",
"memory.save_context(\n",
" _input,\n",
" {\"ouput\": \" That sounds like a great project! What kind of project are they working on?\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a37ac236",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='Deven & Sam are working on a hackathon project', additional_kwargs={}),\n",
" AIMessage(content=' That sounds like a great project! What kind of project are they working on?', additional_kwargs={})],\n",
" 'entities': {'Sam': 'Sam is working on a hackathon project with Deven.'}}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({\"input\": 'who is Sam'})"
]
},
{
"cell_type": "markdown",
"id": "655ab2c4",
"metadata": {},
"source": [
"Let's now use it in a chain!"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "13471fbd",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, ConversationChain\n",
"from langchain.chains.conversation.memory import ConversationEntityMemory\n",
"from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE\n",
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationEntityMemory\n",
"from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE\n",
"from pydantic import BaseModel\n",
"from typing import List, Dict, Any"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 7,
"id": "183346e2",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"conversation = ConversationChain(\n",
" llm=llm, \n",
" verbose=True,\n",
@ -41,7 +139,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 8,
"id": "7eb1460a",
"metadata": {},
"outputs": [
@ -79,7 +177,7 @@
"' That sounds like a great project! What kind of project are they working on?'"
]
},
"execution_count": 3,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -90,7 +188,29 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 9,
"id": "dc1c0d5e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'Deven': 'Deven is working on a hackathon project with Sam.',\n",
" 'Sam': 'Sam is working on a hackathon project with Deven.'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.memory.store"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "46324ca8",
"metadata": {},
"outputs": [
@ -129,7 +249,7 @@
"' That sounds like an interesting project! What kind of memory structures are they trying to add?'"
]
},
"execution_count": 4,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@ -140,7 +260,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 11,
"id": "ff2ebf6b",
"metadata": {},
"outputs": [
@ -161,7 +281,7 @@
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Deven': 'Deven is working on a hackathon project with Sam to add more complex memory structures to Langchain.', 'Sam': 'Sam is working on a hackathon project with Deven to add more complex memory structures to Langchain.', 'Langchain': 'Langchain is a project that seeks to add more complex memory structures.', 'Key-Value Store': ''}\n",
"{'Deven': 'Deven is working on a hackathon project with Sam, attempting to add more complex memory structures to Langchain.', 'Sam': 'Sam is working on a hackathon project with Deven, trying to add more complex memory structures to Langchain.', 'Langchain': 'Langchain is a project that is trying to add more complex memory structures.', 'Key-Value Store': ''}\n",
"\n",
"Current conversation:\n",
"Human: Deven & Sam are working on a hackathon project\n",
@ -181,7 +301,7 @@
"' That sounds like a great idea! How will the key-value store work?'"
]
},
"execution_count": 5,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@ -192,7 +312,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 12,
"id": "56cfd4ba",
"metadata": {},
"outputs": [
@ -213,7 +333,7 @@
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Deven': 'Deven is working on a hackathon project with Sam to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.', 'Sam': 'Sam is working on a hackathon project with Deven to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.'}\n",
"{'Deven': 'Deven is working on a hackathon project with Sam, attempting to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.', 'Sam': 'Sam is working on a hackathon project with Deven, trying to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.'}\n",
"\n",
"Current conversation:\n",
"Human: Deven & Sam are working on a hackathon project\n",
@ -232,10 +352,10 @@
{
"data": {
"text/plain": [
"' Deven and Sam are working on a hackathon project to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation. They seem to be very motivated and passionate about their project, and are working hard to make it a success.'"
"' Deven and Sam are working on a hackathon project together, attempting to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.'"
]
},
"execution_count": 6,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -263,19 +383,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'Deven': 'Deven is working on a hackathon project with Sam to add more '\n",
" 'complex memory structures to Langchain, including a key-value store '\n",
" 'for entities mentioned so far in the conversation.',\n",
" 'Key-Value Store': 'Key-Value Store: A data structure that stores values '\n",
" 'associated with a unique key, allowing for efficient '\n",
" 'retrieval of values. Deven and Sam are adding a key-value '\n",
" 'store for entities mentioned so far in the conversation.',\n",
" 'Langchain': 'Langchain is a project that seeks to add more complex memory '\n",
" 'structures, including a key-value store for entities mentioned '\n",
" 'so far in the conversation.',\n",
" 'Sam': 'Sam is working on a hackathon project with Deven to add more complex '\n",
" 'memory structures to Langchain, including a key-value store for '\n",
" 'entities mentioned so far in the conversation.'}\n"
"{'Deven': 'Deven is working on a hackathon project with Sam, attempting to add '\n",
" 'more complex memory structures to Langchain, including a key-value '\n",
" 'store for entities mentioned so far in the conversation.',\n",
" 'Key-Value Store': 'A key-value store that stores entities mentioned in the '\n",
" 'conversation.',\n",
" 'Langchain': 'Langchain is a project that is trying to add more complex '\n",
" 'memory structures, including a key-value store for entities '\n",
" 'mentioned so far in the conversation.',\n",
" 'Sam': 'Sam is working on a hackathon project with Deven, attempting to add '\n",
" 'more complex memory structures to Langchain, including a key-value '\n",
" 'store for entities mentioned so far in the conversation.'}\n"
]
}
],
@ -451,7 +569,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -0,0 +1,300 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "44c9933a",
"metadata": {},
"source": [
"## Conversation Knowledge Graph Memory\n",
"\n",
"This type of memory uses a knowledge graph to recreate memory.\n",
"\n",
"Let's first walk through how to use the utilities"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f71f40ba",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationKGMemory\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1d9a5b18",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"memory = ConversationKGMemory(llm=llm)\n",
"memory.save_context({\"input\": \"say hi to sam\"}, {\"ouput\": \"who is sam\"})\n",
"memory.save_context({\"input\": \"sam is a friend\"}, {\"ouput\": \"okay\"})"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3f5da288",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'On Sam: Sam is friend.'}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({\"input\": 'who is sam'})"
]
},
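{
"cell_type": "markdown",
"id": "1e6f3b82",
"metadata": {},
"source": [
"Under the hood, this memory stores what it learns as (subject, relation, object) knowledge triples and surfaces the ones relevant to the new input. A rough plain-Python sketch of that idea (an illustration only, not LangChain's actual implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d4a7c35",
"metadata": {},
"outputs": [],
"source": [
"# A toy knowledge-triple store, just to illustrate the idea.\n",
"triples = [(\"Sam\", \"is\", \"a friend\")]\n",
"\n",
"def relevant_info(entity):\n",
"    facts = [f\"{s} {r} {o}\" for (s, r, o) in triples if s == entity]\n",
"    if not facts:\n",
"        return \"\"\n",
"    return f\"On {entity}: \" + \". \".join(facts) + \".\"\n",
"\n",
"relevant_info(\"Sam\")  # -> 'On Sam: Sam is a friend.'"
]
},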
{
"cell_type": "markdown",
"id": "2ab23035",
"metadata": {},
"source": [
"We can also get the history as a list of messages (this is useful if you are using this with a chat model)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "43dbbc43",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationKGMemory(llm=llm, return_messages=True)\n",
"memory.save_context({\"input\": \"say hi to sam\"}, {\"ouput\": \"who is sam\"})\n",
"memory.save_context({\"input\": \"sam is a friend\"}, {\"ouput\": \"okay\"})"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ef6a4f60",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [SystemMessage(content='On Sam: Sam is friend.', additional_kwargs={})]}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({\"input\": 'who is sam'})"
]
},
{
"cell_type": "markdown",
"id": "5ba0dde9",
"metadata": {},
"source": [
"Let's now use this in a chain!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "b462baf1",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain.chains import ConversationChain\n",
"\n",
"template = \"\"\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"{history}\n",
"\n",
"Conversation:\n",
"Human: {input}\n",
"AI:\"\"\"\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"], template=template\n",
")\n",
"conversation_with_kg = ConversationChain(\n",
" llm=llm, \n",
" verbose=True, \n",
" prompt=prompt,\n",
" memory=ConversationKGMemory(llm=llm)\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "97efaf38",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"\n",
"\n",
"Conversation:\n",
"Human: Hi, what's up?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently in the process of learning about the world around me. I'm learning about different cultures, languages, and customs. It's really fascinating! How about you?\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_kg.predict(input=\"Hi, what's up?\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "55b5bcad",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"\n",
"\n",
"Conversation:\n",
"Human: My name is James and I'm helping Will. He's an engineer.\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi James, it's nice to meet you. I'm an AI and I understand you're helping Will, the engineer. What kind of engineering does he do?\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_kg.predict(input=\"My name is James and I'm helping Will. He's an engineer.\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9981e219",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \n",
"If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the \"Relevant Information\" section and does not hallucinate.\n",
"\n",
"Relevant Information:\n",
"\n",
"On Will: Will is an engineer.\n",
"\n",
"Conversation:\n",
"Human: What do you know about Will?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Will is an engineer.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_kg.predict(input=\"What do you know about Will?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c09a239",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,262 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5d3f2f0f",
"metadata": {},
"source": [
"## ConversationSummaryMemory\n",
"Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation over time. This can be useful for condensing information from the conversation over time.\n",
"\n",
"Let's first explore the basic functionality of this type of memory."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a36c34d6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationSummaryMemory\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "89646c34",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "dea210aa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': '\\nThe human greets the AI, to which the AI responds.'}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
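{
"cell_type": "markdown",
"id": "2b9e4d17",
"metadata": {},
"source": [
"Conceptually, the memory keeps a running summary and asks the LLM to fold each new exchange into it. A rough sketch of that loop (the prompt below is made up for illustration; the real summarization prompt lives inside the memory class):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c1f8a53",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"\n",
"def update_summary(summary, human, ai):\n",
"    # Ask the LLM to merge the new exchange into the running summary.\n",
"    prompt = (\n",
"        f\"Progressively summarize the conversation.\\n\"\n",
"        f\"Current summary: {summary}\\n\"\n",
"        f\"New lines:\\nHuman: {human}\\nAI: {ai}\\n\"\n",
"        f\"New summary:\"\n",
"    )\n",
"    return llm(prompt)\n",
"\n",
"update_summary(\"\", \"hi\", \"whats up\")"
]
},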
{
"cell_type": "markdown",
"id": "3838fe93",
"metadata": {},
"source": [
"We can also get the history as a list of messages (this is useful if you are using this with a chat model)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "114dbef6",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), return_messages=True)\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "39c8c106",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [SystemMessage(content='\\nThe human greets the AI and the AI responds with a casual greeting.', additional_kwargs={})]}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "4fad9448",
"metadata": {},
"source": [
"Let's walk through an example of using this in a chain, again setting `verbose=True` so we can see the prompt."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b7274f2c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi, what's up?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationChain\n",
"llm = OpenAI(temperature=0)\n",
"conversation_with_summary = ConversationChain(\n",
" llm=llm, \n",
" memory=ConversationSummaryMemory(llm=OpenAI()),\n",
" verbose=True\n",
")\n",
"conversation_with_summary.predict(input=\"Hi, what's up?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a6b6b88f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue.\n",
"Human: Tell me more about it!\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Sure! The customer is having trouble with their computer not connecting to the internet. I'm helping them troubleshoot the issue and figure out what the problem is. So far, we've tried resetting the router and checking the network settings, but the issue still persists. We're currently looking into other possible solutions.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"Tell me more about it!\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "dad869fe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue where their computer was not connecting to the internet. The AI was troubleshooting the issue and had already tried resetting the router and checking the network settings, but the issue still persisted and they were looking into other possible solutions.\n",
"Human: Very cool -- what is the scope of the project?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" The scope of the project is to troubleshoot the customer's computer issue and find a solution that will allow them to connect to the internet. We are currently exploring different possibilities and have already tried resetting the router and checking the network settings, but the issue still persists.\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"Very cool -- what is the scope of the project?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c09a239",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,290 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2e90817b",
"metadata": {},
"source": [
"## ConversationSummaryBufferMemory\n",
"\n",
"`ConversationSummaryBufferMemory` combines the last two ideas. It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.\n",
"\n",
"Let's first walk through how to use the utilities"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b8aec547",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationSummaryBufferMemory\n",
"from langchain.llms import OpenAI\n",
"llm = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2594c8f1",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"ouput\": \"not much\"})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "a25087e0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'System: \\nThe human greets the AI, to which the AI responds inquiring what is up.\\nHuman: not much you\\nAI: not much'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
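{
"cell_type": "markdown",
"id": "7d2a5f90",
"metadata": {},
"source": [
"The flushing rule is based on token count: once the buffered messages exceed `max_token_limit`, the oldest ones are popped off and folded into the summary. A rough sketch of that rule (using a naive whitespace token count, not the model's real tokenizer):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0f8b3c64",
"metadata": {},
"outputs": [],
"source": [
"def prune(buffer, max_token_limit):\n",
"    # Naive token count: whitespace-separated words.\n",
"    def num_tokens(messages):\n",
"        return sum(len(m.split()) for m in messages)\n",
"    pruned = []\n",
"    while buffer and num_tokens(buffer) > max_token_limit:\n",
"        pruned.append(buffer.pop(0))  # oldest messages get summarized\n",
"    return pruned, buffer\n",
"\n",
"buffer = [\"Human: hi\", \"AI: whats up\", \"Human: not much you\", \"AI: not much\"]\n",
"# Everything in `pruned` would be folded into the summary.\n",
"prune(buffer, max_token_limit=6)"
]
},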
{
"cell_type": "markdown",
"id": "3a6e7905",
"metadata": {},
"source": [
"We can also get the history as a list of messages (this is useful if you are using this with a chat model)."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e439451f",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10, return_messages=True)\n",
"memory.save_context({\"input\": \"hi\"}, {\"ouput\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"ouput\": \"not much\"})"
]
},
{
"cell_type": "markdown",
"id": "a6d2569f",
"metadata": {},
"source": [
"Let's walk through an example, again setting `verbose=True` so we can see the prompt."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "ebd68c10",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: Hi, what's up?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Hi there! I'm doing great. I'm spending some time learning about the latest developments in AI technology. How about you?\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"conversation_with_summary = ConversationChain(\n",
" llm=llm, \n",
" # We set a very low max_token_limit for the purposes of testing.\n",
" memory=ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40),\n",
" verbose=True\n",
")\n",
"conversation_with_summary.predict(input=\"Hi, what's up?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "86207a61",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Human: Hi, what's up?\n",
"AI: Hi there! I'm doing great. I'm spending some time learning about the latest developments in AI technology. How about you?\n",
"Human: Just working on writing some documentation!\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' That sounds like a great use of your time. Do you have experience with writing documentation?'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation_with_summary.predict(input=\"Just working on writing some documentation!\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "76a0ab39",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"System: \n",
"The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology.\n",
"Human: Just working on writing some documentation!\n",
"AI: That sounds like a great use of your time. Do you have experience with writing documentation?\n",
"Human: For LangChain! Have you heard of it?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" No, I haven't heard of LangChain. Can you tell me more about it?\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can see here that there is a summary of the conversation and then some previous interactions\n",
"conversation_with_summary.predict(input=\"For LangChain! Have you heard of it?\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8c669db1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"System: \n",
"The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology. The human then mentioned they were writing documentation, to which the AI responded that it sounded like a great use of their time and asked if they had experience with writing documentation.\n",
"Human: For LangChain! Have you heard of it?\n",
"AI: No, I haven't heard of LangChain. Can you tell me more about it?\n",
"Human: Haha nope, although a lot of people confuse it for that\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Oh, okay. What is LangChain?'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can see here that the summary and the buffer are updated\n",
"conversation_with_summary.predict(input=\"Haha nope, although a lot of people confuse it for that\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c09a239",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -5,64 +5,12 @@ from pathlib import Path
from typing import Any, Dict, List, Optional, Union
import yaml
from pydantic import BaseModel, Extra, Field, validator
from pydantic import BaseModel, Field, validator
import langchain
from langchain.callbacks import get_callback_manager
from langchain.callbacks.base import BaseCallbackManager
class Memory(BaseModel, ABC):
"""Base interface for memory in chains."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
@abstractmethod
def memory_variables(self) -> List[str]:
"""Input keys this memory class will load dynamically."""
@abstractmethod
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return key-value pairs given the text input to the chain.
If None, return all memories
"""
@abstractmethod
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save the context of this model run to memory."""
@abstractmethod
def clear(self) -> None:
"""Clear memory contents."""
class SimpleMemory(Memory, BaseModel):
"""Simple memory for storing context or other bits of information that shouldn't
ever change between prompts.
"""
memories: Dict[str, Any] = dict()
@property
def memory_variables(self) -> List[str]:
return list(self.memories.keys())
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
return self.memories
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Nothing should be saved or changed, my memory is set in stone."""
pass
def clear(self) -> None:
"""Nothing to clear, got a memory like a vault."""
pass
from langchain.schema import BaseMemory
def _get_verbosity() -> bool:
@ -72,7 +20,7 @@ def _get_verbosity() -> bool:
class Chain(BaseModel, ABC):
"""Base interface that all chains should implement."""
memory: Optional[Memory] = None
memory: Optional[BaseMemory] = None
callback_manager: BaseCallbackManager = Field(
default_factory=get_callback_manager, exclude=True
)
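Any object implementing the BaseMemory interface can now be plugged into a chain. A minimal sketch of a custom memory (the TurnCounterMemory class and its field are hypothetical, not part of this commit):
from typing import Any, Dict, List

from langchain.schema import BaseMemory


class TurnCounterMemory(BaseMemory):
    """Hypothetical memory exposing how many exchanges have happened."""

    turns: int = 0

    @property
    def memory_variables(self) -> List[str]:
        return ["turns"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return {"turns": str(self.turns)}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        self.turns += 1  # one human/AI exchange per call

    def clear(self) -> None:
        self.turns = 0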

@ -3,11 +3,11 @@ from typing import Dict, List
from pydantic import BaseModel, Extra, Field, root_validator
from langchain.chains.base import Memory
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.chains.conversation.prompt import PROMPT
from langchain.chains.llm import LLMChain
from langchain.memory.buffer import ConversationBufferMemory
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import BaseMemory
class ConversationChain(LLMChain, BaseModel):
@ -20,7 +20,7 @@ class ConversationChain(LLMChain, BaseModel):
conversation = ConversationChain(llm=OpenAI())
"""
memory: Memory = Field(default_factory=ConversationBufferMemory)
memory: BaseMemory = Field(default_factory=ConversationBufferMemory)
"""Default memory store."""
prompt: BasePromptTemplate = PROMPT
"""Default conversation prompt to use."""

@ -1,493 +1,21 @@
"""Memory modules for conversation prompts."""
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field, root_validator
from langchain.chains.base import Memory
from langchain.chains.conversation.prompt import (
ENTITY_EXTRACTION_PROMPT,
ENTITY_SUMMARIZATION_PROMPT,
KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,
SUMMARY_PROMPT,
)
from langchain.chains.llm import LLMChain
from langchain.graphs.networkx_graph import (
NetworkxEntityGraph,
get_entities,
parse_triples,
)
from langchain.llms.base import BaseLLM
from langchain.prompts.base import BasePromptTemplate
def _get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str:
# "stop" is a special key that can be passed as input but is not used to
# format the prompt.
prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"]))
if len(prompt_input_keys) != 1:
raise ValueError(f"One input key expected got {prompt_input_keys}")
return prompt_input_keys[0]
class CombinedMemory(Memory, BaseModel):
"""Class for combining multiple memories' data together."""
memories: List[Memory]
"""For tracking all the memories that should be accessed."""
@property
def memory_variables(self) -> List[str]:
"""All the memory variables that this instance provides."""
"""Collected from the all the linked memories."""
memory_variables = []
for memory in self.memories:
memory_variables.extend(memory.memory_variables)
return memory_variables
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Load all vars from sub-memories."""
memory_data: Dict[str, Any] = {}
# Collect vars from all sub-memories
for memory in self.memories:
data = memory.load_memory_variables(inputs)
memory_data = {
**memory_data,
**data,
}
return memory_data
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this session for every memory."""
# Save context for all sub-memories
for memory in self.memories:
memory.save_context(inputs, outputs)
def clear(self) -> None:
"""Clear context from this session for every memory."""
for memory in self.memories:
memory.clear()
class ConversationBufferMemory(Memory, BaseModel):
"""Buffer for storing conversation memory."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
"""Prefix to use for AI generated responses."""
buffer: str = ""
output_key: Optional[str] = None
input_key: Optional[str] = None
memory_key: str = "history" #: :meta private:
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
return {self.memory_key: self.buffer}
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
if self.input_key is None:
prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
output_key = list(outputs.keys())[0]
else:
output_key = self.output_key
human = f"{self.human_prefix}: " + inputs[prompt_input_key]
ai = f"{self.ai_prefix}: " + outputs[output_key]
self.buffer += "\n" + "\n".join([human, ai])
def clear(self) -> None:
"""Clear memory contents."""
self.buffer = ""
class ConversationBufferWindowMemory(Memory, BaseModel):
"""Buffer for storing conversation memory."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
"""Prefix to use for AI generated responses."""
buffer: List[str] = Field(default_factory=list)
memory_key: str = "history" #: :meta private:
output_key: Optional[str] = None
input_key: Optional[str] = None
k: int = 5
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
return {self.memory_key: "\n".join(self.buffer[-self.k :])}
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
if self.input_key is None:
prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
output_key = list(outputs.keys())[0]
else:
output_key = self.output_key
human = f"{self.human_prefix}: " + inputs[prompt_input_key]
ai = f"{self.ai_prefix}: " + outputs[output_key]
self.buffer.append("\n".join([human, ai]))
def clear(self) -> None:
"""Clear memory contents."""
self.buffer = []
# For legacy naming reasons
ConversationalBufferWindowMemory = ConversationBufferWindowMemory
class ConversationSummaryMemory(Memory, BaseModel):
"""Conversation summarizer to memory."""
buffer: str = ""
human_prefix: str = "Human"
ai_prefix: str = "AI"
"""Prefix to use for AI generated responses."""
llm: BaseLLM
prompt: BasePromptTemplate = SUMMARY_PROMPT
memory_key: str = "history" #: :meta private:
output_key: Optional[str] = None
input_key: Optional[str] = None
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
return {self.memory_key: self.buffer}
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
"""Validate that prompt input variables are consistent."""
prompt_variables = values["prompt"].input_variables
expected_keys = {"summary", "new_lines"}
if expected_keys != set(prompt_variables):
raise ValueError(
"Got unexpected prompt input variables. The prompt expects "
f"{prompt_variables}, but it should have {expected_keys}."
)
return values
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
if self.input_key is None:
prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
output_key = list(outputs.keys())[0]
else:
output_key = self.output_key
human = f"{self.human_prefix}: {inputs[prompt_input_key]}"
ai = f"{self.ai_prefix}: {outputs[output_key]}"
new_lines = "\n".join([human, ai])
chain = LLMChain(llm=self.llm, prompt=self.prompt)
self.buffer = chain.predict(summary=self.buffer, new_lines=new_lines)
def clear(self) -> None:
"""Clear memory contents."""
self.buffer = ""
class ConversationEntityMemory(Memory, BaseModel):
"""Entity extractor & summarizer to memory."""
buffer: List[str] = []
human_prefix: str = "Human"
ai_prefix: str = "AI"
"""Prefix to use for AI generated responses."""
llm: BaseLLM
entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT
entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT
output_key: Optional[str] = None
input_key: Optional[str] = None
store: Dict[str, Optional[str]] = {}
entity_cache: List[str] = []
k: int = 3
chat_history_key: str = "history"
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return ["entities", self.chat_history_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)
if self.input_key is None:
prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
output = chain.predict(
history="\n".join(self.buffer[-self.k :]),
input=inputs[prompt_input_key],
)
if output.strip() == "NONE":
entities = []
else:
entities = [w.strip() for w in output.split(",")]
entity_summaries = {}
for entity in entities:
entity_summaries[entity] = self.store.get(entity, "")
self.entity_cache = entities
return {
self.chat_history_key: "\n".join(self.buffer[-self.k :]),
"entities": entity_summaries,
}
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
if inputs is None:
raise ValueError(
"Inputs must be provided to save context from "
"ConversationEntityMemory."
)
if self.input_key is None:
prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
output_key = list(outputs.keys())[0]
else:
output_key = self.output_key
human = f"{self.human_prefix}: " + inputs[prompt_input_key]
ai = f"{self.ai_prefix}: " + outputs[output_key]
for entity in self.entity_cache:
chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)
# key value store for entity
existing_summary = self.store.get(entity, "")
output = chain.predict(
summary=existing_summary,
history="\n".join(self.buffer[-self.k :]),
input=inputs[prompt_input_key],
entity=entity,
)
self.store[entity] = output.strip()
new_lines = "\n".join([human, ai])
self.buffer.append(new_lines)
def clear(self) -> None:
"""Clear memory contents."""
self.buffer = []
self.store = {}
class ConversationSummaryBufferMemory(Memory, BaseModel):
"""Buffer with summarizer for storing conversation memory."""
buffer: List[str] = Field(default_factory=list)
max_token_limit: int = 2000
moving_summary_buffer: str = ""
llm: BaseLLM
prompt: BasePromptTemplate = SUMMARY_PROMPT
memory_key: str = "history"
human_prefix: str = "Human"
ai_prefix: str = "AI"
"""Prefix to use for AI generated responses."""
output_key: Optional[str] = None
input_key: Optional[str] = None
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
if self.moving_summary_buffer == "":
return {self.memory_key: "\n".join(self.buffer)}
memory_val = self.moving_summary_buffer + "\n" + "\n".join(self.buffer)
return {self.memory_key: memory_val}
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
"""Validate that prompt input variables are consistent."""
prompt_variables = values["prompt"].input_variables
expected_keys = {"summary", "new_lines"}
if expected_keys != set(prompt_variables):
raise ValueError(
"Got unexpected prompt input variables. The prompt expects "
f"{prompt_variables}, but it should have {expected_keys}."
)
return values
def get_num_tokens_list(self, arr: List[str]) -> List[int]:
"""Get list of number of tokens in each string in the input array."""
return [self.llm.get_num_tokens(x) for x in arr]
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
if self.input_key is None:
prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
output_key = list(outputs.keys())[0]
else:
output_key = self.output_key
human = f"{self.human_prefix}: {inputs[prompt_input_key]}"
ai = f"{self.ai_prefix}: {outputs[output_key]}"
new_lines = "\n".join([human, ai])
self.buffer.append(new_lines)
# Prune buffer if it exceeds max token limit
curr_buffer_length = sum(self.get_num_tokens_list(self.buffer))
if curr_buffer_length > self.max_token_limit:
pruned_memory = []
while curr_buffer_length > self.max_token_limit:
pruned_memory.append(self.buffer.pop(0))
curr_buffer_length = sum(self.get_num_tokens_list(self.buffer))
chain = LLMChain(llm=self.llm, prompt=self.prompt)
self.moving_summary_buffer = chain.predict(
summary=self.moving_summary_buffer, new_lines=("\n".join(pruned_memory))
)
def clear(self) -> None:
"""Clear memory contents."""
self.buffer = []
self.moving_summary_buffer = ""
class ConversationKGMemory(Memory, BaseModel):
"""Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
"""
k: int = 2
"""Number of previous utterances to include in the context."""
buffer: List[str] = Field(default_factory=list)
kg: NetworkxEntityGraph = Field(default_factory=NetworkxEntityGraph)
knowledge_extraction_prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT
entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT
llm: BaseLLM
human_prefix: str = "Human"
ai_prefix: str = "AI"
"""Prefix to use for AI generated responses."""
output_key: Optional[str] = None
input_key: Optional[str] = None
memory_key: str = "history" #: :meta private:
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
entities = self._get_current_entities(inputs)
summaries = {}
for entity in entities:
knowledge = self.kg.get_entity_knowledge(entity)
if knowledge:
summaries[entity] = ". ".join(knowledge) + "."
if summaries:
summary_strings = [
f"On {entity}: {summary}" for entity, summary in summaries.items()
]
context_str = "\n".join(summary_strings)
else:
context_str = ""
return {self.memory_key: context_str}
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:
"""Get the input key for the prompt."""
if self.input_key is None:
return _get_prompt_input_key(inputs, self.memory_variables)
return self.input_key
def _get_prompt_output_key(self, outputs: Dict[str, Any]) -> str:
"""Get the output key for the prompt."""
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
return list(outputs.keys())[0]
return self.output_key
def _get_current_entities(self, inputs: Dict[str, Any]) -> List[str]:
"""Get the current entities in the conversation."""
prompt_input_key = self._get_prompt_input_key(inputs)
chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)
output = chain.predict(
history="\n".join(self.buffer[-self.k :]),
input=inputs[prompt_input_key],
)
return get_entities(output)
def _get_and_update_kg(self, inputs: Dict[str, Any]) -> None:
"""Get and update knowledge graph from the conversation history."""
chain = LLMChain(llm=self.llm, prompt=self.knowledge_extraction_prompt)
prompt_input_key = self._get_prompt_input_key(inputs)
output = chain.predict(
history="\n".join(self.buffer[-self.k :]),
input=inputs[prompt_input_key],
verbose=True,
)
knowledge = parse_triples(output)
for triple in knowledge:
self.kg.add_triple(triple)
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
self._get_and_update_kg(inputs)
prompt_input_key = self._get_prompt_input_key(inputs)
output_key = self._get_prompt_output_key(outputs)
human = f"{self.human_prefix}: {inputs[prompt_input_key]}"
ai = f"{self.ai_prefix}: {outputs[output_key]}"
new_lines = "\n".join([human.strip(), ai.strip()])
self.buffer.append(new_lines)
def clear(self) -> None:
"""Clear memory contents."""
return self.kg.clear()
from langchain.memory.buffer import ConversationBufferMemory
from langchain.memory.buffer_window import ConversationBufferWindowMemory
from langchain.memory.combined import CombinedMemory
from langchain.memory.entity import ConversationEntityMemory
from langchain.memory.kg import ConversationKGMemory
from langchain.memory.summary import ConversationSummaryMemory
from langchain.memory.summary_buffer import ConversationSummaryBufferMemory
# This is only for backwards compatibility.
__all__ = [
"ConversationSummaryBufferMemory",
"ConversationSummaryMemory",
"ConversationKGMemory",
"ConversationBufferWindowMemory",
"ConversationEntityMemory",
"ConversationBufferMemory",
"CombinedMemory",
]

@ -1,4 +1,11 @@
# flake8: noqa
from langchain.memory.prompt import (
ENTITY_EXTRACTION_PROMPT,
ENTITY_MEMORY_CONVERSATION_TEMPLATE,
ENTITY_SUMMARIZATION_PROMPT,
KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,
SUMMARY_PROMPT,
)
from langchain.prompts.prompt import PromptTemplate
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
@ -11,165 +18,13 @@ PROMPT = PromptTemplate(
input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
)
_DEFAULT_ENTITY_MEMORY_CONVERSATION_TEMPLATE = """You are an assistant to a human, powered by a large language model trained by OpenAI.
You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.
Context:
{entities}
Current conversation:
{history}
Last line:
Human: {input}
You:"""
ENTITY_MEMORY_CONVERSATION_TEMPLATE = PromptTemplate(
input_variables=["entities", "history", "input"],
template=_DEFAULT_ENTITY_MEMORY_CONVERSATION_TEMPLATE,
)
_DEFAULT_SUMMARIZER_TEMPLATE = """Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.
EXAMPLE
Current summary:
The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.
New lines of conversation:
Human: Why do you think artificial intelligence is a force for good?
AI: Because artificial intelligence will help humans reach their full potential.
New summary:
The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.
END OF EXAMPLE
Current summary:
{summary}
New lines of conversation:
{new_lines}
New summary:"""
SUMMARY_PROMPT = PromptTemplate(
input_variables=["summary", "new_lines"], template=_DEFAULT_SUMMARIZER_TEMPLATE
)
_DEFAULT_ENTITY_EXTRACTION_TEMPLATE = """You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.
The conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.
Return the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).
EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.
Output: Langchain
END OF EXAMPLE
EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Person #2.
Output: Langchain, Person #2
END OF EXAMPLE
Conversation history (for reference only):
{history}
Last line of conversation (for extraction):
Human: {input}
Output:"""
ENTITY_EXTRACTION_PROMPT = PromptTemplate(
input_variables=["history", "input"], template=_DEFAULT_ENTITY_EXTRACTION_TEMPLATE
)
_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE = """You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.
The update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.
If there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.
Full conversation history (for context):
{history}
Entity to summarize:
{entity}
Existing summary of {entity}:
{summary}
Last line of conversation:
Human: {input}
Updated summary:"""
ENTITY_SUMMARIZATION_PROMPT = PromptTemplate(
input_variables=["entity", "summary", "history", "input"],
template=_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE,
)
KG_TRIPLE_DELIMITER = "<|>"
_DEFAULT_KNOWLEDGE_TRIPLE_EXTRACTION_TEMPLATE = (
"You are a networked intelligence helping a human track knowledge triples"
" about all relevant people, things, concepts, etc. and integrating"
" them with your knowledge stored within your weights"
" as well as that stored in a knowledge graph."
" Extract all of the knowledge triples from the last line of conversation."
" A knowledge triple is a clause that contains a subject, a predicate,"
" and an object. The subject is the entity being described,"
" the predicate is the property of the subject that is being"
" described, and the object is the value of the property.\n\n"
"EXAMPLE\n"
"Conversation history:\n"
"Person #1: Did you hear aliens landed in Area 51?\n"
"AI: No, I didn't hear that. What do you know about Area 51?\n"
"Person #1: It's a secret military base in Nevada.\n"
"AI: What do you know about Nevada?\n"
"Last line of conversation:\n"
"Person #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\n"
f"Output: (Nevada, is a, state){KG_TRIPLE_DELIMITER}(Nevada, is in, US)"
f"{KG_TRIPLE_DELIMITER}(Nevada, is the number 1 producer of, gold)\n"
"END OF EXAMPLE\n\n"
"EXAMPLE\n"
"Conversation history:\n"
"Person #1: Hello.\n"
"AI: Hi! How are you?\n"
"Person #1: I'm good. How are you?\n"
"AI: I'm good too.\n"
"Last line of conversation:\n"
"Person #1: I'm going to the store.\n\n"
"Output: NONE\n"
"END OF EXAMPLE\n\n"
"EXAMPLE\n"
"Conversation history:\n"
"Person #1: What do you know about Descartes?\n"
"AI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\n"
"Person #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\n"
"AI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\n"
"Person #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\n"
"Last line of conversation:\n"
f"Output: (Descartes, likes to drive, antique scooters){KG_TRIPLE_DELIMITER}(Descartes, plays, mandolin)\n"
"END OF EXAMPLE\n\n"
"Conversation history (for reference only):\n"
"{history}"
"\nLast line of conversation (for extraction):\n"
"Human: {input}\n\n"
"Output:"
)
KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT = PromptTemplate(
input_variables=["history", "input"],
template=_DEFAULT_KNOWLEDGE_TRIPLE_EXTRACTION_TEMPLATE,
)
# Only for backwards compatibility
__all__ = [
"SUMMARY_PROMPT",
"ENTITY_MEMORY_CONVERSATION_TEMPLATE",
"ENTITY_SUMMARIZATION_PROMPT",
"ENTITY_EXTRACTION_PROMPT",
"KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT",
"PROMPT",
]

@ -0,0 +1,21 @@
from langchain.memory.buffer import ConversationBufferMemory
from langchain.memory.buffer_window import ConversationBufferWindowMemory
from langchain.memory.chat_memory import ChatMessageHistory
from langchain.memory.combined import CombinedMemory
from langchain.memory.entity import ConversationEntityMemory
from langchain.memory.kg import ConversationKGMemory
from langchain.memory.simple import SimpleMemory
from langchain.memory.summary import ConversationSummaryMemory
from langchain.memory.summary_buffer import ConversationSummaryBufferMemory
__all__ = [
"CombinedMemory",
"ConversationBufferWindowMemory",
"ConversationBufferMemory",
"SimpleMemory",
"ConversationSummaryBufferMemory",
"ConversationKGMemory",
"ConversationEntityMemory",
"ConversationSummaryMemory",
"ChatMessageHistory",
]

@ -0,0 +1,38 @@
from typing import Any, Dict, List
from pydantic import BaseModel
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.utils import get_buffer_string
class ConversationBufferMemory(BaseChatMemory, BaseModel):
"""Buffer for storing conversation memory."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
memory_key: str = "history" #: :meta private:
@property
def buffer(self) -> Any:
"""String buffer of memory."""
if self.return_messages:
return self.chat_memory.messages
else:
return get_buffer_string(
self.chat_memory.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
return {self.memory_key: self.buffer}
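A quick sketch of the two buffer shapes; the return values in the comments are approximate:
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})  # {'history': 'Human: hi\nAI: whats up'}

# With return_messages=True, the raw message objects come back instead.
memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})  # {'history': [HumanMessage(...), AIMessage(...)]}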

@ -0,0 +1,42 @@
from typing import Any, Dict, List
from pydantic import BaseModel
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.utils import get_buffer_string
from langchain.schema import BaseMessage
class ConversationBufferWindowMemory(BaseChatMemory, BaseModel):
"""Buffer for storing conversation memory."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
memory_key: str = "history" #: :meta private:
k: int = 5
@property
def buffer(self) -> List[BaseMessage]:
"""String buffer of memory."""
return self.chat_memory.messages
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
if self.return_messages:
buffer: Any = self.buffer[-self.k * 2 :]
else:
buffer = get_buffer_string(
self.buffer[-self.k * 2 :],
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
return {self.memory_key: buffer}
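Each exchange adds two messages, so k=1 keeps the last human/AI pair. Sketch:
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
# Only the most recent exchange survives the window.
memory.load_memory_variables({})  # {'history': 'Human: not much you\nAI: not much'}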

@ -0,0 +1,46 @@
from abc import ABC
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
from langchain.memory.utils import get_prompt_input_key
from langchain.schema import AIMessage, BaseMemory, BaseMessage, HumanMessage
class ChatMessageHistory(BaseModel):
messages: List[BaseMessage] = Field(default_factory=list)
def add_user_message(self, message: str) -> None:
self.messages.append(HumanMessage(content=message))
def add_ai_message(self, message: str) -> None:
self.messages.append(AIMessage(content=message))
def clear(self) -> None:
self.messages = []
class BaseChatMemory(BaseMemory, ABC):
chat_memory: ChatMessageHistory = Field(default_factory=ChatMessageHistory)
output_key: Optional[str] = None
input_key: Optional[str] = None
return_messages: bool = False
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
if self.input_key is None:
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
output_key = list(outputs.keys())[0]
else:
output_key = self.output_key
self.chat_memory.add_user_message(inputs[prompt_input_key])
self.chat_memory.add_ai_message(outputs[output_key])
def clear(self) -> None:
"""Clear memory contents."""
self.chat_memory.clear()
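ChatMessageHistory can also be used on its own as a bare message store; BaseChatMemory subclasses only decide how those messages are surfaced. Sketch (reprs abbreviated):
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("whats up")
history.messages  # [HumanMessage(content='hi'), AIMessage(content='whats up')]
history.clear()   # empties the store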

@ -0,0 +1,49 @@
from typing import Any, Dict, List
from pydantic import BaseModel
from langchain.schema import BaseMemory
class CombinedMemory(BaseMemory, BaseModel):
"""Class for combining multiple memories' data together."""
memories: List[BaseMemory]
"""For tracking all the memories that should be accessed."""
@property
def memory_variables(self) -> List[str]:
"""All the memory variables that this instance provides."""
"""Collected from the all the linked memories."""
memory_variables = []
for memory in self.memories:
memory_variables.extend(memory.memory_variables)
return memory_variables
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Load all vars from sub-memories."""
memory_data: Dict[str, Any] = {}
# Collect vars from all sub-memories
for memory in self.memories:
data = memory.load_memory_variables(inputs)
memory_data = {
**memory_data,
**data,
}
return memory_data
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this session for every memory."""
# Save context for all sub-memories
for memory in self.memories:
memory.save_context(inputs, outputs)
def clear(self) -> None:
"""Clear context from this session for every memory."""
for memory in self.memories:
memory.clear()
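Sub-memories should expose distinct memory keys so the merged dict does not collide. A sketch combining a raw buffer with a running summary, assuming an OpenAI API key (the summary text is model-dependent):
from langchain.llms import OpenAI
from langchain.memory import (
    CombinedMemory,
    ConversationBufferMemory,
    ConversationSummaryMemory,
)

memory = CombinedMemory(
    memories=[
        ConversationBufferMemory(memory_key="chat_history"),
        ConversationSummaryMemory(llm=OpenAI(), memory_key="summary"),
    ]
)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})  # {'chat_history': '...', 'summary': '...'}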

@ -0,0 +1,103 @@
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
from langchain.chains.llm import LLMChain
from langchain.llms.base import BaseLLM
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.prompt import (
ENTITY_EXTRACTION_PROMPT,
ENTITY_SUMMARIZATION_PROMPT,
)
from langchain.memory.utils import get_buffer_string, get_prompt_input_key
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import BaseMessage
class ConversationEntityMemory(BaseChatMemory, BaseModel):
"""Entity extractor & summarizer to memory."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
llm: BaseLLM
entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT
entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT
store: Dict[str, Optional[str]] = {}
entity_cache: List[str] = []
k: int = 3
chat_history_key: str = "history"
@property
def buffer(self) -> List[BaseMessage]:
return self.chat_memory.messages
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return ["entities", self.chat_history_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)
if self.input_key is None:
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
buffer_string = get_buffer_string(
self.buffer[-self.k * 2 :],
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
output = chain.predict(
history=buffer_string,
input=inputs[prompt_input_key],
)
if output.strip() == "NONE":
entities = []
else:
entities = [w.strip() for w in output.split(",")]
entity_summaries = {}
for entity in entities:
entity_summaries[entity] = self.store.get(entity, "")
self.entity_cache = entities
if self.return_messages:
buffer: Any = self.buffer[-self.k * 2 :]
else:
buffer = buffer_string
return {
self.chat_history_key: buffer,
"entities": entity_summaries,
}
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
super().save_context(inputs, outputs)
if self.input_key is None:
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
for entity in self.entity_cache:
chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)
# key value store for entity
existing_summary = self.store.get(entity, "")
buffer_string = get_buffer_string(
self.buffer[-self.k * 2 :],
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
output = chain.predict(
summary=existing_summary,
history=buffer_string,
input=inputs[prompt_input_key],
entity=entity,
)
self.store[entity] = output.strip()
def clear(self) -> None:
"""Clear memory contents."""
self.chat_memory.clear()
self.store = {}
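Entity memory needs the current input when loading, since entities are extracted from it, and only entities cached by the last load are summarized on save. A sketch, assuming an OpenAI API key (summaries are model-dependent):
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory

memory = ConversationEntityMemory(llm=OpenAI(temperature=0))
inputs = {"input": "Deven & Sam are working on a hackathon project"}
memory.load_memory_variables(inputs)  # extracts and caches entities: Deven, Sam
memory.save_context(inputs, {"output": "That sounds like a great project!"})
memory.load_memory_variables({"input": "who is Sam"})
# {'history': 'Human: ...\nAI: ...', 'entities': {'Sam': '...'}}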

@ -0,0 +1,122 @@
from typing import Any, Dict, List
from pydantic import BaseModel, Field
from langchain.chains.llm import LLMChain
from langchain.graphs import NetworkxEntityGraph
from langchain.graphs.networkx_graph import get_entities, parse_triples
from langchain.llms.base import BaseLLM
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.prompt import (
ENTITY_EXTRACTION_PROMPT,
KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,
)
from langchain.memory.utils import get_buffer_string, get_prompt_input_key
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import SystemMessage
class ConversationKGMemory(BaseChatMemory, BaseModel):
"""Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
"""
k: int = 2
"""Number of previous utterances to include in the context."""
human_prefix: str = "Human"
ai_prefix: str = "AI"
kg: NetworkxEntityGraph = Field(default_factory=NetworkxEntityGraph)
knowledge_extraction_prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT
entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT
llm: BaseLLM
memory_key: str = "history" #: :meta private:
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
entities = self._get_current_entities(inputs)
summaries = {}
for entity in entities:
knowledge = self.kg.get_entity_knowledge(entity)
if knowledge:
summaries[entity] = ". ".join(knowledge) + "."
if summaries:
summary_strings = [
f"On {entity}: {summary}" for entity, summary in summaries.items()
]
if self.return_messages:
context: Any = [SystemMessage(content=text) for text in summary_strings]
else:
context = "\n".join(summary_strings)
else:
if self.return_messages:
context = []
else:
context = ""
return {self.memory_key: context}
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:
"""Get the input key for the prompt."""
if self.input_key is None:
return get_prompt_input_key(inputs, self.memory_variables)
return self.input_key
def _get_prompt_output_key(self, outputs: Dict[str, Any]) -> str:
"""Get the output key for the prompt."""
if self.output_key is None:
if len(outputs) != 1:
raise ValueError(f"One output key expected, got {outputs.keys()}")
return list(outputs.keys())[0]
return self.output_key
def _get_current_entities(self, inputs: Dict[str, Any]) -> List[str]:
"""Get the current entities in the conversation."""
prompt_input_key = self._get_prompt_input_key(inputs)
chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)
buffer_string = get_buffer_string(
self.chat_memory.messages[-self.k * 2 :],
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
output = chain.predict(
history=buffer_string,
input=inputs[prompt_input_key],
)
return get_entities(output)
def _get_and_update_kg(self, inputs: Dict[str, Any]) -> None:
"""Get and update knowledge graph from the conversation history."""
chain = LLMChain(llm=self.llm, prompt=self.knowledge_extraction_prompt)
prompt_input_key = self._get_prompt_input_key(inputs)
buffer_string = get_buffer_string(
self.chat_memory.messages[-self.k * 2 :],
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
output = chain.predict(
history=buffer_string,
input=inputs[prompt_input_key],
)
knowledge = parse_triples(output)
for triple in knowledge:
self.kg.add_triple(triple)
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
super().save_context(inputs, outputs)
self._get_and_update_kg(inputs)
def clear(self) -> None:
"""Clear memory contents."""
super().clear()
self.kg.clear()
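A sketch of direct usage, assuming an OpenAI API key (the extracted triples depend on the model):
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})
memory.load_memory_variables({"input": "who is sam"})
# e.g. {'history': 'On Sam: Sam is friend.'}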

@ -0,0 +1,165 @@
# flake8: noqa
from langchain.prompts.prompt import PromptTemplate
_DEFAULT_ENTITY_MEMORY_CONVERSATION_TEMPLATE = """You are an assistant to a human, powered by a large language model trained by OpenAI.
You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.
Context:
{entities}
Current conversation:
{history}
Last line:
Human: {input}
You:"""
ENTITY_MEMORY_CONVERSATION_TEMPLATE = PromptTemplate(
input_variables=["entities", "history", "input"],
template=_DEFAULT_ENTITY_MEMORY_CONVERSATION_TEMPLATE,
)
_DEFAULT_SUMMARIZER_TEMPLATE = """Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.
EXAMPLE
Current summary:
The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.
New lines of conversation:
Human: Why do you think artificial intelligence is a force for good?
AI: Because artificial intelligence will help humans reach their full potential.
New summary:
The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.
END OF EXAMPLE
Current summary:
{summary}
New lines of conversation:
{new_lines}
New summary:"""
SUMMARY_PROMPT = PromptTemplate(
input_variables=["summary", "new_lines"], template=_DEFAULT_SUMMARIZER_TEMPLATE
)
_DEFAULT_ENTITY_EXTRACTION_TEMPLATE = """You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.
The conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.
Return the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).
EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.
Output: Langchain
END OF EXAMPLE
EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Person #2.
Output: Langchain, Person #2
END OF EXAMPLE
Conversation history (for reference only):
{history}
Last line of conversation (for extraction):
Human: {input}
Output:"""
ENTITY_EXTRACTION_PROMPT = PromptTemplate(
input_variables=["history", "input"], template=_DEFAULT_ENTITY_EXTRACTION_TEMPLATE
)
_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE = """You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.
The update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.
If there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.
Full conversation history (for context):
{history}
Entity to summarize:
{entity}
Existing summary of {entity}:
{summary}
Last line of conversation:
Human: {input}
Updated summary:"""
ENTITY_SUMMARIZATION_PROMPT = PromptTemplate(
input_variables=["entity", "summary", "history", "input"],
template=_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE,
)
KG_TRIPLE_DELIMITER = "<|>"
_DEFAULT_KNOWLEDGE_TRIPLE_EXTRACTION_TEMPLATE = (
"You are a networked intelligence helping a human track knowledge triples"
" about all relevant people, things, concepts, etc. and integrating"
" them with your knowledge stored within your weights"
" as well as that stored in a knowledge graph."
" Extract all of the knowledge triples from the last line of conversation."
" A knowledge triple is a clause that contains a subject, a predicate,"
" and an object. The subject is the entity being described,"
" the predicate is the property of the subject that is being"
" described, and the object is the value of the property.\n\n"
"EXAMPLE\n"
"Conversation history:\n"
"Person #1: Did you hear aliens landed in Area 51?\n"
"AI: No, I didn't hear that. What do you know about Area 51?\n"
"Person #1: It's a secret military base in Nevada.\n"
"AI: What do you know about Nevada?\n"
"Last line of conversation:\n"
"Person #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\n"
f"Output: (Nevada, is a, state){KG_TRIPLE_DELIMITER}(Nevada, is in, US)"
f"{KG_TRIPLE_DELIMITER}(Nevada, is the number 1 producer of, gold)\n"
"END OF EXAMPLE\n\n"
"EXAMPLE\n"
"Conversation history:\n"
"Person #1: Hello.\n"
"AI: Hi! How are you?\n"
"Person #1: I'm good. How are you?\n"
"AI: I'm good too.\n"
"Last line of conversation:\n"
"Person #1: I'm going to the store.\n\n"
"Output: NONE\n"
"END OF EXAMPLE\n\n"
"EXAMPLE\n"
"Conversation history:\n"
"Person #1: What do you know about Descartes?\n"
"AI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\n"
"Person #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\n"
"AI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\n"
"Person #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\n"
"Last line of conversation:\n"
f"Output: (Descartes, likes to drive, antique scooters){KG_TRIPLE_DELIMITER}(Descartes, plays, mandolin)\n"
"END OF EXAMPLE\n\n"
"Conversation history (for reference only):\n"
"{history}"
"\nLast line of conversation (for extraction):\n"
"Human: {input}\n\n"
"Output:"
)
KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT = PromptTemplate(
input_variables=["history", "input"],
template=_DEFAULT_KNOWLEDGE_TRIPLE_EXTRACTION_TEMPLATE,
)
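The templates can be formatted directly to inspect exactly what the LLM receives; for example, the summarizer prompt takes the running summary plus the new lines (sketch):
from langchain.memory.prompt import SUMMARY_PROMPT

text = SUMMARY_PROMPT.format(
    summary="The human greets the AI.",
    new_lines="Human: not much you\nAI: not much",
)
# `text` is the full progressive-summarization prompt string.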

@ -0,0 +1,28 @@
from typing import Any, Dict, List
from pydantic import BaseModel
from langchain.schema import BaseMemory
class SimpleMemory(BaseMemory, BaseModel):
"""Simple memory for storing context or other bits of information that shouldn't
ever change between prompts.
"""
memories: Dict[str, Any] = dict()
@property
def memory_variables(self) -> List[str]:
return list(self.memories.keys())
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
return self.memories
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Nothing should be saved or changed, my memory is set in stone."""
pass
def clear(self) -> None:
"""Nothing to clear, got a memory like a vault."""
pass
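A sketch of the intended use, pinning values that every prompt should see:
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"budget": "100 GBP", "language": "english"})
memory.load_memory_variables({})  # always {'budget': '100 GBP', 'language': 'english'}
memory.save_context({"input": "hi"}, {"output": "whats up"})  # no-op by design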

@ -0,0 +1,67 @@
from typing import Any, Dict, List
from pydantic import BaseModel, root_validator
from langchain.chains.llm import LLMChain
from langchain.llms.base import BaseLLM
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.prompt import SUMMARY_PROMPT
from langchain.memory.utils import get_buffer_string
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import SystemMessage
class ConversationSummaryMemory(BaseChatMemory, BaseModel):
"""Conversation summarizer to memory."""
buffer: str = ""
human_prefix: str = "Human"
ai_prefix: str = "AI"
llm: BaseLLM
prompt: BasePromptTemplate = SUMMARY_PROMPT
memory_key: str = "history" #: :meta private:
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
if self.return_messages:
buffer: Any = [SystemMessage(content=self.buffer)]
else:
buffer = self.buffer
return {self.memory_key: buffer}
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
"""Validate that prompt input variables are consistent."""
prompt_variables = values["prompt"].input_variables
expected_keys = {"summary", "new_lines"}
if expected_keys != set(prompt_variables):
raise ValueError(
"Got unexpected prompt input variables. The prompt expects "
f"{prompt_variables}, but it should have {expected_keys}."
)
return values
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
super().save_context(inputs, outputs)
new_lines = get_buffer_string(
self.chat_memory.messages[-2:],
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
chain = LLMChain(llm=self.llm, prompt=self.prompt)
self.buffer = chain.predict(summary=self.buffer, new_lines=new_lines)
def clear(self) -> None:
"""Clear memory contents."""
super().clear()
self.buffer = ""

@ -0,0 +1,95 @@
from typing import Any, Dict, List
from pydantic import BaseModel, root_validator
from langchain.chains.llm import LLMChain
from langchain.llms.base import BaseLLM
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.prompt import SUMMARY_PROMPT
from langchain.memory.utils import get_buffer_string
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import BaseMessage, SystemMessage
class ConversationSummaryBufferMemory(BaseChatMemory, BaseModel):
"""Buffer with summarizer for storing conversation memory."""
max_token_limit: int = 2000
human_prefix: str = "Human"
ai_prefix: str = "AI"
moving_summary_buffer: str = ""
llm: BaseLLM
prompt: BasePromptTemplate = SUMMARY_PROMPT
memory_key: str = "history"
@property
def buffer(self) -> List[BaseMessage]:
return self.chat_memory.messages
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key]
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer."""
buffer = self.buffer
if self.moving_summary_buffer != "":
first_messages: List[BaseMessage] = [
SystemMessage(content=self.moving_summary_buffer)
]
buffer = first_messages + buffer
if self.return_messages:
final_buffer: Any = buffer
else:
final_buffer = get_buffer_string(
buffer, human_prefix=self.human_prefix, ai_prefix=self.ai_prefix
)
return {self.memory_key: final_buffer}
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
"""Validate that prompt input variables are consistent."""
prompt_variables = values["prompt"].input_variables
expected_keys = {"summary", "new_lines"}
if expected_keys != set(prompt_variables):
raise ValueError(
"Got unexpected prompt input variables. The prompt expects "
f"{prompt_variables}, but it should have {expected_keys}."
)
return values
def get_num_tokens_list(self, arr: List[BaseMessage]) -> List[int]:
"""Get list of number of tokens in each string in the input array."""
return [self.llm.get_num_tokens(get_buffer_string([x])) for x in arr]
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
super().save_context(inputs, outputs)
# Prune buffer if it exceeds max token limit
buffer = self.chat_memory.messages
curr_buffer_length = sum(self.get_num_tokens_list(buffer))
if curr_buffer_length > self.max_token_limit:
pruned_memory = []
while curr_buffer_length > self.max_token_limit:
pruned_memory.append(buffer.pop(0))
curr_buffer_length = sum(self.get_num_tokens_list(buffer))
chain = LLMChain(llm=self.llm, prompt=self.prompt)
self.moving_summary_buffer = chain.predict(
summary=self.moving_summary_buffer,
new_lines=(
get_buffer_string(
pruned_memory,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
),
)
def clear(self) -> None:
"""Clear memory contents."""
super().clear()
self.moving_summary_buffer = ""

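A hedged sketch of the pruning behaviour defined above, again using the FakeLLM test stub. The token limit of 5 is an arbitrary small value chosen to force pruning quickly; real usage would keep the 2000-token default.

```python
# Sketch under assumptions: FakeLLM is the repo's test stub, and
# max_token_limit=5 is deliberately tiny to trigger pruning.
from langchain.memory.summary_buffer import ConversationSummaryBufferMemory
from tests.unit_tests.llms.fake_llm import FakeLLM

memory = ConversationSummaryBufferMemory(llm=FakeLLM(), max_token_limit=5)
memory.save_context({"input": "hi"}, {"output": "hello"})
memory.save_context({"input": "how are you?"}, {"output": "quite well"})
# Once the buffered messages exceed max_token_limit tokens, the oldest
# messages are popped and summarized into moving_summary_buffer; on load,
# that summary is prepended as a SystemMessage ahead of the raw messages.
print(memory.moving_summary_buffer)
print(memory.load_memory_variables({}))
```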
@ -0,0 +1,38 @@
from typing import Any, Dict, List
from langchain.schema import (
AIMessage,
BaseMessage,
ChatMessage,
HumanMessage,
SystemMessage,
)
def get_buffer_string(
messages: List[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI"
) -> str:
"""Get buffer string of messages."""
string_messages = []
for m in messages:
if isinstance(m, HumanMessage):
role = human_prefix
elif isinstance(m, AIMessage):
role = ai_prefix
elif isinstance(m, SystemMessage):
role = "System"
elif isinstance(m, ChatMessage):
role = m.role
else:
raise ValueError(f"Got unsupported message type: {m}")
string_messages.append(f"{role}: {m.content}")
return "\n".join(string_messages)
def get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str:
# "stop" is a special key that can be passed as input but is not used to
# format the prompt.
prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"]))
if len(prompt_input_keys) != 1:
raise ValueError(f"One input key expected got {prompt_input_keys}")
return prompt_input_keys[0]

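A quick illustration of `get_buffer_string`'s output format, following the role-prefix mapping in the function above:

```python
from langchain.memory.utils import get_buffer_string
from langchain.schema import AIMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are helpful."),
    HumanMessage(content="hi"),
    AIMessage(content="hello"),
]
# HumanMessage and AIMessage take the configurable prefixes;
# SystemMessage is always rendered as "System".
print(get_buffer_string(messages, ai_prefix="Assistant"))
# System: You are helpful.
# Human: hi
# Assistant: hello
```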
@ -1,5 +1,13 @@
"""Prompt template classes."""
from langchain.prompts.base import BasePromptTemplate
from langchain.prompts.chat import (
AIMessagePromptTemplate,
ChatMessagePromptTemplate,
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.few_shot_with_templates import FewShotPromptWithTemplates
from langchain.prompts.loading import load_prompt
@ -12,4 +20,10 @@ __all__ = [
"FewShotPromptTemplate",
"Prompt",
"FewShotPromptWithTemplates",
"ChatPromptTemplate",
"MessagesPlaceholder",
"HumanMessagePromptTemplate",
"AIMessagePromptTemplate",
"SystemMessagePromptTemplate",
"ChatMessagePromptTemplate",
]

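With this hunk the chat prompt classes are re-exported at the package root, so both import paths resolve to the same objects:

```python
# Both styles now work; the root import is just a re-export.
from langchain.prompts import ChatPromptTemplate
from langchain.prompts.chat import ChatPromptTemplate as ChatPromptTemplateDirect

assert ChatPromptTemplate is ChatPromptTemplateDirect
```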
@ -20,6 +20,44 @@ from langchain.schema import (
class BaseMessagePromptTemplate(BaseModel, ABC):
@abstractmethod
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""To messages."""
@property
@abstractmethod
def input_variables(self) -> List[str]:
"""Input variables for this prompt template."""
class MessagesPlaceholder(BaseMessagePromptTemplate):
"""Prompt template that assumes variable is already list of messages."""
variable_name: str
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""To a BaseMessage."""
value = kwargs[self.variable_name]
if not isinstance(value, list):
raise ValueError(
f"variable {self.variable_name} should be a list of base messages, "
f"got {value}"
)
for v in value:
if not isinstance(v, BaseMessage):
raise ValueError(
f"variable {self.variable_name} should be a list of base messages,"
f" got {value}"
)
return value
@property
def input_variables(self) -> List[str]:
"""Input variables for this prompt template."""
return [self.variable_name]
class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):
prompt: StringPromptTemplate
additional_kwargs: dict = Field(default_factory=dict)
@ -32,8 +70,15 @@ class BaseMessagePromptTemplate(BaseModel, ABC):
def format(self, **kwargs: Any) -> BaseMessage:
"""To a BaseMessage."""
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
return [self.format(**kwargs)]
@property
def input_variables(self) -> List[str]:
return self.prompt.input_variables
class ChatMessagePromptTemplate(BaseMessagePromptTemplate):
class ChatMessagePromptTemplate(BaseStringMessagePromptTemplate):
role: str
def format(self, **kwargs: Any) -> BaseMessage:
@ -43,19 +88,19 @@ class ChatMessagePromptTemplate(BaseMessagePromptTemplate):
)
class HumanMessagePromptTemplate(BaseMessagePromptTemplate):
class HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return HumanMessage(content=text, additional_kwargs=self.additional_kwargs)
class AIMessagePromptTemplate(BaseMessagePromptTemplate):
class AIMessagePromptTemplate(BaseStringMessagePromptTemplate):
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return AIMessage(content=text, additional_kwargs=self.additional_kwargs)
class SystemMessagePromptTemplate(BaseMessagePromptTemplate):
class SystemMessagePromptTemplate(BaseStringMessagePromptTemplate):
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return SystemMessage(content=text, additional_kwargs=self.additional_kwargs)
@ -105,7 +150,7 @@ class ChatPromptTemplate(BasePromptTemplate, ABC):
) -> ChatPromptTemplate:
input_vars = set()
for message in messages:
input_vars.update(message.prompt.input_variables)
input_vars.update(message.input_variables)
return cls(input_variables=list(input_vars), messages=messages)
def format(self, **kwargs: Any) -> str:
@ -115,12 +160,10 @@ class ChatPromptTemplate(BasePromptTemplate, ABC):
result = []
for message_template in self.messages:
rel_params = {
k: v
for k, v in kwargs.items()
if k in message_template.prompt.input_variables
k: v for k, v in kwargs.items() if k in message_template.input_variables
}
message = message_template.format(**rel_params)
result.append(message)
message = message_template.format_messages(**rel_params)
result.extend(message)
return ChatPromptValue(messages=result)
def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:

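A sketch of what the `format_messages` change above buys: the template loop now extends the result rather than appending one message per template, so a `MessagesPlaceholder` can splice an entire history list in place. This assumes `format_prompt` and `ChatPromptValue.to_messages` behave as in the surrounding code.

```python
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain.schema import AIMessage, HumanMessage

prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{question}"),
])
value = prompt.format_prompt(
    history=[HumanMessage(content="2+2?"), AIMessage(content="4")],
    question="And doubled?",
)
# Three messages: both history messages spliced in, then the new human turn.
print(value.to_messages())
```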
@ -4,7 +4,7 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Any, Dict, List, NamedTuple, Optional
from pydantic import BaseModel, Field, root_validator
from pydantic import BaseModel, Extra, Field, root_validator
class AgentAction(NamedTuple):
@ -133,3 +133,39 @@ class BaseLanguageModel(BaseModel, ABC):
# calculate the number of tokens in the tokenized text
return len(tokenized_text)
class BaseMemory(BaseModel, ABC):
"""Base interface for memory in chains."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
@abstractmethod
def memory_variables(self) -> List[str]:
"""Input keys this memory class will load dynamically."""
@abstractmethod
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return key-value pairs given the text input to the chain.
If None, return all memories
"""
@abstractmethod
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save the context of this model run to memory."""
@abstractmethod
def clear(self) -> None:
"""Clear memory contents."""
# For backwards compatibility
Memory = BaseMemory

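The new `BaseMemory` interface is a four-method contract. A minimal custom memory (a hypothetical class, for illustration only) looks like this:

```python
from typing import Any, Dict, List

from langchain.schema import BaseMemory


class LastOutputMemory(BaseMemory):
    """Hypothetical memory that remembers only the most recent output."""

    last_output: str = ""

    @property
    def memory_variables(self) -> List[str]:
        return ["last_output"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"last_output": self.last_output}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Keep whichever single output value the chain produced.
        self.last_output = next(iter(outputs.values()), "")

    def clear(self) -> None:
        self.last_output = ""
```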
@ -1,5 +1,5 @@
"""Test memory functionality."""
from langchain.chains.conversation.memory import ConversationSummaryBufferMemory
from langchain.memory.summary_buffer import ConversationSummaryBufferMemory
from tests.unit_tests.llms.fake_llm import FakeLLM

@ -5,11 +5,12 @@ import pytest
from pydantic import BaseModel
from langchain.callbacks.base import CallbackManager
from langchain.chains.base import Chain, Memory
from langchain.chains.base import Chain
from langchain.schema import BaseMemory
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler
class FakeMemory(Memory, BaseModel):
class FakeMemory(BaseMemory, BaseModel):
"""Fake memory class for testing purposes."""
@property

@ -1,14 +1,12 @@
"""Test conversation chain and memory."""
import pytest
from langchain.chains.base import Memory
from langchain.chains.conversation.base import ConversationChain
from langchain.chains.conversation.memory import (
ConversationBufferMemory,
ConversationBufferWindowMemory,
ConversationSummaryMemory,
)
from langchain.memory.buffer import ConversationBufferMemory
from langchain.memory.buffer_window import ConversationBufferWindowMemory
from langchain.memory.summary import ConversationSummaryMemory
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import BaseMemory
from tests.unit_tests.llms.fake_llm import FakeLLM
@ -16,14 +14,14 @@ def test_memory_ai_prefix() -> None:
"""Test that ai_prefix in the memory component works."""
memory = ConversationBufferMemory(memory_key="foo", ai_prefix="Assistant")
memory.save_context({"input": "bar"}, {"output": "foo"})
assert memory.buffer == "\nHuman: bar\nAssistant: foo"
assert memory.buffer == "Human: bar\nAssistant: foo"
def test_memory_human_prefix() -> None:
"""Test that human_prefix in the memory component works."""
memory = ConversationBufferMemory(memory_key="foo", human_prefix="Friend")
memory.save_context({"input": "bar"}, {"output": "foo"})
assert memory.buffer == "\nFriend: bar\nAI: foo"
assert memory.buffer == "Friend: bar\nAI: foo"
def test_conversation_chain_works() -> None:
@ -60,7 +58,7 @@ def test_conversation_chain_errors_bad_variable() -> None:
ConversationSummaryMemory(llm=FakeLLM(), memory_key="baz"),
],
)
def test_conversation_memory(memory: Memory) -> None:
def test_conversation_memory(memory: BaseMemory) -> None:
"""Test basic conversation memory functionality."""
# This is a good input because the input is not the same as baz.
good_inputs = {"foo": "bar", "baz": "foo"}
@ -92,7 +90,7 @@ def test_conversation_memory(memory: Memory) -> None:
ConversationBufferWindowMemory(memory_key="baz"),
],
)
def test_clearing_conversation_memory(memory: Memory) -> None:
def test_clearing_conversation_memory(memory: BaseMemory) -> None:
"""Test clearing the conversation memory."""
# This is a good input because the input is not the same as baz.
good_inputs = {"foo": "bar", "baz": "foo"}

@ -1,4 +1,4 @@
from langchain.chains.base import SimpleMemory
from langchain.memory.simple import SimpleMemory
def test_simple_memory() -> None:

@ -4,8 +4,9 @@ from typing import Dict, List
import pytest
from pydantic import BaseModel
from langchain.chains.base import Chain, SimpleMemory
from langchain.chains.base import Chain
from langchain.chains.sequential import SequentialChain, SimpleSequentialChain
from langchain.memory.simple import SimpleMemory
class FakeChain(Chain, BaseModel):
