ConversationEntityMemory: Chain which uses an entity extraction & summarization prompt to maintain a key-value store of memory information (#678)

cc @devennavani

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
harrison/document-split
Samantha Whitmore 1 year ago committed by GitHub
parent 64580259d0
commit 77e3d58922

@@ -0,0 +1,459 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ff31084d",
"metadata": {},
"source": [
"# Entity Memory\n",
"This notebook shows how to work with a memory module that remembers things about specific entities. It extracts information on entities (using LLMs) and builds up its knowledge about that entity over time (also using LLMs)."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "13471fbd",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, ConversationChain\n",
"from langchain.chains.conversation.memory import ConversationEntityMemory\n",
"from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE\n",
"from pydantic import BaseModel\n",
"from typing import List, Dict, Any"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "183346e2",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"conversation = ConversationChain(\n",
" llm=llm, \n",
" verbose=True,\n",
" prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,\n",
" memory=ConversationEntityMemory(llm=llm)\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7eb1460a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are an assistant to a human, powered by a large language model trained by OpenAI.\n",
"\n",
"You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
"You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
"\n",
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Deven': '', 'Sam': ''}\n",
"\n",
"Current conversation:\n",
"\n",
"Last line:\n",
"Human: Deven & Sam are working on a hackathon project\n",
"You:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' That sounds like a great project! What kind of project are they working on?'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Deven & Sam are working on a hackathon project\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "46324ca8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are an assistant to a human, powered by a large language model trained by OpenAI.\n",
"\n",
"You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
"You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
"\n",
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Deven': 'Deven is working on a hackathon project with Sam.', 'Sam': 'Sam is working on a hackathon project with Deven.', 'Langchain': ''}\n",
"\n",
"Current conversation:\n",
"Human: Deven & Sam are working on a hackathon project\n",
"AI: That sounds like a great project! What kind of project are they working on?\n",
"Last line:\n",
"Human: They are trying to add more complex memory structures to Langchain\n",
"You:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' That sounds like an interesting project! What kind of memory structures are they trying to add?'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"They are trying to add more complex memory structures to Langchain\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ff2ebf6b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are an assistant to a human, powered by a large language model trained by OpenAI.\n",
"\n",
"You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
"You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
"\n",
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Deven': 'Deven is working on a hackathon project with Sam to add more complex memory structures to Langchain.', 'Sam': 'Sam is working on a hackathon project with Deven to add more complex memory structures to Langchain.', 'Langchain': 'Langchain is a project that seeks to add more complex memory structures.', 'Key-Value Store': ''}\n",
"\n",
"Current conversation:\n",
"Human: Deven & Sam are working on a hackathon project\n",
"AI: That sounds like a great project! What kind of project are they working on?\n",
"Human: They are trying to add more complex memory structures to Langchain\n",
"AI: That sounds like an interesting project! What kind of memory structures are they trying to add?\n",
"Last line:\n",
"Human: They are adding in a key-value store for entities mentioned so far in the conversation.\n",
"You:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' That sounds like a great idea! How will the key-value store work?'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"They are adding in a key-value store for entities mentioned so far in the conversation.\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "56cfd4ba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are an assistant to a human, powered by a large language model trained by OpenAI.\n",
"\n",
"You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
"You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
"\n",
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Deven': 'Deven is working on a hackathon project with Sam to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.', 'Sam': 'Sam is working on a hackathon project with Deven to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.'}\n",
"\n",
"Current conversation:\n",
"Human: Deven & Sam are working on a hackathon project\n",
"AI: That sounds like a great project! What kind of project are they working on?\n",
"Human: They are trying to add more complex memory structures to Langchain\n",
"AI: That sounds like an interesting project! What kind of memory structures are they trying to add?\n",
"Human: They are adding in a key-value store for entities mentioned so far in the conversation.\n",
"AI: That sounds like a great idea! How will the key-value store work?\n",
"Last line:\n",
"Human: What do you know about Deven & Sam?\n",
"You:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Deven and Sam are working on a hackathon project to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation. They seem to be very motivated and passionate about their project, and are working hard to make it a success.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"What do you know about Deven & Sam?\")"
]
},
{
"cell_type": "markdown",
"id": "4e6df549",
"metadata": {},
"source": [
"## Inspecting the memory store\n",
"We can also inspect the memory store directly. In the following examaples, we look at it directly, and then go through some examples of adding information and watch how it changes."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "038b4d3f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'Deven': 'Deven is working on a hackathon project with Sam to add more '\n",
" 'complex memory structures to Langchain, including a key-value store '\n",
" 'for entities mentioned so far in the conversation.',\n",
" 'Key-Value Store': 'Key-Value Store: A data structure that stores values '\n",
" 'associated with a unique key, allowing for efficient '\n",
" 'retrieval of values. Deven and Sam are adding a key-value '\n",
" 'store for entities mentioned so far in the conversation.',\n",
" 'Langchain': 'Langchain is a project that seeks to add more complex memory '\n",
" 'structures, including a key-value store for entities mentioned '\n",
" 'so far in the conversation.',\n",
" 'Sam': 'Sam is working on a hackathon project with Deven to add more complex '\n",
" 'memory structures to Langchain, including a key-value store for '\n",
" 'entities mentioned so far in the conversation.'}\n"
]
}
],
"source": [
"from pprint import pprint\n",
"pprint(conversation.memory.store)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2df4800e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are an assistant to a human, powered by a large language model trained by OpenAI.\n",
"\n",
"You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
"You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
"\n",
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Daimon': '', 'Sam': 'Sam is working on a hackathon project with Deven to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation.'}\n",
"\n",
"Current conversation:\n",
"Human: They are trying to add more complex memory structures to Langchain\n",
"AI: That sounds like an interesting project! What kind of memory structures are they trying to add?\n",
"Human: They are adding in a key-value store for entities mentioned so far in the conversation.\n",
"AI: That sounds like a great idea! How will the key-value store work?\n",
"Human: What do you know about Deven & Sam?\n",
"AI: Deven and Sam are working on a hackathon project to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation. They seem to be very motivated and passionate about their project, and are working hard to make it a success.\n",
"Last line:\n",
"Human: Sam is the founder of a company called Daimon.\n",
"You:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"\\nThat's impressive! It sounds like Sam is a very successful entrepreneur. What kind of company is Daimon?\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Sam is the founder of a company called Daimon.\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ebe9e36f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'Daimon': 'Daimon is a company founded by Sam.',\n",
" 'Deven': 'Deven is working on a hackathon project with Sam to add more '\n",
" 'complex memory structures to Langchain, including a key-value store '\n",
" 'for entities mentioned so far in the conversation.',\n",
" 'Key-Value Store': 'Key-Value Store: A data structure that stores values '\n",
" 'associated with a unique key, allowing for efficient '\n",
" 'retrieval of values. Deven and Sam are adding a key-value '\n",
" 'store for entities mentioned so far in the conversation.',\n",
" 'Langchain': 'Langchain is a project that seeks to add more complex memory '\n",
" 'structures, including a key-value store for entities mentioned '\n",
" 'so far in the conversation.',\n",
" 'Sam': 'Sam is working on a hackathon project with Deven to add more complex '\n",
" 'memory structures to Langchain, including a key-value store for '\n",
" 'entities mentioned so far in the conversation. He is also the founder '\n",
" 'of a company called Daimon.'}\n"
]
}
],
"source": [
"from pprint import pprint\n",
"pprint(conversation.memory.store)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "dd547144",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are an assistant to a human, powered by a large language model trained by OpenAI.\n",
"\n",
"You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
"\n",
"You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
"\n",
"Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n",
"\n",
"Context:\n",
"{'Sam': 'Sam is working on a hackathon project with Deven to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation. He is also the founder of a company called Daimon.', 'Daimon': 'Daimon is a company founded by Sam.'}\n",
"\n",
"Current conversation:\n",
"Human: They are adding in a key-value store for entities mentioned so far in the conversation.\n",
"AI: That sounds like a great idea! How will the key-value store work?\n",
"Human: What do you know about Deven & Sam?\n",
"AI: Deven and Sam are working on a hackathon project to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation. They seem to be very motivated and passionate about their project, and are working hard to make it a success.\n",
"Human: Sam is the founder of a company called Daimon.\n",
"AI: \n",
"That's impressive! It sounds like Sam is a very successful entrepreneur. What kind of company is Daimon?\n",
"Last line:\n",
"Human: What do you know about Sam?\n",
"You:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Sam is the founder of a company called Daimon. He is also working on a hackathon project with Deven to add more complex memory structures to Langchain, including a key-value store for entities mentioned so far in the conversation. He seems to be very motivated and passionate about his project, and is working hard to make it a success.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"What do you know about Sam?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e00463b5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -7,6 +7,9 @@ The examples here all highlight how to use memory in different ways.
`ChatGPT Clone <./examples/chatgpt_clone.html>`_: How to recreate ChatGPT with LangChain prompting + memory components.
`Entity Memory <./examples/entity_summary_memory.html>`_: How to use a type of memory that organizes information by entity.
`Adding Memory to Multi-Input Chain <./examples/adding_memory_chain_multiple_inputs.html>`_: How to add a memory component to any multiple input chain.
`Conversational Memory Customization <./examples/conversational_customization.html>`_: How to customize existing conversation memory components.

@@ -12,3 +12,8 @@ There are a few different ways to accomplish this:
- Summary: This involves summarizing previous conversations and passing that summary in, instead of the raw dialogue itself. Compared to `Buffer`, this compresses information: it is lossier, but also less likely to run into context length limits (see the sketch after this list).
- Combination: A combination of the above two approaches, where you compute a summary but also pass in some previous interactions directly!
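By way of illustration, here is a minimal sketch of the summary approach. It is not part of this change: it assumes the `ConversationSummaryMemory` class shown later in this diff and an OpenAI API key configured in the environment.

```python
from langchain import OpenAI, ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory

llm = OpenAI(temperature=0)
# Each turn is folded into a running summary that is passed back into the prompt,
# instead of replaying the raw dialogue.
conversation = ConversationChain(llm=llm, memory=ConversationSummaryMemory(llm=llm))
conversation.predict(input="Deven & Sam are working on a hackathon project")
```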
## Entity Memory
A more complex form of memory is remembering information about specific entities in the conversation.
This is a more direct and organized way of remembering information over time.
Putting it in a more structured form also makes it easy to inspect what is known about specific entities.
For a guide on how to use this type of memory, see [this notebook](./examples/entity_summary_memory.ipynb).
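As a quick illustration, here is a minimal sketch based on that notebook (it uses the classes and prompt added in this change, and assumes an OpenAI API key is configured):

```python
from langchain import OpenAI, ConversationChain
from langchain.chains.conversation.memory import ConversationEntityMemory
from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,  # exposes {entities}, {history}, {input}
    memory=ConversationEntityMemory(llm=llm),
)
conversation.predict(input="Deven & Sam are working on a hackathon project")
# Per-entity summaries accumulate in a key-value store:
print(conversation.memory.store)
```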

@@ -4,7 +4,11 @@ from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field, root_validator
from langchain.chains.base import Memory
from langchain.chains.conversation.prompt import SUMMARY_PROMPT
from langchain.chains.conversation.prompt import (
    ENTITY_EXTRACTION_PROMPT,
    ENTITY_SUMMARIZATION_PROMPT,
    SUMMARY_PROMPT,
)
from langchain.chains.llm import LLMChain
from langchain.llms.base import BaseLLM
from langchain.prompts.base import BasePromptTemplate
@@ -216,6 +220,89 @@ class ConversationSummaryMemory(Memory, BaseModel):
        self.buffer = ""


class ConversationEntityMemory(Memory, BaseModel):
    """Entity extractor & summarizer to memory."""

    buffer: List[str] = []
    human_prefix: str = "Human"
    ai_prefix: str = "AI"
    """Prefix to use for AI generated responses."""
    llm: BaseLLM
    entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT
    entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT
    memory_keys: List[str] = ["entities", "history"]  #: :meta private:
    output_key: Optional[str] = None
    input_key: Optional[str] = None
    store: Dict[str, Optional[str]] = {}
    """Key-value store mapping each entity name to its running summary."""
    entity_cache: List[str] = []
    """Entities extracted from the most recent input, awaiting summarization."""
    k: int = 3
    """Number of recent human/AI exchanges to include as conversation history."""

    @property
    def memory_variables(self) -> List[str]:
        """Will always return list of memory variables.

        :meta private:
        """
        return ["entities", "history"]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return history buffer."""
        chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)
        if self.input_key is None:
            prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
        else:
            prompt_input_key = self.input_key
        output = chain.predict(
            history="\n".join(self.buffer[-self.k :]),
            input=inputs[prompt_input_key],
        )
        if output.strip() == "NONE":
            entities = []
        else:
            entities = [w.strip() for w in output.split(",")]
        entity_summaries = {}
        for entity in entities:
            entity_summaries[entity] = self.store.get(entity, "")
        self.entity_cache = entities
        return {
            "history": "\n".join(self.buffer[-self.k :]),
            "entities": entity_summaries,
        }

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer."""
        if self.input_key is None:
            prompt_input_key = _get_prompt_input_key(inputs, self.memory_variables)
        else:
            prompt_input_key = self.input_key
        if self.output_key is None:
            if len(outputs) != 1:
                raise ValueError(f"One output key expected, got {outputs.keys()}")
            output_key = list(outputs.keys())[0]
        else:
            output_key = self.output_key
        human = f"{self.human_prefix}: " + inputs[prompt_input_key]
        ai = f"{self.ai_prefix}: " + outputs[output_key]
        for entity in self.entity_cache:
            chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)
            # Update the key-value store entry (running summary) for this entity.
            existing_summary = self.store.get(entity, "")
            output = chain.predict(
                summary=existing_summary,
                history="\n".join(self.buffer[-self.k :]),
                input=inputs[prompt_input_key],
                entity=entity,
            )
            self.store[entity] = output.strip()
        new_lines = "\n".join([human, ai])
        self.buffer.append(new_lines)

    def clear(self) -> None:
        """Clear memory contents."""
        self.buffer = []
        self.store = {}


class ConversationSummaryBufferMemory(Memory, BaseModel):
    """Buffer with summarizer for storing conversation memory."""

@@ -11,6 +11,28 @@ PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
)

_DEFAULT_ENTITY_MEMORY_CONVERSATION_TEMPLATE = """You are an assistant to a human, powered by a large language model trained by OpenAI.

You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

You are constantly learning and improving, and your capabilities are constantly evolving. You are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.

Context:
{entities}

Current conversation:
{history}
Last line:
Human: {input}
You:"""

ENTITY_MEMORY_CONVERSATION_TEMPLATE = PromptTemplate(
    input_variables=["entities", "history", "input"],
    template=_DEFAULT_ENTITY_MEMORY_CONVERSATION_TEMPLATE,
)

_DEFAULT_SUMMARIZER_TEMPLATE = """Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.
EXAMPLE
@@ -35,3 +57,64 @@ New summary:"""
SUMMARY_PROMPT = PromptTemplate(
    input_variables=["summary", "new_lines"], template=_DEFAULT_SUMMARIZER_TEMPLATE
)
_DEFAULT_ENTITY_EXTRACTION_TEMPLATE = """You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.
The conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.
Return the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).
EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.
Output: Langchain
END OF EXAMPLE
EXAMPLE
Conversation history:
Person #1: how's it going today?
AI: "It's going great! How about you?"
Person #1: good! busy working on Langchain. lots to do.
AI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"
Last line:
Person #1: i'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Person #2.
Output: Langchain, Person #2
END OF EXAMPLE
Conversation history (for reference only):
{history}
Last line of conversation (for extraction):
Human: {input}
Output:"""
ENTITY_EXTRACTION_PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_ENTITY_EXTRACTION_TEMPLATE
)
_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE = """You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.
The update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.
If there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.
Full conversation history (for context):
{history}
Entity to summarize:
{entity}
Existing summary of {entity}:
{summary}
Last line of conversation:
Human: {input}
Updated summary:"""
ENTITY_SUMMARIZATION_PROMPT = PromptTemplate(
    input_variables=["entity", "summary", "history", "input"],
    template=_DEFAULT_ENTITY_SUMMARIZATION_TEMPLATE,
)
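
For reference, a minimal sketch of how `ConversationEntityMemory` drives these two prompts (illustrative only; it mirrors the `chain.predict` calls in `memory.py` above and assumes an OpenAI API key is configured):

```python
from langchain import OpenAI
from langchain.chains.llm import LLMChain
from langchain.chains.conversation.prompt import (
    ENTITY_EXTRACTION_PROMPT,
    ENTITY_SUMMARIZATION_PROMPT,
)

llm = OpenAI(temperature=0)

# 1. Extract entities from the latest line of conversation.
extraction_chain = LLMChain(llm=llm, prompt=ENTITY_EXTRACTION_PROMPT)
entities = extraction_chain.predict(
    history="", input="Deven & Sam are working on a hackathon project"
)  # e.g. "Deven, Sam" (or "NONE")

# 2. Update each entity's running summary for the key-value store.
summarization_chain = LLMChain(llm=llm, prompt=ENTITY_SUMMARIZATION_PROMPT)
updated = summarization_chain.predict(
    entity="Deven",
    summary="",  # existing summary; empty on first mention
    history="",
    input="Deven & Sam are working on a hackathon project",
)
```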
