Harrison/memory docs (#195)

update memory docs and change variables
harrison/output_parser^2
Harrison Chase 1 year ago committed by GitHub
parent f18a08f58d
commit 08deed9002

@@ -46,7 +46,7 @@ However, there are still some challenges going from that to an application runni
- Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
- Prompt Template: An object responsible for constructing the final prompt to pass to an LLM.
**Problems solved**
**Problems Solved**
- Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
@@ -59,7 +59,7 @@ LangChain provides several parts to help with that.
- Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
- Chains: A combination of multiple tools in a deterministic manner.
**Problems solved**
**Problems Solved**
- Standard interface for working with Chains
- Easy way to construct chains of LLMs
- Lots of integrations with other tools that you may want to use in conjunction with LLMs
@@ -75,13 +75,24 @@ Depending on the user input, the agent can then decide which, if any, of these t
- Agent: An LLM-powered class responsible for determining which tools to use and in what order.
**Problems solved**
**Problems Solved**
- Standard agent interfaces
- A selection of powerful agents to choose from
- Common chains that can be used as tools
### Memory
Coming soon.
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
In some applications (chatbots being a great example) it is important to remember previous interactions,
both at a short-term and a long-term level. The concept of "Memory" exists to do exactly that.
**Key Concepts**
- Memory: A class that can be added to an Agent or Chain to (1) pull in memory variables before calling that chain/agent, and (2) create new memories after the chain/agent finishes.
- Memory Variables: Variables returned from a Memory class, to be passed into the chain/agent along with the user input.
**Problems Solved**
- Standard memory interfaces
- A collection of common memory implementations to choose from
- Common chains/agents that use memory (e.g. chatbots)
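The two responsibilities listed above can be sketched in plain Python. This is a minimal stand-in, not the actual LangChain `Memory` class; the `BufferMemory` name and the `human_input`/`text` keys are illustrative only:

```python
from typing import Any, Dict, List


class BufferMemory:
    """Minimal stand-in for the Memory interface described above."""

    def __init__(self, memory_key: str = "history") -> None:
        self.memory_key = memory_key
        self.buffer = ""

    @property
    def memory_variables(self) -> List[str]:
        # The variables this memory contributes to the chain/agent inputs.
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        # (1) pull in memory variables before calling the chain/agent
        return {self.memory_key: self.buffer}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # (2) create new memories after the chain/agent finishes
        self.buffer += f"\nHuman: {inputs['human_input']}\nAI: {outputs['text']}"


memory = BufferMemory(memory_key="chat_history")
# Before the chain runs, its inputs are merged with the memory variables:
inputs = dict({"human_input": "Hi there"}, **memory.load_memory_variables({}))
# After the chain runs, the exchange is written back:
memory.save_context({"human_input": "Hi there"}, {"text": "Hello!"})
```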
## 🤖 Developer Guide

@@ -7,7 +7,7 @@
"source": [
"# Adding Memory To an LLMChain\n",
"\n",
"This notebook goes over how to use the Memory class with an arbitrary chain. For the purposes of this walkthrough, we will add `ConversationBufferMemory` to a `LLMChain`."
"This notebook goes over how to use the Memory class with an LLMChain. For the purposes of this walkthrough, we will add the `ConversationBufferMemory` class, although this can be any memory class."
]
},
{
@@ -46,7 +46,7 @@
" input_variables=[\"chat_history\", \"human_input\"], \n",
" template=template\n",
")\n",
"memory = ConversationBufferMemory(dynamic_key=\"chat_history\")"
"memory = ConversationBufferMemory(memory_key=\"chat_history\")"
]
},
{
@@ -90,7 +90,7 @@
{
"data": {
"text/plain": [
"'\\n\\nHi there my friend! Thank you for talking with me.'"
"' Hi there!'"
]
},
"execution_count": 4,
@@ -120,9 +120,7 @@
"\n",
"\n",
"Human: Hi there my friend\n",
"AI: \n",
"\n",
"Hi there my friend! Thank you for talking with me.\n",
"AI: Hi there!\n",
"Human: Not to bad - how are you?\n",
"Chatbot:\u001b[0m\n",
"\n",
@@ -132,7 +130,7 @@
{
"data": {
"text/plain": [
"\"\\n\\nI'm doing well, thank you for asking. How about you?\""
"\"\\n\\nI'm doing well, thanks for asking. How about you?\""
]
},
"execution_count": 5,

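The `memory_key` rename in this notebook matters because the key must match one of the prompt's input variables. A dependency-free sketch of that wiring, with plain `str.format` standing in for `PromptTemplate` (the template text mirrors the notebook, but this is not the real chain):

```python
# The memory's memory_key must match a prompt variable, here "chat_history".
template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
Chatbot:"""

# What ConversationBufferMemory would contribute after the first exchange:
memory = {"chat_history": "Human: Hi there my friend\nAI: Hi there!"}

# The chain merges user input with the memory variables before formatting:
prompt = template.format(**memory, human_input="Not to bad - how are you?")
print(prompt)
```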
@@ -66,7 +66,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 8,
"id": "1d45d429",
"metadata": {},
"outputs": [],
@@ -77,21 +77,21 @@
" # Define dictionary to store information about entities.\n",
" entities: dict = {}\n",
" # Define key to pass information about entities into prompt.\n",
" dynamic_key: str = \"entities\"\n",
" memory_key: str = \"entities\"\n",
"\n",
" @property\n",
" def dynamic_keys(self) -> List[str]:\n",
" \"\"\"Define the keys we are providing to the prompt.\"\"\"\n",
" return [self.dynamic_key]\n",
" def memory_variables(self) -> List[str]:\n",
" \"\"\"Define the variables we are providing to the prompt.\"\"\"\n",
" return [self.memory_key]\n",
"\n",
" def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n",
" \"\"\"Load the dynamic keys, in this case the entity key.\"\"\"\n",
" def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n",
" \"\"\"Load the memory variables, in this case the entity key.\"\"\"\n",
" # Get the input text and run through spacy\n",
" doc = nlp(inputs[list(inputs.keys())[0]])\n",
" # Extract known information about entities, if they exist.\n",
" entities = [self.entities[str(ent)] for ent in doc.ents if str(ent) in self.entities]\n",
" # Return combined information about entities to put into context.\n",
" return {self.dynamic_key: \"\\n\".join(entities)}\n",
" return {self.memory_key: \"\\n\".join(entities)}\n",
"\n",
" def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n",
" \"\"\"Save context from this conversation to buffer.\"\"\"\n",
@@ -117,7 +117,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"id": "c05159b6",
"metadata": {},
"outputs": [],
@@ -147,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "f08dc8ed",
"metadata": {},
"outputs": [],
@@ -166,7 +166,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 11,
"id": "5b96e836",
"metadata": {},
"outputs": [
@@ -196,7 +196,7 @@
"\"\\n\\nThat's really interesting! I'm sure he has a lot of fun with it.\""
]
},
"execution_count": 7,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -215,7 +215,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 12,
"id": "4bca7070",
"metadata": {},
"outputs": [
@@ -245,7 +245,7 @@
"\" Harrison's favorite subject in college was machine learning.\""
]
},
"execution_count": 8,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}

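The custom class in this notebook relies on spaCy for entity extraction. A dependency-free sketch of the same pattern, where a naive capitalized-word regex stands in for `nlp(...).ents` (the `EntityMemory` name and the matcher are purely illustrative):

```python
import re
from typing import Any, Dict, List


class EntityMemory:
    """Sketch of the notebook's custom memory; a regex stands in for spaCy NER."""

    def __init__(self) -> None:
        self.entities: Dict[str, str] = {}
        self.memory_key = "entities"

    @property
    def memory_variables(self) -> List[str]:
        return [self.memory_key]

    def _extract(self, text: str) -> List[str]:
        # Naive stand-in for doc.ents: any capitalized word.
        return re.findall(r"\b[A-Z][a-z]+\b", text)

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        # Look up known information about any entities in the input.
        text = inputs[list(inputs.keys())[0]]
        known = [self.entities[e] for e in self._extract(text) if e in self.entities]
        return {self.memory_key: "\n".join(known)}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Remember the raw sentence for every entity it mentions.
        text = inputs[list(inputs.keys())[0]]
        for ent in self._extract(text):
            self.entities[ent] = (self.entities.get(ent, "") + " " + text).strip()


memory = EntityMemory()
memory.save_context({"input": "Harrison likes machine learning"}, {"output": ""})
print(memory.load_memory_variables({"input": "What does Harrison like?"}))
```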
@@ -30,7 +30,7 @@ However, there are still some challenges going from that to an application runni
- Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
- Prompt Template: An object responsible for constructing the final prompt to pass to an LLM.
*Problems solved*
*Problems Solved*
- Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
@@ -46,7 +46,7 @@ LangChain provides several parts to help with that.
- Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
- Chains: A combination of multiple tools in a deterministic manner.
*Problems solved*
*Problems Solved*
- Standard interface for working with Chains
- Easy way to construct chains of LLMs
@@ -65,7 +65,7 @@ Depending on the user input, the agent can then decide which, if any, of these t
- Agent: An LLM-powered class responsible for determining which tools to use and in what order.
*Problems solved*
*Problems Solved*
- Standard agent interfaces
- A selection of powerful agents to choose from
@@ -73,7 +73,20 @@ Depending on the user input, the agent can then decide which, if any, of these t
**🧠 Memory**
Coming soon.
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
In some applications (chatbots being a great example) it is important to remember previous interactions,
both at a short-term and a long-term level. The concept of "Memory" exists to do exactly that.
*Key Concepts*
- Memory: A class that can be added to an Agent or Chain to (1) pull in memory variables before calling that chain/agent, and (2) create new memories after the chain/agent finishes.
- Memory Variables: Variables returned from a Memory class, to be passed into the chain/agent along with the user input.
*Problems Solved*
- Standard memory interfaces
- A collection of common memory implementations to choose from
- Common chains/agents that use memory (e.g. chatbots)
Documentation Structure
=======================

@@ -16,11 +16,11 @@ class Memory(BaseModel, ABC):
@property
@abstractmethod
def dynamic_keys(self) -> List[str]:
def memory_variables(self) -> List[str]:
"""Input keys this memory class will load dynamically."""
@abstractmethod
def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return key-value pairs given the text input to the chain."""
@abstractmethod
@@ -77,7 +77,7 @@ class Chain(BaseModel, ABC):
"""
if self.memory is not None:
external_context = self.memory.load_dynamic_keys(inputs)
external_context = self.memory.load_memory_variables(inputs)
inputs = dict(inputs, **external_context)
self._validate_inputs(inputs)
if self.verbose:

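The `Chain` change above is the consumer side of the renamed interface: memory variables are loaded and merged into the inputs before validation. A standalone sketch of that call path (the `call_chain` helper and `StubMemory` are hypothetical names, not LangChain API):

```python
from typing import Any, Dict


class StubMemory:
    """Hypothetical memory with a fixed history buffer."""

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return {"history": "Human: hi\nAI: hello"}


def call_chain(inputs: Dict[str, Any], memory=None) -> Dict[str, Any]:
    # Mirrors the hook shown in the diff: load memory variables first,
    # then merge them alongside the user-provided inputs.
    if memory is not None:
        external_context = memory.load_memory_variables(inputs)
        inputs = dict(inputs, **external_context)
    return inputs


merged = call_chain({"input": "how are you?"}, StubMemory())
print(merged)
```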
@@ -43,7 +43,7 @@ class ConversationChain(LLMChain, BaseModel):
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
"""Validate that prompt input variables are consistent."""
memory_keys = values["memory"].dynamic_keys
memory_keys = values["memory"].memory_variables
input_key = values["input_key"]
if input_key in memory_keys:
raise ValueError(

@@ -14,23 +14,23 @@ class ConversationBufferMemory(Memory, BaseModel):
"""Buffer for storing conversation memory."""
buffer: str = ""
dynamic_key: str = "history" #: :meta private:
memory_key: str = "history" #: :meta private:
@property
def dynamic_keys(self) -> List[str]:
"""Will always return list of dynamic keys.
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.dynamic_key]
return [self.memory_key]
def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
return {self.dynamic_key: self.buffer}
return {self.memory_key: self.buffer}
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
prompt_input_keys = list(set(inputs).difference(self.dynamic_keys))
prompt_input_keys = list(set(inputs).difference(self.memory_variables))
if len(prompt_input_keys) != 1:
raise ValueError(f"One input key expected got {prompt_input_keys}")
if len(outputs) != 1:
@@ -46,19 +46,19 @@ class ConversationSummaryMemory(Memory, BaseModel):
buffer: str = ""
llm: LLM
prompt: BasePromptTemplate = SUMMARY_PROMPT
dynamic_key: str = "history" #: :meta private:
memory_key: str = "history" #: :meta private:
@property
def dynamic_keys(self) -> List[str]:
"""Will always return list of dynamic keys.
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.dynamic_key]
return [self.memory_key]
def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Return history buffer."""
return {self.dynamic_key: self.buffer}
return {self.memory_key: self.buffer}
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
@@ -74,7 +74,7 @@ class ConversationSummaryMemory(Memory, BaseModel):
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
prompt_input_keys = list(set(inputs).difference(self.dynamic_keys))
prompt_input_keys = list(set(inputs).difference(self.memory_variables))
if len(prompt_input_keys) != 1:
raise ValueError(f"One input key expected got {prompt_input_keys}")
if len(outputs) != 1:

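Both memory implementations above share the same `save_context` trick: subtracting `memory_variables` from the input keys should leave exactly one key, the user's own input. A standalone sketch of that logic (the `get_prompt_input_key` name is illustrative):

```python
from typing import Any, Dict, List


def get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str:
    # Whatever the memory did not supply should be exactly one key:
    # the user's own input to the chain.
    prompt_input_keys = list(set(inputs).difference(memory_variables))
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]


key = get_prompt_input_key({"history": "...", "human_input": "hi"}, ["history"])
print(key)  # -> human_input
```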
@@ -15,7 +15,7 @@ def test_conversation_chain_works() -> None:
"""Test that conversation chain works in basic setting."""
llm = FakeLLM()
prompt = PromptTemplate(input_variables=["foo", "bar"], template="{foo} {bar}")
memory = ConversationBufferMemory(dynamic_key="foo")
memory = ConversationBufferMemory(memory_key="foo")
chain = ConversationChain(llm=llm, prompt=prompt, memory=memory, input_key="bar")
chain.run("foo")
@@ -32,7 +32,7 @@ def test_conversation_chain_errors_bad_variable() -> None:
"""Test that conversation chain works in basic setting."""
llm = FakeLLM()
prompt = PromptTemplate(input_variables=["foo"], template="{foo}")
memory = ConversationBufferMemory(dynamic_key="foo")
memory = ConversationBufferMemory(memory_key="foo")
with pytest.raises(ValueError):
ConversationChain(llm=llm, prompt=prompt, memory=memory, input_key="foo")
@@ -40,8 +40,8 @@ def test_conversation_chain_errors_bad_variable() -> None:
@pytest.mark.parametrize(
"memory",
[
ConversationBufferMemory(dynamic_key="baz"),
ConversationSummaryMemory(llm=FakeLLM(), dynamic_key="baz"),
ConversationBufferMemory(memory_key="baz"),
ConversationSummaryMemory(llm=FakeLLM(), memory_key="baz"),
],
)
def test_conversation_memory(memory: Memory) -> None:
