mirror of https://github.com/hwchase17/langchain
synced 2024-11-18 09:25:54 +00:00
parent f18a08f58d
commit 08deed9002

README.md (19 changed lines)
@@ -46,7 +46,7 @@ However, there are still some challenges going from that to an application runni
 - Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
 - Prompt Template: An object responsible for constructing the final prompt to pass to a LLM.
 
-**Problems solved**
+**Problems Solved**
 - Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
 - Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
 - Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
@@ -59,7 +59,7 @@ LangChain provides several parts to help with that.
 - Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
 - Chains: A combination of multiple tools in a deterministic manner.
 
-**Problems solved**
+**Problems Solved**
 - Standard interface for working with Chains
 - Easy way to construct chains of LLMs
 - Lots of integrations with other tools that you may want to use in conjunction with LLMs
@@ -75,13 +75,24 @@ Depending on the user input, the agent can then decide which, if any, of these t
 - Agent: An LLM-powered class responsible for determining which tools to use and in what order.
 
 
-**Problems solved**
+**Problems Solved**
 - Standard agent interfaces
 - A selection of powerful agents to choose from
 - Common chains that can be used as tools
 
 ### Memory
-Coming soon.
+By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
+In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions,
+both at a short term but also at a long term level. The concept of "Memory" exists to do exactly that.
 
+**Key Concepts**
+- Memory: A class that can be added to an Agent or Chain to (1) pull in memory variables before calling that chain/agent, and (2) create new memories after the chain/agent finishes.
+- Memory Variables: Variables returned from a Memory class, to be passed into the chain/agent along with the user input.
+
+**Problems Solved**
+- Standard memory interfaces
+- A collection of common memory implementations to choose from
+- Common chains/agents that use memory (e.g. chatbots)
 
 ## 🤖 Developer Guide
 
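The Memory lifecycle the README hunk above describes — (1) pull in memory variables before calling the chain, (2) create new memories after it finishes — can be sketched as a standalone toy. Everything here (`BufferMemory`, `run_chain`, the `input`/`output` key names) is an illustrative stand-in, not LangChain's actual API:

```python
from typing import Any, Dict, List


class BufferMemory:
    """Toy memory following the README's contract: load before, save after."""

    def __init__(self, memory_key: str = "history") -> None:
        self.memory_key = memory_key
        self.buffer = ""

    @property
    def memory_variables(self) -> List[str]:
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        # (1) Pulled in before the chain/agent is called.
        return {self.memory_key: self.buffer}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # (2) New memories created after the chain/agent finishes.
        self.buffer += f"\nHuman: {inputs['input']}\nAI: {outputs['output']}"


def run_chain(memory: BufferMemory, user_input: str) -> str:
    """Stand-in chain: echoes the input, wired to memory per the README."""
    inputs = {"input": user_input}
    inputs.update(memory.load_memory_variables(inputs))
    output = f"You said: {user_input}"  # a real chain would call an LLM here
    memory.save_context({"input": user_input}, {"output": output})
    return output


memory = BufferMemory()
run_chain(memory, "hi")
run_chain(memory, "remember me?")
print(memory.buffer)
```

The second call sees the first turn in its `history` variable, which is exactly what makes an otherwise stateless chain conversational.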
@@ -7,7 +7,7 @@
 "source": [
 "# Adding Memory To an LLMChain\n",
 "\n",
-"This notebook goes over how to use the Memory class with an arbitrary chain. For the purposes of this walkthrough, we will add `ConversationBufferMemory` to a `LLMChain`."
+"This notebook goes over how to use the Memory class with an LLMChain. For the purposes of this walkthrough, we will add the `ConversationBufferMemory` class, although this can be any memory class."
 ]
 },
 {
@@ -46,7 +46,7 @@
 " input_variables=[\"chat_history\", \"human_input\"], \n",
 " template=template\n",
 ")\n",
-"memory = ConversationBufferMemory(dynamic_key=\"chat_history\")"
+"memory = ConversationBufferMemory(memory_key=\"chat_history\")"
 ]
 },
 {
@@ -90,7 +90,7 @@
 {
 "data": {
 "text/plain": [
-"'\\n\\nHi there my friend! Thank you for talking with me.'"
+"' Hi there!'"
 ]
 },
 "execution_count": 4,
@@ -120,9 +120,7 @@
 "\n",
 "\n",
 "Human: Hi there my friend\n",
-"AI: \n",
-"\n",
-"Hi there my friend! Thank you for talking with me.\n",
+"AI: Hi there!\n",
 "Human: Not to bad - how are you?\n",
 "Chatbot:\u001b[0m\n",
 "\n",
@@ -132,7 +130,7 @@
 {
 "data": {
 "text/plain": [
-"\"\\n\\nI'm doing well, thank you for asking. How about you?\""
+"\"\\n\\nI'm doing well, thanks for asking. How about you?\""
 ]
 },
 "execution_count": 5,
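The notebook cell above renames `dynamic_key` to `memory_key`; the point of the parameter is that it must match one of the prompt's `input_variables`, because that is the template slot the memory fills. A standalone sketch of the wiring, using plain `str.format` in place of LangChain's `PromptTemplate` (the buffer contents are taken from the notebook's output):

```python
# The prompt template from the notebook: {chat_history} is supplied by memory,
# {human_input} by the caller.
template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
Chatbot:"""

# What a buffer memory with memory_key="chat_history" would hand back
# after the first turn.
chat_history = "Human: Hi there my friend\nAI: Hi there!"

# memory_key names the slot, so the merge is just keyword substitution.
prompt = template.format(
    chat_history=chat_history,
    human_input="Not to bad - how are you?",
)
print(prompt)
```

If `memory_key` did not match a template variable, the history would silently never reach the model — which is why the rename to a single consistent name matters.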
@@ -66,7 +66,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 8,
 "id": "1d45d429",
 "metadata": {},
 "outputs": [],
@@ -77,21 +77,21 @@
 " # Define dictionary to store information about entities.\n",
 " entities: dict = {}\n",
 " # Define key to pass information about entities into prompt.\n",
-" dynamic_key: str = \"entities\"\n",
+" memory_key: str = \"entities\"\n",
 "\n",
 " @property\n",
-" def dynamic_keys(self) -> List[str]:\n",
-" \"\"\"Define the keys we are providing to the prompt.\"\"\"\n",
-" return [self.dynamic_key]\n",
+" def memory_variables(self) -> List[str]:\n",
+" \"\"\"Define the variables we are providing to the prompt.\"\"\"\n",
+" return [self.memory_key]\n",
 "\n",
-" def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n",
-" \"\"\"Load the dynamic keys, in this case the entity key.\"\"\"\n",
+" def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n",
+" \"\"\"Load the memory variables, in this case the entity key.\"\"\"\n",
 " # Get the input text and run through spacy\n",
 " doc = nlp(inputs[list(inputs.keys())[0]])\n",
 " # Extract known information about entities, if they exist.\n",
 " entities = [self.entities[str(ent)] for ent in doc.ents if str(ent) in self.entities]\n",
 " # Return combined information about entities to put into context.\n",
-" return {self.dynamic_key: \"\\n\".join(entities)}\n",
+" return {self.memory_key: \"\\n\".join(entities)}\n",
 "\n",
 " def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n",
 " \"\"\"Save context from this conversation to buffer.\"\"\"\n",
@@ -117,7 +117,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 9,
 "id": "c05159b6",
 "metadata": {},
 "outputs": [],
@@ -147,7 +147,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 10,
 "id": "f08dc8ed",
 "metadata": {},
 "outputs": [],
@@ -166,7 +166,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": 11,
 "id": "5b96e836",
 "metadata": {},
 "outputs": [
@@ -196,7 +196,7 @@
 "\"\\n\\nThat's really interesting! I'm sure he has a lot of fun with it.\""
 ]
 },
-"execution_count": 7,
+"execution_count": 11,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -215,7 +215,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": 12,
 "id": "4bca7070",
 "metadata": {},
 "outputs": [
@@ -245,7 +245,7 @@
 "\" Harrison's favorite subject in college was machine learning.\""
 ]
 },
-"execution_count": 8,
+"execution_count": 12,
 "metadata": {},
 "output_type": "execute_result"
 }
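The notebook hunks above define a custom entity memory backed by spacy's `doc.ents`. The same shape of class can be sketched without the NLP dependency, substituting a deliberately naive rule (any capitalized word counts as an "entity") so the example stays self-contained — the rule and the `SimpleEntityMemory` name are stand-ins, not the notebook's code:

```python
from typing import Any, Dict, List


class SimpleEntityMemory:
    """Custom memory in the spirit of the notebook's spacy example,
    with naive capitalized-word 'entity' extraction instead of a model."""

    def __init__(self) -> None:
        self.entities: Dict[str, str] = {}  # entity -> what we know about it
        self.memory_key = "entities"

    @property
    def memory_variables(self) -> List[str]:
        return [self.memory_key]

    def _extract(self, text: str) -> List[str]:
        # Stand-in for spacy's doc.ents: strip punctuation, keep capitalized words.
        return [w.strip(".,!?") for w in text.split() if w[:1].isupper()]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        # Same single-input convention as the notebook: take the first input key.
        text = inputs[list(inputs.keys())[0]]
        known = [self.entities[e] for e in self._extract(text) if e in self.entities]
        return {self.memory_key: "\n".join(known)}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        text = inputs[list(inputs.keys())[0]]
        for entity in self._extract(text):
            self.entities[entity] = text  # remember the sentence mentioning it


memory = SimpleEntityMemory()
memory.save_context({"input": "Harrison likes machine learning"}, {"output": "ok"})
print(memory.load_memory_variables({"input": "Tell me about Harrison"}))
```

Only the sentences about entities mentioned in the *current* input are loaded back in, which is what keeps this memory's context small compared with a full conversation buffer.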
@@ -30,7 +30,7 @@ However, there are still some challenges going from that to an application runni
 - Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
 - Prompt Template: An object responsible for constructing the final prompt to pass to a LLM.
 
-*Problems solved*
+*Problems Solved*
 
 - Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
 - Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
@@ -46,7 +46,7 @@ LangChain provides several parts to help with that.
 - Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
 - Chains: A combination of multiple tools in a deterministic manner.
 
-*Problems solved*
+*Problems Solved*
 
 - Standard interface for working with Chains
 - Easy way to construct chains of LLMs
@@ -65,7 +65,7 @@ Depending on the user input, the agent can then decide which, if any, of these t
 - Agent: An LLM-powered class responsible for determining which tools to use and in what order.
 
 
-*Problems solved*
+*Problems Solved*
 
 - Standard agent interfaces
 - A selection of powerful agents to choose from
@@ -73,7 +73,20 @@ Depending on the user input, the agent can then decide which, if any, of these t
 
 **🧠 Memory**
 
-Coming soon.
+By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
+In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions,
+both at a short term but also at a long term level. The concept of "Memory" exists to do exactly that.
+
+*Key Concepts*
+
+- Memory: A class that can be added to an Agent or Chain to (1) pull in memory variables before calling that chain/agent, and (2) create new memories after the chain/agent finishes.
+- Memory Variables: Variables returned from a Memory class, to be passed into the chain/agent along with the user input.
+
+*Problems Solved*
+
+- Standard memory interfaces
+- A collection of common memory implementations to choose from
+- Common chains/agents that use memory (e.g. chatbots)
 
 Documentation Structure
 =======================
@@ -16,11 +16,11 @@ class Memory(BaseModel, ABC):
 
     @property
     @abstractmethod
-    def dynamic_keys(self) -> List[str]:
+    def memory_variables(self) -> List[str]:
        """Input keys this memory class will load dynamically."""
 
     @abstractmethod
-    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
+    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return key-value pairs given the text input to the chain."""
 
     @abstractmethod
@@ -77,7 +77,7 @@ class Chain(BaseModel, ABC):
 
        """
        if self.memory is not None:
-            external_context = self.memory.load_dynamic_keys(inputs)
+            external_context = self.memory.load_memory_variables(inputs)
            inputs = dict(inputs, **external_context)
        self._validate_inputs(inputs)
        if self.verbose:
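The `Chain.__call__` hunk above merges the memory's variables into the user inputs before validation; stripped of the surrounding class, the merge-then-validate flow is small enough to show in isolation (the function name, the stand-in validation, and the sample values are illustrative, not the library's code):

```python
from typing import Any, Dict, Set


def call_with_memory(inputs: Dict[str, Any],
                     external_context: Dict[str, str],
                     expected_keys: Set[str]) -> Dict[str, Any]:
    """Mimic the flow from the diff: merge memory variables into the inputs,
    then check that every key the prompt expects is now present."""
    inputs = dict(inputs, **external_context)  # the dict-union from the hunk
    missing = expected_keys - set(inputs)      # stand-in for _validate_inputs
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")
    return inputs


merged = call_with_memory(
    {"human_input": "Not to bad - how are you?"},
    {"chat_history": "Human: Hi there my friend\nAI: Hi there!"},
    {"human_input", "chat_history"},
)
print(sorted(merged))
```

Without a memory attached, `chat_history` would be missing and validation would fail — which is the error the `memory_key` rename is meant to make easier to diagnose.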
@@ -43,7 +43,7 @@ class ConversationChain(LLMChain, BaseModel):
     @root_validator()
     def validate_prompt_input_variables(cls, values: Dict) -> Dict:
        """Validate that prompt input variables are consistent."""
-        memory_keys = values["memory"].dynamic_keys
+        memory_keys = values["memory"].memory_variables
        input_key = values["input_key"]
        if input_key in memory_keys:
            raise ValueError(
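The `ConversationChain` validator above rejects configurations where the user's `input_key` collides with a memory variable, since both would try to fill the same prompt slot. The check itself, as a plain function rather than a pydantic root validator (names here mirror the diff but the function is a standalone sketch):

```python
from typing import List


def check_input_key(memory_keys: List[str], input_key: str) -> None:
    """Reject an input_key that a memory variable would overwrite,
    mirroring the validator in the diff."""
    if input_key in memory_keys:
        raise ValueError(
            f"input_key {input_key!r} collides with memory keys {memory_keys}"
        )


# Fine: memory fills "chat_history", the user fills "human_input".
check_input_key(["chat_history"], "human_input")

# Collision: both the memory and the user would try to fill "foo".
try:
    check_input_key(["foo"], "foo")
    print("no error")
except ValueError as err:
    print("rejected:", err)
```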
@@ -14,23 +14,23 @@ class ConversationBufferMemory(Memory, BaseModel):
     """Buffer for storing conversation memory."""
 
     buffer: str = ""
-    dynamic_key: str = "history"  #: :meta private:
+    memory_key: str = "history"  #: :meta private:
 
     @property
-    def dynamic_keys(self) -> List[str]:
-        """Will always return list of dynamic keys.
+    def memory_variables(self) -> List[str]:
+        """Will always return list of memory variables.
 
        :meta private:
        """
-        return [self.dynamic_key]
+        return [self.memory_key]
 
-    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
+    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return history buffer."""
-        return {self.dynamic_key: self.buffer}
+        return {self.memory_key: self.buffer}
 
     def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer."""
-        prompt_input_keys = list(set(inputs).difference(self.dynamic_keys))
+        prompt_input_keys = list(set(inputs).difference(self.memory_variables))
        if len(prompt_input_keys) != 1:
            raise ValueError(f"One input key expected got {prompt_input_keys}")
        if len(outputs) != 1:
@@ -46,19 +46,19 @@ class ConversationSummaryMemory(Memory, BaseModel):
     buffer: str = ""
     llm: LLM
     prompt: BasePromptTemplate = SUMMARY_PROMPT
-    dynamic_key: str = "history"  #: :meta private:
+    memory_key: str = "history"  #: :meta private:
 
     @property
-    def dynamic_keys(self) -> List[str]:
-        """Will always return list of dynamic keys.
+    def memory_variables(self) -> List[str]:
+        """Will always return list of memory variables.
 
        :meta private:
        """
-        return [self.dynamic_key]
+        return [self.memory_key]
 
-    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
+    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return history buffer."""
-        return {self.dynamic_key: self.buffer}
+        return {self.memory_key: self.buffer}
 
     @root_validator()
     def validate_prompt_input_variables(cls, values: Dict) -> Dict:
@@ -74,7 +74,7 @@ class ConversationSummaryMemory(Memory, BaseModel):
 
     def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer."""
-        prompt_input_keys = list(set(inputs).difference(self.dynamic_keys))
+        prompt_input_keys = list(set(inputs).difference(self.memory_variables))
        if len(prompt_input_keys) != 1:
            raise ValueError(f"One input key expected got {prompt_input_keys}")
        if len(outputs) != 1:
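Both memory classes above infer which input key holds the user's text by subtracting their own memory variables from the chain inputs inside `save_context`. That inference can be exercised on its own (the helper name and sample values are illustrative):

```python
from typing import Any, Dict, List


def infer_prompt_input_key(inputs: Dict[str, Any],
                           memory_variables: List[str]) -> str:
    """The key inference from save_context: whatever input key is not
    a memory variable must be the user's prompt input."""
    prompt_input_keys = list(set(inputs).difference(memory_variables))
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]


# "history" is supplied by the memory itself, so "human_input" must be the user's.
key = infer_prompt_input_key(
    {"human_input": "Hi there my friend", "history": ""},
    ["history"],
)
print(key)
```

If two non-memory keys arrive, the ambiguity is unresolvable and the `ValueError` fires — the same guard both `save_context` implementations share.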
@@ -15,7 +15,7 @@ def test_conversation_chain_works() -> None:
     """Test that conversation chain works in basic setting."""
     llm = FakeLLM()
     prompt = PromptTemplate(input_variables=["foo", "bar"], template="{foo} {bar}")
-    memory = ConversationBufferMemory(dynamic_key="foo")
+    memory = ConversationBufferMemory(memory_key="foo")
     chain = ConversationChain(llm=llm, prompt=prompt, memory=memory, input_key="bar")
     chain.run("foo")
 
@@ -32,7 +32,7 @@ def test_conversation_chain_errors_bad_variable() -> None:
     """Test that conversation chain works in basic setting."""
     llm = FakeLLM()
     prompt = PromptTemplate(input_variables=["foo"], template="{foo}")
-    memory = ConversationBufferMemory(dynamic_key="foo")
+    memory = ConversationBufferMemory(memory_key="foo")
     with pytest.raises(ValueError):
        ConversationChain(llm=llm, prompt=prompt, memory=memory, input_key="foo")
 
@@ -40,8 +40,8 @@ def test_conversation_chain_errors_bad_variable() -> None:
 @pytest.mark.parametrize(
     "memory",
     [
-        ConversationBufferMemory(dynamic_key="baz"),
-        ConversationSummaryMemory(llm=FakeLLM(), dynamic_key="baz"),
+        ConversationBufferMemory(memory_key="baz"),
+        ConversationSummaryMemory(llm=FakeLLM(), memory_key="baz"),
     ],
 )
 def test_conversation_memory(memory: Memory) -> None: