forked from Archives/langchain
Merge branch 'master' into harrison/output_parser
commit 9b674d3dc6
@@ -8,6 +8,4 @@ The examples here are all end-to-end agents for specific applications.
    :glob:
    :caption: Agents

-   agents/mrkl.ipynb
-   agents/react.ipynb
-   agents/self_ask_with_search.ipynb
+   agents/*
@@ -8,8 +8,4 @@ The examples here are all end-to-end chains for specific applications.
    :glob:
    :caption: Chains

-   chains/llm_chain.ipynb
-   chains/llm_math.ipynb
-   chains/map_reduce.ipynb
-   chains/sqlite.ipynb
-   chains/vector_db_qa.ipynb
+   chains/*
@@ -1,5 +1,25 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "d31df93e",
+   "metadata": {},
+   "source": [
+    "# Memory\n",
+    "So far, all the chains and agents we've gone through have been stateless. But often you may want a chain or agent to have some concept of \"memory\" so that it can remember information about its previous interactions. The clearest and simplest example of this is designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This is a type of \"short-term memory\". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of \"long-term memory\".\n",
+    "\n",
+    "LangChain provides several chains created specifically for this purpose. This notebook walks through using one of those chains (the `ConversationChain`) with two different types of memory."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d051c1da",
+   "metadata": {},
+   "source": [
+    "### ConversationChain with default memory\n",
+    "By default, the `ConversationChain` has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed along. Let's take a look at using this chain (setting `verbose=True` so we can see the prompt)."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 1,
@@ -37,7 +57,6 @@
    ],
    "source": [
     "from langchain import OpenAI, ConversationChain\n",
-    "from langchain.chains.conversation.memory import ConversationSummaryMemory\n",
     "\n",
     "llm = OpenAI(temperature=0)\n",
     "conversation = ConversationChain(llm=llm, verbose=True)\n",
@@ -129,9 +148,30 @@
     "conversation.predict(input=\"Tell me about yourself.\")"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "4fad9448",
+   "metadata": {},
+   "source": [
+    "### ConversationChain with ConversationSummaryMemory\n",
+    "Now let's take a look at using a slightly more complex type of memory - `ConversationSummaryMemory`. This type of memory creates a summary of the conversation over time, which can be useful for condensing information from longer conversations.\n",
+    "\n",
+    "Let's walk through an example, again setting `verbose=True` so we can see the prompt."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "f60a2fe8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain.chains.conversation.memory import ConversationSummaryMemory"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 5,
    "id": "b7274f2c",
    "metadata": {},
    "outputs": [
@@ -159,7 +199,7 @@
       "\"\\n\\nI'm doing well, thank you for asking. I'm currently working on a project that I'm really excited about.\""
      ]
     },
-    "execution_count": 4,
+    "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
@@ -171,7 +211,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": 6,
    "id": "a6b6b88f",
    "metadata": {},
    "outputs": [
@@ -187,7 +227,7 @@
     "\n",
     "Current conversation:\n",
     "\n",
-    "The human greets the AI and asks how it is doing. The AI responds that it is doing well and is currently working on a project that it is excited about.\n",
+    "The human and artificial intelligence are talking. The human asked the AI what it is doing, and the AI said that it is working on a project that it is excited about.\n",
     "Human: Tell me more about it!\n",
     "AI:\u001b[0m\n",
     "\n",
@@ -197,10 +237,10 @@
    {
     "data": {
      "text/plain": [
-      "\"\\n\\nI'm working on a project that involves helping people to better understand and use artificial intelligence. I'm really excited about it because I think it has the potential to make a big difference in people's lives.\""
+      "\"\\n\\nI'm working on a project that I'm really excited about. It's a lot of work, but I think it's going to be really great when it's finished. I can't wait to show it to you!\""
     ]
    },
-   "execution_count": 5,
+   "execution_count": 6,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -211,7 +251,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": 7,
    "id": "dad869fe",
    "metadata": {},
    "outputs": [
@@ -228,7 +268,7 @@
     "Current conversation:\n",
     "\n",
     "\n",
-    "The human greets the AI and asks how it is doing. The AI responds that it is doing well and is currently working on a project that it is excited about - a project that involves helping people to better understand and use artificial intelligence.\n",
+    "The human and artificial intelligence are talking. The human asked the AI what it is doing, and the AI said that it is working on a project that it is excited about. The AI said that the project is a lot of work, but it is going to be great when it is finished.\n",
     "Human: Very cool -- what is the scope of the project?\n",
     "AI:\u001b[0m\n",
     "\n",
@@ -238,10 +278,10 @@
    {
     "data": {
      "text/plain": [
-      "'\\n\\nThe project is still in the early stages, but the goal is to create a resource that will help people to understand artificial intelligence and how to use it effectively.'"
+      "'\\n\\nThe project is quite large in scope. It involves a lot of data analysis and work with artificial intelligence algorithms.'"
     ]
    },
-   "execution_count": 6,
+   "execution_count": 7,
    "metadata": {},
    "output_type": "execute_result"
   }
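Taken together, the new notebook cells boil down to the following flow. A condensed sketch, not the notebook verbatim: it assumes `OPENAI_API_KEY` is set, and passing the summary memory via a `memory=` keyword with `ConversationSummaryMemory(llm=llm)` is an assumption based on the `Chain.memory` field added in this commit; model outputs will vary.

```python
from langchain import OpenAI, ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory

llm = OpenAI(temperature=0)

# Default memory: the full transcript so far is replayed into every prompt.
conversation = ConversationChain(llm=llm, verbose=True)
conversation.predict(input="Hi there!")
conversation.predict(input="Tell me about yourself.")

# Summary memory: the transcript is compressed into a running summary that an
# LLM rewrites after each exchange, keeping the prompt short.
summary_chain = ConversationChain(
    llm=llm, memory=ConversationSummaryMemory(llm=llm), verbose=True
)
summary_chain.predict(input="Hi, what are you up to?")
summary_chain.predict(input="Tell me more about it!")
```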
@@ -91,6 +91,7 @@ The documentation is structured into the following sections:
    getting_started/llm_chain.md
    getting_started/sequential_chains.md
    getting_started/agents.ipynb
+   getting_started/memory.ipynb

-Goes over a simple walk through and tutorial for getting started setting up a simple chain that generates a company name based on what the company makes.
+Covers installation, environment set up, calling LLMs, and using prompts.
@@ -20,11 +20,11 @@ class Memory(BaseModel, ABC):
         """Input keys this memory class will load dynamically."""

     @abstractmethod
-    def _load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
+    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
         """Return key-value pairs given the text input to the chain."""

     @abstractmethod
-    def _save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
+    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
         """Save the context of this model run to memory."""

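The rename drops the leading underscores, making `load_dynamic_keys` and `save_context` the public interface that chains call on a memory object. A minimal, self-contained sketch of implementing that interface; the base class mirrors the abstract methods above, while `dynamic_keys` as an abstract property and the toy subclass are assumptions for illustration:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

from pydantic import BaseModel


class Memory(BaseModel, ABC):
    """Abstract base class for chain memory (interface as in the hunk above)."""

    @property
    @abstractmethod
    def dynamic_keys(self) -> List[str]:
        """Input keys this memory class will load dynamically."""

    @abstractmethod
    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return key-value pairs given the text input to the chain."""

    @abstractmethod
    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save the context of this model run to memory."""


class LastExchangeMemory(Memory):
    """Toy memory that only remembers the most recent input/output pair."""

    last_exchange: str = ""

    @property
    def dynamic_keys(self) -> List[str]:
        return ["history"]

    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        # Expose the stored exchange under the "history" prompt variable.
        return {"history": self.last_exchange}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # Overwrite rather than append: only the last turn is retained.
        self.last_exchange = f"{inputs} -> {outputs}"
```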
@@ -77,7 +77,7 @@ class Chain(BaseModel, ABC):

         """
         if self.memory is not None:
-            external_context = self.memory._load_dynamic_keys(inputs)
+            external_context = self.memory.load_dynamic_keys(inputs)
             inputs = dict(inputs, **external_context)
         self._validate_inputs(inputs)
         if self.verbose:
@@ -87,7 +87,7 @@ class Chain(BaseModel, ABC):
             print("\n\033[1m> Finished chain.\033[0m")
         self._validate_outputs(outputs)
         if self.memory is not None:
-            self.memory._save_context(inputs, outputs)
+            self.memory.save_context(inputs, outputs)
         if return_only_outputs:
             return outputs
         else:
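These two hunks are the only places a `Chain` touches its memory, which makes the contract easy to state: load before the run, save after. A simplified sketch of the hook points in `Chain.__call__`; the `_call` core step stands in for the chain's real prompt/LLM logic, and validation and verbose printing are elided:

```python
def call_with_memory(chain, inputs: dict) -> dict:
    if chain.memory is not None:
        # Before running: pull the memory's dynamic keys (e.g. {"history": "..."})
        # and merge them into the chain inputs.
        inputs = dict(inputs, **chain.memory.load_dynamic_keys(inputs))
    outputs = chain._call(inputs)  # core chain logic (prompt formatting + LLM call)
    if chain.memory is not None:
        # After running: persist this exchange so the next call sees it.
        chain.memory.save_context(inputs, outputs)
    return outputs
```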
@@ -24,11 +24,11 @@ class ConversationBufferMemory(Memory, BaseModel):
         """
         return [self.dynamic_key]

-    def _load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
+    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
         """Return history buffer."""
         return {self.dynamic_key: self.buffer}

-    def _save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
+    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
         """Save context from this conversation to buffer."""
         prompt_input_keys = list(set(inputs).difference(self.dynamic_keys))
         if len(prompt_input_keys) != 1:
@@ -56,7 +56,7 @@ class ConversationSummaryMemory(Memory, BaseModel):
         """
         return [self.dynamic_key]

-    def _load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
+    def load_dynamic_keys(self, inputs: Dict[str, Any]) -> Dict[str, str]:
         """Return history buffer."""
         return {self.dynamic_key: self.buffer}

@@ -72,7 +72,7 @@ class ConversationSummaryMemory(Memory, BaseModel):
         )
         return values

-    def _save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
+    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
         """Save context from this conversation to buffer."""
         prompt_input_keys = list(set(inputs).difference(self.dynamic_keys))
         if len(prompt_input_keys) != 1:
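Both memory classes share the same guard at the top of `save_context`: strip out the memory's own dynamic keys, then require that exactly one input key (the user's message) remains. A small sketch of that logic in isolation; the function name and error message are illustrative, not from the source:

```python
from typing import Any, Dict, List


def pick_prompt_input_key(inputs: Dict[str, Any], dynamic_keys: List[str]) -> str:
    """Return the single non-memory input key, or raise if it is ambiguous."""
    prompt_input_keys = list(set(inputs).difference(dynamic_keys))
    if len(prompt_input_keys) != 1:
        raise ValueError(f"Expected exactly one input key, got {prompt_input_keys}")
    return prompt_input_keys[0]


# With dynamic_keys=["history"], only "input" is left over:
assert pick_prompt_input_key({"input": "Hi", "history": ""}, ["history"]) == "input"
```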
@@ -1,7 +1,7 @@
 """Wrapper around OpenAI APIs."""
 from typing import Any, Dict, List, Mapping, Optional

-from pydantic import BaseModel, Extra, root_validator
+from pydantic import BaseModel, Extra, Field, root_validator

 from langchain.llms.base import LLM
 from langchain.utils import get_from_dict_or_env
@@ -13,6 +13,9 @@ class OpenAI(LLM, BaseModel):
     To use, you should have the ``openai`` python package installed, and the
     environment variable ``OPENAI_API_KEY`` set with your API key.

+    Any parameters that are valid to be passed to the openai.create call can be passed
+    in, even if not explicitly saved on this class.
+
     Example:
         .. code-block:: python

@@ -37,7 +40,8 @@ class OpenAI(LLM, BaseModel):
     """How many completions to generate for each prompt."""
     best_of: int = 1
     """Generates best_of completions server-side and returns the "best"."""
+    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
+    """Holds any model parameters valid for `create` call not explicitly specified."""
     openai_api_key: Optional[str] = None

     class Config:
@@ -45,6 +49,20 @@ class OpenAI(LLM, BaseModel):

         extra = Extra.forbid

+    @root_validator(pre=True)
+    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
+        """Build extra kwargs from additional params that were passed in."""
+        all_required_field_names = {field.alias for field in cls.__fields__.values()}
+
+        extra = values.get("model_kwargs", {})
+        for field_name in list(values):
+            if field_name not in all_required_field_names:
+                if field_name in extra:
+                    raise ValueError(f"Found {field_name} supplied twice.")
+                extra[field_name] = values.pop(field_name)
+        values["model_kwargs"] = extra
+        return values
+
     @root_validator()
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that api key and python package exists in environment."""
@@ -66,7 +84,7 @@ class OpenAI(LLM, BaseModel):
     @property
     def _default_params(self) -> Mapping[str, Any]:
         """Get the default parameters for calling OpenAI API."""
-        return {
+        normal_params = {
             "temperature": self.temperature,
             "max_tokens": self.max_tokens,
             "top_p": self.top_p,
@@ -75,6 +93,7 @@ class OpenAI(LLM, BaseModel):
             "n": self.n,
             "best_of": self.best_of,
         }
+        return {**normal_params, **self.model_kwargs}

     @property
     def _identifying_params(self) -> Mapping[str, Any]:
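The `build_extra` validator is a reusable pydantic (v1) pattern: run a `pre` root validator before `Extra.forbid` kicks in, sweep every unrecognized keyword into a catch-all dict, and merge that dict back in when building request parameters. A self-contained sketch of the pattern outside the `OpenAI` class; the `Settings` class and field names are hypothetical:

```python
from typing import Any, Dict

from pydantic import BaseModel, Extra, Field, root_validator


class Settings(BaseModel):
    temperature: float = 0.7
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)

    class Config:
        # Unknown names only survive because build_extra runs first.
        extra = Extra.forbid

    @root_validator(pre=True)
    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Move any unrecognized keyword arguments into model_kwargs."""
        field_names = {field.alias for field in cls.__fields__.values()}
        extra = values.get("model_kwargs", {})
        for name in list(values):
            if name not in field_names:
                if name in extra:
                    raise ValueError(f"Found {name} supplied twice.")
                extra[name] = values.pop(name)
        values["model_kwargs"] = extra
        return values


print(Settings(temperature=0, foo=3).model_kwargs)  # {'foo': 3}
```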
@@ -1,5 +1,7 @@
 """Test OpenAI API wrapper."""

+import pytest
+
 from langchain.llms.openai import OpenAI

@@ -8,3 +10,19 @@ def test_openai_call() -> None:
     llm = OpenAI(max_tokens=10)
     output = llm("Say foo:")
     assert isinstance(output, str)
+
+
+def test_openai_extra_kwargs() -> None:
+    """Test extra kwargs to openai."""
+    # Check that foo is saved in extra_kwargs.
+    llm = OpenAI(foo=3, max_tokens=10)
+    assert llm.max_tokens == 10
+    assert llm.model_kwargs == {"foo": 3}
+
+    # Test that if extra_kwargs are provided, they are added to it.
+    llm = OpenAI(foo=3, model_kwargs={"bar": 2})
+    assert llm.model_kwargs == {"foo": 3, "bar": 2}
+
+    # Test that if provided twice it errors
+    with pytest.raises(ValueError):
+        OpenAI(foo=3, model_kwargs={"foo": 2})
@@ -50,19 +50,19 @@ def test_conversation_memory(memory: Memory) -> None:
     good_inputs = {"foo": "bar", "baz": "foo"}
     # This is a good output because there is one variable.
     good_outputs = {"bar": "foo"}
-    memory._save_context(good_inputs, good_outputs)
+    memory.save_context(good_inputs, good_outputs)
     # This is a bad input because there are two variables that aren't the same as baz.
     bad_inputs = {"foo": "bar", "foo1": "bar"}
     with pytest.raises(ValueError):
-        memory._save_context(bad_inputs, good_outputs)
+        memory.save_context(bad_inputs, good_outputs)
     # This is a bad input because the only variable is the same as baz.
     bad_inputs = {"baz": "bar"}
     with pytest.raises(ValueError):
-        memory._save_context(bad_inputs, good_outputs)
+        memory.save_context(bad_inputs, good_outputs)
     # This is a bad output because it is empty.
     with pytest.raises(ValueError):
-        memory._save_context(good_inputs, {})
+        memory.save_context(good_inputs, {})
     # This is a bad output because there are two keys.
     bad_outputs = {"foo": "bar", "foo1": "bar"}
     with pytest.raises(ValueError):
-        memory._save_context(good_inputs, bad_outputs)
+        memory.save_context(good_inputs, bad_outputs)