change to agent (#173)

harrison/agent-improvements
Harrison Chase 2 years ago committed by GitHub
parent d70b5a2cbe
commit 5d887970f6

@@ -68,7 +68,7 @@
"metadata": {},
"outputs": [],
"source": [
-"mrkl = initialize_agent(tools, llm, agent_type=\"zero-shot-react-description\", verbose=True)"
+"mrkl = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{

@@ -33,7 +33,7 @@
"]\n",
"\n",
"llm = OpenAI(temperature=0)\n",
-"react = initialize_agent(tools, llm, agent_type=\"react-docstore\", verbose=True)"
+"react = initialize_agent(tools, llm, agent=\"react-docstore\", verbose=True)"
]
},
{

@@ -53,7 +53,7 @@
" )\n",
"]\n",
"\n",
-"self_ask_with_search = initialize_agent(tools, llm, agent_type=\"self-ask-with-search\", verbose=True)\n",
+"self_ask_with_search = initialize_agent(tools, llm, agent=\"self-ask-with-search\", verbose=True)\n",
"\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]

@@ -1,7 +1,7 @@
# Agents
Agents use an LLM to determine which tools to call and in what order.
-Here are the supported types of agents available in LangChain.
+Here are the agents available in LangChain.
For a tutorial on how to load agents, see [here](/getting_started/agents.ipynb).

@@ -6,9 +6,12 @@
"metadata": {},
"source": [
"# Agents\n",
+"\n",
+"Agents use an LLM to determine which tools to call and in what order.\n",
+"\n",
"Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input. In these types of chains, there is a \"agent\" (backed by an LLM) which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call.\n",
"\n",
-"When used correctly agents can be extremely powerful. The purpose of this notebook is to show you how to easily use agents through the simplest, highest level API. If you want more low level control over various components, check out the documentation for custom agents (coming soon). For a list of supported agent types and their specifications, see [here](../explanation/agents.md)."
+"When used correctly agents can be extremely powerful. The purpose of this notebook is to show you how to easily use agents through the simplest, highest level API. If you want more low level control over various components, check out the documentation for custom agents (coming soon)."
]
},
{
@@ -18,11 +21,13 @@
"source": [
"## Concepts\n",
"\n",
-"In order to understand agents, you should understand the following concepts:\n",
+"In order to load agents, you should understand the following concepts:\n",
"\n",
"- Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.\n",
"- LLM: The language model powering the agent.\n",
-"- AgentType: The type of agent to use. This should be a string. For a list of supported agents, see [here](../explanation/agents.md). Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon)."
+"- Agent: The agent to use. This should be a string. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).\n",
+"\n",
+"**For a list of supported agents and their specifications, see [here](../explanation/agents.md)**"
]
},
{
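The Tool concept in the hunk above — a named function that takes a string and returns a string — can be sketched outside LangChain like this. The toy `search` function is a made-up illustration, not the library's real search integration:

```python
from typing import Callable, NamedTuple

class Tool(NamedTuple):
    """A tool the agent can use: a name plus a str -> str function."""
    name: str
    func: Callable[[str], str]

# Toy stand-in for a real integration such as Google Search.
search = Tool(name="Search", func=lambda q: f"results for: {q}")

print(search.func("U.S. Open champion"))  # results for: U.S. Open champion
```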
@@ -101,7 +106,7 @@
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"llm = OpenAI(temperature=0)\n",
-"agent = initialize_agent(tools, llm, verbose=True)"
+"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{

@@ -19,7 +19,7 @@ These are, in increasing order of complexity:
Let's go through these categories and for each one identify key concepts (to clarify terminology) as well as the problems in this area LangChain helps solve.
-**LLMs and Prompts**
+**🦜 LLMs and Prompts**
Calling out to an LLM once is pretty easy, with most of them being behind well documented APIs.
However, there are still some challenges going from that to an application running in production that LangChain attempts to address.
@@ -36,7 +36,7 @@ However, there are still some challenges going from that to an application running in production that LangChain attempts to address.
- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
-**Chains**
+**🔗️ Chains**
Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with eachother or with other experts.
LangChain provides several parts to help with that.
@@ -53,7 +53,7 @@ LangChain provides several parts to help with that.
- Lots of integrations with other tools that you may want to use in conjunction with LLMs
- End-to-end chains for common workflows (database question/answer, recursive summarization, etc)
-**Agents**
+**🤖 Agents**
Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input.
In these types of chains, there is a “agent” which has access to a suite of tools.
@@ -71,7 +71,7 @@ Depending on the user input, the agent can then decide which, if any, of these tools to call.
- A selection of powerful agents to choose from
- Common chains that can be used as tools
-**Memory**
+**🧠 Memory**
Coming soon.

@@ -1,10 +1,11 @@
"""Chain that takes in an input and produces an action and action input."""
from abc import ABC, abstractmethod
-from typing import Any, ClassVar, List, NamedTuple, Optional, Tuple
+from typing import Any, ClassVar, Dict, List, NamedTuple, Optional, Tuple
from pydantic import BaseModel
from langchain.agents.tools import Tool
+from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.input import ChainedInput, get_color_mapping
from langchain.llms.base import LLM
@@ -19,13 +20,30 @@ class Action(NamedTuple):
    log: str

-class Agent(BaseModel, ABC):
+class Agent(Chain, BaseModel, ABC):
    """Agent that uses an LLM."""

    prompt: ClassVar[BasePromptTemplate]
    llm_chain: LLMChain
    tools: List[Tool]
    verbose: bool = True
+    input_key: str = "input"  #: :meta private:
+    output_key: str = "output"  #: :meta private:
+
+    @property
+    def input_keys(self) -> List[str]:
+        """Return the singular input key.
+
+        :meta private:
+        """
+        return [self.input_key]
+
+    @property
+    def output_keys(self) -> List[str]:
+        """Return the singular output key.
+
+        :meta private:
+        """
+        return [self.output_key]

    @property
    @abstractmethod
@@ -97,8 +115,9 @@ class Agent(BaseModel, ABC):
        tool, tool_input = parsed_output
        return Action(tool, tool_input, full_output)

-    def run(self, text: str) -> str:
+    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        """Run text through and get agent response."""
+        text = inputs[self.input_key]
        # Construct a mapping of tool name to tool for easy lookup
        name_to_tool_map = {tool.name: tool.func for tool in self.tools}
        # Construct the initial string to pass into the LLM. This is made up
@@ -121,7 +140,7 @@ class Agent(BaseModel, ABC):
            chained_input.add(output.log, color="green")
            # If the tool chosen is the finishing tool, then we end and return.
            if output.tool == self.finish_tool_name:
-                return output.tool_input
+                return {self.output_key: output.tool_input}
            # Otherwise we lookup the tool
            chain = name_to_tool_map[output.tool]
            # We then call the tool on the tool input to get an observation
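The hunks above turn the agent's `run(text)` method into the Chain-style `_call(inputs)` interface: a dict in and a dict out, keyed by `input_key`/`output_key`, with the finish tool producing the final output. A rough stand-alone sketch of that interface — `MiniChain` and `EchoAgent` are illustrative stand-ins, not the real langchain classes:

```python
from typing import Dict, List

class MiniChain:
    """Illustrative stand-in for the Chain interface this diff adopts:
    subclasses declare input/output keys and implement _call on dicts."""

    input_key: str = "input"
    output_key: str = "output"

    @property
    def input_keys(self) -> List[str]:
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        return [self.output_key]

    def run(self, text: str) -> str:
        # Convenience wrapper: wrap the string into the input dict and
        # unwrap the output dict, as single-input/single-output chains do.
        return self._call({self.input_key: text})[self.output_key]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        raise NotImplementedError

class EchoAgent(MiniChain):
    """Toy 'agent' that picks the finish tool immediately. A real agent
    would loop: ask the LLM for an action, call the chosen tool, feed the
    observation back in, and only then return {output_key: ...}."""

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        text = inputs[self.input_key]
        return {self.output_key: f"Final Answer: {text}"}

print(EchoAgent().run("hi"))  # Final Answer: hi
```

Inheriting from Chain is what lets an agent be composed like any other chain: callers only see the generic dict-based interface, not the tool-calling loop inside.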

@@ -8,7 +8,7 @@ from langchain.agents.self_ask_with_search.base import SelfAskWithSearchAgent
from langchain.agents.tools import Tool
from langchain.llms.base import LLM

-AGENT_TYPE_TO_CLASS = {
+AGENT_TO_CLASS = {
    "zero-shot-react-description": ZeroShotAgent,
    "react-docstore": ReActDocstoreAgent,
    "self-ask-with-search": SelfAskWithSearchAgent,
@@ -18,7 +18,7 @@ AGENT_TYPE_TO_CLASS = {
def initialize_agent(
    tools: List[Tool],
    llm: LLM,
-    agent_type: str = "zero-shot-react-description",
+    agent: str = "zero-shot-react-description",
    **kwargs: Any,
) -> Agent:
    """Load agent given tools and LLM.
@@ -26,17 +26,17 @@ def initialize_agent(
    Args:
        tools: List of tools this agent has access to.
        llm: Language model to use as the agent.
-        agent_type: The agent to use. Valid options are:
+        agent: The agent to use. Valid options are:
            `zero-shot-react-description`, `react-docstore`, `self-ask-with-search`.
        **kwargs: Additional key word arguments to pass to the agent.

    Returns:
        An agent.
    """
-    if agent_type not in AGENT_TYPE_TO_CLASS:
+    if agent not in AGENT_TO_CLASS:
        raise ValueError(
-            f"Got unknown agent type: {agent_type}. "
-            f"Valid types are: {AGENT_TYPE_TO_CLASS.keys()}."
+            f"Got unknown agent type: {agent}. "
+            f"Valid types are: {AGENT_TO_CLASS.keys()}."
        )
-    agent_cls = AGENT_TYPE_TO_CLASS[agent_type]
+    agent_cls = AGENT_TO_CLASS[agent]
    return agent_cls.from_llm_and_tools(llm, tools, **kwargs)
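The renamed registry can be exercised in isolation with stubs. Everything here (`FakeAgent` and the three subclasses) is a hypothetical placeholder for the real agent classes, kept only to show the dispatch-and-validate shape of `initialize_agent`:

```python
from typing import Any, Dict, List, Type

class FakeAgent:
    """Hypothetical stand-in for an agent class exposing from_llm_and_tools."""
    @classmethod
    def from_llm_and_tools(cls, llm: Any, tools: List[Any], **kwargs: Any) -> "FakeAgent":
        return cls()

class ZeroShot(FakeAgent): ...
class ReActDocstore(FakeAgent): ...
class SelfAsk(FakeAgent): ...

# String key -> class, mirroring AGENT_TO_CLASS in the diff.
AGENT_TO_CLASS: Dict[str, Type[FakeAgent]] = {
    "zero-shot-react-description": ZeroShot,
    "react-docstore": ReActDocstore,
    "self-ask-with-search": SelfAsk,
}

def initialize_agent(
    tools: List[Any],
    llm: Any,
    agent: str = "zero-shot-react-description",
    **kwargs: Any,
) -> FakeAgent:
    # Unknown keys fail fast with the list of valid options.
    if agent not in AGENT_TO_CLASS:
        raise ValueError(
            f"Got unknown agent type: {agent}. "
            f"Valid types are: {AGENT_TO_CLASS.keys()}."
        )
    return AGENT_TO_CLASS[agent].from_llm_and_tools(llm, tools, **kwargs)

print(type(initialize_agent([], None, agent="react-docstore")).__name__)  # ReActDocstore
```

The keyword is now `agent=...` rather than `agent_type=...`, which is exactly the notebook change in the first three hunks of this commit.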
