add docs for custom agents (#196)

Harrison Chase 2 years ago committed by GitHub
parent 08deed9002
commit 6eab5254e5

@@ -0,0 +1,232 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom Agent\n",
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"An agent consists of three parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.\n",
" - The agent class itself: this parses the output of the LLMChain to determin which action to take.\n",
" \n",
" \n",
"In this notebook we walk through two types of custom agents. The first type shows how to create a custom LLMChain, but still use an existing agent class to parse the output. The second shows how to create a custom agent class."
]
},
{
"cell_type": "markdown",
"id": "6064f080",
"metadata": {},
"source": [
"### Custom LLMChain\n",
"\n",
"The first way to create a custom agent is to use an existing Agent class, but use a custom LLMChain. This is the simplest way to create a custom Agent. It is highly reccomended that you work with the `ZeroShotAgent`, as at the moment that is by far the most generalizable one. \n",
"\n",
"Most of the work in creating the custom LLMChain comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt say to produce text in that format. However, besides those instructions, you can customize the prompt as you wish.\n",
"\n",
"To ensure that the prompt contains the appropriate instructions, we will utilize a helper method on that class. The helper method for the `ZeroShotAgent` takes the following arguments:\n",
"\n",
"- tools: List of tools the agent will have access to, used to format the prompt.\n",
"- prefix: String to put before the list of tools.\n",
"- suffix: String to put after the list of tools.\n",
"- input_variables: List of input variables the final prompt will expect.\n",
"\n",
"For this exercise, we will give our agent access to Google Search, and we will customize it in that we will have it answer as a pirate."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool\n",
"from langchain import OpenAI, SerpAPIChain, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIChain()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n",
"\n",
"Question: {input}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[\"input\"]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "59db7b58",
"metadata": {},
"source": [
"In case we are curious, we can now take a look at the final prompt template to see what it looks like when its all put together."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e21d2098",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"Search: useful for when you need to answer questions about current events\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [Search]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n",
"\n",
"Question: {input}\n"
]
}
],
"source": [
"print(prompt.template)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many people live in canada?\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look this up\n",
"Action: Search\n",
"Action Input: How many people live in canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,533,678 as of Friday, November 25, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada 2020 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, there be 38,533,678 people in Canada\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Arrr, there be 38,533,678 people in Canada'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"How many people live in canada?\")"
]
},
{
"cell_type": "markdown",
"id": "90171b2b",
"metadata": {},
"source": [
"### Custom Agent Class\n",
"\n",
"Coming soon."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "adefb4c2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,7 +1,7 @@
"""Routing chains."""
from langchain.agents.agent import Agent
from langchain.agents.loading import initialize_agent
from langchain.agents.mrkl.base import MRKLChain
from langchain.agents.mrkl.base import MRKLChain, ZeroShotAgent
from langchain.agents.react.base import ReActChain
from langchain.agents.self_ask_with_search.base import SelfAskWithSearchChain
from langchain.agents.tools import Tool
@@ -13,4 +13,5 @@ __all__ = [
"Agent",
"Tool",
"initialize_agent",
"ZeroShotAgent",
]

@@ -83,14 +83,15 @@ class Agent(Chain, BaseModel, ABC):
pass
@classmethod
def _get_prompt(cls, tools: List[Tool]) -> BasePromptTemplate:
def create_prompt(cls, tools: List[Tool]) -> BasePromptTemplate:
"""Create a prompt for this class."""
return cls.prompt
@classmethod
def from_llm_and_tools(cls, llm: LLM, tools: List[Tool], **kwargs: Any) -> "Agent":
"""Construct an agent from an LLM and tools."""
cls._validate_tools(tools)
llm_chain = LLMChain(llm=llm, prompt=cls._get_prompt(tools))
llm_chain = LLMChain(llm=llm, prompt=cls.create_prompt(tools))
return cls(llm_chain=llm_chain, tools=tools, **kwargs)
def get_action(self, text: str) -> Action:
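
For reference, a minimal usage sketch of this classmethod path (not part of the commit), assuming the OpenAI LLM and the tools list defined in the notebook above; from_llm_and_tools wires up the LLMChain internally via create_prompt, so the default (non-pirate) prompt is used:

# Hypothetical example, not in the diff: build a ZeroShotAgent without
# constructing the LLMChain by hand. Extra keyword arguments such as
# verbose are forwarded to the agent constructor.
from langchain import OpenAI
from langchain.agents import ZeroShotAgent

agent = ZeroShotAgent.from_llm_and_tools(OpenAI(temperature=0), tools, verbose=True)
agent.run("How many people live in canada?")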

@@ -2,11 +2,10 @@
from typing import Any, Callable, List, NamedTuple, Optional, Tuple
from langchain.agents.agent import Agent
from langchain.agents.mrkl.prompt import BASE_TEMPLATE
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX
from langchain.agents.tools import Tool
from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.prompts.base import BasePromptTemplate
FINAL_ANSWER_ACTION = "Final Answer: "
@@ -60,11 +59,32 @@ class ZeroShotAgent(Agent):
return "Thought:"
@classmethod
def _get_prompt(cls, tools: List[Tool]) -> BasePromptTemplate:
def create_prompt(
cls,
tools: List[Tool],
prefix: str = PREFIX,
suffix: str = SUFFIX,
input_variables: Optional[List[str]] = None,
) -> PromptTemplate:
"""Create prompt in the style of the zero shot agent.
Args:
tools: List of tools the agent will have access to, used to format the
prompt.
prefix: String to put before the list of tools.
suffix: String to put after the list of tools.
input_variables: List of input variables the final prompt will expect.
Returns:
A PromptTemplate with the template assembled from the pieces here.
"""
tool_strings = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
tool_names = ", ".join([tool.name for tool in tools])
template = BASE_TEMPLATE.format(tools=tool_strings, tool_names=tool_names)
return PromptTemplate(template=template, input_variables=["input"])
format_instructions = FORMAT_INSTRUCTIONS.format(tool_names=tool_names)
template = "\n\n".join([prefix, tool_strings, format_instructions, suffix])
if input_variables is None:
input_variables = ["input"]
return PromptTemplate(template=template, input_variables=input_variables)
@classmethod
def _validate_tools(cls, tools: List[Tool]) -> None:
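
A quick sketch (assumed, not part of the diff) of what create_prompt produces with its defaults: the prefix, one "name: description" line per tool, the format instructions with the tool names substituted in, and the suffix, all joined by blank lines, with input_variables falling back to ["input"]:

# Hypothetical example, not in the commit: inspect the default zero-shot
# prompt assembled for a single dummy tool.
from langchain.agents import Tool, ZeroShotAgent

tools = [Tool(name="Search", func=lambda q: q, description="useful for current events")]
prompt = ZeroShotAgent.create_prompt(tools)
print(prompt.template)         # PREFIX, tool line, FORMAT_INSTRUCTIONS, SUFFIX joined by blank lines
print(prompt.input_variables)  # ['input'] when input_variables is not given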

@@ -1,9 +1,6 @@
# flake8: noqa
BASE_TEMPLATE = """Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
PREFIX = """Answer the following questions as best you can. You have access to the following tools:"""
FORMAT_INSTRUCTIONS = """Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
@@ -12,8 +9,7 @@ Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Final Answer: the final answer to the original input question"""
SUFFIX = """Begin!
Question: {{input}}"""
Question: {input}"""

@@ -3,7 +3,7 @@
import pytest
from langchain.agents.mrkl.base import ZeroShotAgent, get_action_and_input
from langchain.agents.mrkl.prompt import BASE_TEMPLATE
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX
from langchain.agents.tools import Tool
from langchain.prompts import PromptTemplate
from tests.unit_tests.llms.fake_llm import FakeLLM
@@ -59,8 +59,13 @@ def test_from_chains() -> None:
agent = ZeroShotAgent.from_llm_and_tools(FakeLLM(), chain_configs)
expected_tools_prompt = "foo: foobar1\nbar: foobar2"
expected_tool_names = "foo, bar"
expected_template = BASE_TEMPLATE.format(
tools=expected_tools_prompt, tool_names=expected_tool_names
expected_template = "\n\n".join(
[
PREFIX,
expected_tools_prompt,
FORMAT_INSTRUCTIONS.format(tool_names=expected_tool_names),
SUFFIX,
]
)
prompt = agent.llm_chain.prompt
assert isinstance(prompt, PromptTemplate)
