docs: agents & callbacks fixes (#10066)

Various improvements to the Agents & Callbacks sections of the
documentation including formatting, spelling, and grammar fixes to
improve readability.
seamusp committed 10 months ago (via GitHub)
parent 58d7d86e51
commit afd96b2460

@ -37,11 +37,11 @@ This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.
### [Self ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
### [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to lookup factual answers to questions. This agent
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
is equivalent to the original [self-ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.
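A rough sketch of how this agent might be wired up (the tool name must be exactly `Intermediate Answer`; `SerpAPIWrapper` is one common choice of search backend and assumes a SerpAPI key is set):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

# The single search tool; the agent expects it to be named "Intermediate Answer".
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for looking up factual answers",
    )
]

agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)
```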
### [ReAct document store](/docs/modules/agents/agent_types/react_docstore.html)
@ -54,4 +54,4 @@ This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute.html)
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub-tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).

@ -1,6 +1,6 @@
# Plan and execute
# Plan-and-execute
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub-tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
The planning is almost always done by an LLM.
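A minimal sketch of constructing one, assuming the experimental plan-and-execute package; the echo tool is an illustrative stand-in:

```python
from langchain.agents import Tool
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

# Illustrative stand-in tool; real setups would use search, math, etc.
tools = [Tool(name="Echo", func=lambda q: q, description="Echoes the input back")]

model = ChatOpenAI(temperature=0)
planner = load_chat_planner(model)                          # the LLM that writes the plan
executor = load_agent_executor(model, tools, verbose=True)  # runs each sub-task
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```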

@ -1,13 +1,13 @@
# Custom LLM Agent
# Custom LLM agent
This notebook goes through how to create your own custom LLM agent.
An LLM agent consists of three parts:
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
- LLM: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
import Example from "@snippets/modules/agents/how_to/custom_llm_agent.mdx"
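As an illustration of the last piece, an output parser along these lines — a simplified sketch of the pattern this notebook builds, not its exact code:

```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish

class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # If the model signals it is done, wrap the answer in an AgentFinish.
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Otherwise parse out "Action: <tool>" and "Action Input: <input>".
        match = re.search(r"Action:(.*?)[\n]*Action Input:(.*)", llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip(" ").strip('"'),
            log=llm_output,
        )
```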

@ -4,10 +4,10 @@ This notebook goes through how to create your own custom agent based on a chat m
An LLM chat agent consists of three parts:
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- ChatModel: This is the language model that powers the agent
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
- `ChatModel`: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
import Example from "@snippets/modules/agents/how_to/custom_llm_chat_agent.mdx"

@ -7,11 +7,13 @@
"source": [
"# OpenAI Multi Functions Agent\n",
"\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model.\n",
"\n",
"Install openai,google-search-results packages which are required as the langchain packages call them internally\n",
"Install `openai`, `google-search-results` packages which are required as the LangChain packages call them internally.\n",
"\n",
">pip install openai google-search-results\n"
"```bash\n",
"pip install openai google-search-results\n",
"```\n"
]
},
{
@ -34,8 +36,8 @@
"source": [
"The agent is given ability to perform search functionalities with the respective tool\n",
"\n",
"SerpAPIWrapper:\n",
">This initializes the SerpAPIWrapper for search functionality (search).\n"
"`SerpAPIWrapper`:\n",
">This initializes the `SerpAPIWrapper` for search functionality (search).\n"
]
},
{
@ -228,7 +230,7 @@
"source": [
"## Configuring max iteration behavior\n",
"\n",
"To make sure that our agent doesn't get stuck in excessively long loops, we can set max_iterations. We can also set an early stopping method, which will determine our agent's behavior once the number of max iterations is hit. By default, the early stopping uses method `force` which just returns that constant string. Alternatively, you could specify method `generate` which then does one FINAL pass through the LLM to generate an output."
"To make sure that our agent doesn't get stuck in excessively long loops, we can set `max_iterations`. We can also set an early stopping method, which will determine our agent's behavior once the number of max iterations is hit. By default, the early stopping uses method `force` which just returns that constant string. Alternatively, you could specify method `generate` which then does one FINAL pass through the LLM to generate an output."
]
},
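A minimal sketch of that configuration, shown here with a ReAct-style agent (where both `force` and `generate` are supported); `tools` and `llm` are assumed from the earlier cells:

```python
from langchain.agents import AgentType, initialize_agent

agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=2,                  # stop after at most two iterations
    early_stopping_method="generate",  # one final LLM pass instead of the constant string
)
```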
{
@ -428,7 +430,7 @@
"id": "067a8d3e",
"metadata": {},
"source": [
"Notice that we never get around to looking up the weather the day before yesterday, due to hitting our max_iterations limit."
"Notice that we never get around to looking up the weather the day before yesterday, due to hitting our `max_iterations` limit."
]
},
{

@ -5,9 +5,9 @@
"id": "0c3f1df8",
"metadata": {},
"source": [
"# Self ask with search\n",
"# Self-ask with search\n",
"\n",
"This walkthrough showcases the Self Ask With Search chain."
"This walkthrough showcases the self-ask with search chain."
]
},
{

@ -7,24 +7,15 @@
"source": [
"# Add Memory to OpenAI Functions Agent\n",
"\n",
"This notebook goes over how to add memory to OpenAI Functions agent."
"This notebook goes over how to add memory to an OpenAI Functions agent."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "ac594f26",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"outputs": [],
"source": [
"from langchain import (\n",
" LLMMathChain,\n",

@ -56,7 +56,7 @@
"source": [
"Define tools which provide:\n",
"- The `n`th prime number (using a small subset for this example) \n",
"- The LLMMathChain to act as a calculator"
"- The `LLMMathChain` to act as a calculator"
]
},
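Those two tools might be sketched as follows (the prime lookup table and tool names are illustrative; `llm` is assumed from the cell above):

```python
from langchain.agents import Tool
from langchain.chains import LLMMathChain

primes = {998: 7901, 999: 7907, 1000: 7919}  # small subset for the example

llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
    Tool(
        name="GetPrime",
        func=lambda n: str(primes.get(int(n))),
        description="A tool that returns the nth prime number",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="Useful for when you need to compute mathematical expressions",
    ),
]
```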
{

@ -7,9 +7,9 @@
"source": [
"# Combine agents and vector stores\n",
"\n",
"This notebook covers how to combine agents and vectorstores. The use case for this is that you've ingested your data into a vectorstore and want to interact with it in an agentic manner.\n",
"This notebook covers how to combine agents and vector stores. The use case for this is that you've ingested your data into a vector store and want to interact with it in an agentic manner.\n",
"\n",
"The recommended method for doing so is to create a RetrievalQA and then use that as a tool in the overall agent. Let's take a look at doing this below. You can do this with multiple different vectordbs, and use the agent as a way to route between them. There are two different ways of doing this - you can either let the agent use the vectorstores as normal tools, or you can set `return_direct=True` to really just use the agent as a router."
"The recommended method for doing so is to create a `RetrievalQA` and then use that as a tool in the overall agent. Let's take a look at doing this below. You can do this with multiple different vector DBs, and use the agent as a way to route between them. There are two different ways of doing this - you can either let the agent use the vector stores as normal tools, or you can set `return_direct=True` to really just use the agent as a router."
]
},
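A sketch of that recommended pattern, assuming `llm` and an existing `vectorstore` (the tool name and description are illustrative):

```python
from langchain.agents import Tool
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever()
)
tools = [
    Tool(
        name="State of Union QA System",
        func=qa.run,
        description="useful for questions about the most recent state of the union address",
        # return_direct=True,  # uncomment to use the agent purely as a router
    )
]
```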
{
@ -18,7 +18,7 @@
"id": "9b22020a",
"metadata": {},
"source": [
"## Create the Vectorstore"
"## Create the vector store"
]
},
{
@ -419,9 +419,9 @@
"id": "49a0cbbe",
"metadata": {},
"source": [
"## Multi-Hop vectorstore reasoning\n",
"## Multi-Hop vector store reasoning\n",
"\n",
"Because vectorstores are easily usable as tools in agents, it is easy to use answer multi-hop questions that depend on vectorstores using the existing agent framework"
"Because vector stores are easily usable as tools in agents, it is easy to use answer multi-hop questions that depend on vector stores using the existing agent framework."
]
},
{

@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for Agents by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async methods are currently supported for the following `Tools`: [`GoogleSerperAPIWrapper`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/google_serper.py), [`SerpAPIWrapper`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/serpapi.py), [`LLMMathChain`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm_math/base.py) and [`Qdrant`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/qdrant.py). Async support for other agent tools are on the roadmap.\n",
"Async methods are currently supported for the following `Tool`s: [`GoogleSerperAPIWrapper`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/google_serper.py), [`SerpAPIWrapper`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/serpapi.py), [`LLMMathChain`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm_math/base.py) and [`Qdrant`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/qdrant.py). Async support for other agent tools are on the roadmap.\n",
"\n",
"For `Tool`s that have a `coroutine` implemented (the four mentioned above), the `AgentExecutor` will `await` them directly. Otherwise, the `AgentExecutor` will call the `Tool`'s `func` via `asyncio.get_event_loop().run_in_executor` to avoid blocking the main runloop.\n",
"\n",
@ -21,7 +21,7 @@
"id": "97800378-cc34-4283-9bd0-43f336bc914c",
"metadata": {},
"source": [
"## Serial vs. Concurrent Execution\n",
"## Serial vs. concurrent execution\n",
"\n",
"In this example, we kick off agents to answer some questions serially vs. concurrently. You can see that concurrent execution significantly speeds this up."
]
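The concurrent variant boils down to something like this sketch (`agent` and `questions` as defined in the notebook; inside Jupyter you can `await` directly):

```python
import asyncio

async def answer_all(agent, questions):
    # Kick off all runs concurrently and wait for every result.
    return await asyncio.gather(*(agent.arun(q) for q in questions))

# In a script: results = asyncio.run(answer_all(agent, questions))
```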

@ -17,9 +17,11 @@
"id": "LFKylC3CPtTl"
},
"source": [
"Install libraries which are required to run this example notebook\n",
"Install libraries which are required to run this example notebook:\n",
"\n",
"`pip install -q openai langchain yfinance`"
"```bash\n",
"pip install -q openai langchain yfinance\n",
"```\n"
]
},
{

@ -11,7 +11,7 @@
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.\n",
"\n",
"In this notebook we will create a somewhat contrieved example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves tool relevant to the query."
"In this notebook we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves tool relevant to the query."
]
},
{
@ -51,7 +51,7 @@
"source": [
"## Set up tools\n",
"\n",
"We will create one legitimate tool (search) and then 99 fake tools"
"We will create one legitimate tool (search) and then 99 fake tools."
]
},
{
@ -92,7 +92,7 @@
"source": [
"## Tool Retriever\n",
"\n",
"We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
"We will use a vector store to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
]
},
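Roughly, the retrieval step looks like this (a condensed sketch; `all_tools` is the combined list of the search tool and the fake tools):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import FAISS

# One document per tool description, remembering which tool it came from.
docs = [
    Document(page_content=t.description, metadata={"index": i})
    for i, t in enumerate(all_tools)
]
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = vector_store.as_retriever()

def get_tools(query: str):
    relevant = retriever.get_relevant_documents(query)
    return [all_tools[d.metadata["index"]] for d in relevant]
```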
{
@ -206,7 +206,7 @@
"id": "2e7a075c",
"metadata": {},
"source": [
"## Prompt Template\n",
"## Prompt template\n",
"\n",
"The prompt template is pretty standard, because we're not actually changing that much logic in the actual prompt template, but rather we are just changing how retrieval is done."
]
@ -245,7 +245,7 @@
"id": "1583acdc",
"metadata": {},
"source": [
"The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use"
"The custom prompt template now has the concept of a `tools_getter`, which we call on the input to select the tools to use."
]
},
{
@ -308,7 +308,7 @@
"id": "ef3a1af3",
"metadata": {},
"source": [
"## Output Parser\n",
"## Output parser\n",
"\n",
"The output parser is unchanged from the previous notebook, since we are not changing anything about the output format."
]
@ -360,7 +360,7 @@
"source": [
"## Set up LLM, stop sequence, and the agent\n",
"\n",
"Also the same as the previous notebook"
"Also the same as the previous notebook."
]
},
{

@ -10,13 +10,13 @@
"This notebook goes through how to create your own custom MRKL agent.\n",
"\n",
"A MRKL agent consists of three parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.\n",
" - The agent class itself: this parses the output of the LLMChain to determine which action to take.\n",
"\n",
"- Tools: The tools the agent has available to use.\n",
"- `LLMChain`: The `LLMChain` that produces the text that is parsed in a certain way to determine which action to take.\n",
"- The agent class itself: this parses the output of the `LLMChain` to determine which action to take.\n",
" \n",
" \n",
"In this notebook we walk through how to create a custom MRKL agent by creating a custom LLMChain."
"In this notebook we walk through how to create a custom MRKL agent by creating a custom `LLMChain`."
]
},
{
@ -26,16 +26,16 @@
"source": [
"### Custom LLMChain\n",
"\n",
"The first way to create a custom agent is to use an existing Agent class, but use a custom LLMChain. This is the simplest way to create a custom Agent. It is highly recommended that you work with the `ZeroShotAgent`, as at the moment that is by far the most generalizable one. \n",
"The first way to create a custom agent is to use an existing Agent class, but use a custom `LLMChain`. This is the simplest way to create a custom Agent. It is highly recommended that you work with the `ZeroShotAgent`, as at the moment that is by far the most generalizable one. \n",
"\n",
"Most of the work in creating the custom LLMChain comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt say to produce text in that format. Additionally, we currently require an `agent_scratchpad` input variable to put notes on previous actions and observations. This should almost always be the final part of the prompt. However, besides those instructions, you can customize the prompt as you wish.\n",
"Most of the work in creating the custom `LLMChain` comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt say to produce text in that format. Additionally, we currently require an `agent_scratchpad` input variable to put notes on previous actions and observations. This should almost always be the final part of the prompt. However, besides those instructions, you can customize the prompt as you wish.\n",
"\n",
"To ensure that the prompt contains the appropriate instructions, we will utilize a helper method on that class. The helper method for the `ZeroShotAgent` takes the following arguments:\n",
"\n",
"- tools: List of tools the agent will have access to, used to format the prompt.\n",
"- prefix: String to put before the list of tools.\n",
"- suffix: String to put after the list of tools.\n",
"- input_variables: List of input variables the final prompt will expect.\n",
"- `tools`: List of tools the agent will have access to, used to format the prompt.\n",
"- `prefix`: String to put before the list of tools.\n",
"- `suffix`: String to put after the list of tools.\n",
"- `input_variables`: List of input variables the final prompt will expect.\n",
"\n",
"For this exercise, we will give our agent access to Google Search, and we will customize it in that we will have it answer as a pirate."
]
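Putting that helper to work might look like the following sketch (the pirate prefix/suffix are illustrative; `tools` is assumed to be defined):

```python
from langchain.agents import ZeroShotAgent

prefix = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:"""
suffix = """Begin! Remember to speak as a pirate when giving your final answer.

Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "agent_scratchpad"],
)
```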

@ -10,9 +10,9 @@
"This notebook goes through how to create your own custom agent.\n",
"\n",
"An agent consists of two parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - The agent class itself: this decides which action to take.\n",
"\n",
"- Tools: The tools the agent has available to use.\n",
"- The agent class itself: this decides which action to take.\n",
" \n",
" \n",
"In this notebook we walk through how to create a custom agent that predicts/takes multiple steps at a time."

@ -7,7 +7,7 @@
"source": [
"# Handle parsing errors\n",
"\n",
"Occasionally the LLM cannot determine what step to take because it outputs format in incorrect form to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with `handle_parsing_errors`! Let's explore how."
"Occasionally the LLM cannot determine what step to take because its outputs are not correctly formatted to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with `handle_parsing_errors`! Let's explore how."
]
},
{
@ -130,7 +130,7 @@
"source": [
"## Default error handling\n",
"\n",
"Handle errors with `Invalid or incomplete response`"
"Handle errors with `Invalid or incomplete response`:"
]
},
{
@ -202,9 +202,9 @@
"id": "6613cc9c",
"metadata": {},
"source": [
"## Custom Error Message\n",
"## Custom error message\n",
"\n",
"You can easily customize the message to use when there are parsing errors"
"You can easily customize the message to use when there are parsing errors."
]
},
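For instance, a sketch of passing a custom string (`handle_parsing_errors` also accepts `True` or a callable; `tools` and `llm` are assumed):

```python
from langchain.agents import AgentType, initialize_agent

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors="Check your output and make sure it conforms!",
)
```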
{

@ -47,7 +47,7 @@
"id": "1d329c3d",
"metadata": {},
"source": [
"Initialize the agent with `return_intermediate_steps=True`"
"Initialize the agent with `return_intermediate_steps=True`:"
]
},
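A sketch of that setup (`tools` and `llm` are assumed; the sample question is illustrative):

```python
from langchain.agents import AgentType, initialize_agent

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,
)
response = agent({"input": "What is 2 raised to the 0.43 power?"})
print(response["intermediate_steps"])  # list of (AgentAction, observation) pairs
```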
{

@ -54,7 +54,7 @@
"id": "5e9d92c2",
"metadata": {},
"source": [
"First, let's do a run with a normal agent to show what would happen without this parameter. For this example, we will use a specifically crafter adversarial example that tries to trick it into continuing forever.\n",
"First, let's do a run with a normal agent to show what would happen without this parameter. For this example, we will use a specifically crafted adversarial example that tries to trick it into continuing forever.\n",
"\n",
"Try running the cell below and see what happens!"
]
@ -203,7 +203,7 @@
"id": "0f7a80fb",
"metadata": {},
"source": [
"By default, the early stopping uses method `force` which just returns that constant string. Alternatively, you could specify method `generate` which then does one FINAL pass through the LLM to generate an output."
"By default, the early stopping uses the `force` method which just returns that constant string. Alternatively, you could specify the `generate` method which then does one FINAL pass through the LLM to generate an output."
]
},
{

@ -54,7 +54,7 @@
"id": "5e9d92c2",
"metadata": {},
"source": [
"First, let's do a run with a normal agent to show what would happen without this parameter. For this example, we will use a specifically crafter adversarial example that tries to trick it into continuing forever.\n",
"First, let's do a run with a normal agent to show what would happen without this parameter. For this example, we will use a specifically crafted adversarial example that tries to trick it into continuing forever.\n",
"\n",
"Try running the cell below and see what happens!"
]
@ -199,7 +199,7 @@
"id": "0f7a80fb",
"metadata": {},
"source": [
"By default, the early stopping uses method `force` which just returns that constant string. Alternatively, you could specify method `generate` which then does one FINAL pass through the LLM to generate an output."
"By default, the early stopping uses the `force` method which just returns that constant string. Alternatively, you could specify the `generate` method which then does one FINAL pass through the LLM to generate an output."
]
},
{

@ -7,12 +7,12 @@
"source": [
"# Shared memory across agents and tools\n",
"\n",
"This notebook goes over adding memory to **both** of an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"This notebook goes over adding memory to **both** an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"\n",
"We are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. And, the summarization tool also needs access to the conversation memory."
"We are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. The summarization tool also needs access to the conversation memory."
]
},
{
@ -102,7 +102,7 @@
"id": "0021675b",
"metadata": {},
"source": [
"We can now construct the LLMChain, with the Memory object, and then create the agent."
"We can now construct the `LLMChain`, with the Memory object, and then create the agent."
]
},
{

@ -187,7 +187,7 @@
"}\n",
"`\n",
"\n",
"and you don't only want the action_input to be streamed, but the entire JSON."
"and you don't only want the `action_input` to be streamed, but the entire JSON."
]
}
],

@ -36,7 +36,7 @@
"id": "1b7ee35f",
"metadata": {},
"source": [
"Load the toolkit"
"Load the toolkit:"
]
},
{
@ -55,7 +55,7 @@
"id": "203fa80a",
"metadata": {},
"source": [
"Set a system message specific to that toolkit"
"Set a system message specific to that toolkit:"
]
},
{

@ -10,10 +10,10 @@
"\n",
"When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",
"- name (str), is required and must be unique within a set of tools provided to an agent\n",
"- description (str), is optional but recommended, as it is used by an agent to determine tool use\n",
"- return_direct (bool), defaults to False\n",
"- args_schema (Pydantic BaseModel), is optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.\n",
"- `name` (str), is required and must be unique within a set of tools provided to an agent\n",
"- `description` (str), is optional but recommended, as it is used by an agent to determine tool use\n",
"- `return_direct` (bool), defaults to False\n",
"- `args_schema` (Pydantic BaseModel), is optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.\n",
"\n",
"\n",
"There are two main ways to define a tool, we will cover both in the example below."
@ -116,7 +116,7 @@
"id": "e9b560f7",
"metadata": {},
"source": [
"You can also define a custom `args_schema`` to provide more information about inputs."
"You can also define a custom `args_schema` to provide more information about inputs."
]
},
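For example, a sketch using a Pydantic model (class and tool names are illustrative):

```python
from pydantic import BaseModel, Field
from langchain.tools import tool

class SearchInput(BaseModel):
    query: str = Field(description="should be a search query")

@tool("search-tool", args_schema=SearchInput, return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return f"Results for query {query}"
```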
{
@ -442,7 +442,7 @@
"id": "de34a6a3",
"metadata": {},
"source": [
"You can also provide `args_schema` to provide more information about the argument"
"You can also provide `args_schema` to provide more information about the argument."
]
},
{
@ -533,7 +533,7 @@
"source": [
"## Subclassing the BaseTool\n",
"\n",
"The BaseTool automatically infers the schema from the _run method's signature."
"The BaseTool automatically infers the schema from the `_run` method's signature."
]
},
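A minimal sketch of the subclassing approach (the tool itself is illustrative):

```python
from langchain.tools import BaseTool

class WordLengthTool(BaseTool):
    name = "word_length"
    description = "Returns the number of characters in a word"

    def _run(self, word: str) -> str:
        # The args schema is inferred from this signature.
        return str(len(word))

    async def _arun(self, word: str) -> str:
        raise NotImplementedError("word_length does not support async")
```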
{
@ -840,7 +840,7 @@
"metadata": {},
"source": [
"## Using tools to return directly\n",
"Often, it can be desirable to have a tool output returned directly to the user, if its called. You can do this easily with LangChain by setting the return_direct flag for a tool to be True."
"Often, it can be desirable to have a tool output returned directly to the user, if its called. You can do this easily with LangChain by setting the `return_direct` flag for a tool to be True."
]
},
{

@ -7,11 +7,11 @@
"source": [
"# Human-in-the-loop Tool Validation\n",
"\n",
"This walkthrough demonstrates how to add Human validation to any Tool. We'll do this using the `HumanApprovalCallbackhandler`.\n",
"This walkthrough demonstrates how to add human validation to any Tool. We'll do this using the `HumanApprovalCallbackhandler`.\n",
"\n",
"Let's suppose we need to make use of the ShellTool. Adding this tool to an automated flow poses obvious risks. Let's see how we could enforce manual human approval of inputs going into this tool.\n",
"Let's suppose we need to make use of the `ShellTool`. Adding this tool to an automated flow poses obvious risks. Let's see how we could enforce manual human approval of inputs going into this tool.\n",
"\n",
"**Note**: We generally recommend against using the ShellTool. There's a lot of ways to misuse it, and it's not required for most use cases. We employ it here only for demonstration purposes."
"**Note**: We generally recommend against using the `ShellTool`. There's a lot of ways to misuse it, and it's not required for most use cases. We employ it here only for demonstration purposes."
]
},
{
@ -60,7 +60,7 @@
"metadata": {},
"source": [
"## Adding Human Approval\n",
"Adding the default HumanApprovalCallbackHandler to the tool will make it so that a user has to manually approve every input to the tool before the command is actually executed."
"Adding the default `HumanApprovalCallbackHandler` to the tool will make it so that a user has to manually approve every input to the tool before the command is actually executed."
]
},
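Attaching the handler is a matter of passing it as a callback on the tool, roughly:

```python
from langchain.callbacks import HumanApprovalCallbackHandler
from langchain.tools import ShellTool

tool = ShellTool(callbacks=[HumanApprovalCallbackHandler()])
# The command only executes after the user approves it at the prompt.
print(tool.run("ls /usr"))
```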
{

@ -139,7 +139,7 @@
"source": [
"## Multi-Input Tools with a string format\n",
"\n",
"An alternative to the structured tool would be to use the regular `Tool` class and accept a single string. The tool would then have to handle the parsing logic to extract the relavent values from the text, which tightly couples the tool representation to the agent prompt. This is still useful if the underlying language model can't reliabl generate structured schema. \n",
"An alternative to the structured tool would be to use the regular `Tool` class and accept a single string. The tool would then have to handle the parsing logic to extract the relavent values from the text, which tightly couples the tool representation to the agent prompt. This is still useful if the underlying language model can't reliably generate structured schema. \n",
"\n",
"Let's take the multiplication function as an example. In order to use this, we will tell the agent to generate the \"Action Input\" as a comma-separated list of length two. We will then write a thin wrapper that takes a string, splits it into two around a comma, and passes both parsed sides as integers to the multiplication function."
]
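The thin wrapper described above might be sketched as:

```python
from langchain.agents import Tool

def multiplier(a: int, b: int) -> int:
    return a * b

def parsing_multiplier(text: str) -> str:
    # Split the single "Action Input" string into the two real arguments.
    a, b = text.split(",")
    return str(multiplier(int(a), int(b)))

tool = Tool(
    name="Multiplier",
    func=parsing_multiplier,
    description=(
        "useful for multiplying two numbers; input should be two "
        "comma-separated integers, e.g. `3,4`"
    ),
)
```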

@ -9,7 +9,7 @@
"\n",
"If you are planning to use the async API, it is recommended to use `AsyncCallbackHandler` to avoid blocking the runloop. \n",
"\n",
"**Advanced** if you use a sync `CallbackHandler` while using an async method to run your llm/chain/tool/agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe."
"**Advanced** if you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe."
]
},
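A minimal async handler might be sketched as follows (overriding hooks defined on the base class):

```python
from typing import Any, Dict, List

from langchain.callbacks.base import AsyncCallbackHandler

class MyAsyncHandler(AsyncCallbackHandler):
    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        print("LLM is starting...")

    async def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        print("LLM finished.")
```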
{

@ -1,3 +1,3 @@
# Tags
You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs, e.g. if you want to log all requests made to a specific LLMChain, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks, see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, ie. `on_llm_start`, `on_chat_model_start`, `on_chain_start`, `on_tool_start`.
You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs, e.g. if you want to log all requests made to a specific `LLMChain`, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks; see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, i.e. `on_llm_start`, `on_chat_model_start`, `on_chain_start`, `on_tool_start`.
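A quick sketch of both styles, assuming an existing `llm` and `prompt`:

```python
from langchain.chains import LLMChain

# Constructor tags: attached to every call made through this chain.
chain = LLMChain(llm=llm, prompt=prompt, tags=["my-llmchain"])

# Request tags: attached only to this particular call.
chain.run("What is 2 + 2?", tags=["one-off-request"])
```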

@ -1,6 +1,8 @@
Install openai,google-search-results packages which are required as the langchain packages call them internally
Install the `openai` and `google-search-results` packages, which are required because the LangChain packages call them internally.
>pip install openai google-search-results
```bash
pip install openai google-search-results
```
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain

@ -53,7 +53,7 @@ executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```
## Run Example
## Run example
```python

@ -202,7 +202,7 @@ print(response)
## Adding in memory
Here is how you add in memory to this agent
Here is how you add in memory to this agent:
```python

@ -1,12 +1,12 @@
This will go over how to get started building an agent.
We will use a LangChain agent class, but show how to customize it to give it specific context.
We will then define custom tools, and then run it all in the standard LangChain AgentExecutor.
We will then define custom tools, and then run it all in the standard LangChain `AgentExecutor`.
### Set up the agent
We will use the OpenAIFunctionsAgent.
We will use the `OpenAIFunctionsAgent`.
This is the easiest and best agent to get started with.
It does however require usage of ChatOpenAI models.
It does however require usage of `ChatOpenAI` models.
If you want to use a different language model, we would recommend using the [ReAct](/docs/modules/agents/agent_types/react) agent.
For this guide, we will construct a custom agent that has access to a custom tool.
@ -40,7 +40,7 @@ tools = [get_word_length]
Now let us create the prompt.
We can use the `OpenAIFunctionsAgent.create_prompt` helper function to create a prompt automatically.
This allows for a few different ways to customize, including passing in a custom SystemMessage, which we will do.
This allows for a few different ways to customize, including passing in a custom `SystemMessage`, which we will do.
```python
from langchain.schema import SystemMessage
@ -55,7 +55,7 @@ Putting those pieces together, we can now create the agent.
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
```
Finally, we create the AgentExecutor - the runtime for our agent.
Finally, we create the `AgentExecutor` - the runtime for our agent.
```python
from langchain.agents import AgentExecutor
@ -97,7 +97,7 @@ Let's fix that by adding in memory.
In order to do this, we need to do two things:
1. Add a place for memory variables to go in the prompt
2. Add memory to the AgentExecutor (note that we add it here, and NOT to the agent, as this is the outermost chain)
2. Add memory to the `AgentExecutor` (note that we add it here, and NOT to the agent, as this is the outermost chain)
First, let's add a place for memory in the prompt.
We do this by adding a placeholder for messages with the key `"chat_history"`.

@ -1,5 +1,5 @@
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
2. If the Agent returns an `AgentFinish`, then return that directly to the user
3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`
4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.
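In rough pseudo-Python, that loop is (a conceptual sketch, not LangChain's actual implementation):

```python
from langchain.schema import AgentFinish

# Conceptual sketch of the AgentExecutor loop; `agent`, `tool_map`, and
# `user_input` are stand-ins, not real LangChain attributes.
intermediate_steps = []
while True:
    output = agent.plan(intermediate_steps, input=user_input)
    if isinstance(output, AgentFinish):
        result = output.return_values
        break
    observation = tool_map[output.tool].run(output.tool_input)
    intermediate_steps.append((output, observation))
```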
@ -43,7 +43,7 @@ tools = [
]
```
## Prompt Template
## Prompt template
This instructs the agent on what to do. Generally, the template should incorporate:
@ -112,11 +112,11 @@ prompt = CustomPromptTemplate(
)
```
## Output Parser
## Output parser
The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.
This is where you can change the parsing to do retries, handle whitespace, etc
This is where you can change the parsing to do retries, handle whitespace, etc.
```python
@ -164,7 +164,7 @@ This depends heavily on the prompt and model you are using. Generally, you want
## Set up the agent
We can now combine everything to set up our agent
We can now combine everything to set up our agent:
```python
@ -225,7 +225,7 @@ agent_executor.run("How many people live in canada as of 2023?")
If you want to add memory to the agent, you'll need to:
1. Add a place in the custom prompt for the chat_history
1. Add a place in the custom prompt for the `chat_history`
2. Add a memory object to the agent executor.

@ -1,5 +1,5 @@
The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)
The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)
2. If the Agent returns an `AgentFinish`, then return that directly to the user
3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`
4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.
@ -35,7 +35,7 @@ import re
from getpass import getpass
```
## Set up tool
## Set up tools
Set up any tools the agent may want to use. This may be necessary to put in the prompt (so that the agent knows to use these tools).
@ -57,7 +57,7 @@ tools = [
]
```
## Prompt Template
## Prompt template
This instructs the agent on what to do. Generally, the template should incorporate:
@ -131,11 +131,11 @@ prompt = CustomPromptTemplate(
)
```
## Output Parser
## Output parser
The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.
This is where you can change the parsing to do retries, handle whitespace, etc
This is where you can change the parsing to do retries, handle whitespace, etc.
```python
@ -188,7 +188,7 @@ This depends heavily on the prompt and model you are using. Generally, you want
## Set up the agent
We can now combine everything to set up our agent
We can now combine everything to set up our agent:
```python

@ -72,7 +72,7 @@ class BaseCallbackHandler:
LangChain provides a few built-in handlers that you can use to get started. These are available in the `langchain/callbacks` module. The most basic handler is the `StdOutCallbackHandler`, which simply logs all events to `stdout`.
**Note** when the `verbose` flag on the object is set to true, the `StdOutCallbackHandler` will be invoked even without being explicitly passed in.
**Note**: when the `verbose` flag on the object is set to true, the `StdOutCallbackHandler` will be invoked even without being explicitly passed in.
```python
from langchain.callbacks import StdOutCallbackHandler
@ -137,6 +137,6 @@ The `verbose` argument is available on most objects throughout the API (Chains,
### When do you want to use each of these?
- Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are _not specific to a single request_, but rather to the entire chain. For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.
- Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are _not specific to a single request_, but rather to the entire chain. For example, if you want to log all the requests made to an `LLMChain`, you would pass a handler to the constructor.
- Request callbacks are most useful for use cases such as streaming, where you want to stream the output of a single request to a specific websocket connection, or other similar use cases. For example, if you want to stream the output of a single request to a websocket, you would pass a handler to the `call()` method.
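To make the distinction concrete, a sketch assuming an existing `llm` and `prompt`:

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain

handler = StdOutCallbackHandler()

# Constructor callback: fires for every request made through this chain.
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])

# Request callback: fires only for this one call.
chain.run("Tell me a joke", callbacks=[handler])
```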
