(WIP) agents (#171)

harrison/agent-improvements
Harrison Chase 2 years ago committed by GitHub
parent 22bd12a097
commit d3a7429f61

@ -17,11 +17,6 @@ create a truly powerful app - the real power comes when you are able to
combine them with other sources of computation or knowledge.
This library is aimed at assisting in the development of those types of applications.
It aims to create:
1. a comprehensive collection of pieces you would ever want to combine
2. a flexible interface for combining pieces into a single comprehensive "chain"
3. a schema for easily saving and sharing those chains
## 📖 Documentation
@ -31,78 +26,62 @@ Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documen
- Reference (full API docs)
- Resources (high level explanation of core concepts)
## 🚀 What can I do with this

This project was largely inspired by a few projects seen on Twitter for which we thought it would make sense to have more explicit tooling. A lot of the initial functionality was built in an attempt to recreate those projects. They are:

**[Self-ask-with-search](https://ofir.io/self-ask.pdf)**

To recreate this paper, use the following code snippet or check out the [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/demos/self_ask_with_search.ipynb).

```python
from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain

llm = OpenAI(temperature=0)
search = SerpAPIChain()

self_ask_with_search = SelfAskWithSearchChain(llm=llm, search_chain=search)

self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```

**[LLM Math](https://twitter.com/amasad/status/1568824744367259648?s=20&t=-7wxpXBJinPgDuyHLouP1w)**

To recreate this example, use the following code snippet or check out the [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/demos/llm_math.ipynb).

```python
from langchain import OpenAI, LLMMathChain

llm = OpenAI(temperature=0)
llm_math = LLMMathChain(llm=llm)

llm_math.run("How many of the integers between 0 and 99 inclusive are divisible by 8?")
```

**Generic Prompting**

You can also use this for simple prompting pipelines, as in the example below and this [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/demos/simple_prompts.ipynb).

```python
from langchain import PromptTemplate, OpenAI, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI(temperature=0)
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.predict(question=question)
```

**Embed & Search Documents**

We support two vector databases to store and search embeddings -- FAISS and Elasticsearch. Here's a code snippet showing how to use FAISS to store embeddings and search for text similar to a query. Both database backends are featured in this [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/integrations/embeddings.ipynb).

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.faiss import FAISS
from langchain.text_splitter import CharacterTextSplitter

with open('state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = FAISS.from_texts(texts, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
```

## 🚀 What can this help with?

There are three main areas (with a fourth coming soon) that LangChain is designed to help with.
These are, in increasing order of complexity:

1. LLMs and Prompts
2. Chains
3. Agents
4. (Coming Soon) Memory

Let's go through these categories and for each one identify key concepts (to clarify terminology) as well as the problems in this area LangChain helps solve.

### LLMs and Prompts

Calling out to an LLM once is pretty easy, with most of them being behind well documented APIs.
However, there are still some challenges going from that to an application running in production that LangChain attempts to address.

**Key Concepts**

- LLM: A large language model, in particular a text-to-text model.
- Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
- Prompt Template: An object responsible for constructing the final prompt to pass to an LLM.

**Problems solved**

- Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.

### Chains

Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides several parts to help with that.

**Key Concepts**

- Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
- Chains: A combination of multiple tools in a deterministic manner.

**Problems solved**

- Standard interface for working with Chains
- Easy way to construct chains of LLMs
- Lots of integrations with other tools that you may want to use in conjunction with LLMs
- End-to-end chains for common workflows (database question/answer, recursive summarization, etc)

### Agents

Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input.
In these types of chains, there is an "agent" which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.

**Key Concepts**

- Tools: same as above.
- Agent: An LLM-powered class responsible for determining which tools to use and in what order.

**Problems solved**

- Standard agent interfaces
- A selection of powerful agents to choose from
- Common chains that can be used as tools

### Memory

Coming soon.
## 🤖 Developer Guide

@ -0,0 +1,13 @@
Agents
======
The examples here are all end-to-end agents for specific applications.
.. toctree::
:maxdepth: 1
:glob:
:caption: Agents
agents/mrkl.ipynb
agents/react.ipynb
agents/self_ask_with_search.ipynb

@ -7,7 +7,7 @@
"source": [
"# MRKL\n",
"\n",
"This notebook showcases using the MRKL chain to route between tasks"
"This notebook showcases using an agent to replicate the MRKL chain."
]
},
{
@ -26,13 +26,13 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain, SQLDatabase, SQLDatabaseChain\n",
"from langchain.chains.mrkl.base import ChainConfig"
"from langchain import LLMMathChain, OpenAI, SerpAPIChain, SQLDatabase, SQLDatabaseChain\n",
"from langchain.agents import initialize_agent, Tool"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 2,
"id": "07e96d99",
"metadata": {},
"outputs": [],
@ -42,39 +42,38 @@
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../notebooks/Chinook.db\")\n",
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)\n",
"chains = [\n",
" ChainConfig(\n",
" action_name = \"Search\",\n",
" action=search.run,\n",
" action_description=\"useful for when you need to answer questions about current events\"\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" ),\n",
" ChainConfig(\n",
" action_name=\"Calculator\",\n",
" action=llm_math_chain.run,\n",
" action_description=\"useful for when you need to answer questions about math\"\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"useful for when you need to answer questions about math\"\n",
" ),\n",
" \n",
" ChainConfig(\n",
" action_name=\"FooBar DB\",\n",
" action=db_chain.run,\n",
" action_description=\"useful for when you need to answer questions about FooBar. Input should be in the form of a question\"\n",
" Tool(\n",
" name=\"FooBar DB\",\n",
" func=db_chain.run,\n",
" description=\"useful for when you need to answer questions about FooBar. Input should be in the form of a question\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "a069c4b6",
"metadata": {},
"outputs": [],
"source": [
"mrkl = MRKLChain.from_chains(llm, chains, verbose=True)"
"mrkl = initialize_agent(tools, llm, agent_type=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "e603cd7d",
"metadata": {},
"outputs": [
@ -82,38 +81,34 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?\n",
"Thought:\u001b[102m I need to find the age of Olivia Wilde's boyfriend\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Olivia Wilde's boyfriend\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde's boyfriend\"\u001b[0m\n",
"Observation: \u001b[104mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
"Thought:\u001b[102m I need to find the age of Harry Styles\n",
"Observation: \u001b[36;1m\u001b[1;3mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Harry Styles\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[104m28 years\u001b[0m\n",
"Thought:\u001b[102m I need to calculate 28 to the 0.23 power\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 28^0.23\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"28^0.23\u001b[102m\n",
"28^0.23\u001b[32;1m\u001b[1;3m\n",
"\n",
"```python\n",
"print(28**0.23)\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[103m2.1520202182226886\n",
"Answer: \u001b[33;1m\u001b[1;3m2.1520202182226886\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[103mAnswer: 2.1520202182226886\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"\u001b[0m\n",
"Thought:\u001b[102m I now know the final answer\n",
"Final Answer: 2.1520202182226886\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 2.1520202182226886\u001b[0m"
]
},
{
@ -122,7 +117,7 @@
"'2.1520202182226886'"
]
},
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@ -133,7 +128,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 5,
"id": "a5c07010",
"metadata": {},
"outputs": [
@ -141,41 +136,37 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?\n",
"Thought:\u001b[102m I need to find an album called 'The Storm Before the Calm'\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find an album called 'The Storm Before the Calm'\n",
"Action: Search\n",
"Action Input: \"The Storm Before the Calm album\"\u001b[0m\n",
"Observation: \u001b[104mThe Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis ...\u001b[0m\n",
"Thought:\u001b[102m I need to check if Alanis is in the FooBar database\n",
"Observation: \u001b[36;1m\u001b[1;3mThe Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to check if Alanis is in the FooBar database\n",
"Action: FooBar DB\n",
"Action Input: \"Does Alanis Morissette exist in the FooBar database?\"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Does Alanis Morissette exist in the FooBar database?\n",
"SQLQuery:\u001b[102m SELECT * FROM Artist WHERE Name = 'Alanis Morissette'\u001b[0m\n",
"SQLResult: \u001b[103m[(4, 'Alanis Morissette')]\u001b[0m\n",
"Answer:\u001b[102m Yes\u001b[0m\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT * FROM Artist WHERE Name = 'Alanis Morissette'\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(4, 'Alanis Morissette')]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Yes\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[101m Yes\u001b[0m\n",
"Thought:\u001b[102m I need to find out what albums of Alanis's are in the FooBar database\n",
"Observation: \u001b[38;5;200m\u001b[1;3m Yes\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out what albums of Alanis's are in the FooBar database\n",
"Action: FooBar DB\n",
"Action Input: \"What albums by Alanis Morissette are in the FooBar database?\"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"What albums by Alanis Morissette are in the FooBar database?\n",
"SQLQuery:\u001b[102m SELECT Title FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'Alanis Morissette')\u001b[0m\n",
"SQLResult: \u001b[103m[('Jagged Little Pill',)]\u001b[0m\n",
"Answer:\u001b[102m Jagged Little Pill\u001b[0m\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Album.Title FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alanis Morissette'\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('Jagged Little Pill',)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Jagged Little Pill\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[101m Jagged Little Pill\u001b[0m\n",
"Thought:\u001b[102m I now know the final answer\n",
"Final Answer: The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Observation: \u001b[38;5;200m\u001b[1;3m Jagged Little Pill\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill\u001b[0m"
]
},
{
@ -184,7 +175,7 @@
"'The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill'"
]
},
"execution_count": 10,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}

@ -0,0 +1,89 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "82140df0",
"metadata": {},
"source": [
"# ReAct\n",
"\n",
"This notebook showcases using an agent to implement the ReAct logic."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4e272b47",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, Wikipedia\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents.react.base import DocstoreExplorer\n",
"docstore=DocstoreExplorer(Wikipedia())\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=docstore.search\n",
" ),\n",
" Tool(\n",
" name=\"Lookup\",\n",
" func=docstore.lookup\n",
" )\n",
"]\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"react = initialize_agent(tools, llm, agent_type=\"react-docstore\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8078c8f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\n",
"Thought 1:"
]
}
],
"source": [
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4ff64e81",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -20,23 +20,19 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"What is the hometown of the reigning men's U.S. Open champion?\n",
"Are follow up questions needed here:\u001b[102m Yes.\n",
"Are follow up questions needed here:\u001b[32;1m\u001b[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001b[0m\n",
"Intermediate answer: \u001b[103mCarlos Alcaraz won the 2022 Men's single title while Poland's Iga Swiatek won the Women's single title defeating Tunisian's Ons Jabeur..\u001b[0m\u001b[102m\n",
"Follow up: Where is Carlos Alcaraz from?\u001b[0m\n",
"Intermediate answer: \u001b[103mEl Palmar, Murcia, Spain.\u001b[0m\u001b[102m\n",
"So the final answer is: El Palmar, Murcia, Spain\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Intermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mFollow up: Where is Carlos Alcaraz from?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mEl Palmar, Spain\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mSo the final answer is: El Palmar, Spain\u001b[0m"
]
},
{
"data": {
"text/plain": [
"'\\nSo the final answer is: El Palmar, Murcia, Spain'"
"'El Palmar, Spain'"
]
},
"execution_count": 1,
@ -45,12 +41,19 @@
}
],
"source": [
"from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain\n",
"from langchain import OpenAI, SerpAPIChain\n",
"from langchain.agents import initialize_agent, Tool\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIChain()\n",
"tools = [\n",
" Tool(\n",
" name=\"Intermediate Answer\",\n",
" func=search.run\n",
" )\n",
"]\n",
"\n",
"self_ask_with_search = SelfAskWithSearchChain(llm=llm, search_chain=search, verbose=True)\n",
"self_ask_with_search = initialize_agent(tools, llm, agent_type=\"self-ask-with-search\", verbose=True)\n",
"\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]

@ -0,0 +1,15 @@
Chains
======
The examples here are all end-to-end chains for specific applications.
.. toctree::
:maxdepth: 1
:glob:
:caption: Chains
chains/llm_chain.ipynb
chains/llm_math.ipynb
chains/map_reduce.ipynb
chains/sqlite.ipynb
chains/vector_db_qa.ipynb

@ -5,9 +5,9 @@
"id": "d8a5c5d4",
"metadata": {},
"source": [
"# Simple Example\n",
"# LLM Chain\n",
"\n",
"This notebook showcases a simple chain."
"This notebook showcases a simple LLM chain."
]
},
{
@ -81,7 +81,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.7"
"version": "3.7.6"
}
},
"nbformat": 4,

@ -1,10 +0,0 @@
Demos
=====
The examples here are all end-to-end chains of specific applications.
.. toctree::
:maxdepth: 1
:glob:
demos/*

@ -1,98 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "82140df0",
"metadata": {},
"source": [
"# ReAct\n",
"\n",
"This notebook showcases the implementation of the ReAct chain logic."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4e272b47",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, ReActChain, Wikipedia\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"react = ReActChain(llm=llm, docstore=Wikipedia(), verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8078c8f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\n",
"Thought 1:\u001b[102m I need to search David Chanoff and find the U.S. Navy admiral he\n",
"collaborated with.\n",
"Action 1: Search[David Chanoff]\u001b[0m\n",
"Observation 1: \u001b[103mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
"Thought 2:\u001b[102m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe.\n",
"Action 2: Search[William J. Crowe]\u001b[0m\n",
"Observation 2: \u001b[103mWilliam James Crowe Jr. (January 2, 1925 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
"Thought 3:\u001b[102m William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton. So the answer is Bill Clinton.\n",
"Action 3: Finish[Bill Clinton]\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Bill Clinton'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a6bd3b4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,28 @@
# Agents
Agents use an LLM to determine which tools to call and in what order.
Here are the supported types of agents available in LangChain.
For a tutorial on how to load agents, see [here](/getting_started/agents.ipynb).
### `zero-shot-react-description`
This agent uses the ReAct framework to determine which tool to use
based solely on the tool's description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.
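A minimal loading sketch, using the `initialize_agent` and `Tool` interfaces this commit introduces (the search tool and its description are illustrative, mirroring the MRKL example notebook):
```python
from langchain import OpenAI, SerpAPIChain
from langchain.agents import initialize_agent, Tool

llm = OpenAI(temperature=0)
search = SerpAPIChain()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]
# The agent picks a tool (or none) at each step based purely on these descriptions.
agent = initialize_agent(tools, llm, agent_type="zero-shot-react-description", verbose=True)
```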
### `react-docstore`
This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a `Search` tool and a `Lookup` tool (they must be named exactly that).
The `Search` tool should search for a document, while the `Lookup` tool should look up
a term in the most recently found document.
This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
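A loading sketch for this agent, mirroring the ReAct notebook in this commit; the two tools carry no descriptions because this agent does not use them:
```python
from langchain import OpenAI, Wikipedia
from langchain.agents import initialize_agent, Tool
from langchain.agents.react.base import DocstoreExplorer

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(name="Search", func=docstore.search),   # fetch a document
    Tool(name="Lookup", func=docstore.lookup),   # look up a term in the last document
]
react = initialize_agent(tools, OpenAI(temperature=0), agent_type="react-docstore", verbose=True)
```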
### `self-ask-with-search`
This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.
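A loading sketch mirroring the self-ask-with-search notebook, with SerpAPI standing in for the paper's Google search API:
```python
from langchain import OpenAI, SerpAPIChain
from langchain.agents import initialize_agent, Tool

search = SerpAPIChain()
tools = [
    # Exactly one tool, and it must be named "Intermediate Answer".
    Tool(name="Intermediate Answer", func=search.run)
]
self_ask = initialize_agent(tools, OpenAI(temperature=0), agent_type="self-ask-with-search", verbose=True)
```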

@ -25,3 +25,7 @@ These are datastores that store documents. They expose a method for passing in a
## Chains
These are pipelines that combine multiple of the above ideas.
They vary greatly in complexity and are a combination of generic, highly configurable pipelines and more narrow (but usually more complex) pipelines.
## Agents
As opposed to a chain, where the steps to be taken are known ahead of time, agents
use an LLM to determine which tools to call and in what order.

@ -29,7 +29,7 @@ This induces the model to think about what action to take, then take it.
Resources:
- [Paper](https://arxiv.org/pdf/2210.03629.pdf)
- [LangChain Example](https://github.com/hwchase17/langchain/blob/master/examples/react.ipynb)
- [LangChain Example](https://github.com/hwchase17/langchain/blob/master/docs/examples/agents/react.ipynb)
### Self-ask
@ -38,7 +38,7 @@ In this method, the model explicitly asks itself follow-up questions, which are
Resources:
- [Paper](https://ofir.io/self-ask.pdf)
- [LangChain Example](https://github.com/hwchase17/langchain/blob/master/examples/self_ask_with_search.ipynb)
- [LangChain Example](https://github.com/hwchase17/langchain/blob/master/docs/examples/agents/self_ask_with_search.ipynb)
### Prompt Chaining

@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5436020b",
"metadata": {},
"source": [
"# Agents\n",
"Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input. In these types of chains, there is a \"agent\" (backed by an LLM) which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call.\n",
"\n",
"When used correctly agents can be extremely powerful. The purpose of this notebook is to show you how to easily use agents through the simplest, highest level API. If you want more low level control over various components, check out the documentation for custom agents (coming soon). For a list of supported agent types and their specifications, see [here](../explanation/agents.md)."
]
},
{
"cell_type": "markdown",
"id": "3c6226b9",
"metadata": {},
"source": [
"## Concepts\n",
"\n",
"In order to understand agents, you should understand the following concepts:\n",
"\n",
"- Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.\n",
"- LLM: The language model powering the agent.\n",
"- AgentType: The type of agent to use. This should be a string. For a list of supported agents, see [here](../explanation/agents.md). Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon)."
]
},
{
"cell_type": "markdown",
"id": "05d4b21e",
"metadata": {},
"source": [
"## Tools\n",
"When constructing your own agent, you will need to provide it with a list of Tools that it can use. A Tool is defined as below.\n",
"\n",
"```python\n",
"class Tool(NamedTuple):\n",
" \"\"\"Interface for tools.\"\"\"\n",
"\n",
" name: str\n",
" func: Callable[[str], str]\n",
" description: Optional[str] = None\n",
"```\n",
"\n",
"The two required components of a Tool are the name and then the tool itself. A tool description is optional, as it is needed for some agents but not all."
]
},
{
"cell_type": "markdown",
"id": "2558a02d",
"metadata": {},
"source": [
"## Loading an agent\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "36ed392e",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "56ff7670",
"metadata": {},
"outputs": [],
"source": [
"# Load the tool configs that are needed.\n",
"from langchain import LLMMathChain, SerpAPIChain\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIChain()\n",
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"useful for when you need to answer questions about math\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5b93047d",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"llm = OpenAI(temperature=0)\n",
"agent = initialize_agent(tools, llm, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6f96a891",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Olivia Wilde's boyfriend\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde's boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Harry Styles\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 28^0.23\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"28^0.23\u001b[32;1m\u001b[1;3m\n",
"\n",
"```python\n",
"print(28**0.23)\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m2.1520202182226886\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 2.1520202182226886\u001b[0m"
]
},
{
"data": {
"text/plain": [
"'2.1520202182226886'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f0852ff",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,4 +1,4 @@
# Using Chains
# LLM Chains
Calling an LLM is a great first step, but it's just the beginning.
Normally when you use an LLM in an application, you are not sending user input directly to the LLM.
@ -33,7 +33,5 @@ Now we can run that chain, only specifying the product!
chain.run("colorful socks")
```
There we go! There's the first chain.
That is it for the Getting Started example.
As a next step, we would suggest checking out the more complex chains in the [Demos section](/examples/demos)
There we go! There's the first chain - an LLM Chain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
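For reference, here is a condensed sketch of the whole pipeline this page builds up; the prompt wording is illustrative (the exact template is defined earlier on the page):
```python
from langchain import LLMChain, OpenAI, PromptTemplate

# Illustrative prompt; the page defines its own wording above.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
llm = OpenAI(temperature=0.9)
chain = LLMChain(llm=llm, prompt=prompt)
chain.run("colorful socks")
```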

@ -8,12 +8,75 @@ create a truly powerful app - the real power comes when you are able to
combine them with other sources of computation or knowledge.
This library is aimed at assisting in the development of those types of applications.
It aims to create:
1. a comprehensive collection of pieces you would ever want to combine
2. a flexible interface for combining pieces into a single comprehensive "chain"
3. a schema for easily saving and sharing those chains
There are three main areas (with a fourth coming soon) that LangChain is designed to help with.
These are, in increasing order of complexity:
1. LLMs and Prompts
2. Chains
3. Agents
4. (Coming Soon) Memory
Let's go through these categories and for each one identify key concepts (to clarify terminology) as well as the problems in this area LangChain helps solve.
**LLMs and Prompts**
Calling out to an LLM once is pretty easy, with most of them being behind well documented APIs.
However, there are still some challenges going from that to an application running in production that LangChain attempts to address.
*Key Concepts*
- LLM: A large language model, in particular a text-to-text model.
- Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
- Prompt Template: An object responsible for constructing the final prompt to pass to an LLM.
*Problems solved*
- Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
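As a quick illustration of the prompt template concept, a sketch using this library's PromptTemplate class (the question is just an example):

.. code-block:: python

    from langchain import PromptTemplate

    template = """Question: {question}

    Answer: Let's think step by step."""
    prompt = PromptTemplate(template=template, input_variables=["question"])
    # Fill in the user input to produce the final prompt string.
    print(prompt.format(question="What NFL team won the Super Bowl in the year Justin Bieber was born?"))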
**Chains**
Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides several parts to help with that.
*Key Concepts*
- Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
- Chains: A combination of multiple tools in a deterministic manner.
*Problems solved*
- Standard interface for working with Chains
- Easy way to construct chains of LLMs
- Lots of integrations with other tools that you may want to use in conjunction with LLMs
- End-to-end chains for common workflows (database question/answer, recursive summarization, etc)
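A sketch of composing two LLM chains with SimpleSequentialChain (exported from langchain.chains in this commit); the prompt wording is illustrative:

.. code-block:: python

    from langchain import LLMChain, OpenAI, PromptTemplate
    from langchain.chains import SimpleSequentialChain

    llm = OpenAI(temperature=0.9)
    name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    ))
    slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
        input_variables=["company"],
        template="Write a slogan for the company {company}.",
    ))
    # Each chain's single output feeds the next chain's single input.
    overall = SimpleSequentialChain(chains=[name_chain, slogan_chain])
    overall.run("colorful socks")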
**Agents**
Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input.
In these types of chains, there is an “agent” which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
*Key Concepts*
- Tools: same as above.
- Agent: An LLM-powered class responsible for determining which tools to use and in what order.
*Problems solved*
- Standard agent interfaces
- A selection of powerful agents to choose from
- Common chains that can be used as tools
**Memory**
Coming soon.
Documentation Structure
=======================
The documentation is structured into the following sections:
@ -25,7 +88,9 @@ The documentation is structured into the following sections:
getting_started/installation.md
getting_started/environment.md
getting_started/llm.md
getting_started/chains.md
getting_started/llm_chain.md
getting_started/sequential_chains.md
getting_started/agents.ipynb
Goes over a simple walkthrough and tutorial for getting started: setting up a simple chain that generates a company name based on what the company makes.
Covers installation, environment set up, calling LLMs, and using prompts.
@ -37,9 +102,10 @@ Start here if you haven't used LangChain before.
:caption: How-To Examples
:name: examples
examples/demos.rst
examples/integrations.rst
examples/prompts.rst
examples/integrations.rst
examples/chains.rst
examples/agents.rst
examples/model_laboratory.ipynb
More elaborate examples and walk-throughs of particular
@ -62,6 +128,7 @@ common tasks or cool demos.
modules/text_splitter
modules/vectorstore
modules/chains
modules/agents
Full API documentation. This is the place to look if you want to
@ -74,6 +141,7 @@ see detailed information about the various classes, methods, and APIs.
:name: resources
explanation/core_concepts.md
explanation/agents.md
explanation/glossary.md
Discord <https://discord.gg/6adMQxSpJS>

@ -0,0 +1,7 @@
:mod:`langchain.agents`
===============================
.. automodule:: langchain.agents
:members:
:undoc-members:

@ -5,13 +5,11 @@ from pathlib import Path
with open(Path(__file__).absolute().parents[0] / "VERSION") as _f:
__version__ = _f.read().strip()
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
from langchain.chains import (
LLMChain,
LLMMathChain,
MRKLChain,
PythonChain,
ReActChain,
SelfAskWithSearchChain,
SerpAPIChain,
SQLDatabaseChain,
VectorDBQA,

@ -0,0 +1,16 @@
"""Routing chains."""
from langchain.agents.agent import Agent
from langchain.agents.loading import initialize_agent
from langchain.agents.mrkl.base import MRKLChain
from langchain.agents.react.base import ReActChain
from langchain.agents.self_ask_with_search.base import SelfAskWithSearchChain
from langchain.agents.tools import Tool
__all__ = [
"MRKLChain",
"SelfAskWithSearchChain",
"ReActChain",
"Agent",
"Tool",
"initialize_agent",
]

@ -0,0 +1,134 @@
"""Chain that takes in an input and produces an action and action input."""
from abc import ABC, abstractmethod
from typing import Any, ClassVar, List, NamedTuple, Optional, Tuple
from pydantic import BaseModel
from langchain.agents.tools import Tool
from langchain.chains.llm import LLMChain
from langchain.input import ChainedInput, get_color_mapping
from langchain.llms.base import LLM
from langchain.prompts.base import BasePromptTemplate
class Action(NamedTuple):
"""Action to take."""
tool: str
tool_input: str
log: str
class Agent(BaseModel, ABC):
"""Agent that uses an LLM."""
prompt: ClassVar[BasePromptTemplate]
llm_chain: LLMChain
tools: List[Tool]
verbose: bool = True
@property
@abstractmethod
def observation_prefix(self) -> str:
"""Prefix to append the observation with."""
@property
@abstractmethod
def llm_prefix(self) -> str:
"""Prefix to append the LLM call with."""
@property
def finish_tool_name(self) -> str:
"""Name of the tool to use to finish the chain."""
return "Final Answer"
@property
def starter_string(self) -> str:
"""Put this string after user input but before first LLM call."""
return "\n"
@abstractmethod
def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
"""Extract tool and tool input from llm output."""
def _fix_text(self, text: str) -> str:
"""Fix the text."""
raise ValueError("fix_text not implemented for this agent.")
@property
def _stop(self) -> List[str]:
return [f"\n{self.observation_prefix}"]
@classmethod
def _validate_tools(cls, tools: List[Tool]) -> None:
"""Validate that appropriate tools are passed in."""
pass
@classmethod
def _get_prompt(cls, tools: List[Tool]) -> BasePromptTemplate:
return cls.prompt
@classmethod
def from_llm_and_tools(cls, llm: LLM, tools: List[Tool], **kwargs: Any) -> "Agent":
"""Construct an agent from an LLM and tools."""
cls._validate_tools(tools)
llm_chain = LLMChain(llm=llm, prompt=cls._get_prompt(tools))
return cls(llm_chain=llm_chain, tools=tools, **kwargs)
def get_action(self, text: str) -> Action:
"""Given input, decided what to do.
Args:
text: input string
Returns:
Action specifying what tool to use.
"""
input_key = self.llm_chain.input_keys[0]
inputs = {input_key: text, "stop": self._stop}
full_output = self.llm_chain.predict(**inputs)
parsed_output = self._extract_tool_and_input(full_output)
while parsed_output is None:
full_output = self._fix_text(full_output)
inputs = {input_key: text + full_output, "stop": self._stop}
output = self.llm_chain.predict(**inputs)
full_output += output
parsed_output = self._extract_tool_and_input(full_output)
tool, tool_input = parsed_output
return Action(tool, tool_input, full_output)
def run(self, text: str) -> str:
"""Run text through and get agent response."""
# Construct a mapping of tool name to tool for easy lookup
name_to_tool_map = {tool.name: tool.func for tool in self.tools}
# Construct the initial string to pass into the LLM. This is made up
# of the user input, the special starter string, and then the LLM prefix.
# The starter string is a special string that may be used by a LLM to
# immediately follow the user input. The LLM prefix is a string that
# prompts the LLM to take an action.
starter_string = text + self.starter_string + self.llm_prefix
# We use the ChainedInput class to iteratively add to the input over time.
chained_input = ChainedInput(starter_string, verbose=self.verbose)
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
[tool.name for tool in self.tools], excluded_colors=["green"]
)
# We now enter the agent loop (until it returns something).
while True:
# Call the LLM to see what to do.
output = self.get_action(chained_input.input)
# Add the log to the Chained Input.
chained_input.add(output.log, color="green")
# If the tool chosen is the finishing tool, then we end and return.
if output.tool == self.finish_tool_name:
return output.tool_input
# Otherwise we lookup the tool
chain = name_to_tool_map[output.tool]
# We then call the tool on the tool input to get an observation
observation = chain(output.tool_input)
# We then log the observation
chained_input.add(f"\n{self.observation_prefix}")
chained_input.add(observation, color=color_mapping[output.tool])
# We then add the LLM prefix into the prompt to get the LLM to start
# thinking, and start the loop all over.
chained_input.add(f"\n{self.llm_prefix}")

@ -0,0 +1,42 @@
"""Load agent."""
from typing import Any, List
from langchain.agents.agent import Agent
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.react.base import ReActDocstoreAgent
from langchain.agents.self_ask_with_search.base import SelfAskWithSearchAgent
from langchain.agents.tools import Tool
from langchain.llms.base import LLM
AGENT_TYPE_TO_CLASS = {
"zero-shot-react-description": ZeroShotAgent,
"react-docstore": ReActDocstoreAgent,
"self-ask-with-search": SelfAskWithSearchAgent,
}
def initialize_agent(
tools: List[Tool],
llm: LLM,
agent_type: str = "zero-shot-react-description",
**kwargs: Any,
) -> Agent:
"""Load agent given tools and LLM.
Args:
tools: List of tools this agent has access to.
llm: Language model to use as the agent.
agent_type: The agent to use. Valid options are:
`zero-shot-react-description`, `react-docstore`, `self-ask-with-search`.
**kwargs: Additional key word arguments to pass to the agent.
Returns:
An agent.
"""
if agent_type not in AGENT_TYPE_TO_CLASS:
raise ValueError(
f"Got unknown agent type: {agent_type}. "
f"Valid types are: {AGENT_TYPE_TO_CLASS.keys()}."
)
agent_cls = AGENT_TYPE_TO_CLASS[agent_type]
return agent_cls.from_llm_and_tools(llm, tools, **kwargs)
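A usage sketch for this loader (tool setup is illustrative): omitting agent_type falls back to the "zero-shot-react-description" default, and unknown types raise a ValueError.
```python
from langchain import OpenAI, SerpAPIChain
from langchain.agents import initialize_agent, Tool

llm = OpenAI(temperature=0)
search = SerpAPIChain()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

# Defaults to the "zero-shot-react-description" agent.
agent = initialize_agent(tools, llm)

# An unrecognized agent type raises a ValueError listing the valid options.
try:
    initialize_agent(tools, llm, agent_type="not-a-real-agent")
except ValueError as err:
    print(err)
```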

@ -0,0 +1,137 @@
"""Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf."""
from typing import Any, Callable, List, NamedTuple, Optional, Tuple
from langchain.agents.agent import Agent
from langchain.agents.mrkl.prompt import BASE_TEMPLATE
from langchain.agents.tools import Tool
from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.prompts.base import BasePromptTemplate
FINAL_ANSWER_ACTION = "Final Answer: "
class ChainConfig(NamedTuple):
"""Configuration for chain to use in MRKL system.
Args:
action_name: Name of the action.
action: Action function to call.
action_description: Description of the action.
"""
action_name: str
action: Callable
action_description: str
def get_action_and_input(llm_output: str) -> Tuple[str, str]:
"""Parse out the action and input from the LLM output."""
ps = [p for p in llm_output.split("\n") if p]
if ps[-1].startswith("Final Answer"):
directive = ps[-1][len(FINAL_ANSWER_ACTION) :]
return "Final Answer", directive
if not ps[-1].startswith("Action Input: "):
raise ValueError(
"The last line does not have an action input, "
"something has gone terribly wrong."
)
if not ps[-2].startswith("Action: "):
raise ValueError(
"The second to last line does not have an action, "
"something has gone terribly wrong."
)
action = ps[-2][len("Action: ") :]
action_input = ps[-1][len("Action Input: ") :]
return action, action_input.strip(" ").strip('"')
class ZeroShotAgent(Agent):
"""Agent for the MRKL chain."""
@property
def observation_prefix(self) -> str:
"""Prefix to append the observation with."""
return "Observation: "
@property
def llm_prefix(self) -> str:
"""Prefix to append the llm call with."""
return "Thought:"
@classmethod
def _get_prompt(cls, tools: List[Tool]) -> BasePromptTemplate:
tool_strings = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
tool_names = ", ".join([tool.name for tool in tools])
template = BASE_TEMPLATE.format(tools=tool_strings, tool_names=tool_names)
return PromptTemplate(template=template, input_variables=["input"])
@classmethod
def _validate_tools(cls, tools: List[Tool]) -> None:
for tool in tools:
if tool.description is None:
raise ValueError(
f"Got a tool {tool.name} without a description. For this agent, "
f"a description must always be provided."
)
def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
return get_action_and_input(text)
class MRKLChain(ZeroShotAgent):
"""Chain that implements the MRKL system.
Example:
.. code-block:: python
from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
prompt = PromptTemplate(...)
chains = [...]
mrkl = MRKLChain.from_chains(llm=llm, prompt=prompt)
"""
@classmethod
def from_chains(cls, llm: LLM, chains: List[ChainConfig], **kwargs: Any) -> "Agent":
"""User friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Args:
llm: The LLM to use as the agent LLM.
chains: The chains the MRKL system has access to.
**kwargs: parameters to be passed to initialization.
Returns:
An initialized MRKL chain.
Example:
.. code-block:: python
from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIChain()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
"""
tools = [
Tool(name=c.action_name, func=c.action, description=c.action_description)
for c in chains
]
return cls.from_llm_and_tools(llm, tools, **kwargs)

@ -0,0 +1,114 @@
"""Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf."""
import re
from typing import Any, ClassVar, List, Optional, Tuple
from pydantic import BaseModel
from langchain.agents.agent import Agent
from langchain.agents.react.prompt import PROMPT
from langchain.agents.tools import Tool
from langchain.chains.llm import LLMChain
from langchain.docstore.base import Docstore
from langchain.docstore.document import Document
from langchain.llms.base import LLM
from langchain.prompts.base import BasePromptTemplate
class ReActDocstoreAgent(Agent, BaseModel):
"""Agent for the ReAct chin."""
prompt: ClassVar[BasePromptTemplate] = PROMPT
i: int = 1
@classmethod
def _validate_tools(cls, tools: List[Tool]) -> None:
if len(tools) != 2:
raise ValueError(f"Exactly two tools must be specified, but got {tools}")
tool_names = {tool.name for tool in tools}
if tool_names != {"Lookup", "Search"}:
raise ValueError(
f"Tool names should be Lookup and Search, got {tool_names}"
)
def _fix_text(self, text: str) -> str:
return text + f"\nAction {self.i}:"
def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
action_prefix = f"Action {self.i}: "
if not text.split("\n")[-1].startswith(action_prefix):
return None
self.i += 1
action_block = text.split("\n")[-1]
action_str = action_block[len(action_prefix) :]
# Parse out the action and the directive.
re_matches = re.search(r"(.*?)\[(.*?)\]", action_str)
if re_matches is None:
raise ValueError(f"Could not parse action directive: {action_str}")
return re_matches.group(1), re_matches.group(2)
@property
def finish_tool_name(self) -> str:
"""Name of the tool of when to finish the chain."""
return "Finish"
@property
def observation_prefix(self) -> str:
"""Prefix to append the observation with."""
return f"Observation {self.i - 1}: "
@property
def _stop(self) -> List[str]:
return [f"\nObservation {self.i}: "]
@property
def llm_prefix(self) -> str:
"""Prefix to append the LLM call with."""
return f"Thought {self.i}:"
class DocstoreExplorer:
"""Class to assist with exploration of a document store."""
def __init__(self, docstore: Docstore):
"""Initialize with a docstore, and set initial document to None."""
self.docstore = docstore
self.document: Optional[Document] = None
def search(self, term: str) -> str:
"""Search for a term in the docstore, and if found save."""
result = self.docstore.search(term)
if isinstance(result, Document):
self.document = result
return self.document.summary
else:
self.document = None
return result
def lookup(self, term: str) -> str:
"""Lookup a term in document (if saved)."""
if self.document is None:
raise ValueError("Cannot lookup without a successful search first")
return self.document.lookup(term)
class ReActChain(ReActDocstoreAgent):
"""Chain that implements the ReAct paper.
Example:
.. code-block:: python
from langchain import OpenAI, ReActChain, Wikipedia
react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
"""
def __init__(self, llm: LLM, docstore: Docstore, **kwargs: Any):
"""Initialize with the LLM and a docstore."""
docstore_explorer = DocstoreExplorer(docstore)
tools = [
Tool(name="Search", func=docstore_explorer.search),
Tool(name="Lookup", func=docstore_explorer.lookup),
]
llm_chain = LLMChain(llm=llm, prompt=PROMPT)
super().__init__(llm_chain=llm_chain, tools=tools, **kwargs)

@ -0,0 +1,84 @@
"""Chain that does self ask with search."""
from typing import Any, ClassVar, List, Tuple
from langchain.agents.agent import Agent
from langchain.agents.self_ask_with_search.prompt import PROMPT
from langchain.agents.tools import Tool
from langchain.chains.llm import LLMChain
from langchain.chains.serpapi import SerpAPIChain
from langchain.llms.base import LLM
from langchain.prompts.base import BasePromptTemplate
class SelfAskWithSearchAgent(Agent):
"""Agent for the self-ask-with-search paper."""
prompt: ClassVar[BasePromptTemplate] = PROMPT
@classmethod
def _validate_tools(cls, tools: List[Tool]) -> None:
if len(tools) != 1:
raise ValueError(f"Exactly one tool must be specified, but got {tools}")
tool_names = {tool.name for tool in tools}
if tool_names != {"Intermediate Answer"}:
raise ValueError(
f"Tool name should be Intermediate Answer, got {tool_names}"
)
def _extract_tool_and_input(self, text: str) -> Tuple[str, str]:
followup = "Follow up:"
if "\n" not in text:
last_line = text
else:
last_line = text.split("\n")[-1]
if followup not in last_line:
finish_string = "So the final answer is: "
if finish_string not in last_line:
raise ValueError("We should probably never get here")
return "Final Answer", text[len(finish_string) :]
if ":" not in last_line:
after_colon = last_line
else:
after_colon = text.split(":")[-1]
if " " == after_colon[0]:
after_colon = after_colon[1:]
if "?" != after_colon[-1]:
print("we probably should never get here..." + text)
return "Intermediate Answer", after_colon
@property
def observation_prefix(self) -> str:
"""Prefix to append the observation with."""
return "Intermediate answer: "
@property
def llm_prefix(self) -> str:
"""Prefix to append the LLM call with."""
return ""
@property
def starter_string(self) -> str:
"""Put this string after user input but before first LLM call."""
return "\nAre follow up questions needed here:"
class SelfAskWithSearchChain(SelfAskWithSearchAgent):
"""Chain that does self ask with search.
Example:
.. code-block:: python
from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain
search_chain = SerpAPIChain()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
"""
def __init__(self, llm: LLM, search_chain: SerpAPIChain, **kwargs: Any):
"""Initialize with just an LLM and a search chain."""
search_tool = Tool(name="Intermediate Answer", func=search_chain.run)
llm_chain = LLMChain(llm=llm, prompt=PROMPT)
super().__init__(llm_chain=llm_chain, tools=[search_tool], **kwargs)

@ -0,0 +1,10 @@
"""Interface for tools."""
from typing import Callable, NamedTuple, Optional
class Tool(NamedTuple):
"""Interface for tools."""
name: str
func: Callable[[str], str]
description: Optional[str] = None

@ -1,10 +1,7 @@
"""Chains are easily reusable components which can be linked together."""
from langchain.chains.llm import LLMChain
from langchain.chains.llm_math.base import LLMMathChain
from langchain.chains.mrkl.base import MRKLChain
from langchain.chains.python import PythonChain
from langchain.chains.react.base import ReActChain
from langchain.chains.self_ask_with_search.base import SelfAskWithSearchChain
from langchain.chains.sequential import SequentialChain, SimpleSequentialChain
from langchain.chains.serpapi import SerpAPIChain
from langchain.chains.sql_database.base import SQLDatabaseChain
@ -14,11 +11,8 @@ __all__ = [
"LLMChain",
"LLMMathChain",
"PythonChain",
"SelfAskWithSearchChain",
"SerpAPIChain",
"ReActChain",
"SQLDatabaseChain",
"MRKLChain",
"VectorDBQA",
"SequentialChain",
"SimpleSequentialChain",

@ -1,170 +0,0 @@
"""Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf."""
from typing import Any, Callable, Dict, List, NamedTuple, Tuple
from pydantic import BaseModel, Extra
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.mrkl.prompt import BASE_TEMPLATE
from langchain.input import ChainedInput, get_color_mapping
from langchain.llms.base import LLM
from langchain.prompts import BasePromptTemplate, PromptTemplate
FINAL_ANSWER_ACTION = "Final Answer: "
class ChainConfig(NamedTuple):
"""Configuration for chain to use in MRKL system.
Args:
action_name: Name of the action.
action: Action function to call.
action_description: Description of the action.
"""
action_name: str
action: Callable
action_description: str
def get_action_and_input(llm_output: str) -> Tuple[str, str]:
"""Parse out the action and input from the LLM output."""
ps = [p for p in llm_output.split("\n") if p]
if ps[-1].startswith(FINAL_ANSWER_ACTION):
directive = ps[-1][len(FINAL_ANSWER_ACTION) :]
return FINAL_ANSWER_ACTION, directive
if not ps[-1].startswith("Action Input: "):
raise ValueError(
"The last line does not have an action input, "
"something has gone terribly wrong."
)
if not ps[-2].startswith("Action: "):
raise ValueError(
"The second to last line does not have an action, "
"something has gone terribly wrong."
)
action = ps[-2][len("Action: ") :]
action_input = ps[-1][len("Action Input: ") :]
return action, action_input.strip(" ").strip('"')
class MRKLChain(Chain, BaseModel):
"""Chain that implements the MRKL system.
Example:
.. code-block:: python
from langchain import OpenAI, Prompt, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
prompt = PromptTemplate(...)
action_to_chain_map = {...}
mrkl = MRKLChain(
llm=llm,
prompt=prompt,
action_to_chain_map=action_to_chain_map
)
"""
llm: LLM
"""LLM wrapper to use as router."""
prompt: BasePromptTemplate
"""Prompt to use as router."""
action_to_chain_map: Dict[str, Callable]
"""Mapping from action name to chain to execute."""
input_key: str = "question" #: :meta private:
output_key: str = "answer" #: :meta private:
@classmethod
def from_chains(
cls, llm: LLM, chains: List[ChainConfig], **kwargs: Any
) -> "MRKLChain":
"""User friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Args:
llm: The LLM to use as the router LLM.
chains: The chains the MRKL system has access to.
**kwargs: parameters to be passed to initialization.
Returns:
An initialized MRKL chain.
Example:
.. code-block:: python
from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIChain()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
"""
tools = "\n".join(
[f"{chain.action_name}: {chain.action_description}" for chain in chains]
)
tool_names = ", ".join([chain.action_name for chain in chains])
template = BASE_TEMPLATE.format(tools=tools, tool_names=tool_names)
prompt = PromptTemplate(template=template, input_variables=["input"])
action_to_chain_map = {chain.action_name: chain.action for chain in chains}
return cls(
llm=llm, prompt=prompt, action_to_chain_map=action_to_chain_map, **kwargs
)
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Expect input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Expect output key.
:meta private:
"""
return [self.output_key]
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
llm_chain = LLMChain(llm=self.llm, prompt=self.prompt)
chained_input = ChainedInput(
f"{inputs[self.input_key]}\nThought:", verbose=self.verbose
)
color_mapping = get_color_mapping(
list(self.action_to_chain_map.keys()), excluded_colors=["green"]
)
while True:
thought = llm_chain.predict(
input=chained_input.input, stop=["\nObservation"]
)
chained_input.add(thought, color="green")
action, action_input = get_action_and_input(thought)
if action == FINAL_ANSWER_ACTION:
return {self.output_key: action_input}
chain = self.action_to_chain_map[action]
ca = chain(action_input)
chained_input.add("\nObservation: ")
chained_input.add(ca, color=color_mapping[action])
chained_input.add("\nThought:")

@@ -2,7 +2,7 @@
from langchain.prompts.prompt import PromptTemplate
_PROMPT_TEMPLATE = """
You are an agent controlling a browser. You are given:
(1) an objective that you are trying to achieve
(2) the URL of your current web page

@@ -1,107 +0,0 @@
"""Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf."""
import re
from typing import Any, Dict, List, Tuple
from pydantic import BaseModel, Extra
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.react.prompt import PROMPT
from langchain.docstore.base import Docstore
from langchain.docstore.document import Document
from langchain.input import ChainedInput
from langchain.llms.base import LLM
def predict_until_observation(
llm_chain: LLMChain, prompt: str, i: int
) -> Tuple[str, str, str]:
"""Generate text until an observation is needed."""
action_prefix = f"Action {i}: "
stop_seq = f"\nObservation {i}:"
ret_text = llm_chain.predict(input=prompt, stop=[stop_seq])
# Sometimes the LLM forgets to take an action, so we prompt it to.
while not ret_text.split("\n")[-1].startswith(action_prefix):
ret_text += f"\nAction {i}:"
new_text = llm_chain.predict(input=prompt + ret_text, stop=[stop_seq])
ret_text += new_text
# The action block should be the last line.
action_block = ret_text.split("\n")[-1]
action_str = action_block[len(action_prefix) :]
# Parse out the action and the directive.
re_matches = re.search(r"(.*?)\[(.*?)\]", action_str)
if re_matches is None:
raise ValueError(f"Could not parse action directive: {action_str}")
return ret_text, re_matches.group(1), re_matches.group(2)
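# Illustrative example (added commentary, not in the original file): a
# completion ending in "Action 1: Search[foo]" parses as the tool "Search"
# with input "foo"; any final action line not shaped like "<tool>[<input>]"
# raises the ValueError above.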
class ReActChain(Chain, BaseModel):
"""Chain that implements the ReAct paper.
Example:
.. code-block:: python
from langchain import ReActChain, OpenAI
from langchain.docstore.wikipedia import Wikipedia
react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
"""
llm: LLM
"""LLM wrapper to use."""
docstore: Docstore
"""Docstore to use."""
input_key: str = "question" #: :meta private:
output_key: str = "answer" #: :meta private:
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Expect input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Expect output key.
:meta private:
"""
return [self.output_key]
def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
question = inputs[self.input_key]
llm_chain = LLMChain(llm=self.llm, prompt=PROMPT)
chained_input = ChainedInput(f"{question}\nThought 1:", verbose=self.verbose)
i = 1
document = None
while True:
ret_text, action, directive = predict_until_observation(
llm_chain, chained_input.input, i
)
chained_input.add(ret_text, color="green")
if action == "Search":
result = self.docstore.search(directive)
if isinstance(result, Document):
document = result
observation = document.summary
else:
document = None
observation = result
elif action == "Lookup":
if document is None:
raise ValueError("Cannot lookup without a successful search first")
observation = document.lookup(directive)
elif action == "Finish":
return {self.output_key: directive}
else:
raise ValueError(f"Got unknown action directive: {action}")
chained_input.add(f"\nObservation {i}: ")
chained_input.add(observation, color="yellow")
chained_input.add(f"\nThought {i + 1}:")
i += 1

@@ -1,149 +0,0 @@
"""Chain that does self ask with search."""
from typing import Any, Dict, List
from pydantic import BaseModel, Extra
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.self_ask_with_search.prompt import PROMPT
from langchain.chains.serpapi import SerpAPIChain
from langchain.input import ChainedInput
from langchain.llms.base import LLM
def extract_answer(generated: str) -> str:
"""Extract answer from text."""
if "\n" not in generated:
last_line = generated
else:
last_line = generated.split("\n")[-1]
if ":" not in last_line:
after_colon = last_line
else:
after_colon = generated.split(":")[-1]
if " " == after_colon[0]:
after_colon = after_colon[1:]
if "." == after_colon[-1]:
after_colon = after_colon[:-1]
return after_colon
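# Illustrative example (added commentary, not in the original file):
#     extract_answer("So the final answer is: Paris.")  # -> "Paris"
# i.e. take the text after the last colon, then drop a leading space and a
# trailing period.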
def extract_question(generated: str, followup: str) -> str:
"""Extract question from text."""
if "\n" not in generated:
last_line = generated
else:
last_line = generated.split("\n")[-1]
if followup not in last_line:
print("we probably should never get here..." + generated)
if ":" not in last_line:
after_colon = last_line
else:
after_colon = generated.split(":")[-1]
if " " == after_colon[0]:
after_colon = after_colon[1:]
if "?" != after_colon[-1]:
print("we probably should never get here..." + generated)
return after_colon
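# Illustrative example (added commentary, not in the original file):
#     extract_question("Follow up: When was he born?", "Follow up:")
#     # -> "When was he born?"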
def get_last_line(generated: str) -> str:
"""Get the last line in text."""
if "\n" not in generated:
last_line = generated
else:
last_line = generated.split("\n")[-1]
return last_line
def greenify(_input: str) -> str:
"""Add green highlighting to text."""
return "\x1b[102m" + _input + "\x1b[0m"
def yellowfy(_input: str) -> str:
"""Add yellow highlighting to text."""
return "\x1b[106m" + _input + "\x1b[0m"
class SelfAskWithSearchChain(Chain, BaseModel):
"""Chain that does self ask with search.
Example:
.. code-block:: python
from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain
search_chain = SerpAPIChain()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
"""
llm: LLM
"""LLM wrapper to use."""
search_chain: SerpAPIChain
"""Search chain to use."""
input_key: str = "question" #: :meta private:
output_key: str = "answer" #: :meta private:
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Expect input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Expect output key.
:meta private:
"""
return [self.output_key]
def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
chained_input = ChainedInput(inputs[self.input_key], verbose=self.verbose)
chained_input.add("\nAre follow up questions needed here:")
llm_chain = LLMChain(llm=self.llm, prompt=PROMPT)
intermediate = "\nIntermediate answer:"
followup = "Follow up:"
finalans = "\nSo the final answer is:"
ret_text = llm_chain.predict(input=chained_input.input, stop=[intermediate])
chained_input.add(ret_text, color="green")
while followup in get_last_line(ret_text):
question = extract_question(ret_text, followup)
external_answer = self.search_chain.run(question)
if external_answer is not None:
chained_input.add(intermediate + " ")
chained_input.add(external_answer + ".", color="yellow")
ret_text = llm_chain.predict(
input=chained_input.input, stop=["\nIntermediate answer:"]
)
chained_input.add(ret_text, color="green")
else:
# We only get here in the very rare case that Google returns no answer.
chained_input.add(intermediate + " ")
preds = llm_chain.predict(
input=chained_input.input, stop=["\n" + followup, finalans]
)
chained_input.add(preds, color="green")
if finalans not in ret_text:
chained_input.add(finalans)
ret_text = llm_chain.predict(input=chained_input.input, stop=["\n"])
chained_input.add(ret_text, color="green")
return {self.output_key: ret_text}
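# Control-flow note (added commentary, not in the original file): while the
# last generated line still contains "Follow up:", the extracted question is
# answered by the search chain and spliced back in as an
# "Intermediate answer:"; once the model stops asking follow-ups, one more
# generation after "So the final answer is:" produces the returned answer.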

@@ -1,6 +1,6 @@
"""Integration test for self ask with search."""
from langchain.chains.react.base import ReActChain
from langchain.agents.react.base import ReActChain
from langchain.docstore.wikipedia import Wikipedia
from langchain.llms.openai import OpenAI

@@ -1,5 +1,5 @@
"""Integration test for self ask with search."""
from langchain.chains.self_ask_with_search.base import SelfAskWithSearchChain
from langchain.agents.self_ask_with_search.base import SelfAskWithSearchChain
from langchain.chains.serpapi import SerpAPIChain
from langchain.llms.openai import OpenAI

@@ -0,0 +1 @@
"""Test agent functionality."""

@@ -2,8 +2,9 @@
import pytest
from langchain.chains.mrkl.base import ChainConfig, MRKLChain, get_action_and_input
from langchain.chains.mrkl.prompt import BASE_TEMPLATE
from langchain.agents.mrkl.base import ZeroShotAgent, get_action_and_input
from langchain.agents.mrkl.prompt import BASE_TEMPLATE
from langchain.agents.tools import Tool
from langchain.prompts import PromptTemplate
from tests.unit_tests.llms.fake_llm import FakeLLM
@@ -29,7 +30,7 @@ def test_get_final_answer() -> None:
"Final Answer: 1994"
)
action, action_input = get_action_and_input(llm_output)
assert action == "Final Answer: "
assert action == "Final Answer"
assert action_input == "1994"
@@ -52,19 +53,15 @@ def test_bad_action_line() -> None:
def test_from_chains() -> None:
"""Test initializing from chains."""
chain_configs = [
ChainConfig(
action_name="foo", action=lambda x: "foo", action_description="foobar1"
),
ChainConfig(
action_name="bar", action=lambda x: "bar", action_description="foobar2"
),
Tool(name="foo", func=lambda x: "foo", description="foobar1"),
Tool(name="bar", func=lambda x: "bar", description="foobar2"),
]
mrkl_chain = MRKLChain.from_chains(FakeLLM(), chain_configs)
agent = ZeroShotAgent.from_llm_and_tools(FakeLLM(), chain_configs)
expected_tools_prompt = "foo: foobar1\nbar: foobar2"
expected_tool_names = "foo, bar"
expected_template = BASE_TEMPLATE.format(
tools=expected_tools_prompt, tool_names=expected_tool_names
)
prompt = mrkl_chain.prompt
prompt = agent.llm_chain.prompt
assert isinstance(prompt, PromptTemplate)
assert prompt.template == expected_template

@@ -4,8 +4,8 @@ from typing import Any, List, Mapping, Optional, Union
import pytest
from langchain.chains.llm import LLMChain
from langchain.chains.react.base import ReActChain, predict_until_observation
from langchain.agents.react.base import ReActChain, ReActDocstoreAgent
from langchain.agents.tools import Tool
from langchain.docstore.base import Docstore
from langchain.docstore.document import Document
from langchain.llms.base import LLM
@@ -51,33 +51,32 @@ class FakeDocstore(Docstore):
def test_predict_until_observation_normal() -> None:
"""Test predict_until_observation when observation is made normally."""
outputs = ["foo\nAction 1: search[foo]"]
outputs = ["foo\nAction 1: Search[foo]"]
fake_llm = FakeListLLM(outputs)
fake_llm_chain = LLMChain(llm=fake_llm, prompt=_FAKE_PROMPT)
ret_text, action, directive = predict_until_observation(fake_llm_chain, "", 1)
assert ret_text == outputs[0]
assert action == "search"
assert directive == "foo"
tools = [
Tool("Search", lambda x: x),
Tool("Lookup", lambda x: x),
]
agent = ReActDocstoreAgent.from_llm_and_tools(fake_llm, tools)
output = agent.get_action("")
assert output.log == outputs[0]
assert output.tool == "Search"
assert output.tool_input == "foo"
def test_predict_until_observation_repeat() -> None:
"""Test when no action is generated initially."""
outputs = ["foo", " search[foo]"]
outputs = ["foo", " Search[foo]"]
fake_llm = FakeListLLM(outputs)
fake_llm_chain = LLMChain(llm=fake_llm, prompt=_FAKE_PROMPT)
ret_text, action, directive = predict_until_observation(fake_llm_chain, "", 1)
assert ret_text == "foo\nAction 1: search[foo]"
assert action == "search"
assert directive == "foo"
def test_predict_until_observation_error() -> None:
"""Test handling of generation of text that cannot be parsed."""
outputs = ["foo\nAction 1: foo"]
fake_llm = FakeListLLM(outputs)
fake_llm_chain = LLMChain(llm=fake_llm, prompt=_FAKE_PROMPT)
with pytest.raises(ValueError):
predict_until_observation(fake_llm_chain, "", 1)
tools = [
Tool("Search", lambda x: x),
Tool("Lookup", lambda x: x),
]
agent = ReActDocstoreAgent.from_llm_and_tools(fake_llm, tools)
output = agent.get_action("")
assert output.log == "foo\nAction 1: Search[foo]"
assert output.tool == "Search"
assert output.tool_input == "foo"
def test_react_chain() -> None:
@@ -89,9 +88,8 @@ def test_react_chain() -> None:
]
fake_llm = FakeListLLM(responses)
react_chain = ReActChain(llm=fake_llm, docstore=FakeDocstore())
inputs = {"question": "when was langchain made"}
output = react_chain(inputs)
assert output["answer"] == "2022"
output = react_chain.run("when was langchain made")
assert output == "2022"
def test_react_chain_bad_action() -> None:
@@ -101,5 +99,5 @@ def test_react_chain_bad_action() -> None:
]
fake_llm = FakeListLLM(responses)
react_chain = ReActChain(llm=fake_llm, docstore=FakeDocstore())
with pytest.raises(ValueError):
with pytest.raises(KeyError):
react_chain.run("when was langchain made")