forked from Archives/langchain
Compare commits
14 Commits
main ... harrison/r
Author | SHA1 | Date
---|---|---
 | 178e8217a4 |
 | 314a098fb6 |
 | 505cb2eb62 |
 | 4ccb9b684a |
 | 4de8b089aa |
 | 1c6f64021d |
 | aba405a570 |
 | 71a0940435 |
 | 8c8eb47765 |
 | 68eaf4e5ee |
 | 2a84d3d5ca |
 | 45ce74d0bc |
 | 2a2d3323c9 |
 | 6f55fa8ba7 |
README.md (103 changed lines)
@@ -17,11 +17,6 @@ create a truly powerful app - the real power comes when you are able to
 combine them with other sources of computation or knowledge.
 
 This library is aimed at assisting in the development of those types of applications.
-It aims to create:
-
-1. a comprehensive collection of pieces you would ever want to combine
-2. a flexible interface for combining pieces into a single comprehensive "chain"
-3. a schema for easily saving and sharing those chains
 
 ## 📖 Documentation
 
@@ -31,78 +26,42 @@ Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documentation on:
 - Reference (full API docs)
 - Resources (high level explanation of core concepts)
 
-## 🚀 What can I do with this
+## 🚀 What can this help with?
 
-This project was largely inspired by a few projects seen on Twitter for which we thought it would make sense to have more explicit tooling. A lot of the initial functionality was done in an attempt to recreate those. Those are:
-
-**[Self-ask-with-search](https://ofir.io/self-ask.pdf)**
-
-To recreate this paper, use the following code snippet or check out the [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/demos/self_ask_with_search.ipynb).
-
-```python
-from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain
-
-llm = OpenAI(temperature=0)
-search = SerpAPIChain()
-
-self_ask_with_search = SelfAskWithSearchChain(llm=llm, search_chain=search)
-
-self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
-```
-
-**[LLM Math](https://twitter.com/amasad/status/1568824744367259648?s=20&t=-7wxpXBJinPgDuyHLouP1w)**
-
-To recreate this example, use the following code snippet or check out the [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/demos/llm_math.ipynb).
-
-```python
-from langchain import OpenAI, LLMMathChain
-
-llm = OpenAI(temperature=0)
-llm_math = LLMMathChain(llm=llm)
-
-llm_math.run("How many of the integers between 0 and 99 inclusive are divisible by 8?")
-```
-
-**Generic Prompting**
-
-You can also use this for simple prompting pipelines, as in the below example and this [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/demos/simple_prompts.ipynb).
-
-```python
-from langchain import PromptTemplate, OpenAI, LLMChain
-
-template = """Question: {question}
-
-Answer: Let's think step by step."""
-prompt = PromptTemplate(template=template, input_variables=["question"])
-llm = OpenAI(temperature=0)
-llm_chain = LLMChain(prompt=prompt, llm=llm)
-
-question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
-
-llm_chain.predict(question=question)
-```
-
-**Embed & Search Documents**
-
-We support two vector databases to store and search embeddings -- FAISS and Elasticsearch. Here's a code snippet showing how to use FAISS to store embeddings and search for text similar to a query. Both database backends are featured in this [example notebook](https://github.com/hwchase17/langchain/blob/master/docs/examples/integrations/embeddings.ipynb).
-
-```python
-from langchain.embeddings.openai import OpenAIEmbeddings
-from langchain.faiss import FAISS
-from langchain.text_splitter import CharacterTextSplitter
-
-with open('state_of_the_union.txt') as f:
-    state_of_the_union = f.read()
-text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
-texts = text_splitter.split_text(state_of_the_union)
-
-embeddings = OpenAIEmbeddings()
-
-docsearch = FAISS.from_texts(texts, embeddings)
-
-query = "What did the president say about Ketanji Brown Jackson"
-docs = docsearch.similarity_search(query)
-```
+There are three main areas (with a fourth coming soon) that LangChain is designed to help with.
+These are, in increasing order of complexity:
+1. LLM and Prompt usage
+2. Chaining LLMs with other tools in a deterministic manner
+3. Having a router LLM which uses other tools as needed
+4. (Coming Soon) Memory
+
+### LLMs and Prompts
+
+Calling out to an LLM once is pretty easy, with most of them being behind well documented APIs.
+However, there are still some challenges going from that to an application running in production that LangChain attempts to address:
+- Easy switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
+- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
+- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
+- More coming soon
+
+### Chains
+
+Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other tools.
+LangChain provides several parts to help with that:
+- Standard interface for working with Chains
+- Easy way to construct chains of LLMs
+- Lots of integrations with other tools that you may want to use in conjunction with LLMs (search, databases, Python REPL, etc)
+- End-to-end chains for common workflows (database question/answer, recursive summarization, etc)
+
+### Routing Chains
+
+Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input.
+In these types of chains, there is a "router" LLM chain which has access to a suite of tools.
+Depending on the user input, the router can then decide which, if any, of these tools to call.
+To help develop applications like these, LangChain provides:
+- Standard router and router chain interfaces
+- Common router LLM chains from literature
+- Common chains that can be used as tools
+
+### Memory
+
+Coming soon.
 
 ## 🤖 Developer Guide
 
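The "Chains" bullets in the new README text are abstract, so here is a minimal sketch of the second one ("easy way to construct chains of LLMs"). It uses `SimpleSequentialChain`, which is visible in the `langchain.chains.__init__` hunk later in this compare; the two prompts are made up for illustration, and the exact constructor signature is an assumption, not something this diff shows:

```python
from langchain import PromptTemplate, OpenAI, LLMChain
from langchain.chains import SimpleSequentialChain  # re-exported in langchain.chains, per the hunk below

llm = OpenAI(temperature=0)

# First link: propose a company name for a product (prompt text is illustrative).
name_prompt = PromptTemplate(
    template="What is a good name for a company that makes {product}?",
    input_variables=["product"],
)
name_chain = LLMChain(prompt=name_prompt, llm=llm)

# Second link: write a slogan for whatever name the first link produced.
slogan_prompt = PromptTemplate(
    template="Write a slogan for the company {company}.",
    input_variables=["company"],
)
slogan_chain = LLMChain(prompt=slogan_prompt, llm=llm)

# Chain them so the first output feeds the second input (signature assumed).
overall = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
overall.run("colorful socks")
```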
@@ -2,9 +2,24 @@ Demos
 =====
 
 The examples here are all end-to-end chains of specific applications.
+They are separated into normal chains and then routing chains.
 
 .. toctree::
    :maxdepth: 1
    :glob:
+   :caption: Chains
 
-   demos/*
+   demos/llm_math.ipynb
+   demos/map_reduce.ipynb
+   demos/simple_prompts.ipynb
+   demos/sqlite.ipynb
+   demos/vector_db_qa.ipynb
+
+.. toctree::
+   :maxdepth: 1
+   :glob:
+   :caption: Routing Chains
+
+   demos/mrkl.ipynb
+   demos/react.ipynb
+   demos/self_ask_with_search.ipynb
docs/examples/demos/custom_routing_chains.ipynb (new file, 183 lines)
# Custom Routing Chains

This covers how to implement a custom routing chain. That problem really reduces to how to implement a custom router. This also acts as a design doc of sorts for routers.

## Terminology

Before going through any code, let's align on some terminology.
- Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
- Tool Input: The input string to a tool.
- Observation: The output from calling a tool on a particular input.
- Router: The object responsible for deciding which tools to call and when. Exposes a `route` method, which takes in a string and returns a Router Output.
- Router Output: The object returned from calling `Router.route` on a string. Consists of:
  - The tool to use
  - The input to that tool
  - A log of the router's thinking.
- Routing Chain: A chain which is made up of a router and a suite of tools. When passed a string, the Routing Chain will iteratively call tools as needed until it arrives at a Final Answer.
- Final Answer: The final output of a Routing Chain.

## Router

A central piece of this chain is the router. The router is responsible for taking user input and deciding which tools, if any, to use. Although it doesn't necessarily have to be backed by a language model (LLM), for pretty much all current use cases it is. LLMs make great routers because they are really good at understanding human intent, which makes them perfect for choosing which tools to use (and for interpreting the output of those tools).

Below is the interface we expect routers to expose, along with the RouterOutput definition.

```python
class RouterOutput(NamedTuple):
    """Output of a router."""

    tool: str
    tool_input: str
    log: str


class Router(ABC):
    """Chain responsible for deciding the action to take."""

    @abstractmethod
    def route(self, text: str) -> RouterOutput:
        """Given input, decide how to route it.

        Args:
            text: input string

        Returns:
            RouterOutput specifying what tool to use.
        """

    @property
    @abstractmethod
    def observation_prefix(self) -> str:
        """Prefix to append the observation with before calling the router again."""

    @property
    @abstractmethod
    def router_prefix(self) -> str:
        """Prefix to prepend the router call with."""

    @property
    def finish_tool_name(self) -> str:
        """Name of the tool to use to finish the chain."""
        return "Final Answer"

    @property
    def starter_string(self) -> str:
        """Put this string after user input but before first router call."""
        return "\n"
```

In order to understand why the router interface is what it is, let's take a look at how it is used in the RoutingChain class:

```python
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
    # Construct a mapping of tool name to tool for easy lookup
    name_to_tool_map = {tc.tool_name: tc.tool for tc in self.tool_configs}
    # Construct the initial string to pass into the router. This is made up
    # of the user input, the special starter string, and then the router prefix.
    # The starter string is a special string that may be used by a router to
    # immediately follow the user input. The router prefix is a string that
    # prompts the router to start routing.
    starter_string = (
        inputs[self.input_key]
        + self.router.starter_string
        + self.router.router_prefix
    )
    # We use the ChainedInput class to iteratively add to the input over time.
    chained_input = ChainedInput(starter_string, verbose=self.verbose)
    # We construct a mapping from each tool to a color, used for logging.
    color_mapping = get_color_mapping(
        [c.tool_name for c in self.tool_configs], excluded_colors=["green"]
    )
    # We now enter the router loop (until it returns something).
    while True:
        # Call the router to see what to do.
        output = self.router.route(chained_input.input)
        # Add the log to the Chained Input.
        chained_input.add(output.log, color="green")
        # If the tool chosen is the finishing tool, then we end and return.
        if output.tool == self.router.finish_tool_name:
            return {self.output_key: output.tool_input}
        # Otherwise we look up the tool
        chain = name_to_tool_map[output.tool]
        # We then call the tool on the tool input to get an observation
        observation = chain(output.tool_input)
        # We then log the observation
        chained_input.add(f"\n{self.router.observation_prefix}")
        chained_input.add(observation, color=color_mapping[output.tool])
        # We then add the router prefix into the prompt to get the router to start
        # thinking, and start the loop all over.
        chained_input.add(f"\n{self.router.router_prefix}")
```

Once we have the custom router written, it is pretty easy to construct the routing chain (a minimal example router is sketched just below):

```python
tools: List[ToolConfig] = ...
router = CustomRouter(....)
routing_chain = RoutingChain(tools=tools, router=router, verbose=True)
```
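The notebook stops short of a concrete `route` implementation, so here is a hypothetical sketch of one (not part of this changeset). A real implementation would subclass the `Router` ABC above; its import path isn't shown in this diff, so the pieces are inlined to keep the sketch self-contained. It assumes an LLM chain that ends its reply with MRKL-style "Action:" / "Action Input:" lines, mirroring the parsing in the removed `chains.mrkl.base` module later in this compare:

```python
from typing import NamedTuple


class RouterOutput(NamedTuple):
    tool: str
    tool_input: str
    log: str


class ActionLineRouter:  # hypothetical example router
    """Router backed by an LLM chain that emits MRKL-style action lines."""

    def __init__(self, llm_chain):
        # Any object with .predict(input=..., stop=...) works, e.g. an LLMChain.
        self.llm_chain = llm_chain

    @property
    def observation_prefix(self) -> str:
        return "Observation: "

    @property
    def router_prefix(self) -> str:
        return "Thought:"

    @property
    def finish_tool_name(self) -> str:
        return "Final Answer"

    @property
    def starter_string(self) -> str:
        return "\n"

    def route(self, text: str) -> RouterOutput:
        # Stop before the model invents its own observation.
        log = self.llm_chain.predict(input=text, stop=["\nObservation"])
        lines = [line for line in log.split("\n") if line]
        if lines[-1].startswith("Final Answer: "):
            return RouterOutput(
                self.finish_tool_name, lines[-1][len("Final Answer: "):], log
            )
        # Expect "Action: <tool>" then "Action Input: <input>" as the final lines.
        tool = lines[-2][len("Action: "):]
        tool_input = lines[-1][len("Action Input: "):].strip(' "')
        return RouterOutput(tool, tool_input, log)
```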
docs/examples/demos/mrkl.ipynb

@@ -27,12 +27,12 @@
 "outputs": [],
 "source": [
 "from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain, SQLDatabase, SQLDatabaseChain\n",
-"from langchain.chains.mrkl.base import ChainConfig"
+"from langchain.routing_chains.mrkl.base import ChainConfig"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": 2,
 "id": "07e96d99",
 "metadata": {},
 "outputs": [],
@@ -64,7 +64,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": 3,
 "id": "a069c4b6",
 "metadata": {},
 "outputs": [],
@@ -74,7 +74,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 4,
 "id": "e603cd7d",
 "metadata": {},
 "outputs": [
@@ -86,32 +86,32 @@
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
 "What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?\n",
-"Thought:\u001b[102m I need to find the age of Olivia Wilde's boyfriend\n",
+"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Olivia Wilde's boyfriend\n",
 "Action: Search\n",
 "Action Input: \"Olivia Wilde's boyfriend\"\u001b[0m\n",
-"Observation: \u001b[104mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
-"Thought:\u001b[102m I need to find the age of Harry Styles\n",
+"Observation: \u001b[36;1m\u001b[1;3mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
+"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Harry Styles\n",
 "Action: Search\n",
 "Action Input: \"Harry Styles age\"\u001b[0m\n",
-"Observation: \u001b[104m28 years\u001b[0m\n",
-"Thought:\u001b[102m I need to calculate 28 to the 0.23 power\n",
+"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
+"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 to the 0.23 power\n",
 "Action: Calculator\n",
 "Action Input: 28^0.23\u001b[0m\n",
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
-"28^0.23\u001b[102m\n",
+"28^0.23\u001b[32;1m\u001b[1;3m\n",
 "\n",
 "```python\n",
 "print(28**0.23)\n",
 "```\n",
 "\u001b[0m\n",
-"Answer: \u001b[103m2.1520202182226886\n",
+"Answer: \u001b[33;1m\u001b[1;3m2.1520202182226886\n",
 "\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
-"Observation: \u001b[103mAnswer: 2.1520202182226886\n",
+"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
 "\u001b[0m\n",
-"Thought:\u001b[102m I now know the final answer\n",
+"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
 "Final Answer: 2.1520202182226886\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
@@ -122,7 +122,7 @@
 "'2.1520202182226886'"
 ]
 },
-"execution_count": 6,
+"execution_count": 4,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -133,7 +133,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": 5,
 "id": "a5c07010",
 "metadata": {},
 "outputs": [
@@ -145,35 +145,35 @@
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
 "Who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?\n",
-"Thought:\u001b[102m I need to find an album called 'The Storm Before the Calm'\n",
+"Thought:\u001b[32;1m\u001b[1;3m I need to find an album called 'The Storm Before the Calm'\n",
 "Action: Search\n",
 "Action Input: \"The Storm Before the Calm album\"\u001b[0m\n",
-"Observation: \u001b[104mThe Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis ...\u001b[0m\n",
-"Thought:\u001b[102m I need to check if Alanis is in the FooBar database\n",
+"Observation: \u001b[36;1m\u001b[1;3mThe Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis ...\u001b[0m\n",
+"Thought:\u001b[32;1m\u001b[1;3m I need to check if Alanis is in the FooBar database\n",
 "Action: FooBar DB\n",
 "Action Input: \"Does Alanis Morissette exist in the FooBar database?\"\u001b[0m\n",
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
 "Does Alanis Morissette exist in the FooBar database?\n",
-"SQLQuery:\u001b[102m SELECT * FROM Artist WHERE Name = 'Alanis Morissette'\u001b[0m\n",
-"SQLResult: \u001b[103m[(4, 'Alanis Morissette')]\u001b[0m\n",
-"Answer:\u001b[102m Yes\u001b[0m\n",
+"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT * FROM Artist WHERE Name = 'Alanis Morissette'\u001b[0m\n",
+"SQLResult: \u001b[33;1m\u001b[1;3m[(4, 'Alanis Morissette')]\u001b[0m\n",
+"Answer:\u001b[32;1m\u001b[1;3m Yes\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
-"Observation: \u001b[101m Yes\u001b[0m\n",
-"Thought:\u001b[102m I need to find out what albums of Alanis's are in the FooBar database\n",
+"Observation: \u001b[38;5;200m\u001b[1;3m Yes\u001b[0m\n",
+"Thought:\u001b[32;1m\u001b[1;3m I need to find out what albums of Alanis's are in the FooBar database\n",
 "Action: FooBar DB\n",
 "Action Input: \"What albums by Alanis Morissette are in the FooBar database?\"\u001b[0m\n",
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
 "What albums by Alanis Morissette are in the FooBar database?\n",
-"SQLQuery:\u001b[102m SELECT Title FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'Alanis Morissette')\u001b[0m\n",
-"SQLResult: \u001b[103m[('Jagged Little Pill',)]\u001b[0m\n",
-"Answer:\u001b[102m Jagged Little Pill\u001b[0m\n",
+"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Album.Title FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = \"Alanis Morissette\"\u001b[0m\n",
+"SQLResult: \u001b[33;1m\u001b[1;3m[('Jagged Little Pill',)]\u001b[0m\n",
+"Answer:\u001b[32;1m\u001b[1;3m Jagged Little Pill\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
-"Observation: \u001b[101m Jagged Little Pill\u001b[0m\n",
-"Thought:\u001b[102m I now know the final answer\n",
+"Observation: \u001b[38;5;200m\u001b[1;3m Jagged Little Pill\u001b[0m\n",
+"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
 "Final Answer: The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
@@ -184,7 +184,7 @@
 "'The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill'"
 ]
 },
-"execution_count": 10,
+"execution_count": 5,
 "metadata": {},
 "output_type": "execute_result"
 }
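Most of the churn in these output hunks is a logging-color change using standard ANSI SGR escape codes, not anything langchain-specific: the old background styles `\u001b[102m`/`\u001b[103m`/`\u001b[104m`/`\u001b[101m` become foreground styles such as `\u001b[32;1m\u001b[1;3m` (green, bold, italic), `\u001b[36;1m` (cyan), `\u001b[33;1m` (yellow), and `\u001b[38;5;200m` (a 256-color pink), with `\u001b[0m` resetting. A small illustration:

```python
# Prints a "Thought" line the way the new notebooks render it in a terminal.
GREEN_BOLD_ITALIC = "\u001b[32;1m\u001b[1;3m"  # SGR: green fg, bold, italic
RESET = "\u001b[0m"
print(f"Thought:{GREEN_BOLD_ITALIC} I need to find the age of Harry Styles{RESET}")
```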
docs/examples/demos/react.ipynb

@@ -37,14 +37,16 @@
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
 "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\n",
-"Thought 1:\u001b[102m I need to search David Chanoff and find the U.S. Navy admiral he\n",
-"collaborated with.\n",
+"Thought 1:\u001b[32;1m\u001b[1;3m I need to search David Chanoff and find the U.S. Navy admiral he collaborated\n",
+"with.\n",
 "Action 1: Search[David Chanoff]\u001b[0m\n",
-"Observation 1: \u001b[103mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
-"Thought 2:\u001b[102m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe.\n",
+"Observation 1: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
+"Thought 2:\u001b[32;1m\u001b[1;3m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe. I\n",
+"need to search him next.\n",
 "Action 2: Search[William J. Crowe]\u001b[0m\n",
-"Observation 2: \u001b[103mWilliam James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
-"Thought 3:\u001b[102m William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton. So the answer is Bill Clinton.\n",
+"Observation 2: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
+"Thought 3:\u001b[32;1m\u001b[1;3m William J. Crowe served as the ambassador to the United Kingdom under\n",
+"President Bill Clinton. So the answer is Bill Clinton.\n",
 "Action 3: Finish[Bill Clinton]\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
@@ -68,7 +70,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "0a6bd3b4",
+"id": "4ff64e81",
 "metadata": {},
 "outputs": [],
 "source": []
docs/examples/demos/routing_chains.ipynb (new file, 195 lines)

# Routing Chains

Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input. In these types of chains, there is a "router" LLM chain which has access to a suite of tools. Depending on the user input, the router can then decide which, if any, of these tools to call.

These types of chains are called Routing Chains. When used correctly these can be extremely powerful. The purpose of this notebook is to show you how to easily use routing chains through the simplest, highest level API. If you want more low level control over various components, check out the documentation for custom routing chains.

## Concepts

In order to understand routing chains, you should understand the following concepts:
- Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
- LLM: The language model responsible for doing the routing.
- RouterType: The type of the router to use. This should be a string (see more on the allowed router types below). Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported routers. If you want to implement a custom router, see the documentation for custom routing chains.

## Tools

When constructing your own Routing Chain, you will need to provide it with a list of tools that it can use. This is done with a list of Tools. The Tools are used not only to create the Routing Chain, but are also sometimes used to create the router itself (often, the router logic depends on the tools available). A toy custom tool is sketched just after this notebook.

```python
class Tool(NamedTuple):
    """Interface for tools."""

    name: str
    func: Callable[[str], str]
    description: Optional[str] = None
```

The two required components of a Tool are the name and then the tool itself. A tool description is optional, as it is needed for some routers but not all.

## Loading the chains

```python
# Import things that are needed generically
from langchain.routing_chains import load_routing_chain, Tool
from langchain.llms import OpenAI
```

```python
# Load the tool configs that are needed.
from langchain import LLMMathChain, SerpAPIChain
llm = OpenAI(temperature=0)
search = SerpAPIChain()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    )
]
```

```python
# Construct the routing chain. We will use the default router type here.
# See documentation for a full list of options.
router_llm = OpenAI(temperature=0)
chain = load_routing_chain(tools, router_llm, verbose=True)
```

```python
chain.run("What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?")
```

Output (ANSI color codes omitted):

    > Entering new chain...
    What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?
    Thought: I need to find the age of Olivia Wilde's boyfriend
    Action: Search
    Action Input: "Olivia Wilde's boyfriend"
    Observation: Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.
    Thought: I need to find the age of Harry Styles
    Action: Search
    Action Input: "Harry Styles age"
    Observation: 28 years
    Thought: I need to calculate 28 to the 0.23 power
    Action: Calculator
    Action Input: 28^0.23

    > Entering new chain...
    28^0.23

    print(28**0.23)

    Answer: 2.1520202182226886

    > Finished chain.

    Observation: Answer: 2.1520202182226886
    Thought: I now know the final answer
    Final Answer: 2.1520202182226886

    > Finished chain.

    '2.1520202182226886'
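Since a Tool's `func` is just `Callable[[str], str]`, plain Python functions plug in directly. A toy sketch (the `word_count` tool is made up for illustration; `Tool` and the `tools` list are from the notebook above):

```python
from langchain.routing_chains import Tool


def word_count(text: str) -> str:
    """Toy tool: string in, string out, as the Tool interface requires."""
    return str(len(text.split()))


tools.append(
    Tool(
        name="WordCounter",
        func=word_count,
        description="useful for when you need to count the words in some text",
    )
)
```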
docs/examples/demos/self_ask_with_search.ipynb

@@ -24,19 +24,19 @@
 "\n",
 "\u001b[1m> Entering new chain...\u001b[0m\n",
 "What is the hometown of the reigning men's U.S. Open champion?\n",
-"Are follow up questions needed here:\u001b[102m Yes.\n",
+"Are follow up questions needed here:\u001b[32;1m\u001b[1;3m Yes.\n",
 "Follow up: Who is the reigning men's U.S. Open champion?\u001b[0m\n",
-"Intermediate answer: \u001b[103mCarlos Alcaraz won the 2022 Men's single title while Poland's Iga Swiatek won the Women's single title defeating Tunisian's Ons Jabeur..\u001b[0m\u001b[102m\n",
-"Follow up: Where is Carlos Alcaraz from?\u001b[0m\n",
-"Intermediate answer: \u001b[103mEl Palmar, Murcia, Spain.\u001b[0m\u001b[102m\n",
-"So the final answer is: El Palmar, Murcia, Spain\u001b[0m\n",
+"Intermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz\u001b[0m\n",
+"\u001b[32;1m\u001b[1;3mFollow up: Where is Carlos Alcaraz from?\u001b[0m\n",
+"Intermediate answer: \u001b[36;1m\u001b[1;3mEl Palmar, Spain\u001b[0m\n",
+"\u001b[32;1m\u001b[1;3mSo the final answer is: El Palmar, Spain\u001b[0m\n",
 "\u001b[1m> Finished chain.\u001b[0m\n"
 ]
 },
 {
 "data": {
 "text/plain": [
-"'\\nSo the final answer is: El Palmar, Murcia, Spain'"
+"'El Palmar, Spain'"
 ]
 },
 "execution_count": 1,
docs/index.rst

@@ -8,12 +8,47 @@ create a truly powerful app - the real power comes when you are able to
 combine them with other sources of computation or knowledge.
 
 This library is aimed at assisting in the development of those types of applications.
-It aims to create:
-
-1. a comprehensive collection of pieces you would ever want to combine
-2. a flexible interface for combining pieces into a single comprehensive "chain"
-3. a schema for easily saving and sharing those chains
+
+There are three main areas (with a fourth coming soon) that LangChain is designed to help with.
+These are, in increasing order of complexity:
+1. LLM and Prompt usage
+2. Chaining LLMs with other tools in a deterministic manner
+3. Having a router LLM which uses other tools as needed
+4. (Coming Soon) Memory
+
+**LLMs and Prompts**
+
+Calling out to an LLM once is pretty easy, with most of them being behind well documented APIs.
+However, there are still some challenges going from that to an application running in production that LangChain attempts to address:
+- Easy switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
+- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
+- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
+- More coming soon
+
+**Chains**
+
+Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other tools.
+LangChain provides several parts to help with that:
+- Standard interface for working with Chains
+- Easy way to construct chains of LLMs
+- Lots of integrations with other tools that you may want to use in conjunction with LLMs (search, databases, Python REPL, etc)
+- End-to-end chains for common workflows (database question/answer, recursive summarization, etc)
+
+**Routing Chains**
+
+Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input.
+In these types of chains, there is a "router" LLM chain which has access to a suite of tools.
+Depending on the user input, the router can then decide which, if any, of these tools to call.
+To help develop applications like these, LangChain provides:
+- Standard router and router chain interfaces
+- Common router LLM chains from literature
+- Common chains that can be used as tools
+
+**Memory**
+
+Coming soon.
+
+Documentation Structure
+=======================
 The documentation is structured into the following sections:
 
 
@@ -62,6 +97,7 @@ common tasks or cool demos.
    modules/text_splitter
    modules/vectorstore
    modules/chains
+   modules/routing_chains
 
 
 Full API documentation. This is the place to look if you want to
docs/modules/routing_chains.rst (new file, 7 lines)

:mod:`langchain.routing_chains`
===============================

.. automodule:: langchain.routing_chains
   :members:
   :undoc-members:
langchain/__init__.py

@@ -8,10 +8,7 @@ with open(Path(__file__).absolute().parents[0] / "VERSION") as _f:
 from langchain.chains import (
     LLMChain,
     LLMMathChain,
-    MRKLChain,
     PythonChain,
-    ReActChain,
-    SelfAskWithSearchChain,
     SerpAPIChain,
     SQLDatabaseChain,
     VectorDBQA,
@@ -24,6 +21,7 @@ from langchain.prompts import (
     Prompt,
     PromptTemplate,
 )
+from langchain.routing_chains import MRKLChain, ReActChain, SelfAskWithSearchChain
 from langchain.sql_database import SQLDatabase
 from langchain.vectorstores import FAISS, ElasticVectorSearch
 
langchain/chains/__init__.py

@@ -1,10 +1,7 @@
 """Chains are easily reusable components which can be linked together."""
 from langchain.chains.llm import LLMChain
 from langchain.chains.llm_math.base import LLMMathChain
-from langchain.chains.mrkl.base import MRKLChain
 from langchain.chains.python import PythonChain
-from langchain.chains.react.base import ReActChain
-from langchain.chains.self_ask_with_search.base import SelfAskWithSearchChain
 from langchain.chains.sequential import SequentialChain, SimpleSequentialChain
 from langchain.chains.serpapi import SerpAPIChain
 from langchain.chains.sql_database.base import SQLDatabaseChain
@@ -14,11 +11,8 @@ __all__ = [
     "LLMChain",
     "LLMMathChain",
     "PythonChain",
-    "SelfAskWithSearchChain",
     "SerpAPIChain",
-    "ReActChain",
     "SQLDatabaseChain",
-    "MRKLChain",
     "VectorDBQA",
     "SequentialChain",
     "SimpleSequentialChain",
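Taken together, the two `__init__.py` hunks are a pure re-export move; a quick sanity check of what that means for user code (a sketch, not from the diff itself):

```python
# Top-level imports are unchanged by this refactor:
from langchain import MRKLChain, ReActChain, SelfAskWithSearchChain

# ...but the old deep paths are gone, since the modules are deleted below:
# from langchain.chains.mrkl.base import MRKLChain    # ModuleNotFoundError after this change
# from langchain.chains.react.base import ReActChain  # likewise
```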
langchain/chains/mrkl/base.py (deleted)

@@ -1,170 +0,0 @@
-"""Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf."""
-from typing import Any, Callable, Dict, List, NamedTuple, Tuple
-
-from pydantic import BaseModel, Extra
-
-from langchain.chains.base import Chain
-from langchain.chains.llm import LLMChain
-from langchain.chains.mrkl.prompt import BASE_TEMPLATE
-from langchain.input import ChainedInput, get_color_mapping
-from langchain.llms.base import LLM
-from langchain.prompts import BasePromptTemplate, PromptTemplate
-
-FINAL_ANSWER_ACTION = "Final Answer: "
-
-
-class ChainConfig(NamedTuple):
-    """Configuration for chain to use in MRKL system.
-
-    Args:
-        action_name: Name of the action.
-        action: Action function to call.
-        action_description: Description of the action.
-    """
-
-    action_name: str
-    action: Callable
-    action_description: str
-
-
-def get_action_and_input(llm_output: str) -> Tuple[str, str]:
-    """Parse out the action and input from the LLM output."""
-    ps = [p for p in llm_output.split("\n") if p]
-    if ps[-1].startswith(FINAL_ANSWER_ACTION):
-        directive = ps[-1][len(FINAL_ANSWER_ACTION) :]
-        return FINAL_ANSWER_ACTION, directive
-    if not ps[-1].startswith("Action Input: "):
-        raise ValueError(
-            "The last line does not have an action input, "
-            "something has gone terribly wrong."
-        )
-    if not ps[-2].startswith("Action: "):
-        raise ValueError(
-            "The second to last line does not have an action, "
-            "something has gone terribly wrong."
-        )
-    action = ps[-2][len("Action: ") :]
-    action_input = ps[-1][len("Action Input: ") :]
-    return action, action_input.strip(" ").strip('"')
-
-
-class MRKLChain(Chain, BaseModel):
-    """Chain that implements the MRKL system.
-
-    Example:
-        .. code-block:: python
-
-            from langchain import OpenAI, Prompt, MRKLChain
-            from langchain.chains.mrkl.base import ChainConfig
-            llm = OpenAI(temperature=0)
-            prompt = PromptTemplate(...)
-            action_to_chain_map = {...}
-            mrkl = MRKLChain(
-                llm=llm,
-                prompt=prompt,
-                action_to_chain_map=action_to_chain_map
-            )
-    """
-
-    llm: LLM
-    """LLM wrapper to use as router."""
-    prompt: BasePromptTemplate
-    """Prompt to use as router."""
-    action_to_chain_map: Dict[str, Callable]
-    """Mapping from action name to chain to execute."""
-    input_key: str = "question"  #: :meta private:
-    output_key: str = "answer"  #: :meta private:
-
-    @classmethod
-    def from_chains(
-        cls, llm: LLM, chains: List[ChainConfig], **kwargs: Any
-    ) -> "MRKLChain":
-        """User friendly way to initialize the MRKL chain.
-
-        This is intended to be an easy way to get up and running with the
-        MRKL chain.
-
-        Args:
-            llm: The LLM to use as the router LLM.
-            chains: The chains the MRKL system has access to.
-            **kwargs: parameters to be passed to initialization.
-
-        Returns:
-            An initialized MRKL chain.
-
-        Example:
-            .. code-block:: python
-
-                from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain
-                from langchain.chains.mrkl.base import ChainConfig
-                llm = OpenAI(temperature=0)
-                search = SerpAPIChain()
-                llm_math_chain = LLMMathChain(llm=llm)
-                chains = [
-                    ChainConfig(
-                        action_name = "Search",
-                        action=search.search,
-                        action_description="useful for searching"
-                    ),
-                    ChainConfig(
-                        action_name="Calculator",
-                        action=llm_math_chain.run,
-                        action_description="useful for doing math"
-                    )
-                ]
-                mrkl = MRKLChain.from_chains(llm, chains)
-        """
-        tools = "\n".join(
-            [f"{chain.action_name}: {chain.action_description}" for chain in chains]
-        )
-        tool_names = ", ".join([chain.action_name for chain in chains])
-        template = BASE_TEMPLATE.format(tools=tools, tool_names=tool_names)
-        prompt = PromptTemplate(template=template, input_variables=["input"])
-        action_to_chain_map = {chain.action_name: chain.action for chain in chains}
-        return cls(
-            llm=llm, prompt=prompt, action_to_chain_map=action_to_chain_map, **kwargs
-        )
-
-    class Config:
-        """Configuration for this pydantic object."""
-
-        extra = Extra.forbid
-        arbitrary_types_allowed = True
-
-    @property
-    def input_keys(self) -> List[str]:
-        """Expect input key.
-
-        :meta private:
-        """
-        return [self.input_key]
-
-    @property
-    def output_keys(self) -> List[str]:
-        """Expect output key.
-
-        :meta private:
-        """
-        return [self.output_key]
-
-    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
-        llm_chain = LLMChain(llm=self.llm, prompt=self.prompt)
-        chained_input = ChainedInput(
-            f"{inputs[self.input_key]}\nThought:", verbose=self.verbose
-        )
-        color_mapping = get_color_mapping(
-            list(self.action_to_chain_map.keys()), excluded_colors=["green"]
-        )
-        while True:
-            thought = llm_chain.predict(
-                input=chained_input.input, stop=["\nObservation"]
-            )
-            chained_input.add(thought, color="green")
-            action, action_input = get_action_and_input(thought)
-            if action == FINAL_ANSWER_ACTION:
-                return {self.output_key: action_input}
-            chain = self.action_to_chain_map[action]
-            ca = chain(action_input)
-            chained_input.add("\nObservation: ")
-            chained_input.add(ca, color=color_mapping[action])
-            chained_input.add("\nThought:")
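To make the parsing contract of the removed `get_action_and_input` concrete, here is the text shape it expected (the sample string is illustrative; the behavior follows directly from the code above):

```python
llm_output = """ I need to find the age of Harry Styles
Action: Search
Action Input: "Harry Styles age\""""

# get_action_and_input(llm_output) would return:
#   ("Search", "Harry Styles age")
# because the last non-empty line starts with "Action Input: " (surrounding
# quotes are stripped) and the second-to-last starts with "Action: ".
```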
langchain/chains/react/base.py (deleted)

@@ -1,107 +0,0 @@
-"""Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf."""
-import re
-from typing import Any, Dict, List, Tuple
-
-from pydantic import BaseModel, Extra
-
-from langchain.chains.base import Chain
-from langchain.chains.llm import LLMChain
-from langchain.chains.react.prompt import PROMPT
-from langchain.docstore.base import Docstore
-from langchain.docstore.document import Document
-from langchain.input import ChainedInput
-from langchain.llms.base import LLM
-
-
-def predict_until_observation(
-    llm_chain: LLMChain, prompt: str, i: int
-) -> Tuple[str, str, str]:
-    """Generate text until an observation is needed."""
-    action_prefix = f"Action {i}: "
-    stop_seq = f"\nObservation {i}:"
-    ret_text = llm_chain.predict(input=prompt, stop=[stop_seq])
-    # Sometimes the LLM forgets to take an action, so we prompt it to.
-    while not ret_text.split("\n")[-1].startswith(action_prefix):
-        ret_text += f"\nAction {i}:"
-        new_text = llm_chain.predict(input=prompt + ret_text, stop=[stop_seq])
-        ret_text += new_text
-    # The action block should be the last line.
-    action_block = ret_text.split("\n")[-1]
-    action_str = action_block[len(action_prefix) :]
-    # Parse out the action and the directive.
-    re_matches = re.search(r"(.*?)\[(.*?)\]", action_str)
-    if re_matches is None:
-        raise ValueError(f"Could not parse action directive: {action_str}")
-    return ret_text, re_matches.group(1), re_matches.group(2)
-
-
-class ReActChain(Chain, BaseModel):
-    """Chain that implements the ReAct paper.
-
-    Example:
-        .. code-block:: python
-
-            from langchain import ReActChain, OpenAI
-            react = ReAct(llm=OpenAI())
-    """
-
-    llm: LLM
-    """LLM wrapper to use."""
-    docstore: Docstore
-    """Docstore to use."""
-    input_key: str = "question"  #: :meta private:
-    output_key: str = "answer"  #: :meta private:
-
-    class Config:
-        """Configuration for this pydantic object."""
-
-        extra = Extra.forbid
-        arbitrary_types_allowed = True
-
-    @property
-    def input_keys(self) -> List[str]:
-        """Expect input key.
-
-        :meta private:
-        """
-        return [self.input_key]
-
-    @property
-    def output_keys(self) -> List[str]:
-        """Expect output key.
-
-        :meta private:
-        """
-        return [self.output_key]
-
-    def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
-        question = inputs[self.input_key]
-        llm_chain = LLMChain(llm=self.llm, prompt=PROMPT)
-        chained_input = ChainedInput(f"{question}\nThought 1:", verbose=self.verbose)
-        i = 1
-        document = None
-        while True:
-            ret_text, action, directive = predict_until_observation(
-                llm_chain, chained_input.input, i
-            )
-            chained_input.add(ret_text, color="green")
-            if action == "Search":
-                result = self.docstore.search(directive)
-                if isinstance(result, Document):
-                    document = result
-                    observation = document.summary
-                else:
-                    document = None
-                    observation = result
-            elif action == "Lookup":
-                if document is None:
-                    raise ValueError("Cannot lookup without a successful search first")
-                observation = document.lookup(directive)
-            elif action == "Finish":
-                return {self.output_key: directive}
-            else:
-                raise ValueError(f"Got unknown action directive: {action}")
-            chained_input.add(f"\nObservation {i}: ")
-            chained_input.add(observation, color="yellow")
-            chained_input.add(f"\nThought {i + 1}:")
-            i += 1
langchain/chains/self_ask_with_search/base.py (deleted, 149 lines)
@@ -1,149 +0,0 @@
"""Chain that does self ask with search."""
from typing import Any, Dict, List

from pydantic import BaseModel, Extra

from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.self_ask_with_search.prompt import PROMPT
from langchain.chains.serpapi import SerpAPIChain
from langchain.input import ChainedInput
from langchain.llms.base import LLM


def extract_answer(generated: str) -> str:
    """Extract answer from text."""
    if "\n" not in generated:
        last_line = generated
    else:
        last_line = generated.split("\n")[-1]

    if ":" not in last_line:
        after_colon = last_line
    else:
        after_colon = generated.split(":")[-1]

    if " " == after_colon[0]:
        after_colon = after_colon[1:]
    if "." == after_colon[-1]:
        after_colon = after_colon[:-1]

    return after_colon


def extract_question(generated: str, followup: str) -> str:
    """Extract question from text."""
    if "\n" not in generated:
        last_line = generated
    else:
        last_line = generated.split("\n")[-1]

    if followup not in last_line:
        print("we probably should never get here..." + generated)

    if ":" not in last_line:
        after_colon = last_line
    else:
        after_colon = generated.split(":")[-1]

    if " " == after_colon[0]:
        after_colon = after_colon[1:]
    if "?" != after_colon[-1]:
        print("we probably should never get here..." + generated)

    return after_colon


def get_last_line(generated: str) -> str:
    """Get the last line in text."""
    if "\n" not in generated:
        last_line = generated
    else:
        last_line = generated.split("\n")[-1]

    return last_line


def greenify(_input: str) -> str:
    """Add green highlighting to text."""
    return "\x1b[102m" + _input + "\x1b[0m"


def yellowfy(_input: str) -> str:
    """Add yellow highlighting to text."""
    return "\x1b[106m" + _input + "\x1b[0m"


class SelfAskWithSearchChain(Chain, BaseModel):
    """Chain that does self ask with search.

    Example:
        .. code-block:: python

            from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain
            search_chain = SerpAPIChain()
            self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
    """

    llm: LLM
    """LLM wrapper to use."""
    search_chain: SerpAPIChain
    """Search chain to use."""
    input_key: str = "question"  #: :meta private:
    output_key: str = "answer"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Expect output key.

        :meta private:
        """
        return [self.output_key]

    def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        chained_input = ChainedInput(inputs[self.input_key], verbose=self.verbose)
        chained_input.add("\nAre follow up questions needed here:")
        llm_chain = LLMChain(llm=self.llm, prompt=PROMPT)
        intermediate = "\nIntermediate answer:"
        followup = "Follow up:"
        finalans = "\nSo the final answer is:"
        ret_text = llm_chain.predict(input=chained_input.input, stop=[intermediate])
        chained_input.add(ret_text, color="green")
        while followup in get_last_line(ret_text):
            question = extract_question(ret_text, followup)
            external_answer = self.search_chain.run(question)
            if external_answer is not None:
                chained_input.add(intermediate + " ")
                chained_input.add(external_answer + ".", color="yellow")
                ret_text = llm_chain.predict(
                    input=chained_input.input, stop=["\nIntermediate answer:"]
                )
                chained_input.add(ret_text, color="green")
            else:
                # We only get here in the very rare case that Google returns no answer.
                chained_input.add(intermediate + " ")
                preds = llm_chain.predict(
                    input=chained_input.input, stop=["\n" + followup, finalans]
                )
                chained_input.add(preds, color="green")

        if finalans not in ret_text:
            chained_input.add(finalans)
            ret_text = llm_chain.predict(input=chained_input.input, stop=["\n"])
            chained_input.add(ret_text, color="green")

        return {self.output_key: ret_text}
langchain/routing_chains/__init__.py (new file, 18 lines)
@@ -0,0 +1,18 @@
"""Routing chains."""
from langchain.routing_chains.loading import load_routing_chain
from langchain.routing_chains.mrkl.base import MRKLChain
from langchain.routing_chains.react.base import ReActChain
from langchain.routing_chains.router import LLMRouter
from langchain.routing_chains.routing_chain import RoutingChain
from langchain.routing_chains.self_ask_with_search.base import SelfAskWithSearchChain
from langchain.routing_chains.tools import Tool

__all__ = [
    "MRKLChain",
    "SelfAskWithSearchChain",
    "ReActChain",
    "LLMRouter",
    "RoutingChain",
    "Tool",
    "load_routing_chain",
]
langchain/routing_chains/loading.py (new file, 43 lines)
@@ -0,0 +1,43 @@
"""Load routing chains."""
from typing import Any, List

from langchain.llms.base import LLM
from langchain.routing_chains.mrkl.base import ZeroShotRouter
from langchain.routing_chains.react.base import ReActDocstoreRouter
from langchain.routing_chains.routing_chain import RoutingChain
from langchain.routing_chains.self_ask_with_search.base import SelfAskWithSearchRouter
from langchain.routing_chains.tools import Tool

ROUTER_TYPE_TO_CLASS = {
    "zero-shot-react-description": ZeroShotRouter,
    "react-docstore": ReActDocstoreRouter,
    "self-ask-with-search": SelfAskWithSearchRouter,
}


def load_routing_chain(
    tools: List[Tool],
    llm: LLM,
    router_type: str = "zero-shot-react-description",
    **kwargs: Any,
) -> RoutingChain:
    """Load routing chain given tools and LLM.

    Args:
        tools: List of tools this routing chain has access to.
        llm: Language model to use as the router.
        router_type: The router to use. Valid options are:
            `zero-shot-react-description`, `react-docstore`,
            `self-ask-with-search`.
        **kwargs: Additional keyword arguments to pass to the routing chain.

    Returns:
        A routing chain.
    """
    if router_type not in ROUTER_TYPE_TO_CLASS:
        raise ValueError(
            f"Got unknown router type: {router_type}. "
            f"Valid types are: {ROUTER_TYPE_TO_CLASS.keys()}."
        )
    router_cls = ROUTER_TYPE_TO_CLASS[router_type]
    router = router_cls.from_llm_and_tools(llm, tools)
    return RoutingChain(router=router, tools=tools, **kwargs)
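For orientation (not part of the diff), a minimal sketch of loading a routing chain, assuming an `OpenAI` LLM and a `SerpAPIChain` configured as in the docstring examples elsewhere in this changeset:

from langchain.chains.serpapi import SerpAPIChain
from langchain.llms.openai import OpenAI
from langchain.routing_chains import load_routing_chain
from langchain.routing_chains.tools import Tool

llm = OpenAI(temperature=0)
search = SerpAPIChain()
# The zero-shot router picks among tools based on their descriptions.
tools = [Tool(name="Search", func=search.run, description="useful for searching")]
chain = load_routing_chain(tools, llm, router_type="zero-shot-react-description")
answer = chain.run("When was the self-ask paper published?")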
langchain/routing_chains/mrkl/base.py (new file, 176 lines)
@@ -0,0 +1,176 @@
"""Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf."""
from typing import Any, Callable, List, NamedTuple, Optional, Tuple

from langchain.chains.llm import LLMChain
from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.routing_chains.mrkl.prompt import BASE_TEMPLATE
from langchain.routing_chains.router import LLMRouter
from langchain.routing_chains.routing_chain import RoutingChain
from langchain.routing_chains.tools import Tool

FINAL_ANSWER_ACTION = "Final Answer: "


class ChainConfig(NamedTuple):
    """Configuration for chain to use in MRKL system.

    Args:
        action_name: Name of the action.
        action: Action function to call.
        action_description: Description of the action.
    """

    action_name: str
    action: Callable
    action_description: str


def get_action_and_input(llm_output: str) -> Tuple[str, str]:
    """Parse out the action and input from the LLM output."""
    ps = [p for p in llm_output.split("\n") if p]
    if ps[-1].startswith("Final Answer"):
        directive = ps[-1][len(FINAL_ANSWER_ACTION) :]
        return "Final Answer", directive
    if not ps[-1].startswith("Action Input: "):
        raise ValueError(
            "The last line does not have an action input, "
            "something has gone terribly wrong."
        )
    if not ps[-2].startswith("Action: "):
        raise ValueError(
            "The second to last line does not have an action, "
            "something has gone terribly wrong."
        )
    action = ps[-2][len("Action: ") :]
    action_input = ps[-1][len("Action Input: ") :]
    return action, action_input.strip(" ").strip('"')


class ZeroShotRouter(LLMRouter):
    """Router for the MRKL chain."""

    @property
    def observation_prefix(self) -> str:
        """Prefix to append the observation with."""
        return "Observation: "

    @property
    def router_prefix(self) -> str:
        """Prefix to append the router call with."""
        return "Thought:"

    @classmethod
    def from_llm_and_tools(cls, llm: LLM, tools: List[Tool]) -> "ZeroShotRouter":
        """Construct a router from an LLM and tools."""
        tool_strings = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
        tool_names = ", ".join([tool.name for tool in tools])
        template = BASE_TEMPLATE.format(tools=tool_strings, tool_names=tool_names)
        prompt = PromptTemplate(template=template, input_variables=["input"])
        llm_chain = LLMChain(llm=llm, prompt=prompt)
        return cls(llm_chain=llm_chain)

    def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
        return get_action_and_input(text)


class MRKLChain(RoutingChain):
    """Chain that implements the MRKL system.

    Example:
        .. code-block:: python

            from langchain import OpenAI, MRKLChain
            from langchain.routing_chains.mrkl.base import ChainConfig
            llm = OpenAI(temperature=0)
            chains = [...]
            mrkl = MRKLChain.from_chains(llm=llm, chains=chains)
    """

    @classmethod
    def from_chains(
        cls, llm: LLM, chains: List[ChainConfig], **kwargs: Any
    ) -> "MRKLChain":
        """User friendly way to initialize the MRKL chain.

        This is intended to be an easy way to get up and running with the
        MRKL chain.

        Args:
            llm: The LLM to use as the router LLM.
            chains: The chains the MRKL system has access to.
            **kwargs: parameters to be passed to initialization.

        Returns:
            An initialized MRKL chain.

        Example:
            .. code-block:: python

                from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain
                from langchain.routing_chains.mrkl.base import ChainConfig
                llm = OpenAI(temperature=0)
                search = SerpAPIChain()
                llm_math_chain = LLMMathChain(llm=llm)
                chains = [
                    ChainConfig(
                        action_name="Search",
                        action=search.search,
                        action_description="useful for searching"
                    ),
                    ChainConfig(
                        action_name="Calculator",
                        action=llm_math_chain.run,
                        action_description="useful for doing math"
                    )
                ]
                mrkl = MRKLChain.from_chains(llm, chains)
        """
        tools = [
            Tool(name=c.action_name, func=c.action, description=c.action_description)
            for c in chains
        ]
        return cls.from_tools_and_llm(tools, llm, **kwargs)

    @classmethod
    def from_tools_and_llm(
        cls, tools: List[Tool], llm: LLM, **kwargs: Any
    ) -> "MRKLChain":
        """User friendly way to initialize the MRKL chain.

        This is intended to be an easy way to get up and running with the
        MRKL chain.

        Args:
            tools: The tools the MRKL system has access to.
            llm: The LLM to use as the router LLM.
            **kwargs: parameters to be passed to initialization.

        Returns:
            An initialized MRKL chain.

        Example:
            .. code-block:: python

                from langchain import LLMMathChain, OpenAI, SerpAPIChain, MRKLChain
                from langchain.routing_chains.tools import Tool
                llm = OpenAI(temperature=0)
                search = SerpAPIChain()
                llm_math_chain = LLMMathChain(llm=llm)
                tools = [
                    Tool(
                        name="Search",
                        func=search.search,
                        description="useful for searching"
                    ),
                    Tool(
                        name="Calculator",
                        func=llm_math_chain.run,
                        description="useful for doing math"
                    )
                ]
                mrkl = MRKLChain.from_tools_and_llm(tools, llm)
        """
        router = ZeroShotRouter.from_llm_and_tools(llm, tools)
        return cls(router=router, tools=tools, **kwargs)
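To make the parsing contract concrete, an illustrative check of `get_action_and_input` (the example strings are hypothetical; the expected format comes from BASE_TEMPLATE, and the final-answer case matches the unit test further down):

llm_output = "Thought: I need to search\nAction: Search\nAction Input: langchain"
action, action_input = get_action_and_input(llm_output)
assert action == "Search"
assert action_input == "langchain"

# A terminal answer is signalled by the "Final Answer: " prefix instead.
action, directive = get_action_and_input("Final Answer: 1994")
assert action == "Final Answer"
assert directive == "1994"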
langchain/routing_chains/react/base.py (new file, 116 lines)
@@ -0,0 +1,116 @@
"""Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf."""
import re
from typing import Any, List, Optional, Tuple

from pydantic import BaseModel

from langchain.chains.llm import LLMChain
from langchain.docstore.base import Docstore
from langchain.docstore.document import Document
from langchain.llms.base import LLM
from langchain.routing_chains.react.prompt import PROMPT
from langchain.routing_chains.router import LLMRouter
from langchain.routing_chains.routing_chain import RoutingChain
from langchain.routing_chains.tools import Tool


class ReActDocstoreRouter(LLMRouter, BaseModel):
    """Router for the ReAct chain."""

    i: int = 1

    @classmethod
    def from_llm_and_tools(cls, llm: LLM, tools: List[Tool]) -> "ReActDocstoreRouter":
        """Construct a router from an LLM and tools."""
        if len(tools) != 2:
            raise ValueError(f"Exactly two tools must be specified, but got {tools}")
        tool_names = {tool.name for tool in tools}
        if tool_names != {"Lookup", "Search"}:
            raise ValueError(
                f"Tool names should be Lookup and Search, got {tool_names}"
            )

        llm_chain = LLMChain(llm=llm, prompt=PROMPT)
        return cls(llm_chain=llm_chain)

    def _fix_text(self, text: str) -> str:
        return text + f"\nAction {self.i}:"

    def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
        action_prefix = f"Action {self.i}: "
        if not text.split("\n")[-1].startswith(action_prefix):
            return None
        self.i += 1
        action_block = text.split("\n")[-1]

        action_str = action_block[len(action_prefix) :]
        # Parse out the action and the directive.
        re_matches = re.search(r"(.*?)\[(.*?)\]", action_str)
        if re_matches is None:
            raise ValueError(f"Could not parse action directive: {action_str}")
        return re_matches.group(1), re_matches.group(2)

    @property
    def finish_tool_name(self) -> str:
        """Name of the tool to use to finish the chain."""
        return "Finish"

    @property
    def observation_prefix(self) -> str:
        """Prefix to append the observation with."""
        return f"Observation {self.i - 1}: "

    @property
    def _stop(self) -> List[str]:
        return [f"\nObservation {self.i}: "]

    @property
    def router_prefix(self) -> str:
        """Prefix to append the router call with."""
        return f"Thought {self.i}:"


class DocstoreExplorer:
    """Class to assist with exploration of a document store."""

    def __init__(self, docstore: Docstore):
        """Initialize with a docstore, and set initial document to None."""
        self.docstore = docstore
        self.document: Optional[Document] = None

    def search(self, term: str) -> str:
        """Search for a term in the docstore, and if found save."""
        result = self.docstore.search(term)
        if isinstance(result, Document):
            self.document = result
            return self.document.summary
        else:
            self.document = None
            return result

    def lookup(self, term: str) -> str:
        """Lookup a term in document (if saved)."""
        if self.document is None:
            raise ValueError("Cannot lookup without a successful search first")
        return self.document.lookup(term)


class ReActChain(RoutingChain):
    """Chain that implements the ReAct paper.

    Example:
        .. code-block:: python

            from langchain import ReActChain, OpenAI
            from langchain.docstore.wikipedia import Wikipedia
            react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
    """

    def __init__(self, llm: LLM, docstore: Docstore, **kwargs: Any):
        """Initialize with the LLM and a docstore."""
        docstore_explorer = DocstoreExplorer(docstore)
        tools = [
            Tool(name="Search", func=docstore_explorer.search),
            Tool(name="Lookup", func=docstore_explorer.lookup),
        ]
        router = ReActDocstoreRouter.from_llm_and_tools(llm, tools)
        super().__init__(router=router, tools=tools, **kwargs)
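A short sketch of the `DocstoreExplorer` flow, assuming the `Wikipedia` docstore used in the integration tests (not part of the diff; the search term is illustrative):

from langchain.docstore.wikipedia import Wikipedia
from langchain.routing_chains.react.base import DocstoreExplorer

explorer = DocstoreExplorer(Wikipedia())
# search() caches the found Document so that lookup() can consult it;
# calling lookup() before a successful search raises ValueError.
summary = explorer.search("LangChain")
detail = explorer.lookup("agents")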
langchain/routing_chains/router.py (new file, 97 lines)
@@ -0,0 +1,97 @@
"""Chain that takes in an input and produces an action and action input."""
from abc import ABC, abstractmethod
from typing import List, NamedTuple, Optional, Tuple

from pydantic import BaseModel

from langchain.chains.llm import LLMChain
from langchain.llms.base import LLM
from langchain.routing_chains.tools import Tool


class RouterOutput(NamedTuple):
    """Output of a router."""

    tool: str
    tool_input: str
    log: str


class Router(ABC):
    """Chain responsible for deciding the action to take."""

    @abstractmethod
    def route(self, text: str) -> RouterOutput:
        """Given input, decide how to route it.

        Args:
            text: input string

        Returns:
            RouterOutput specifying what tool to use.
        """

    @property
    @abstractmethod
    def observation_prefix(self) -> str:
        """Prefix to append the observation with."""

    @property
    @abstractmethod
    def router_prefix(self) -> str:
        """Prefix to append the router call with."""

    @property
    def finish_tool_name(self) -> str:
        """Name of the tool to use to finish the chain."""
        return "Final Answer"

    @property
    def starter_string(self) -> str:
        """Put this string after user input but before first router call."""
        return "\n"


class LLMRouter(Router, BaseModel, ABC):
    """Router that uses an LLM."""

    llm_chain: LLMChain

    @abstractmethod
    def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]:
        """Extract tool and tool input from llm output."""

    def _fix_text(self, text: str) -> str:
        """Fix the text."""
        raise ValueError("fix_text not implemented for this router.")

    @property
    def _stop(self) -> List[str]:
        return [f"\n{self.observation_prefix}"]

    @classmethod
    @abstractmethod
    def from_llm_and_tools(cls, llm: LLM, tools: List[Tool]) -> "Router":
        """Construct a router from an LLM and tools."""

    def route(self, text: str) -> RouterOutput:
        """Given input, decide how to route it.

        Args:
            text: input string

        Returns:
            RouterOutput specifying what tool to use.
        """
        input_key = self.llm_chain.input_keys[0]
        inputs = {input_key: text, "stop": self._stop}
        full_output = self.llm_chain.predict(**inputs)
        parsed_output = self._extract_tool_and_input(full_output)
        while parsed_output is None:
            full_output = self._fix_text(full_output)
            inputs = {input_key: text + full_output, "stop": self._stop}
            output = self.llm_chain.predict(**inputs)
            full_output += output
            parsed_output = self._extract_tool_and_input(full_output)
        tool, tool_input = parsed_output
        return RouterOutput(tool, tool_input, full_output)
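The `route` contract in miniature (a hypothetical sketch; `router` stands for any concrete Router such as the ZeroShotRouter above, and the question string is illustrative):

output = router.route("When was LangChain made?\nThought:")
# output.log is the raw LLM text; output.tool and output.tool_input say what to run.
if output.tool == router.finish_tool_name:
    # The finishing tool ("Final Answer" by default) short-circuits the loop.
    answer = output.tool_input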
langchain/routing_chains/routing_chain.py (new file, 81 lines)
@@ -0,0 +1,81 @@
"""Router-Expert framework."""
from typing import Dict, List

from pydantic import BaseModel, Extra

from langchain.chains.base import Chain
from langchain.input import ChainedInput, get_color_mapping
from langchain.routing_chains.router import Router
from langchain.routing_chains.tools import Tool


class RoutingChain(Chain, BaseModel):
    """Chain that uses a router to use tools."""

    router: Router
    """Router to use."""
    tools: List[Tool]
    """Tools this chain has access to."""
    input_key: str = "question"  #: :meta private:
    output_key: str = "answer"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Expect output key.

        :meta private:
        """
        return [self.output_key]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        # Construct a mapping of tool name to tool for easy lookup.
        name_to_tool_map = {tool.name: tool.func for tool in self.tools}
        # Construct the initial string to pass into the router. This is made up
        # of the user input, the special starter string, and then the router prefix.
        # The starter string is a special string that may be used by a router to
        # immediately follow the user input. The router prefix is a string that
        # prompts the router to start routing.
        starter_string = (
            inputs[self.input_key]
            + self.router.starter_string
            + self.router.router_prefix
        )
        # We use the ChainedInput class to iteratively add to the input over time.
        chained_input = ChainedInput(starter_string, verbose=self.verbose)
        # We construct a mapping from each tool to a color, used for logging.
        color_mapping = get_color_mapping(
            [tool.name for tool in self.tools], excluded_colors=["green"]
        )
        # We now enter the router loop (until it returns something).
        while True:
            # Call the router to see what to do.
            output = self.router.route(chained_input.input)
            # Add the log to the Chained Input.
            chained_input.add(output.log, color="green")
            # If the tool chosen is the finishing tool, then we end and return.
            if output.tool == self.router.finish_tool_name:
                return {self.output_key: output.tool_input}
            # Otherwise we look up the tool.
            chain = name_to_tool_map[output.tool]
            # We then call the tool on the tool input to get an observation.
            observation = chain(output.tool_input)
            # We then log the observation.
            chained_input.add(f"\n{self.router.observation_prefix}")
            chained_input.add(observation, color=color_mapping[output.tool])
            # We then add the router prefix into the prompt to get the router to start
            # thinking, and start the loop all over.
            chained_input.add(f"\n{self.router.router_prefix}")
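Putting the pieces together, a hedged sketch of wiring a `RoutingChain` by hand (assumes `llm` and `search` objects as in the MRKL docstring examples; `load_routing_chain` above does the same thing in one call):

from langchain.routing_chains.mrkl.base import ZeroShotRouter
from langchain.routing_chains.routing_chain import RoutingChain
from langchain.routing_chains.tools import Tool

tools = [Tool(name="Search", func=search.run, description="useful for searching")]
router = ZeroShotRouter.from_llm_and_tools(llm, tools)
# verbose=True prints the color-coded thought/observation log as the loop runs.
chain = RoutingChain(router=router, tools=tools, verbose=True)
answer = chain.run("When was LangChain created?")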
langchain/routing_chains/self_ask_with_search/base.py (new file, 88 lines)
@@ -0,0 +1,88 @@
"""Chain that does self ask with search."""
from typing import Any, List, Tuple

from langchain.chains.llm import LLMChain
from langchain.chains.serpapi import SerpAPIChain
from langchain.llms.base import LLM
from langchain.routing_chains.router import LLMRouter
from langchain.routing_chains.routing_chain import RoutingChain
from langchain.routing_chains.self_ask_with_search.prompt import PROMPT
from langchain.routing_chains.tools import Tool


class SelfAskWithSearchRouter(LLMRouter):
    """Router for the self-ask-with-search paper."""

    @classmethod
    def from_llm_and_tools(
        cls, llm: LLM, tools: List[Tool]
    ) -> "SelfAskWithSearchRouter":
        """Construct a router from an LLM and tools."""
        if len(tools) != 1:
            raise ValueError(f"Exactly one tool must be specified, but got {tools}")
        tool_names = {tool.name for tool in tools}
        if tool_names != {"Intermediate Answer"}:
            raise ValueError(
                f"Tool name should be Intermediate Answer, got {tool_names}"
            )

        llm_chain = LLMChain(llm=llm, prompt=PROMPT)
        return cls(llm_chain=llm_chain, tools=tools)

    def _extract_tool_and_input(self, text: str) -> Tuple[str, str]:
        followup = "Follow up:"
        if "\n" not in text:
            last_line = text
        else:
            last_line = text.split("\n")[-1]

        if followup not in last_line:
            finish_string = "So the final answer is: "
            if finish_string not in last_line:
                raise ValueError("We should probably never get here")
            return "Final Answer", text[len(finish_string) :]

        if ":" not in last_line:
            after_colon = last_line
        else:
            after_colon = text.split(":")[-1]

        if " " == after_colon[0]:
            after_colon = after_colon[1:]
        if "?" != after_colon[-1]:
            print("we probably should never get here..." + text)

        return "Intermediate Answer", after_colon

    @property
    def observation_prefix(self) -> str:
        """Prefix to append the observation with."""
        return "Intermediate answer: "

    @property
    def router_prefix(self) -> str:
        """Prefix to append the router call with."""
        return ""

    @property
    def starter_string(self) -> str:
        """Put this string after user input but before first router call."""
        return "\nAre follow up questions needed here:"


class SelfAskWithSearchChain(RoutingChain):
    """Chain that does self ask with search.

    Example:
        .. code-block:: python

            from langchain import SelfAskWithSearchChain, OpenAI, SerpAPIChain
            search_chain = SerpAPIChain()
            self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
    """

    def __init__(self, llm: LLM, search_chain: SerpAPIChain, **kwargs: Any):
        """Initialize with just an LLM and a search chain."""
        search_tool = Tool(name="Intermediate Answer", func=search_chain.run)
        router = SelfAskWithSearchRouter.from_llm_and_tools(llm, [search_tool])
        super().__init__(router=router, tools=[search_tool], **kwargs)
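Illustrative strings (hypothetical, not from the diff) showing how `_extract_tool_and_input` reads the two line shapes it cares about; `router` is any SelfAskWithSearchRouter instance:

# A "Follow up:" line routes the question to the Intermediate Answer tool.
tool, tool_input = router._extract_tool_and_input(
    "Follow up: When was LangChain released?"
)
# -> ("Intermediate Answer", "When was LangChain released?")

# A "So the final answer is:" line terminates the chain.
tool, answer = router._extract_tool_and_input("So the final answer is: 2022")
# -> ("Final Answer", "2022")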
langchain/routing_chains/tools.py (new file, 10 lines)
@@ -0,0 +1,10 @@
"""Interface for tools."""
from typing import Callable, NamedTuple, Optional


class Tool(NamedTuple):
    """Interface for tools."""

    name: str
    func: Callable[[str], str]
    description: Optional[str] = None
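A quick sketch of defining a `Tool` (the name and function here are made up; any str-to-str callable qualifies, and the description is optional but routers like ZeroShotRouter use it to build their prompt):

def word_count(text: str) -> str:
    # Toy example tool: returns the number of words in the input.
    return str(len(text.split()))

tool = Tool(name="WordCount", func=word_count, description="counts words in text")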
@@ -1,8 +1,8 @@
 """Integration test for self ask with search."""
-from langchain.chains.react.base import ReActChain
 from langchain.docstore.wikipedia import Wikipedia
 from langchain.llms.openai import OpenAI
+from langchain.routing_chains.react.base import ReActChain


 def test_react() -> None:
@@ -1,7 +1,7 @@
 """Integration test for self ask with search."""
-from langchain.chains.self_ask_with_search.base import SelfAskWithSearchChain
 from langchain.chains.serpapi import SerpAPIChain
 from langchain.llms.openai import OpenAI
+from langchain.routing_chains.self_ask_with_search.base import SelfAskWithSearchChain


 def test_self_ask_with_search() -> None:
tests/unit_tests/routing_chains/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Test routing chain functionality."""
@@ -2,9 +2,10 @@

 import pytest

-from langchain.chains.mrkl.base import ChainConfig, MRKLChain, get_action_and_input
-from langchain.chains.mrkl.prompt import BASE_TEMPLATE
 from langchain.prompts import PromptTemplate
+from langchain.routing_chains.mrkl.base import ZeroShotRouter, get_action_and_input
+from langchain.routing_chains.mrkl.prompt import BASE_TEMPLATE
+from langchain.routing_chains.tools import Tool
 from tests.unit_tests.llms.fake_llm import FakeLLM
@@ -29,7 +30,7 @@ def test_get_final_answer() -> None:
         "Final Answer: 1994"
     )
     action, action_input = get_action_and_input(llm_output)
-    assert action == "Final Answer: "
+    assert action == "Final Answer"
     assert action_input == "1994"
@@ -52,19 +53,15 @@ def test_bad_action_line() -> None:
 def test_from_chains() -> None:
     """Test initializing from chains."""
     chain_configs = [
-        ChainConfig(
-            action_name="foo", action=lambda x: "foo", action_description="foobar1"
-        ),
-        ChainConfig(
-            action_name="bar", action=lambda x: "bar", action_description="foobar2"
-        ),
+        Tool(name="foo", func=lambda x: "foo", description="foobar1"),
+        Tool(name="bar", func=lambda x: "bar", description="foobar2"),
     ]
-    mrkl_chain = MRKLChain.from_chains(FakeLLM(), chain_configs)
+    router_chain = ZeroShotRouter.from_llm_and_tools(FakeLLM(), chain_configs)
     expected_tools_prompt = "foo: foobar1\nbar: foobar2"
     expected_tool_names = "foo, bar"
     expected_template = BASE_TEMPLATE.format(
         tools=expected_tools_prompt, tool_names=expected_tool_names
     )
-    prompt = mrkl_chain.prompt
+    prompt = router_chain.llm_chain.prompt
     assert isinstance(prompt, PromptTemplate)
     assert prompt.template == expected_template
@@ -4,12 +4,12 @@ from typing import Any, List, Mapping, Optional, Union

 import pytest

-from langchain.chains.llm import LLMChain
-from langchain.chains.react.base import ReActChain, predict_until_observation
 from langchain.docstore.base import Docstore
 from langchain.docstore.document import Document
 from langchain.llms.base import LLM
 from langchain.prompts.prompt import PromptTemplate
+from langchain.routing_chains.react.base import ReActChain, ReActDocstoreRouter
+from langchain.routing_chains.tools import Tool

 _PAGE_CONTENT = """This is a page about LangChain.
@@ -51,33 +51,32 @@ class FakeDocstore(Docstore):

 def test_predict_until_observation_normal() -> None:
     """Test predict_until_observation when observation is made normally."""
-    outputs = ["foo\nAction 1: search[foo]"]
+    outputs = ["foo\nAction 1: Search[foo]"]
     fake_llm = FakeListLLM(outputs)
-    fake_llm_chain = LLMChain(llm=fake_llm, prompt=_FAKE_PROMPT)
-    ret_text, action, directive = predict_until_observation(fake_llm_chain, "", 1)
-    assert ret_text == outputs[0]
-    assert action == "search"
-    assert directive == "foo"
+    tools = [
+        Tool("Search", lambda x: x),
+        Tool("Lookup", lambda x: x),
+    ]
+    router_chain = ReActDocstoreRouter.from_llm_and_tools(fake_llm, tools)
+    output = router_chain.route("")
+    assert output.log == outputs[0]
+    assert output.tool == "Search"
+    assert output.tool_input == "foo"


 def test_predict_until_observation_repeat() -> None:
     """Test when no action is generated initially."""
-    outputs = ["foo", " search[foo]"]
+    outputs = ["foo", " Search[foo]"]
     fake_llm = FakeListLLM(outputs)
-    fake_llm_chain = LLMChain(llm=fake_llm, prompt=_FAKE_PROMPT)
-    ret_text, action, directive = predict_until_observation(fake_llm_chain, "", 1)
-    assert ret_text == "foo\nAction 1: search[foo]"
-    assert action == "search"
-    assert directive == "foo"
-
-
-def test_predict_until_observation_error() -> None:
-    """Test handling of generation of text that cannot be parsed."""
-    outputs = ["foo\nAction 1: foo"]
-    fake_llm = FakeListLLM(outputs)
-    fake_llm_chain = LLMChain(llm=fake_llm, prompt=_FAKE_PROMPT)
-    with pytest.raises(ValueError):
-        predict_until_observation(fake_llm_chain, "", 1)
+    tools = [
+        Tool("Search", lambda x: x),
+        Tool("Lookup", lambda x: x),
+    ]
+    router_chain = ReActDocstoreRouter.from_llm_and_tools(fake_llm, tools)
+    output = router_chain.route("")
+    assert output.log == "foo\nAction 1: Search[foo]"
+    assert output.tool == "Search"
+    assert output.tool_input == "foo"


 def test_react_chain() -> None:
@@ -101,5 +100,5 @@ def test_react_chain_bad_action() -> None:
     ]
     fake_llm = FakeListLLM(responses)
     react_chain = ReActChain(llm=fake_llm, docstore=FakeDocstore())
-    with pytest.raises(ValueError):
+    with pytest.raises(KeyError):
         react_chain.run("when was langchain made")