Adding custom tools to SQL Agent (#10198)
Changes in:
- `create_sql_agent` function, so that users can easily add custom tools to complement the toolkit.
- the **sql use case** notebook, updated to showcase two examples of extra tools.

The motivation for these changes is to make it possible to include domain-expert knowledge in the agent, which improves accuracy and reduces time/tokens.

---------

Co-authored-by: Manuel Soria <manuel.soria@greyscaleai.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
parent 5dbae94e04
commit dde1992fdd
@@ -713,6 +713,395 @@
"agent_executor.run(\"Describe the playlisttrack table\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extending the SQL Toolkit\n",
"\n",
"Although the out-of-the-box SQL Toolkit contains the necessary tools to start working on a database, it is often the case that some extra tools may be useful for extending the agent's capabilities. This is particularly useful when trying to use **domain-specific knowledge** in the solution, in order to improve its overall performance.\n",
"\n",
"Some examples include:\n",
"\n",
"- Including dynamic few-shot examples\n",
"- Finding misspellings in proper nouns to use as column filters\n",
"\n",
"We can create separate tools that tackle these specific use cases and include them as a complement to the standard SQL Toolkit. Let's see how to include these two custom tools.\n",
"\n",
"#### Including dynamic few-shot examples\n",
"\n",
"In order to include dynamic few-shot examples, we need a custom **Retriever Tool** that queries the vector database to retrieve the examples that are semantically similar to the user’s question.\n",
"\n",
"Let's start by creating a dictionary with some examples: "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"few_shots = {'List all artists.': 'SELECT * FROM artists;',\n",
"\"Find all albums for the artist 'AC/DC'.\": \"SELECT * FROM albums WHERE ArtistId = (SELECT ArtistId FROM artists WHERE Name = 'AC/DC');\",\n",
"\"List all tracks in the 'Rock' genre.\": \"SELECT * FROM tracks WHERE GenreId = (SELECT GenreId FROM genres WHERE Name = 'Rock');\",\n",
"'Find the total duration of all tracks.': 'SELECT SUM(Milliseconds) FROM tracks;',\n",
"'List all customers from Canada.': \"SELECT * FROM customers WHERE Country = 'Canada';\",\n",
"'How many tracks are there in the album with ID 5?': 'SELECT COUNT(*) FROM tracks WHERE AlbumId = 5;',\n",
"'Find the total number of invoices.': 'SELECT COUNT(*) FROM invoices;',\n",
"'List all tracks that are longer than 5 minutes.': 'SELECT * FROM tracks WHERE Milliseconds > 300000;',\n",
"'Who are the top 5 customers by total purchase?': 'SELECT CustomerId, SUM(Total) AS TotalPurchase FROM invoices GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;',\n",
"'Which albums are from the year 2000?': \"SELECT * FROM albums WHERE strftime('%Y', ReleaseDate) = '2000';\",\n",
"'How many employees are there': 'SELECT COUNT(*) FROM \"employee\"'\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then create a retriever using the list of questions, assigning the target SQL query as metadata:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.schema import Document\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"few_shot_docs = [Document(page_content=question, metadata={'sql_query': few_shots[question]}) for question in few_shots.keys()]\n",
"vector_db = FAISS.from_documents(few_shot_docs, embeddings)\n",
"retriever = vector_db.as_retriever()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can create our own custom tool and pass it to the `create_sql_agent` function as an extra tool:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.agent_toolkits import create_retriever_tool\n",
"\n",
"tool_description = \"\"\"\n",
"This tool will help you understand similar examples to adapt them to the user question.\n",
"Input to this tool should be the user question.\n",
"\"\"\"\n",
"\n",
"retriever_tool = create_retriever_tool(\n",
" retriever,\n",
" name='sql_get_similar_examples',\n",
" description=tool_description\n",
" )\n",
"custom_tool_list = [retriever_tool]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can create the agent, adjusting the standard SQL Agent suffix to account for our use case. Although the most straightforward way to handle this would be to include the instructions only in the tool description, that is often not enough, so we also specify them in the agent prompt using the `suffix` argument of the constructor."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent, AgentType\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"from langchain.utilities import SQLDatabase\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n",
"llm = ChatOpenAI(model_name='gpt-4', temperature=0)\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"\n",
"custom_suffix = \"\"\"\n",
"I should first get the similar examples I know.\n",
"If the examples are enough to construct the query, I can build it.\n",
"Otherwise, I can then look at the tables in the database to see what I can query.\n",
"Then I should query the schema of the most relevant tables\n",
"\"\"\"\n",
"\n",
"agent = create_sql_agent(llm=llm,\n",
" toolkit=toolkit,\n",
" verbose=True,\n",
" agent_type=AgentType.OPENAI_FUNCTIONS,\n",
" extra_tools=custom_tool_list,\n",
" suffix=custom_suffix\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try it out:"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_get_similar_examples` with `How many employees do we have?`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m[Document(page_content='How many employees are there', metadata={'sql_query': 'SELECT COUNT(*) FROM \"employee\"'}), Document(page_content='Find the total number of invoices.', metadata={'sql_query': 'SELECT COUNT(*) FROM invoices;'})]\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM employee`\n",
"responded: {content}\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mSELECT COUNT(*) FROM employee\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_query` with `SELECT COUNT(*) FROM employee`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m[(8,)]\u001b[0m\u001b[32;1m\u001b[1;3mWe have 8 employees.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'We have 8 employees.'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"How many employees do we have?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, the agent first used the `sql_get_similar_examples` tool in order to retrieve similar examples. As the question was very similar to the few-shot examples, the agent **didn't need to use any other tool** from the standard Toolkit, thus **saving time and tokens**."
]
},
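To see exactly what the agent receives from `sql_get_similar_examples`, you can also query the vector store directly. A minimal sketch, assuming the `vector_db` built above is still in scope (the `k=2` value is an arbitrary choice for this check):

```python
# Inspect the few-shot examples the retriever would surface for a given question.
# Assumes `vector_db` was created with FAISS.from_documents(few_shot_docs, embeddings) as above.
docs = vector_db.similarity_search("How many employees do we have?", k=2)
for doc in docs:
    print(doc.page_content, "->", doc.metadata["sql_query"])
```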
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Finding and correcting misspellings for proper nouns\n",
"\n",
"In order to filter columns that contain proper nouns such as addresses, song names or artists, we first need to double-check the spelling in order to filter the data correctly. \n",
"\n",
"We can achieve this by creating a vector store using all the distinct proper nouns that exist in the database. We can then have the agent query that vector store each time the user includes a proper noun in their question, to find the correct spelling for that word. In this way, the agent can make sure it understands which entity the user is referring to before building the target query.\n",
"\n",
"Let's follow a similar approach to the few-shot examples, but without metadata: just embedding the proper nouns and then querying to get the most similar one to the misspelled user question.\n",
"\n",
"First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"import ast\n",
"import re\n",
"\n",
"def run_query_save_results(db, query):\n",
"    # db.run returns the rows as a single string, so parse it back into a flat list of values\n",
"    res = db.run(query)\n",
"    res = [el for sub in ast.literal_eval(res) for el in sub if el]\n",
"    # strip standalone numbers and surrounding whitespace from each value\n",
"    res = [re.sub(r'\\b\\d+\\b', '', string).strip() for string in res]\n",
"    return res\n",
"\n",
"artists = run_query_save_results(db, \"SELECT Name FROM Artist\")\n",
"albums = run_query_save_results(db, \"SELECT Title FROM Album\")"
]
},
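As a quick sanity check of the helper (the exact output string is an assumption about how `db.run` formats its result, not taken from the notebook):

```python
# db.run returns the selected rows as one string such as "[('AC/DC',), ('Accept',), ...]";
# run_query_save_results parses that string back into a flat list of values.
raw = db.run("SELECT Name FROM Artist LIMIT 3")
print(raw)
print(run_query_save_results(db, "SELECT Name FROM Artist LIMIT 3"))  # e.g. ['AC/DC', 'Accept', 'Aerosmith']
```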
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can proceed with creating the custom **retriever tool** and the final agent:"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.agent_toolkits import create_retriever_tool\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"\n",
"texts = (artists + albums)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"vector_db = FAISS.from_texts(texts, embeddings)\n",
"retriever = vector_db.as_retriever()\n",
"\n",
"retriever_tool = create_retriever_tool(\n",
" retriever,\n",
" name='name_search',\n",
" description='use to learn how a piece of data is actually written, can be from names, surnames, addresses etc'\n",
" )\n",
"\n",
"custom_tool_list = [retriever_tool]"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent, AgentType\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"from langchain.utilities import SQLDatabase\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"# db = SQLDatabase.from_uri(\"sqlite:///Chinook.db\")\n",
"llm = ChatOpenAI(model_name='gpt-4', temperature=0)\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"\n",
"custom_suffix = \"\"\"\n",
"If a user asks for me to filter based on proper nouns, I should first check the spelling using the name_search tool.\n",
"Otherwise, I can then look at the tables in the database to see what I can query.\n",
"Then I should query the schema of the most relevant tables\n",
"\"\"\"\n",
"\n",
"agent = create_sql_agent(llm=llm,\n",
" toolkit=toolkit,\n",
" verbose=True,\n",
" agent_type=AgentType.OPENAI_FUNCTIONS,\n",
" extra_tools=custom_tool_list,\n",
" suffix=custom_suffix\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try it out:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `name_search` with `alis in pains`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m[Document(page_content='House of Pain', metadata={}), Document(page_content='Alice In Chains', metadata={}), Document(page_content='Aisha Duo', metadata={}), Document(page_content='House Of Pain', metadata={})]\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_list_tables` with ``\n",
"responded: {content}\n",
"\n",
"\u001b[0m\u001b[38;5;200m\u001b[1;3mAlbum, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_schema` with `Album, Artist`\n",
"responded: {content}\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m\n",
"CREATE TABLE \"Album\" (\n",
"\t\"AlbumId\" INTEGER NOT NULL, \n",
"\t\"Title\" NVARCHAR(160) NOT NULL, \n",
"\t\"ArtistId\" INTEGER NOT NULL, \n",
"\tPRIMARY KEY (\"AlbumId\"), \n",
"\tFOREIGN KEY(\"ArtistId\") REFERENCES \"Artist\" (\"ArtistId\")\n",
")\n",
"\n",
"/*\n",
"3 rows from Album table:\n",
"AlbumId\tTitle\tArtistId\n",
"1\tFor Those About To Rock We Salute You\t1\n",
"2\tBalls to the Wall\t2\n",
"3\tRestless and Wild\t2\n",
"*/\n",
"\n",
"\n",
"CREATE TABLE \"Artist\" (\n",
"\t\"ArtistId\" INTEGER NOT NULL, \n",
"\t\"Name\" NVARCHAR(120), \n",
"\tPRIMARY KEY (\"ArtistId\")\n",
")\n",
"\n",
"/*\n",
"3 rows from Artist table:\n",
"ArtistId\tName\n",
"1\tAC/DC\n",
"2\tAccept\n",
"3\tAerosmith\n",
"*/\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'`\n",
"responded: {content}\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mSELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_query` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m[(1,)]\u001b[0m\u001b[32;1m\u001b[1;3mAlice In Chains has 1 album in the database.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Alice In Chains has 1 album in the database.'"
]
},
"execution_count": 55,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"How many albums does alis in pains have?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, the agent used the `name_search` tool in order to check how to correctly query the database for this specific artist."
]
},
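You can also see what `name_search` hands back to the agent by calling the retriever directly. A minimal sketch, assuming the `retriever` built from the artist and album names above:

```python
# Ask the proper-noun vector store for the closest matches to a misspelled name.
# Uses the `retriever` created from FAISS.from_texts(artists + albums, embeddings) above.
docs = retriever.get_relevant_documents("alis in pains")
print([doc.page_content for doc in docs])  # 'Alice In Chains' should be among the top matches
```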
{
"cell_type": "markdown",
"metadata": {},
@@ -867,7 +1256,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.16"
+ "version": "3.8.17"
}
},
"nbformat": 4,

@@ -1,5 +1,5 @@
"""SQL agent."""
- from typing import Any, Dict, List, Optional
+ from typing import Any, Dict, List, Optional, Sequence

from langchain.agents.agent import AgentExecutor, BaseSingleActionAgent
from langchain.agents.agent_toolkits.sql.prompt import (
@@ -21,6 +21,7 @@ from langchain.prompts.chat import (
)
from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.messages import AIMessage, SystemMessage
+ from langchain.tools import BaseTool


def create_sql_agent(
@@ -38,10 +39,11 @@ def create_sql_agent(
early_stopping_method: str = "force",
verbose: bool = False,
agent_executor_kwargs: Optional[Dict[str, Any]] = None,
+ extra_tools: Sequence[BaseTool] = (),
**kwargs: Dict[str, Any],
) -> AgentExecutor:
"""Construct an SQL agent from an LLM and tools."""
- tools = toolkit.get_tools()
+ tools = toolkit.get_tools() + list(extra_tools)
prefix = prefix.format(dialect=toolkit.dialect, top_k=top_k)
agent: BaseSingleActionAgent
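The effect of the change above is that any `BaseTool` can be appended to the toolkit's tools before the agent is built. A minimal sketch of how a caller might use the new `extra_tools` parameter with a plain function-based tool (the `get_today` helper is hypothetical and only for illustration, not part of this commit):

```python
import datetime

from langchain.agents import AgentType, create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = ChatOpenAI(model_name="gpt-4", temperature=0)

# Hypothetical domain helper exposed as an extra tool alongside the standard SQL toolkit.
get_today = Tool.from_function(
    func=lambda _: datetime.date.today().isoformat(),
    name="get_today",
    description="Returns today's date; useful for questions with relative date filters.",
)

agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    agent_type=AgentType.OPENAI_FUNCTIONS,
    extra_tools=[get_today],
    verbose=True,
)
```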