This pull request adds an enum class for the various types of agents used in the project, located in the `agent_types.py` file. Currently, the project is using hardcoded strings for the initialization of these agents, which can lead to errors and make the code harder to maintain. With the introduction of the new enums, the code will be more readable and less error-prone. The new enum members include:

- ZERO_SHOT_REACT_DESCRIPTION
- REACT_DOCSTORE
- SELF_ASK_WITH_SEARCH
- CONVERSATIONAL_REACT_DESCRIPTION
- CHAT_ZERO_SHOT_REACT_DESCRIPTION
- CHAT_CONVERSATIONAL_REACT_DESCRIPTION

In this PR, I have also replaced the hardcoded strings with the appropriate enum members throughout the codebase, ensuring a smooth transition to the new approach.
{
"cells": [
{
"cell_type": "markdown",
"id": "5436020b",
"metadata": {},
"source": [
"# Defining Custom Tools\n",
"\n",
"When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",
"- name (str): required\n",
"- description (str): optional\n",
"- return_direct (bool): defaults to False\n",
"\n",
"The function that is called when the tool is selected should take a single string as input and return a single string.\n",
"\n",
"There are two ways to define a tool; we will cover both in the examples below."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1aaba18c",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents.agent_types import AgentType\n",
"from langchain.tools import BaseTool\n",
"from langchain.llms import OpenAI\n",
"from langchain import LLMMathChain, SerpAPIWrapper"
]
},
{
"cell_type": "markdown",
"id": "8e2c3874",
"metadata": {},
"source": [
"Initialize the LLM to use for the agent."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "36ed392e",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "f8bc72c2",
"metadata": {},
"source": [
"## Completely New Tools\n",
"First, we show how to create completely new tools from scratch.\n",
"\n",
"There are two ways to do this: either by using the Tool dataclass, or by subclassing the BaseTool class."
]
},
{
"cell_type": "markdown",
"id": "b63fcc3b",
"metadata": {},
"source": [
"### Tool dataclass"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "56ff7670",
"metadata": {},
"outputs": [],
"source": [
"# Load the tool configs that are needed.\n",
"search = SerpAPIWrapper()\n",
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"tools = [\n",
"    Tool(\n",
"        name=\"Search\",\n",
"        func=search.run,\n",
"        description=\"useful for when you need to answer questions about current events\"\n",
"    ),\n",
"    Tool(\n",
"        name=\"Calculator\",\n",
"        func=llm_math_chain.run,\n",
"        description=\"useful for when you need to answer questions about math\"\n",
"    )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5b93047d",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6f96a891",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate her age raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 22^0.43\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"22^0.43\u001b[32;1m\u001b[1;3m\n",
"```python\n",
"import math\n",
"print(math.pow(22, 0.43))\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.777824273683966\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.777824273683966\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
"cell_type": "markdown",
"id": "6f12eaf0",
"metadata": {},
"source": [
"### Subclassing the BaseTool class"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c58a7c40",
"metadata": {},
"outputs": [],
"source": [
"class CustomSearchTool(BaseTool):\n",
"    name = \"Search\"\n",
"    description = \"useful for when you need to answer questions about current events\"\n",
"\n",
"    def _run(self, query: str) -> str:\n",
"        \"\"\"Use the tool.\"\"\"\n",
"        return search.run(query)\n",
"    \n",
"    async def _arun(self, query: str) -> str:\n",
"        \"\"\"Use the tool asynchronously.\"\"\"\n",
"        raise NotImplementedError(\"CustomSearchTool does not support async\")\n",
"    \n",
"class CustomCalculatorTool(BaseTool):\n",
"    name = \"Calculator\"\n",
"    description = \"useful for when you need to answer questions about math\"\n",
"\n",
"    def _run(self, query: str) -> str:\n",
"        \"\"\"Use the tool.\"\"\"\n",
"        return llm_math_chain.run(query)\n",
"    \n",
"    async def _arun(self, query: str) -> str:\n",
"        \"\"\"Use the tool asynchronously.\"\"\"\n",
"        raise NotImplementedError(\"CustomCalculatorTool does not support async\")"
]
},
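{
"cell_type": "markdown",
"id": "c58a7c41",
"metadata": {},
"source": [
"If you do want a custom tool to be usable asynchronously, `_arun` can run the synchronous function in a worker thread instead of raising. The cell below is a minimal illustrative sketch, not part of the original example: it assumes Python 3.9+ (for `asyncio.to_thread`), and the `AsyncCustomSearchTool` name is just a placeholder."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c58a7c42",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"class AsyncCustomSearchTool(BaseTool):\n",
"    name = \"Search\"\n",
"    description = \"useful for when you need to answer questions about current events\"\n",
"\n",
"    def _run(self, query: str) -> str:\n",
"        \"\"\"Use the tool synchronously.\"\"\"\n",
"        return search.run(query)\n",
"\n",
"    async def _arun(self, query: str) -> str:\n",
"        \"\"\"Use the tool asynchronously by running the sync search in a thread.\"\"\"\n",
"        return await asyncio.to_thread(search.run, query)"
]
},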
{
"cell_type": "code",
"execution_count": 9,
"id": "3318a46f",
"metadata": {},
"outputs": [],
"source": [
"tools = [CustomSearchTool(), CustomCalculatorTool()]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ee2d0f3a",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "6a2cebbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate her age raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 22^0.43\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"22^0.43\u001b[32;1m\u001b[1;3m\n",
"```python\n",
"import math\n",
"print(math.pow(22, 0.43))\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.777824273683966\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.777824273683966\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
"cell_type": "markdown",
"id": "824eaf74",
"metadata": {},
"source": [
"## Using the `tool` decorator\n",
"\n",
"To make it easier to define custom tools, a `@tool` decorator is provided. This decorator can be used to quickly create a `Tool` from a simple function. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8f15307d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import tool\n",
"\n",
"@tool\n",
"def search_api(query: str) -> str:\n",
"    \"\"\"Searches the API for the query.\"\"\"\n",
"    return \"Results\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0a23b91b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Tool(name='search_api', description='search_api(query: str) -> str - Searches the API for the query.', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1184e0cd0>, func=<function search_api at 0x1635f8700>, coroutine=None)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search_api"
]
},
{
"cell_type": "markdown",
"id": "cc6ee8c1",
"metadata": {},
"source": [
"You can also provide arguments like the tool name and whether to return directly."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "28cdf04d",
"metadata": {},
"outputs": [],
"source": [
"@tool(\"search\", return_direct=True)\n",
"def search_api(query: str) -> str:\n",
"    \"\"\"Searches the API for the query.\"\"\"\n",
"    return \"Results\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "1085a4bd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1184e0cd0>, func=<function search_api at 0x1635f8670>, coroutine=None)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search_api"
]
},
{
"cell_type": "markdown",
"id": "1d0430d6",
"metadata": {},
"source": [
"## Modify existing tools\n",
"\n",
"Now, we show how to load existing tools and modify them in place. In the example below, we do something very simple: we change the Search tool's name to `Google Search`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "79213f40",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e1067dcb",
"metadata": {},
"outputs": [],
"source": [
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "6c66ffe8",
"metadata": {},
"outputs": [],
"source": [
"tools[0].name = \"Google Search\""
]
},
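{
"cell_type": "markdown",
"id": "6c66ffe9",
"metadata": {},
"source": [
"The description can be adjusted in the same way. The cell below is only an illustrative sketch: it assumes the `description` attribute is assignable just like `name` above, and the wording is merely an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c66ffea",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical description override, analogous to renaming the tool above\n",
"tools[0].description = \"useful for when you need to search Google for answers about current events\""
]
},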
{
"cell_type": "code",
"execution_count": 11,
"id": "f45b5bc3",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "565e2b9b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Google Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action: Google Search\n",
"Action Input: \"Camila Morrone age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 25^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
"cell_type": "markdown",
"id": "376813ed",
"metadata": {},
"source": [
"## Defining the priorities among Tools\n",
"When you make a custom tool, you may want the agent to use it more than the normal tools.\n",
"\n",
"For example, suppose you made a custom tool that gets information on music from your database. When a user wants information about songs, you want the agent to use the custom tool rather than the normal `Search` tool. However, the agent might still prioritize the normal Search tool.\n",
"\n",
"This can be accomplished by adding a statement such as `Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'` to the description.\n",
"\n",
"An example is shown below."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "3450512e",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents.agent_types import AgentType\n",
"from langchain.llms import OpenAI\n",
"from langchain import LLMMathChain, SerpAPIWrapper\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
"    Tool(\n",
"        name=\"Search\",\n",
"        func=search.run,\n",
"        description=\"useful for when you need to answer questions about current events\"\n",
"    ),\n",
"    Tool(\n",
"        name=\"Music Search\",\n",
"        func=lambda x: \"'All I Want For Christmas Is You' by Mariah Carey.\",  # Mock function\n",
"        description=\"A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'\",\n",
"    )\n",
"]\n",
"\n",
"agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "4b9a7849",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I should use a music search engine to find the answer\n",
"Action: Music Search\n",
"Action Input: most famous song of christmas\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m'All I Want For Christmas Is You' by Mariah Carey.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 'All I Want For Christmas Is You' by Mariah Carey.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"'All I Want For Christmas Is You' by Mariah Carey.\""
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"what is the most famous song of christmas\")"
]
},
{
"cell_type": "markdown",
"id": "bc477d43",
"metadata": {},
"source": [
"## Using tools to return directly\n",
"Often, it can be desirable to have a tool's output returned directly to the user when it is called. You can do this easily with LangChain by setting a tool's return_direct flag to True."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3bb6185f",
"metadata": {},
"outputs": [],
"source": [
"llm_math_chain = LLMMathChain(llm=llm)\n",
"tools = [\n",
"    Tool(\n",
"        name=\"Calculator\",\n",
"        func=llm_math_chain.run,\n",
"        description=\"useful for when you need to answer questions about math\",\n",
"        return_direct=True\n",
"    )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "113ddb84",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "582439a6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to calculate this\n",
"Action: Calculator\n",
"Action Input: 2**.12\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.2599210498948732\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Answer: 1.2599210498948732'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"whats 2**.12\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "537bc628",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "e90c8aa204a57276aa905271aff2d11799d0acb3547adabc5892e639a5e45e34"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}