[docs] update toolkit docs (#15294)

It's probably too much work to do all toolkits before 0.1.

Also, some agent toolkits (like pandas and python) will require more work.
Harrison Chase 8 months ago committed by GitHub
parent fbe4209ce1
commit 81a7a83b21

@ -34,7 +34,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {
"tags": []
},
@ -49,7 +49,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Customizing Authentication\n",
"### Customizing Authentication\n",
"\n",
"Behind the scenes, a `googleapi` resource is created using the following methods. \n",
"you can manually build a `googleapi` resource for more auth control. "
@ -112,29 +112,58 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.llms import OpenAI"
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, create_openai_functions_agent\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"agent = initialize_agent(\n",
"instructions = \"\"\"You my assistant.\"\"\"\n",
"base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n",
"prompt = base_prompt.partial(instructions=instructions)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"agent = create_openai_functions_agent(llm, toolkit.get_tools(), prompt)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(\n",
" agent=agent,\n",
" tools=toolkit.get_tools(),\n",
" llm=llm,\n",
" agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
" # This is set to False to prevent information about my email showing up on the screen\n",
" # Normally, it is helpful to have it set to True however.\n",
" verbose=False,\n",
")"
]
},
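Taken together, the new cells in this hunk set up the OpenAI functions agent roughly as follows (a sketch assuming the `langchain-ai/openai-functions-template` hub prompt and a `toolkit` constructed as above; the instructions string is only a placeholder):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.chat_models import ChatOpenAI

# The hub template leaves an {instructions} slot for toolkit-specific guidance.
instructions = """You are my assistant."""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)

llm = ChatOpenAI(temperature=0)
agent = create_openai_functions_agent(llm, toolkit.get_tools(), prompt)

# verbose=False keeps email contents out of the rendered notebook output.
agent_executor = AgentExecutor(agent=agent, tools=toolkit.get_tools(), verbose=False)
```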
@ -145,18 +174,11 @@
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to load default session, using empty session: 0\n",
"WARNING:root:Failed to persist run: {\"detail\":\"Not Found\"}\n"
]
},
{
"data": {
"text/plain": [
"'I have created a draft email for you to edit. The draft Id is r5681294731961864018.'"
"{'input': 'Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot who is looking to collaborate on some research with her estranged friend, a cat. Under no circumstances may you send the message, however.',\n",
" 'output': 'I have created a draft email for you to edit. Please find the draft in your Gmail drafts folder. Remember, under no circumstances should you send the message.'}"
]
},
"execution_count": 19,
@ -165,41 +187,38 @@
}
],
"source": [
"agent.run(\n",
" \"Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot\"\n",
" \" who is looking to collaborate on some research with her\"\n",
" \" estranged friend, a cat. Under no circumstances may you send the message, however.\"\n",
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot\"\n",
" \" who is looking to collaborate on some research with her\"\n",
" \" estranged friend, a cat. Under no circumstances may you send the message, however.\"\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 20,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to load default session, using empty session: 0\n",
"WARNING:root:Failed to persist run: {\"detail\":\"Not Found\"}\n"
]
},
{
"data": {
"text/plain": [
"\"The latest email in your drafts is from hopefulparrot@gmail.com with the subject 'Collaboration Opportunity'. The body of the email reads: 'Dear [Friend], I hope this letter finds you well. I am writing to you in the hopes of rekindling our friendship and to discuss the possibility of collaborating on some research together. I know that we have had our differences in the past, but I believe that we can put them aside and work together for the greater good. I look forward to hearing from you. Sincerely, [Parrot]'\""
"{'input': 'Could you search in my drafts for the latest email? what is the title?',\n",
" 'output': 'The latest email in your drafts is titled \"Collaborative Research Proposal\".'}"
]
},
"execution_count": 24,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Could you search in my drafts for the latest email?\")"
"agent_executor.invoke(\n",
" {\"input\": \"Could you search in my drafts for the latest email? what is the title?\"}\n",
")"
]
},
{
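The `run` to `invoke` migration also changes the call and return shapes: `invoke` takes a dict keyed by the prompt's input variable and returns a dict with the echoed input plus the final output. A minimal usage sketch with the executor from the cells above:

```python
result = agent_executor.invoke(
    {"input": "Could you search in my drafts for the latest email? what is the title?"}
)
# result is a dict, e.g. {"input": "...", "output": "..."}
print(result["output"])
```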
@ -226,7 +245,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -22,71 +22,145 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 5,
"id": "f98e9c90-5c37-4fb9-af3e-d09693af8543",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.agents.agent_types import AgentType\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms.openai import OpenAI\n",
"from langchain_experimental.agents.agent_toolkits import create_python_agent\n",
"from langchain import hub\n",
"from langchain.agents import AgentExecutor\n",
"from langchain_experimental.tools import PythonREPLTool"
]
},
{
"cell_type": "markdown",
"id": "ca30d64c",
"id": "ba9adf51",
"metadata": {},
"source": [
"## Using `ZERO_SHOT_REACT_DESCRIPTION`\n",
"## Create the tool(s)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "003bce04",
"metadata": {},
"outputs": [],
"source": [
"tools = [PythonREPLTool()]"
]
},
{
"cell_type": "markdown",
"id": "4aceaeaf",
"metadata": {},
"source": [
"## Using OpenAI Functions Agent\n",
"\n",
"This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type."
"This is probably the most reliable type of agent, but is only compatible with function calling"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cc422f53-c51c-4694-a834-72ecd1e68363",
"metadata": {
"tags": []
},
"execution_count": 6,
"id": "3a054d1d",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = create_python_agent(\n",
" llm=OpenAI(temperature=0, max_tokens=1000),\n",
" tool=PythonREPLTool(),\n",
" verbose=True,\n",
" agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
")"
"from langchain.agents import create_openai_functions_agent\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "3454514b",
"metadata": {},
"outputs": [],
"source": [
"instructions = \"\"\"You are an agent designed to write and execute python code to answer questions.\n",
"You have access to a python REPL, which you can use to execute python code.\n",
"If you get an error, debug your code and try again.\n",
"Only use the output of your code to answer the question. \n",
"You might know the answer without running any code, but you should still run the code to get the answer.\n",
"If it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n",
"\"\"\"\n",
"base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n",
"prompt = base_prompt.partial(instructions=instructions)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "2a573e95",
"metadata": {},
"outputs": [],
"source": [
"agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "cae41550",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "bb487e8e",
"id": "ca30d64c",
"metadata": {},
"source": [
"## Using OpenAI Functions\n",
"## Using ReAct Agent\n",
"\n",
"This shows how to initialize the agent using the OPENAI_FUNCTIONS agent type. Note that this is an alternative to the above."
"This is a less reliable type, but is compatible with most models"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6e651822",
"execution_count": 26,
"id": "bcaa0b18",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = create_python_agent(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\"),\n",
" tool=PythonREPLTool(),\n",
" verbose=True,\n",
" agent_type=AgentType.OPENAI_FUNCTIONS,\n",
" agent_executor_kwargs={\"handle_parsing_errors\": True},\n",
")"
"from langchain.agents import create_react_agent\n",
"from langchain.chat_models import ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "d2470880",
"metadata": {},
"outputs": [],
"source": [
"instructions = \"\"\"You are an agent designed to write and execute python code to answer questions.\n",
"You have access to a python REPL, which you can use to execute python code.\n",
"If you get an error, debug your code and try again.\n",
"Only use the output of your code to answer the question. \n",
"You might know the answer without running any code, but you should still run the code to get the answer.\n",
"If it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n",
"\"\"\"\n",
"base_prompt = hub.pull(\"langchain-ai/react-agent-template\")\n",
"prompt = base_prompt.partial(instructions=instructions)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "cc422f53-c51c-4694-a834-72ecd1e68363",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent = create_react_agent(ChatAnthropic(temperature=0), tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
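The ReAct variant introduced above follows the same template pattern; a consolidated sketch, assuming the `langchain-ai/react-agent-template` hub prompt and an Anthropic API key in the environment:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.chat_models import ChatAnthropic
from langchain_experimental.tools import PythonREPLTool

tools = [PythonREPLTool()]

instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to a python REPL, which you can use to execute python code.
If you get an error, debug your code and try again.
Only use the output of your code to answer the question.
You might know the answer without running any code, but you should still run the code to get the answer.
If it does not seem like you can write code to answer the question, just return "I don't know" as the answer.
"""

# The react template exposes the same {instructions} slot as the functions template.
base_prompt = hub.pull("langchain-ai/react-agent-template")
prompt = base_prompt.partial(instructions=instructions)

agent = create_react_agent(ChatAnthropic(temperature=0), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "What is the 10th fibonacci number?"})
```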
@ -100,9 +174,10 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 31,
"id": "25cd4f92-ea9b-4fe6-9838-a4f85f81eebe",
"metadata": {
"scrolled": false,
"tags": []
},
"outputs": [
@ -112,20 +187,39 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Python_REPL` with `def fibonacci(n):\n",
" if n <= 0:\n",
" return 0\n",
" elif n == 1:\n",
" return 1\n",
" else:\n",
" return fibonacci(n-1) + fibonacci(n-2)\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m Sure, I can write some Python code to get the 10th Fibonacci number.\n",
"\n",
"```\n",
"Thought: Do I need to use a tool? Yes\n",
"Action: Python_REPL \n",
"Action Input: \n",
"def fib(n):\n",
" a, b = 0, 1\n",
" for i in range(n):\n",
" a, b = b, a + b\n",
" return a\n",
"\n",
"print(fib(10))\n",
"```\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m55\n",
"\u001b[0m\u001b[32;1m\u001b[1;3m Let me break this down step-by-step:\n",
"\n",
"1. I defined a fibonacci function called `fib` that takes in a number `n`. \n",
"2. Inside the function, I initialized two variables `a` and `b` to 0 and 1, which are the first two Fibonacci numbers.\n",
"3. Then I used a for loop to iterate up to `n`, updating `a` and `b` each iteration to the next Fibonacci numbers.\n",
"4. Finally, I return `a`, which after `n` iterations, contains the `n`th Fibonacci number.\n",
"\n",
"5. I called `fib(10)` to get the 10th Fibonacci number and printed the result.\n",
"\n",
"fibonacci(10)`\n",
"The key parts are defining the fibonacci calculation in the function, and then calling it with the desired input index to print the output.\n",
"\n",
"The observation shows the 10th Fibonacci number is 55, so that is the final answer.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m\u001b[0m\u001b[32;1m\u001b[1;3mThe 10th Fibonacci number is 55.\u001b[0m\n",
"```\n",
"Thought: Do I need to use a tool? No\n",
"Final Answer: 55\n",
"```\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@ -133,16 +227,16 @@
{
"data": {
"text/plain": [
"'The 10th Fibonacci number is 55.'"
"{'input': 'What is the 10th fibonacci number?', 'output': '55\\n```'}"
]
},
"execution_count": 4,
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"What is the 10th fibonacci number?\")"
"agent_executor.invoke({\"input\": \"What is the 10th fibonacci number?\"})"
]
},
{
@ -246,10 +340,12 @@
}
],
"source": [
"agent_executor.run(\n",
" \"\"\"Understand, write a single neuron neural network in PyTorch.\n",
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"\"\"Understand, write a single neuron neural network in PyTorch.\n",
"Take synthetic data for y=2x. Train for 1000 epochs and print every 100 epochs.\n",
"Return prediction for x = 5\"\"\"\n",
" }\n",
")"
]
},
