add simpler agent tutorial (#22249)

1/ added section at start with full code
2/ removed retriever tool (was just distracting)
3/ added section on starting a new conversation

---------

Co-authored-by: Erick Friis <erick@langchain.dev>

@@ -12,57 +12,106 @@
},
{
"cell_type": "markdown",
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"id": "1df78a71",
"metadata": {},
"source": [
"# Build an Agent\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"Agents are systems that use an LLM as a reasoning enginer to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determine whether more actions are needed, or whether it is okay to finish.\n",
"Agents are systems that use LLMs as reasoning engines to determine which actions to take and the inputs to pass them.\n",
"After executing actions, the results can be fed back into the LLM to determine whether more actions are needed, or whether it is okay to finish.\n",
"\n",
"In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n",
"In this tutorial we will build an agent that can interact with a search engine. You will be able to ask this agent questions, watch it call the search tool, and have conversations with it.\n",
"\n",
"\n",
"## Concepts\n",
"\n",
"Concepts we will cover are:\n",
"- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n",
"- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n",
"- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n",
"- Using [LangGraph Agents](/docs/concepts/#agents) which use an LLM to think about what to do and then execute upon that\n",
"- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n",
"In following this tutorial, you will learn how to:\n",
"\n",
"## Setup\n",
"- Use [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n",
"- Use a Search [Tool](/docs/concepts/#tools) to look up information from the Internet\n",
"- Compose a [LangGraph Agent](/docs/concepts/#agents), which use an LLM to determine actions and then execute them\n",
"- Debug and trace your application using [LangSmith](/docs/concepts/#langsmith)\n",
"\n",
"### Jupyter Notebook\n",
"## End-to-end agent\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n",
"The code snippet below represents a fully functional agent that uses an LLM to decide which tools to use. It is equipped with a generic search tool. It has conversational memory - meaning that it can be used as a multi-turn chatbot.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"In the rest of the guide, we will walk through the individual components and what each part does - but if you want to just grab some code and get started, feel free to use this!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a79bb782",
"metadata": {},
"outputs": [],
"source": [
"# Import relevant functionality\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_core.messages import HumanMessage\n",
"from langgraph.checkpoint.sqlite import SqliteSaver\n",
"from langgraph.prebuilt import chat_agent_executor\n",
"\n",
"### Installation\n",
"# Create the agent\n",
"memory = SqliteSaver.from_conn_string(\":memory:\")\n",
"model = ChatAnthropic(model_name=\"claude-3-sonnet-20240229\")\n",
"search = TavilySearchResults(max_results=2)\n",
"tools = [search]\n",
"agent_executor = chat_agent_executor.create_tool_calling_executor(\n",
" model, tools, checkpointer=memory\n",
")\n",
"\n",
"To install LangChain run:\n",
"# Use the agent\n",
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"hi im bob! and i live in sf\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")\n",
"\n",
"```{=mdx}\n",
"import Tabs from '@theme/Tabs';\n",
"import TabItem from '@theme/TabItem';\n",
"import CodeBlock from \"@theme/CodeBlock\";\n",
"\n",
"<Tabs>\n",
" <TabItem value=\"pip\" label=\"Pip\" default>\n",
" <CodeBlock language=\"bash\">pip install langchain</CodeBlock>\n",
" </TabItem>\n",
" <TabItem value=\"conda\" label=\"Conda\">\n",
" <CodeBlock language=\"bash\">conda install langchain -c conda-forge</CodeBlock>\n",
" </TabItem>\n",
"</Tabs>\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats the weather where I live?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
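{
"cell_type": "markdown",
"id": "c5f6b2a0",
"metadata": {},
"source": [
"Each streamed chunk is keyed by the graph node that produced it (`agent`, and the tool-execution node). If you only care about the agent's reply text, a minimal sketch - assuming the `{'agent': {'messages': [...]}}` chunk shape shown later in this guide - looks like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1e8f3b7",
"metadata": {},
"outputs": [],
"source": [
"# A sketch: print only the agent's reply text from each streamed chunk.\n",
"# Assumes chunks look like {\"agent\": {\"messages\": [AIMessage(...)]}}.\n",
"for chunk in agent_executor.stream(\n",
"    {\"messages\": [HumanMessage(content=\"what was my name again?\")]}, config\n",
"):\n",
"    if \"agent\" in chunk:\n",
"        print(chunk[\"agent\"][\"messages\"][-1].content)"
]
},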
{
"cell_type": "markdown",
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"```\n",
"### Jupyter Notebook\n",
"\n",
"This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect interactive environments for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc), and observing these cases is a great way to better understand building with LLMs.\n",
"\n",
"This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.\n",
"\n",
"### Installation\n",
"\n",
"To install LangChain run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60bb3eb1",
"metadata": {},
"outputs": [],
"source": [
"% pip install -U langchain-community langgraph langchain-anthropic"
]
},
{
"cell_type": "markdown",
"id": "2ee337ae",
"metadata": {},
"source": [
"For more details, see our [Installation guide](/docs/how_to/installation).\n",
"\n",
"### LangSmith\n",
@@ -86,7 +135,25 @@
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
"```\n"
"```\n",
"\n",
"### Tavily\n",
"\n",
"We will be using [Tavily](/docs/integrations/tools/tavily_search) (a search engine) as a tool.\n",
"In order to use it, you will need to get and set an API key:\n",
"\n",
"```bash\n",
"export TAVILY_API_KEY=\"...\"\n",
"```\n",
"\n",
"Or, if in a notebook, you can set it with:\n",
"\n",
"```python\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"TAVILY_API_KEY\"] = getpass.getpass()\n",
"```"
]
},
{
@@ -96,23 +163,12 @@
"source": [
"## Define tools\n",
"\n",
"We first need to create the tools we want to use. We will use two tools: [Tavily](/docs/integrations/tools/tavily_search) (to search online) and then a retriever over a local index we will create\n",
"\n",
"### [Tavily](/docs/integrations/tools/tavily_search)\n",
"\n",
"We have a built-in tool in LangChain to easily use Tavily search engine as tool.\n",
"Note that this requires an API key - they have a free tier, but if you don't have one or don't want to create one, you can always ignore this step.\n",
"\n",
"Once you create your API key, you will need to export that as:\n",
"\n",
"```bash\n",
"export TAVILY_API_KEY=\"...\"\n",
"```"
"We first need to create the tools we want to use. Our main tool of choice will be [Tavily](/docs/integrations/tools/tavily_search) - a search engine. We have a built-in tool in LangChain to easily use Tavily search engine as tool.\n"
]
},
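{
"cell_type": "markdown",
"id": "a3c1d5e9",
"metadata": {},
"source": [
"For reference, creating and calling the search tool looks roughly like the corresponding lines in the end-to-end snippet above (this is a sketch; the cells below show the real invocation and its output):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7e2c4f1",
"metadata": {},
"outputs": [],
"source": [
"# Rough sketch of the tool setup, mirroring the end-to-end snippet above.\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"\n",
"search = TavilySearchResults(max_results=2)\n",
"search.invoke(\"what is the weather in SF\")"
]
},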
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "482ce13d",
"metadata": {},
"outputs": [],
@@ -122,7 +178,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "9cc86c0b",
"metadata": {},
"outputs": [],
@@ -132,20 +188,20 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "e593bbf6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'url': 'https://weather.com/weather/tenday/l/San Francisco CA USCA0987:1:US',\n",
" 'content': \"Comfy & Cozy\\nThat's Not What Was Expected\\nOutside\\n'No-Name Storms' In Florida\\nGifts From On High\\nWhat To Do For Wheezing\\nSurviving The Season\\nStay Safe\\nAir Quality Index\\nAir quality is considered satisfactory, and air pollution poses little or no risk.\\n Health & Activities\\nSeasonal Allergies and Pollen Count Forecast\\nNo pollen detected in your area\\nCold & Flu Forecast\\nFlu risk is low in your area\\nWe recognize our responsibility to use data and technology for good. recents\\nSpecialty Forecasts\\n10 Day Weather-San Francisco, CA\\nToday\\nMon 18 | Day\\nConsiderable cloudiness. Tue 19\\nTue 19 | Day\\nLight rain early...then remaining cloudy with showers in the afternoon. Wed 27\\nWed 27 | Day\\nOvercast with rain showers at times.\"},\n",
" {'url': 'https://www.accuweather.com/en/us/san-francisco/94103/hourly-weather-forecast/347629',\n",
" 'content': 'Hourly weather forecast in San Francisco, CA. Check current conditions in San Francisco, CA with radar, hourly, and more.'}]"
"[{'url': 'https://www.weatherapi.com/',\n",
" 'content': \"{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1716929532, 'localtime': '2024-05-28 13:52'}, 'current': {'last_updated_epoch': 1716929100, 'last_updated': '2024-05-28 13:45', 'temp_c': 16.7, 'temp_f': 62.1, 'is_day': 1, 'condition': {'text': 'Partly cloudy', 'icon': '//cdn.weatherapi.com/weather/64x64/day/116.png', 'code': 1003}, 'wind_mph': 12.5, 'wind_kph': 20.2, 'wind_degree': 270, 'wind_dir': 'W', 'pressure_mb': 1019.0, 'pressure_in': 30.09, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 62, 'cloud': 25, 'feelslike_c': 16.7, 'feelslike_f': 62.1, 'windchill_c': 13.1, 'windchill_f': 55.6, 'heatindex_c': 14.5, 'heatindex_f': 58.2, 'dewpoint_c': 9.1, 'dewpoint_f': 48.4, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 4.0, 'gust_mph': 14.4, 'gust_kph': 23.2}}\"},\n",
" {'url': 'https://weatherspark.com/h/m/557/2024/5/Historical-Weather-in-May-2024-in-San-Francisco-California-United-States',\n",
" 'content': 'San Francisco Temperature History May 2024. The daily range of reported temperatures (gray bars) and 24-hour highs (red ticks) and lows (blue ticks), placed over the daily average high (faint red line) and low (faint blue line) temperature, with 25th to 75th and 10th to 90th percentile bands.'}]"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -154,108 +210,22 @@
"search.invoke(\"what is the weather in SF\")"
]
},
{
"cell_type": "markdown",
"id": "e8097977",
"metadata": {},
"source": [
"### Retriever\n",
"\n",
"We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/docs/tutorials/rag)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9c9ce713",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
"docs = loader.load()\n",
"documents = RecursiveCharacterTextSplitter(\n",
" chunk_size=1000, chunk_overlap=200\n",
").split_documents(docs)\n",
"vector = FAISS.from_documents(documents, OpenAIEmbeddings())\n",
"retriever = vector.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "dae53ec6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='import Clientfrom langsmith.evaluation import evaluateclient = Client()# Define dataset: these are your test casesdataset_name = \"Sample Dataset\"dataset = client.create_dataset(dataset_name, description=\"A sample dataset in LangSmith.\")client.create_examples( inputs=[ {\"postfix\": \"to LangSmith\"}, {\"postfix\": \"to Evaluations in LangSmith\"}, ], outputs=[ {\"output\": \"Welcome to LangSmith\"}, {\"output\": \"Welcome to Evaluations in LangSmith\"}, ], dataset_id=dataset.id,)# Define your evaluatordef exact_match(run, example): return {\"score\": run.outputs[\"output\"] == example.outputs[\"output\"]}experiment_results = evaluate( lambda input: \"Welcome \" + input[\\'postfix\\'], # Your AI system goes here data=dataset_name, # The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix=\"sample-experiment\", # The name of the experiment metadata={ \"version\": \"1.0.0\", \"revision_id\":', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | 🦜️🛠️ LangSmith', 'description': 'Introduction', 'language': 'en'})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"how to upload a dataset\")[0]"
]
},
{
"cell_type": "markdown",
"id": "04aeca39",
"metadata": {},
"source": [
"Now that we have populated our index that we will do doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "117594b5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools.retriever import create_retriever_tool"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7280b031",
"metadata": {},
"outputs": [],
"source": [
"retriever_tool = create_retriever_tool(\n",
" retriever,\n",
" \"langsmith_search\",\n",
" \"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c3b47c1d",
"metadata": {},
"source": [
"### Tools\n",
"\n",
"Now that we have created both, we can create a list of tools that we will use downstream."
"If we want, we can create other tools. Once we have all the tools we want, we can put them in a list that we will reference later."
]
},
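{
"cell_type": "markdown",
"id": "e9d7a2c3",
"metadata": {},
"source": [
"As an illustration, a custom tool can be defined with the `tool` decorator from `langchain_core`. The `get_word_length` tool below is purely hypothetical - this tutorial does not use it - but it shows the general shape:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4b8c6d2",
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical extra tool, defined with the @tool decorator.\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_word_length(word: str) -> int:\n",
"    \"\"\"Return the number of characters in a word.\"\"\"\n",
"    return len(word)"
]
},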
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 4,
"id": "b8e8e710",
"metadata": {},
"outputs": [],
"source": [
"tools = [search, retriever_tool]"
"tools = [search]"
]
},
{
@@ -276,7 +246,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 5,
"id": "69185491",
"metadata": {},
"outputs": [],
@@ -284,9 +254,9 @@
"# | output: false\n",
"# | echo: false\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4\")"
"model = ChatAnthropic(model=\"claude-3-sonnet-20240229\")"
]
},
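{
"cell_type": "markdown",
"id": "9a4e7f21",
"metadata": {},
"source": [
"The cells that follow call the model directly and then bind the tools to it so the model knows their schemas. Roughly, that flow looks like the sketch below (the exact query and printed values will differ from the outputs shown in the following cells):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c2d8b3e",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of calling the model with the tools bound to it.\n",
"from langchain_core.messages import HumanMessage\n",
"\n",
"model_with_tools = model.bind_tools(tools)\n",
"\n",
"response = model_with_tools.invoke([HumanMessage(content=\"whats the weather in sf?\")])\n",
"print(f\"ContentString: {response.content}\")\n",
"print(f\"ToolCalls: {response.tool_calls}\")"
]
},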
{
@@ -299,7 +269,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 6,
"id": "c96c960b",
"metadata": {},
"outputs": [
@@ -309,7 +279,7 @@
"'Hello! How can I assist you today?'"
]
},
"execution_count": 11,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -331,7 +301,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 7,
"id": "ba692a74",
"metadata": {},
"outputs": [],
@@ -349,7 +319,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 8,
"id": "b6a7e925",
"metadata": {},
"outputs": [
@@ -379,7 +349,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 9,
"id": "688b465d",
"metadata": {},
"outputs": [
@@ -388,7 +358,7 @@
"output_type": "stream",
"text": [
"ContentString: \n",
"ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in SF'}, 'id': 'call_nfE1XbCqZ8eJsB8rNdn4MQZQ'}]\n"
"ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_BjPOvStlyv61w24VkHQ4pqFG'}]\n"
]
}
],
@@ -432,7 +402,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 10,
"id": "89cf72b4-6046-4b47-8f27-5522d8cb8036",
"metadata": {},
"outputs": [],
@@ -456,18 +426,18 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 11,
"id": "114ba50d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', id='1535b889-10a5-45d0-a1e1-dd2e60d4bc04'),\n",
" AIMessage(content='Hello! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 129, 'total_tokens': 139}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2c94c074-bdc9-4f01-8fd7-71cfc4777d55-0')]"
"[HumanMessage(content='hi!', id='acd18479-7e70-4114-a293-c5233736c1d5'),\n",
" AIMessage(content='Hello! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 83, 'total_tokens': 93}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ebfca269-5cb2-47c1-8987-a24acf0b5015-0', usage_metadata={'input_tokens': 83, 'output_tokens': 10, 'total_tokens': 93})]"
]
},
"execution_count": 16,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -485,66 +455,25 @@
"source": [
"In order to see exactly what is happening under the hood (and to make sure it's not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/28311faa-e135-4d6a-ab6b-caecf6482aaa/r)\n",
"\n",
"Let's now try it out on an example where it should be invoking the retriever"
"Let's now try it out on an example where it should be invoking the tool"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "3fa4780a",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='how can langsmith help with testing?', id='04f4fe8f-391a-427c-88af-1fa064db304c'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_FNIgdO97wo51sKx3XZOGLHqT', 'function': {'arguments': '{\\n \"query\": \"how can LangSmith help with testing\"\\n}', 'name': 'langsmith_search'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 135, 'total_tokens': 157}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-51f6ea92-84e1-43a5-b1f2-bc0c12d8613f-0', tool_calls=[{'name': 'langsmith_search', 'args': {'query': 'how can LangSmith help with testing'}, 'id': 'call_FNIgdO97wo51sKx3XZOGLHqT'}]),\n",
" ToolMessage(content=\"Getting started with LangSmith | 🦜️🛠️ LangSmith\\n\\nSkip to main contentLangSmith API DocsSearchGo to AppQuick StartUser GuideTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyPricingSelf-HostingCookbookQuick StartOn this pageGetting started with LangSmithIntroduction\\u200bLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!Install LangSmith\\u200bWe offer Python and Typescript SDKs for all your LangSmith needs.PythonTypeScriptpip install -U langsmithyarn add langchain langsmithCreate an API key\\u200bTo create an API key head to the setting pages. Then click Create API Key.Setup your environment\\u200bShellexport LANGCHAIN_TRACING_V2=trueexport LANGCHAIN_API_KEY=<your-api-key># The below examples use the OpenAI API, though it's not necessary in generalexport OPENAI_API_KEY=<your-openai-api-key>Log your first trace\\u200bWe provide multiple ways to log traces\\n\\nLearn about the workflows LangSmith supports at each stage of the LLM application lifecycle.Pricing: Learn about the pricing model for LangSmith.Self-Hosting: Learn about self-hosting options for LangSmith.Proxy: Learn about the proxy capabilities of LangSmith.Tracing: Learn about the tracing capabilities of LangSmith.Evaluation: Learn about the evaluation capabilities of LangSmith.Prompt Hub Learn about the Prompt Hub, a prompt management tool built into LangSmith.Additional Resources\\u200bLangSmith Cookbook: A collection of tutorials and end-to-end walkthroughs using LangSmith.LangChain Python: Docs for the Python LangChain library.LangChain Python API Reference: documentation to review the core APIs of LangChain.LangChain JS: Docs for the TypeScript LangChain libraryDiscord: Join us on our Discord to discuss all things LangChain!FAQ\\u200bHow do I migrate projects between organizations?\\u200bCurrently we do not support project migration betwen organizations. While you can manually imitate this by\\n\\nteam deals with sensitive data that cannot be logged. How can I ensure that only my team can access it?\\u200bIf you are interested in a private deployment of LangSmith or if you need to self-host, please reach out to us at sales@langchain.dev. Self-hosting LangSmith requires an annual enterprise license that also comes with support and formalized access to the LangChain team.Was this page helpful?NextUser GuideIntroductionInstall LangSmithCreate an API keySetup your environmentLog your first traceCreate your first evaluationNext StepsAdditional ResourcesFAQHow do I migrate projects between organizations?Why aren't my runs aren't showing up in my project?My team deals with sensitive data that cannot be logged. How can I ensure that only my team can access it?CommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.\", name='langsmith_search', id='f286c7e7-6514-4621-ac60-e4079b37ebe2', tool_call_id='call_FNIgdO97wo51sKx3XZOGLHqT'),\n",
" AIMessage(content=\"LangSmith is a platform that can significantly aid in testing by offering several features:\\n\\n1. **Tracing**: LangSmith provides robust tracing capabilities that enable you to monitor your application closely. This feature is particularly useful for tracking the behavior of your application and identifying any potential issues.\\n\\n2. **Evaluation**: LangSmith allows you to perform comprehensive evaluations of your application. This can help you assess the performance of your application under various conditions and make necessary adjustments to enhance its functionality.\\n\\n3. **Production Monitoring & Automations**: With LangSmith, you can keep a close eye on your application when it's in active use. The platform provides tools for automatic monitoring and managing routine tasks, helping to ensure your application runs smoothly.\\n\\n4. **Prompt Hub**: It's a prompt management tool built into LangSmith. This feature can be instrumental when testing various prompts in your application.\\n\\nOverall, LangSmith helps you build production-grade LLM applications with confidence, providing necessary tools for monitoring, evaluation, and automation.\", response_metadata={'token_usage': {'completion_tokens': 200, 'prompt_tokens': 782, 'total_tokens': 982}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-4b80db7e-9a26-4043-8b6b-922f847f9c80-0')]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = agent_executor.invoke(\n",
" {\"messages\": [HumanMessage(content=\"how can langsmith help with testing?\")]}\n",
")\n",
"response[\"messages\"]"
]
},
{
"cell_type": "markdown",
"id": "f2d94242",
"metadata": {},
"source": [
"Let's take a look at the [LangSmith trace](https://smith.langchain.com/public/853f62d0-3421-4dba-b30a-7277ce2bdcdf/r) to see what is going on under the hood.\n",
"\n",
"Note that the state we get back at the end also contains the tool call and the tool response message.\n",
"\n",
"Now let's try one where it needs to call the search tool:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 12,
"id": "77c2f769",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='whats the weather in sf?', id='e6b716e6-da57-41de-a227-fee281fda588'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_TGDKm0saxuGKJD5OYOXWRvLe', 'function': {'arguments': '{\\n \"query\": \"current weather in San Francisco\"\\n}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 134, 'total_tokens': 157}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-fd7d5854-2eab-4fca-ad9e-b3de8d587614-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_TGDKm0saxuGKJD5OYOXWRvLe'}]),\n",
" ToolMessage(content='[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1714426800, \\'localtime\\': \\'2024-04-29 14:40\\'}, \\'current\\': {\\'last_updated_epoch\\': 1714426200, \\'last_updated\\': \\'2024-04-29 14:30\\', \\'temp_c\\': 17.8, \\'temp_f\\': 64.0, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Sunny\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/113.png\\', \\'code\\': 1000}, \\'wind_mph\\': 23.0, \\'wind_kph\\': 37.1, \\'wind_degree\\': 290, \\'wind_dir\\': \\'WNW\\', \\'pressure_mb\\': 1019.0, \\'pressure_in\\': 30.09, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 50, \\'cloud\\': 0, \\'feelslike_c\\': 17.8, \\'feelslike_f\\': 64.0, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 5.0, \\'gust_mph\\': 27.5, \\'gust_kph\\': 44.3}}\"}, {\"url\": \"https://www.wunderground.com/hourly/us/ca/san-francisco/94125/date/2024-4-29\", \"content\": \"Current Weather for Popular Cities . San Francisco, CA warning 59 \\\\u00b0 F Mostly Cloudy; Manhattan, NY 56 \\\\u00b0 F Fair; Schiller Park, IL (60176) warning 58 \\\\u00b0 F Mostly Cloudy; Boston, MA 52 \\\\u00b0 F Sunny ...\"}]', name='tavily_search_results_json', id='aa0d8c3d-23b5-425a-ad05-3c174fc04892', tool_call_id='call_TGDKm0saxuGKJD5OYOXWRvLe'),\n",
" AIMessage(content='The current weather in San Francisco, California is sunny with a temperature of 64.0°F (17.8°C). The wind is coming from the WNW at a speed of 23.0 mph. The humidity level is at 50%. There is no precipitation and the cloud cover is 0%. The visibility is 16.0 km. The UV index is 5.0. Please note that this information is as of 14:30 on April 29, 2024, according to [Weather API](https://www.weatherapi.com/).', response_metadata={'token_usage': {'completion_tokens': 117, 'prompt_tokens': 620, 'total_tokens': 737}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2359b41b-cab6-40c3-b6d9-7bdf7195a601-0')]"
"[HumanMessage(content='whats the weather in sf?', id='880db162-5d1c-476c-82dd-b125caee1656'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_i3ZKnTDPB1RxqwE6PWmgz5TQ', 'function': {'arguments': '{\\n \"query\": \"current weather in San Francisco\"\\n}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 88, 'total_tokens': 111}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-93b6be79-c981-4b7b-8f0a-252255f23961-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_i3ZKnTDPB1RxqwE6PWmgz5TQ'}], usage_metadata={'input_tokens': 88, 'output_tokens': 23, 'total_tokens': 111}),\n",
" ToolMessage(content='[{\"url\": \"https://www.weatherapi.com/\", \"content\": \"{\\'location\\': {\\'name\\': \\'San Francisco\\', \\'region\\': \\'California\\', \\'country\\': \\'United States of America\\', \\'lat\\': 37.78, \\'lon\\': -122.42, \\'tz_id\\': \\'America/Los_Angeles\\', \\'localtime_epoch\\': 1716929532, \\'localtime\\': \\'2024-05-28 13:52\\'}, \\'current\\': {\\'last_updated_epoch\\': 1716929100, \\'last_updated\\': \\'2024-05-28 13:45\\', \\'temp_c\\': 16.7, \\'temp_f\\': 62.1, \\'is_day\\': 1, \\'condition\\': {\\'text\\': \\'Partly cloudy\\', \\'icon\\': \\'//cdn.weatherapi.com/weather/64x64/day/116.png\\', \\'code\\': 1003}, \\'wind_mph\\': 12.5, \\'wind_kph\\': 20.2, \\'wind_degree\\': 270, \\'wind_dir\\': \\'W\\', \\'pressure_mb\\': 1019.0, \\'pressure_in\\': 30.09, \\'precip_mm\\': 0.0, \\'precip_in\\': 0.0, \\'humidity\\': 62, \\'cloud\\': 25, \\'feelslike_c\\': 16.7, \\'feelslike_f\\': 62.1, \\'windchill_c\\': 13.1, \\'windchill_f\\': 55.6, \\'heatindex_c\\': 14.5, \\'heatindex_f\\': 58.2, \\'dewpoint_c\\': 9.1, \\'dewpoint_f\\': 48.4, \\'vis_km\\': 16.0, \\'vis_miles\\': 9.0, \\'uv\\': 4.0, \\'gust_mph\\': 14.4, \\'gust_kph\\': 23.2}}\"}, {\"url\": \"https://forecast.weather.gov/MapClick.php?lat=37.7772&lon=-122.4168\", \"content\": \"Current conditions at SAN FRANCISCO DOWNTOWN (SFOC1) Lat: 37.77056\\\\u00b0NLon: 122.42694\\\\u00b0WElev: 150.0ft. NA. 52\\\\u00b0F. 11\\\\u00b0C. Humidity: 90%: ... 2am PDT May 28, 2024-6pm PDT Jun 3, 2024 . ... Radar & Satellite Image. Hourly Weather Forecast. National Digital Forecast Database. High Temperature. Chance of Precipitation. ACTIVE ALERTS Toggle menu ...\"}]', name='tavily_search_results_json', id='302dfc48-60bc-4db5-815a-2e97b8a95607', tool_call_id='call_i3ZKnTDPB1RxqwE6PWmgz5TQ'),\n",
" AIMessage(content='The current weather in San Francisco, California is partly cloudy with a temperature of 16.7°C (62.1°F). The wind is coming from the west at a speed of 20.2 kph (12.5 mph). The humidity is at 62%. [Source](https://www.weatherapi.com/)', response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 691, 'total_tokens': 758}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-953864dd-9af6-48aa-bc61-8b63388fca03-0', usage_metadata={'input_tokens': 691, 'output_tokens': 67, 'total_tokens': 758})]"
]
},
"execution_count": 18,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -764,6 +693,38 @@
"Example [LangSmith trace](https://smith.langchain.com/public/fa73960b-0f7d-4910-b73d-757a12f33b2b/r)"
]
},
{
"cell_type": "markdown",
"id": "ae908088",
"metadata": {},
"source": [
"If I want to start a new conversation, all I have to do is change the `thread_id` used"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "24460239",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content=\"As an AI, I don't have access to personal data about individuals unless it has been shared with me in the course of our conversation. I am designed to respect user privacy and confidentiality. So, I don't know your name.\", response_metadata={'token_usage': {'completion_tokens': 48, 'prompt_tokens': 86, 'total_tokens': 134}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b3c8d577-fdbf-4f0f-8fd8-ecb3a5ac8920-0', usage_metadata={'input_tokens': 86, 'output_tokens': 48, 'total_tokens': 134})]}}\n",
"----\n"
]
}
],
"source": [
"config = {\"configurable\": {\"thread_id\": \"xyz123\"}}\n",
"for chunk in agent_executor.stream(\n",
" {\"messages\": [HumanMessage(content=\"whats my name?\")]}, config\n",
"):\n",
" print(chunk)\n",
" print(\"----\")"
]
},
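{
"cell_type": "markdown",
"id": "1f6a9c84",
"metadata": {},
"source": [
"And if we switch back to the original `thread_id` (`\"abc123\"`), the checkpointer restores that conversation's state, so the agent should still remember the earlier exchange. A quick sketch (the exact wording of the reply will vary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d3b5e07",
"metadata": {},
"outputs": [],
"source": [
"# Reusing the first thread_id restores that conversation from the checkpointer.\n",
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n",
"for chunk in agent_executor.stream(\n",
"    {\"messages\": [HumanMessage(content=\"whats my name?\")]}, config\n",
"):\n",
"    print(chunk)\n",
"    print(\"----\")"
]
},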
{
"cell_type": "markdown",
"id": "c029798f",
@@ -804,7 +765,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.1"
"version": "3.10.1"
}
},
"nbformat": 4,
