diff --git a/docs/docs/modules/agents/quick_start.ipynb b/docs/docs/modules/agents/quick_start.ipynb index bcdf9adaff..887a857b60 100644 --- a/docs/docs/modules/agents/quick_start.ipynb +++ b/docs/docs/modules/agents/quick_start.ipynb @@ -78,26 +78,26 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 24, "id": "e593bbf6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "[{'url': 'https://www.whereandwhen.net/when/north-america/california/san-francisco-ca/january/',\n", - " 'content': 'Best time to go to San Francisco? Weather in San Francisco in january 2024 How was the weather last january? Here is the day by day recorded weather in San Francisco in january 2023: 8% 46% 29% 12% 8% Evolution of daily average temperature and precipitation in San Francisco in january Seasonal average climate and temperature of San Francisco in januaryWeather in San Francisco in january 2024. The weather in San Francisco in january comes from statistical datas on the past years. You can view the weather statistics the entire month, but also by using the tabs for the beginning, the middle and the end of the month. ... 08-01-2023 52°F to 58°F. 09-01-2023 54°F to 61°F. 10-01-2023 52°F to ...'},\n", - " {'url': 'https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/january-1/',\n", - " 'content': 'San Francisco Weather in January Data: 1999 - 2019: avg. Sun hours San Francisco weather and climate for further months San Francisco weather in January San Francisco weather by month // weather averages 9.6 (49.2) 6.2 (43.2) 14 (57.3) 113 you can find all information about the weather in San Francisco in January:Data: 1991 - 2021 Min. Temperature °C (°F), Max. Temperature °C (°F), Precipitation / Rainfall mm (in), Humidity, Rainy days. Data: 1999 - 2019: avg. Sun hours San Francisco weather and climate for further months San Francisco in February San Francisco in March San Francisco in April San Francisco in May San Francisco in June San Francisco in July'}]" + "[{'url': 'https://www.metoffice.gov.uk/weather/forecast/9q8yym8kr',\n", + " 'content': 'Thu 11 Jan Thu 11 Jan Seven day forecast for San Francisco San Francisco (United States of America) weather Find a forecast Sat 6 Jan Sat 6 Jan Sun 7 Jan Sun 7 Jan Mon 8 Jan Mon 8 Jan Tue 9 Jan Tue 9 Jan Wed 10 Jan Wed 10 Jan Thu 11 Jan Find a forecast Please choose your location from the nearest places to : Forecast days Today Today Sat 6 Jan Sat 6 JanSan Francisco 7 day weather forecast including weather warnings, temperature, rain, wind, visibility, humidity and UV ... (11 January 2024) Time 00:00 01:00 02:00 03:00 04:00 05:00 06:00 07:00 08:00 09:00 10:00 11:00 12:00 ... Oakland Int. 11.5 miles; San Francisco International 11.5 miles; Corte Madera 12.3 miles; Redwood City 23.4 miles;'},\n", + " {'url': 'https://www.latimes.com/travel/story/2024-01-11/east-brother-light-station-lighthouse-california',\n", + " 'content': \"May 18, 2023 Jan. 4, 2024 Subscribe for unlimited accessSite Map Follow Us MORE FROM THE L.A. TIMES Jan. 8, 2024 Travel & Experiences This must be Elysian Valley (a.k.a. Frogtown) Jan. 5, 2024 Food June 30, 2023The East Brother Light Station in the San Francisco Bay is not a destination for everyone. ... Jan. 11, 2024 3 AM PT ... 
Champagne and hors d'oeuvres are served in late afternoon — outdoors if ...\"}]" ] }, - "execution_count": 3, + "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "search.run(\"what is the weather in SF\")" + "search.invoke(\"what is the weather in SF\")" ] }, { @@ -140,7 +140,7 @@ { "data": { "text/plain": [ - "Document(page_content=\"dataset uploading.Once we have a dataset, how can we use it to test changes to a prompt or chain? The most basic approach is to run the chain over the data points and visualize the outputs. Despite technological advancements, there still is no substitute for looking at outputs by eye. Currently, running the chain over the data points needs to be done client-side. The LangSmith client makes it easy to pull down a dataset and then run a chain over them, logging the results to a new project associated with the dataset. From there, you can review them. We've made it easy to assign feedback to runs and mark them as correct or incorrect directly in the web app, displaying aggregate statistics for each test project.We also make it easier to evaluate these runs. To that end, we've added a set of evaluators to the open-source LangChain library. These evaluators can be specified when initiating a test run and will evaluate the results once the test run completes. If we‚Äôre being honest, most\", metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'})" + "Document(page_content=\"dataset uploading.Once we have a dataset, how can we use it to test changes to a prompt or chain? The most basic approach is to run the chain over the data points and visualize the outputs. Despite technological advancements, there still is no substitute for looking at outputs by eye. Currently, running the chain over the data points needs to be done client-side. The LangSmith client makes it easy to pull down a dataset and then run a chain over them, logging the results to a new project associated with the dataset. From there, you can review them. We've made it easy to assign feedback to runs and mark them as correct or incorrect directly in the web app, displaying aggregate statistics for each test project.We also make it easier to evaluate these runs. To that end, we've added a set of evaluators to the open-source LangChain library. These evaluators can be specified when initiating a test run and will evaluate the results once the test run completes. If we’re being honest, most of\", metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', 'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'})" ] }, "execution_count": 5, @@ -245,12 +245,27 @@ "execution_count": 10, "id": "af83d3e3", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/plain": [ + "[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')),\n", + " MessagesPlaceholder(variable_name='chat_history', optional=True),\n", + " HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),\n", + " MessagesPlaceholder(variable_name='agent_scratchpad')]" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ "from langchain import hub\n", "\n", "# Get the prompt to use - you can modify this!\n", - "prompt = hub.pull(\"hwchase17/openai-functions-agent\")" + "prompt = hub.pull(\"hwchase17/openai-functions-agent\")\n", + "prompt.messages" ] }, { @@ -338,7 +353,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 14, "id": "3fa4780a", "metadata": { "scrolled": true @@ -355,19 +370,17 @@ "Invoking: `langsmith_search` with `{'query': 'LangSmith testing'}`\n", "\n", "\n", - "\u001b[0m\u001b[33;1m\u001b[1;3m[Document(page_content='LangSmith Overview and User Guide | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'}), Document(page_content='Skip to main content\\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default‚ÄãAt LangChain, all of us have LangSmith‚Äôs tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'}), Document(page_content=\"applications can be expensive. LangSmith tracks the total token usage for a chain and the token usage of each step. 
This makes it easy to identify potentially costly parts of the chain.Collaborative debugging‚ÄãIn the past, sharing a faulty chain with a colleague for debugging was challenging when performed locally. With LangSmith, we've added a ‚ÄúShare‚Äù button that makes the chain and LLM runs accessible to anyone with the shared link.Collecting examples‚ÄãMost of the time we go to debug, it's because something bad or unexpected outcome has happened in our application. These failures are valuable data points! By identifying how our chain can fail and monitoring these failures, we can test future chain versions against these known issues.Why is this so impactful? When building LLM applications, it‚Äôs often common to start without a dataset of any kind. This is part of the power of LLMs! They are amazing zero-shot learners, making it possible to get started as easily as possible.\", metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'}), Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring‚ÄãAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We‚Äôve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing ‚Äî mirroring', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'})]\u001b[0m\u001b[32;1m\u001b[1;3mLangSmith can help with testing in several ways:\n", - "\n", - "1. **Tracing**: LangSmith provides tracing capabilities that allow you to track the total token usage for a chain and the token usage of each step. This makes it easy to identify potentially costly parts of the chain during testing.\n", + "\u001b[0m\u001b[33;1m\u001b[1;3m[Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', 'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'}), Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default\\u200bAt LangChain, all of us have LangSmith’s tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'}), Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'}), Document(page_content='inputs, and see what happens. At some point though, our application is performing\\nwell and we want to be more rigorous about testing changes. We can use a dataset\\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\\ntime constructing a small dataset by hand. 
For these situations, LangSmith simplifies', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', 'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en'})]\u001b[0m\u001b[32;1m\u001b[1;3mLangSmith can help with testing in several ways. Here are some ways LangSmith can assist with testing:\n", "\n", - "2. **Collaborative debugging**: LangSmith simplifies the process of sharing a faulty chain with a colleague for debugging. It has a \"Share\" button that makes the chain and language model runs accessible to anyone with the shared link, making collaboration and debugging more efficient.\n", + "1. Tracing: LangSmith provides tracing capabilities that can be used to monitor and debug your application during testing. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\n", "\n", - "3. **Collecting examples**: When testing LLM (Language Model) applications, failures and unexpected outcomes are valuable data points. LangSmith helps in identifying how a chain can fail and monitoring these failures. By testing future chain versions against known issues, you can improve the reliability and performance of your application.\n", + "2. Evaluation: LangSmith allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets. This can help you test and fine-tune your models for improved quality or reduced costs.\n", "\n", - "4. **Editing examples and expanding evaluation sets**: LangSmith allows you to quickly edit examples and add them to datasets. This helps in expanding the surface area of your evaluation sets or fine-tuning a model for improved quality or reduced costs during testing.\n", + "3. Monitoring: Once your application is ready for production, LangSmith can be used to monitor your application. You can log feedback programmatically with runs, track performance over time, and pinpoint underperforming data points. This information can be used to improve your application and add to datasets for future testing.\n", "\n", - "5. **Monitoring**: LangSmith can be used to monitor your application in production. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can be assigned string tags or key-value metadata, allowing you to attach correlation IDs or AB test variants and filter runs accordingly. You can also associate feedback programmatically with runs, track performance over time, and pinpoint underperforming data points.\n", + "4. Rigorous Testing: When your application is performing well and you want to be more rigorous about testing changes, LangSmith can simplify the process. 
You can use existing datasets or construct small datasets by hand to test different scenarios and evaluate the performance of your application.\n", "\n", - "These features of LangSmith make it a valuable tool for testing and evaluating the performance of LLM applications.\u001b[0m\n", + "For more detailed information on how to use LangSmith for testing, you can refer to the [LangSmith Overview and User Guide](https://docs.smith.langchain.com/overview).\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] @@ -376,10 +389,10 @@ "data": { "text/plain": [ "{'input': 'how can langsmith help with testing?',\n", - " 'output': 'LangSmith can help with testing in several ways:\\n\\n1. **Tracing**: LangSmith provides tracing capabilities that allow you to track the total token usage for a chain and the token usage of each step. This makes it easy to identify potentially costly parts of the chain during testing.\\n\\n2. **Collaborative debugging**: LangSmith simplifies the process of sharing a faulty chain with a colleague for debugging. It has a \"Share\" button that makes the chain and language model runs accessible to anyone with the shared link, making collaboration and debugging more efficient.\\n\\n3. **Collecting examples**: When testing LLM (Language Model) applications, failures and unexpected outcomes are valuable data points. LangSmith helps in identifying how a chain can fail and monitoring these failures. By testing future chain versions against known issues, you can improve the reliability and performance of your application.\\n\\n4. **Editing examples and expanding evaluation sets**: LangSmith allows you to quickly edit examples and add them to datasets. This helps in expanding the surface area of your evaluation sets or fine-tuning a model for improved quality or reduced costs during testing.\\n\\n5. **Monitoring**: LangSmith can be used to monitor your application in production. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can be assigned string tags or key-value metadata, allowing you to attach correlation IDs or AB test variants and filter runs accordingly. You can also associate feedback programmatically with runs, track performance over time, and pinpoint underperforming data points.\\n\\nThese features of LangSmith make it a valuable tool for testing and evaluating the performance of LLM applications.'}" + " 'output': 'LangSmith can help with testing in several ways. Here are some ways LangSmith can assist with testing:\\n\\n1. Tracing: LangSmith provides tracing capabilities that can be used to monitor and debug your application during testing. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\\n\\n2. Evaluation: LangSmith allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets. This can help you test and fine-tune your models for improved quality or reduced costs.\\n\\n3. Monitoring: Once your application is ready for production, LangSmith can be used to monitor your application. You can log feedback programmatically with runs, track performance over time, and pinpoint underperforming data points. This information can be used to improve your application and add to datasets for future testing.\\n\\n4. Rigorous Testing: When your application is performing well and you want to be more rigorous about testing changes, LangSmith can simplify the process. 
You can use existing datasets or construct small datasets by hand to test different scenarios and evaluate the performance of your application.\\n\\nFor more detailed information on how to use LangSmith for testing, you can refer to the [LangSmith Overview and User Guide](https://docs.smith.langchain.com/overview).'}" ] }, - "execution_count": 33, + "execution_count": 14, "metadata": {}, "output_type": "execute_result" } @@ -390,7 +403,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 15, "id": "77c2f769", "metadata": {}, "outputs": [ @@ -405,7 +418,7 @@ "Invoking: `tavily_search_results_json` with `{'query': 'weather in San Francisco'}`\n", "\n", "\n", - "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://weather.com/weather/tenday/l/San Francisco CA USCA0987:1:US', 'content': 'recents Specialty Forecasts 10 Day Weather-San Francisco, CA Today Mon 18 | Day Fri 22 Fri 22 | Day Foggy early, then partly cloudy later in the day. High around 60F. Winds W at 10 to 15 mph. Considerable cloudiness with occasional rain showers. High 59F. Winds SSE at 5 to 10 mph. Chance of rain 50%. Thu 28 | Night Cloudy with showers. Low 46F. Winds S at 5 to 10 mph. Chance of rain 40%. Fri 29 Fri 29 | DaySan Francisco, CA 10-Day Weather Forecast - The Weather Channel | Weather.com 10 Day Weather - San Francisco, CA As of 12:09 pm PST Today 60°/ 54° 23% Tue 19 | Day 60° 23% S 12 mph...'}, {'url': 'https://www.sfchronicle.com/weather/article/us-forecast-18572409.php', 'content': 'San Diego, CA;64;53;65;48;Mist in the morning;N;6;72%;41%;3 San Francisco, CA;58;45;56;43;Partly sunny;ESE;6;79%;1%;2 Juneau, AK;40;36;41;36;Breezy with rain;SSE;15;90%;99%;0 Kansas City, MO;61;57;60;38;Periods of rain;E;13;83%;100%;1 St. Louis, MO;61;52;67;54;A little p.m. rain;SE;11;73%;90%;1 Tampa, FL;78;60;77;67;Rather cloudy;E;9;76%;22%;2 Salt Lake City, UT;38;22;35;22;Low clouds;SE;6;74%;0%;1 San Antonio, TX;72;64;76;47;Rain and a t-storm;NW;9;71%;94%;2US Forecast for Sunday, December 24, 2023'}]\u001b[0m\u001b[32;1m\u001b[1;3mThe weather in San Francisco is currently partly cloudy with a high around 60°F. The wind is coming from the west at 10 to 15 mph. There is a chance of rain showers later in the day. The temperature is expected to drop to a low of 46°F tonight with cloudy skies and showers.\u001b[0m\n", + "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://www.whereandwhen.net/when/north-america/california/san-francisco-ca/january/', 'content': 'Best time to go to San Francisco? Weather in San Francisco in january 2024 How was the weather last january? Here is the day by day recorded weather in San Francisco in january 2023: Seasonal average climate and temperature of San Francisco in january 8% 46% 29% 12% 8% Evolution of daily average temperature and precipitation in San Francisco in januaryWeather in San Francisco in january 2024. The weather in San Francisco in january comes from statistical datas on the past years. You can view the weather statistics the entire month, but also by using the tabs for the beginning, the middle and the end of the month. ... 11-01-2023 50°F to 54°F. 12-01-2023 50°F to 59°F. 13-01-2023 54°F to ...'}, {'url': 'https://www.latimes.com/travel/story/2024-01-11/east-brother-light-station-lighthouse-california', 'content': \"May 18, 2023 Jan. 4, 2024 Subscribe for unlimited accessSite Map Follow Us MORE FROM THE L.A. TIMES Jan. 8, 2024 Travel & Experiences This must be Elysian Valley (a.k.a. Frogtown) Jan. 
5, 2024 Food June 30, 2023The East Brother Light Station in the San Francisco Bay is not a destination for everyone. ... Jan. 11, 2024 3 AM PT ... Champagne and hors d'oeuvres are served in late afternoon — outdoors if ...\"}]\u001b[0m\u001b[32;1m\u001b[1;3mI'm sorry, I couldn't find the current weather in San Francisco. However, you can check the weather in San Francisco by visiting a reliable weather website or using a weather app on your phone.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] @@ -414,10 +427,10 @@ "data": { "text/plain": [ "{'input': 'whats the weather in sf?',\n", - " 'output': 'The weather in San Francisco is currently partly cloudy with a high around 60°F. The wind is coming from the west at 10 to 15 mph. There is a chance of rain showers later in the day. The temperature is expected to drop to a low of 46°F tonight with cloudy skies and showers.'}" + " 'output': \"I'm sorry, I couldn't find the current weather in San Francisco. However, you can check the weather in San Francisco by visiting a reliable weather website or using a weather app on your phone.\"}" ] }, - "execution_count": 34, + "execution_count": 15, "metadata": {}, "output_type": "execute_result" } @@ -438,7 +451,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 16, "id": "c4073e35", "metadata": {}, "outputs": [ @@ -462,7 +475,7 @@ " 'output': 'Hello Bob! How can I assist you today?'}" ] }, - "execution_count": 14, + "execution_count": 16, "metadata": {}, "output_type": "execute_result" } @@ -474,7 +487,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 17, "id": "9dc5ed68", "metadata": {}, "outputs": [], @@ -484,7 +497,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 18, "id": "550e0c6e", "metadata": {}, "outputs": [ @@ -503,13 +516,13 @@ { "data": { "text/plain": [ - "{'input': \"what's my name?\",\n", - " 'chat_history': [HumanMessage(content='hi! my name is bob'),\n", + "{'chat_history': [HumanMessage(content='hi! my name is bob'),\n", " AIMessage(content='Hello Bob! How can I assist you today?')],\n", + " 'input': \"what's my name?\",\n", " 'output': 'Your name is Bob.'}" ] }, - "execution_count": 16, + "execution_count": 18, "metadata": {}, "output_type": "execute_result" } @@ -517,11 +530,11 @@ "source": [ "agent_executor.invoke(\n", " {\n", - " \"input\": \"what's my name?\",\n", " \"chat_history\": [\n", " HumanMessage(content=\"hi! my name is bob\"),\n", " AIMessage(content=\"Hello Bob! How can I assist you today?\"),\n", " ],\n", + " \"input\": \"what's my name?\",\n", " }\n", ")" ] @@ -536,7 +549,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 19, "id": "8edd96e6", "metadata": {}, "outputs": [], @@ -547,7 +560,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 20, "id": "6e76552a", "metadata": {}, "outputs": [], @@ -557,7 +570,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 21, "id": "828d1e95", "metadata": {}, "outputs": [], @@ -574,7 +587,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 22, "id": "1f5932b6", "metadata": {}, "outputs": [ @@ -598,7 +611,7 @@ " 'output': 'Hello Bob! 
How can I assist you today?'}" ] }, - "execution_count": 26, + "execution_count": 22, "metadata": {}, "output_type": "execute_result" } @@ -614,7 +627,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 23, "id": "ae627966", "metadata": {}, "outputs": [ @@ -639,7 +652,7 @@ " 'output': 'Your name is Bob.'}" ] }, - "execution_count": 27, + "execution_count": 23, "metadata": {}, "output_type": "execute_result" } @@ -662,14 +675,6 @@ "\n", "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! Head back to the [main agent page](./) to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!" ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "53569538", - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { @@ -688,7 +693,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.1" + "version": "3.11.4" } }, "nbformat": 4,