docs[minor]: LCEL rewrite of chatbot use-case (#16414)

CC @baskaryan @hwchase17

TODO:
- [x] Draft of main quickstart
- [x] Index intro page
- [x] Add subpage guide for Memory management
- [x] Add subpage guide for Retrieval
- [x] Add subpage guide for Tool usage
- [x] Add LangSmith traces illustrating query transformation
Jacob Lee 8 months ago committed by GitHub
parent 85e93e05ed
commit 12d2b2ebcf

@@ -1,747 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "22fd28c9-9b48-476c-bca8-20efef7fdb14",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: Chatbots\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ee7f95e4",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/chatbots.ipynb)\n",
"\n",
"## Use case\n",
"\n",
"Chatbots are one of the central LLM use-cases. The core features of chatbots are that they can have long-running conversations and have access to information that users want to know about.\n",
"\n",
"Aside from basic prompting and LLMs, memory and retrieval are the core components of a chatbot. Memory allows a chatbot to remember past interactions, and retrieval provides a chatbot with up-to-date, domain-specific information."
]
},
{
"cell_type": "markdown",
"id": "56615b45",
"metadata": {},
"source": [
"![Image description](../../static/img/chat_use_case.png)"
]
},
{
"cell_type": "markdown",
"id": "ff48f490",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"The chat model interface is based around messages rather than raw text. Several components are important to consider for chat:\n",
"\n",
"* `chat model`: See [here](/docs/integrations/chat) for a list of chat model integrations and [here](/docs/modules/model_io/chat) for documentation on the chat model interface in LangChain. You can use `LLMs` (see [here](/docs/modules/model_io/llms)) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.\n",
"* `prompt template`: Prompt templates make it easy to assemble prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.\n",
"* `memory`: [See here](/docs/modules/memory/) for in-depth documentation on memory types\n",
"* `retriever` (optional): [See here](/docs/modules/data_connection/retrievers) for in-depth documentation on retrieval systems. These are useful if you want to build a chatbot with domain-specific knowledge.\n",
"\n",
"## Quickstart\n",
"\n",
"Here's a quick preview of how we can create chatbot interfaces. First let's install some dependencies and set the required credentials:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5070a1fd",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"# import dotenv\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"id": "88197b95",
"metadata": {},
"source": [
"With a plain chat model, we can get chat completions by [passing one or more messages](/docs/modules/model_io/chat) to the model.\n",
"\n",
"The chat model will respond with a message."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "5b0d84ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.schema import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI()\n",
"chat(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" )\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7935d9a5",
"metadata": {},
"source": [
"And if we pass in a list of messages:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "afd27a9f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" SystemMessage(\n",
" content=\"You are a helpful assistant that translates English to French.\"\n",
" ),\n",
" HumanMessage(content=\"I love programming.\"),\n",
"]\n",
"chat(messages)"
]
},
{
"cell_type": "markdown",
"id": "c7a1d169",
"metadata": {},
"source": [
"We can then wrap our chat model in a `ConversationChain`, which has built-in memory for remembering past user inputs and model outputs."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "fdb05d74",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Je adore la programmation.'"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"\n",
"conversation = ConversationChain(llm=chat)\n",
"conversation.run(\"Translate this sentence from English to French: I love programming.\")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "d801a173",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Ich liebe Programmieren.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.run(\"Translate it to German.\")"
]
},
{
"cell_type": "markdown",
"id": "9e86788c",
"metadata": {},
"source": [
"## Memory \n",
"\n",
"As we mentioned above, the core component of chatbots is the memory system. One of the simplest and most commonly used forms of memory is `ConversationBufferMemory`:\n",
"\n",
"* This memory allows for storing of messages in a `buffer`\n",
"* When called in a chain, it returns all of the messages it has stored\n",
"\n",
"LangChain comes with many other types of memory, too. [See here](/docs/modules/memory/) for in-depth documentation on memory types.\n",
"\n",
"For now let's take a quick look at ConversationBufferMemory. We can manually add a few chat messages to the memory like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1380a4ea",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"memory = ConversationBufferMemory()\n",
"memory.chat_memory.add_user_message(\"hi!\")\n",
"memory.chat_memory.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "markdown",
"id": "a3d5d1f8",
"metadata": {},
"source": [
"And now we can load from our memory. The key method exposed by all `Memory` classes is `load_memory_variables`. This takes in any initial chain input and returns a list of memory variables which are added to the chain input. \n",
"\n",
"Since this simple memory type doesn't actually take into account the chain input when loading memory, we can pass in an empty input for now:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "982467e7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'Human: hi!\\nAI: whats up?'}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "7c1b20d4",
"metadata": {},
"source": [
"We can also keep a sliding window of the most recent `k` interactions using `ConversationBufferWindowMemory`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f72b9ff7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': 'Human: not much you\\nAI: not much'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.memory import ConversationBufferWindowMemory\n",
"\n",
"memory = ConversationBufferWindowMemory(k=1)\n",
"memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"output\": \"not much\"})\n",
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "7b84f90a",
"metadata": {},
"source": [
"`ConversationSummaryMemory` is an extension of this theme.\n",
"\n",
"It creates a summary of the conversation over time. \n",
"\n",
"This memory is most useful for longer conversations where the full message history would consume many tokens."
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "ca2596ed",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationSummaryMemory\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"memory = ConversationSummaryMemory(llm=llm)\n",
"memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
"memory.save_context(\n",
" {\"input\": \"im working on better docs for chatbots\"},\n",
" {\"output\": \"oh, that sounds like a lot of work\"},\n",
")\n",
"memory.save_context(\n",
" {\"input\": \"yes, but it's worth the effort\"},\n",
" {\"output\": \"agreed, good docs are important!\"},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "060f69b7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': '\\nThe human greets the AI, to which the AI responds. The human then mentions they are working on better docs for chatbots, to which the AI responds that it sounds like a lot of work. The human agrees that it is worth the effort, and the AI agrees that good docs are important.'}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "4bf036f6",
"metadata": {},
"source": [
"`ConversationSummaryBufferMemory` extends this a bit further:\n",
"\n",
"It uses token length rather than number of interactions to determine when to flush interactions."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "38b42728",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationSummaryBufferMemory\n",
"\n",
"memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)\n",
"memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"output\": \"not much\"})"
]
},
{
"cell_type": "markdown",
"id": "ff0db09f",
"metadata": {},
"source": [
"## Conversation \n",
"\n",
"We can unpack what goes under the hood with `ConversationChain`. \n",
"\n",
"We can specify our memory, `ConversationSummaryMemory` and we can specify the prompt. "
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "fccd6995",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a nice chatbot having a conversation with a human.\n",
"Human: hi\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'question': 'hi',\n",
" 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False),\n",
" AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False)],\n",
" 'text': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" MessagesPlaceholder,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()\n",
"\n",
"# Prompt\n",
"prompt = ChatPromptTemplate(\n",
" messages=[\n",
" SystemMessagePromptTemplate.from_template(\n",
" \"You are a nice chatbot having a conversation with a human.\"\n",
" ),\n",
" # The `variable_name` here is what must align with memory\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\"),\n",
" ]\n",
")\n",
"\n",
"# Notice that we `return_messages=True` to fit into the MessagesPlaceholder\n",
"# Notice that `\"chat_history\"` aligns with the MessagesPlaceholder name\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
"conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)\n",
"\n",
"# Notice that we just pass in the `question` variables - `chat_history` gets populated by memory\n",
"conversation({\"question\": \"hi\"})"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "eb0cadfd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a nice chatbot having a conversation with a human.\n",
"Human: hi\n",
"AI: Hello! How can I assist you today?\n",
"Human: Translate this sentence from English to French: I love programming.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'question': 'Translate this sentence from English to French: I love programming.',\n",
" 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False),\n",
" AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False),\n",
" HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False),\n",
" AIMessage(content='Sure! The translation of \"I love programming\" from English to French is \"J\\'adore programmer.\"', additional_kwargs={}, example=False)],\n",
" 'text': 'Sure! The translation of \"I love programming\" from English to French is \"J\\'adore programmer.\"'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation(\n",
" {\"question\": \"Translate this sentence from English to French: I love programming.\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "c56d6219",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a nice chatbot having a conversation with a human.\n",
"Human: hi\n",
"AI: Hello! How can I assist you today?\n",
"Human: Translate this sentence from English to French: I love programming.\n",
"AI: Sure! The translation of \"I love programming\" from English to French is \"J'adore programmer.\"\n",
"Human: Now translate the sentence to German.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'question': 'Now translate the sentence to German.',\n",
" 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False),\n",
" AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False),\n",
" HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False),\n",
" AIMessage(content='Sure! The translation of \"I love programming\" from English to French is \"J\\'adore programmer.\"', additional_kwargs={}, example=False),\n",
" HumanMessage(content='Now translate the sentence to German.', additional_kwargs={}, example=False),\n",
" AIMessage(content='Certainly! The translation of \"I love programming\" from English to German is \"Ich liebe das Programmieren.\"', additional_kwargs={}, example=False)],\n",
" 'text': 'Certainly! The translation of \"I love programming\" from English to German is \"Ich liebe das Programmieren.\"'}"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation({\"question\": \"Now translate the sentence to German.\"})"
]
},
{
"cell_type": "markdown",
"id": "43858489",
"metadata": {},
"source": [
"We can see the chat history preserved in the prompt using the [LangSmith trace](https://smith.langchain.com/public/dce34c57-21ca-4283-9020-a8e0d78a59de/r).\n",
"\n",
"![Image description](../../static/img/chat_use_case_2.png)"
]
},
{
"cell_type": "markdown",
"id": "3f35cc16",
"metadata": {},
"source": [
"## Chat Retrieval\n",
"\n",
"Now, suppose we want to [chat with documents](https://twitter.com/mayowaoshin/status/1640385062708424708?s=20) or some other source of knowledge.\n",
"\n",
"This is popular use case, combining chat with [document retrieval](/docs/use_cases/question_answering).\n",
"\n",
"It allows us to chat with specific information that the model was not trained on."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a01e7b5",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet tiktoken chromadb"
]
},
{
"cell_type": "markdown",
"id": "88e220de",
"metadata": {},
"source": [
"Load a blog post."
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "1b99b36c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "3662ce79",
"metadata": {},
"source": [
"Split and store this in a vector."
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "058f1541",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
]
},
{
"cell_type": "markdown",
"id": "603d9441",
"metadata": {},
"source": [
"Create our memory, as before, but's let's use `ConversationSummaryMemory`."
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "f89fd3f5",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationSummaryMemory(\n",
" llm=llm, memory_key=\"chat_history\", return_messages=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "28503423",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI()\n",
"retriever = vectorstore.as_retriever()\n",
"qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "a9c3bd5e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'How do agents use Task decomposition?',\n",
" 'chat_history': [SystemMessage(content='', additional_kwargs={})],\n",
" 'answer': 'Agents can use task decomposition in several ways:\\n\\n1. Simple prompting: Agents can use Language Model based prompting to break down tasks into subgoals. For example, by providing prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\", the agent can generate a sequence of smaller steps that lead to the completion of the overall task.\\n\\n2. Task-specific instructions: Agents can be given task-specific instructions to guide their planning process. For example, if the task is to write a novel, the agent can be instructed to \"Write a story outline.\" This provides a high-level structure for the task and helps in breaking it down into smaller components.\\n\\n3. Human inputs: Agents can also take inputs from humans to decompose tasks. This can be done through direct communication or by leveraging human expertise. Humans can provide guidance and insights to help the agent break down complex tasks into manageable subgoals.\\n\\nOverall, task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.'}"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa(\"How do agents use Task decomposition?\")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "a29a7713",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What are the various ways to implement memory to support it?',\n",
" 'chat_history': [SystemMessage(content='The human asks how agents use task decomposition. The AI explains that agents can use task decomposition in several ways, including simple prompting, task-specific instructions, and human inputs. Task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.', additional_kwargs={})],\n",
" 'answer': 'There are several ways to implement memory to support task decomposition:\\n\\n1. Long-Term Memory Management: This involves storing and organizing information in a long-term memory system. The agent can retrieve past experiences, knowledge, and learned strategies to guide the task decomposition process.\\n\\n2. Internet Access: The agent can use internet access to search for relevant information and gather resources to aid in task decomposition. This allows the agent to access a vast amount of information and utilize it in the decomposition process.\\n\\n3. GPT-3.5 Powered Agents: The agent can delegate simple tasks to GPT-3.5 powered agents. These agents can perform specific tasks or provide assistance in task decomposition, allowing the main agent to focus on higher-level planning and decision-making.\\n\\n4. File Output: The agent can store the results of task decomposition in files or documents. This allows for easy retrieval and reference during the execution of the task.\\n\\nThese memory resources help the agent in organizing and managing information, making informed decisions, and effectively decomposing complex tasks into smaller, manageable subgoals.'}"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa(\"What are the various ways to implement memory to support it?\")"
]
},
{
"cell_type": "markdown",
"id": "d5e8d5f4",
"metadata": {},
"source": [
"Again, we can use the [LangSmith trace](https://smith.langchain.com/public/18460363-0c70-4c72-81c7-3b57253bb58c/r) to explore the prompt structure.\n",
"\n",
"### Going deeper \n",
"\n",
"* Agents, such as the [conversational retrieval agent](/docs/use_cases/question_answering/conversational_retrieval_agents), can be used for retrieval when necessary while also holding a conversation.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ff8925f-4c21-4680-a9cd-3670ad4852b3",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -0,0 +1,39 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chatbots\n",
"\n",
"## Overview\n",
"\n",
"Chatbots are one of the most popular use-cases for LLMs. The core features of chatbots are that they can have long-running, stateful conversations and can answer user questions using relevant information.\n",
"\n",
"## Architectures\n",
"\n",
"Designing a chatbot involves considering various techniques with different benefits and tradeoffs depending on what sorts of questions you expect it to handle.\n",
"\n",
"For example, chatbots commonly use [retrieval-augmented generation](/docs/use_cases/question_answering/), or RAG, over private data to better answer domain-specific questions. You also might choose to route between multiple data sources to ensure it only uses the most topical context for final question answering, or choose to use a more specialized type of chat history or memory than just passing messages back and forth.\n",
"\n",
"![Image description](../../../static/img/chat_use_case.png)\n",
"\n",
"Optimizations like this can make your chatbot more powerful, but add latency and complexity. The aim of this guide is to give you an overview of how to implement various features and help you tailor your chatbot to your particular use-case.\n",
"\n",
"## Table of contents\n",
"\n",
"- [Quickstart](/docs/use_cases/chatbots/quickstart): We recommend starting here. Many of the following guides assume you fully understand the architecture shown in the Quickstart.\n",
"- [Memory management](/docs/use_cases/chatbots/memory_management): This section covers various strategies your chatbot can use to handle information from previous conversation turns.\n",
"- [Retrieval](/docs/use_cases/chatbots/retrieval): This section covers how to enable your chatbot to use outside data sources as context.\n",
"- [Tool usage](/docs/use_cases/chatbots/tool_usage): This section covers how to turn your chatbot into a conversational agent by adding the ability to interact with other systems and APIs using tools."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -0,0 +1,780 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Memory management\n",
"\n",
"A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:\n",
"\n",
"- Simply stuffing previous messages into a chat model prompt.\n",
"- The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.\n",
"- More complex modifications like synthesizing summaries for long running conversations.\n",
"\n",
"We'll go into more detail on a few techniques below!\n",
"\n",
"## Setup\n",
"\n",
"You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's also set up a chat model that we'll use for the below examples."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Message passing\n",
"\n",
"The simplest form of memory is simply passing chat history messages into a chain. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said \"J\\'adore la programmation,\" which means \"I love programming\" in French.')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" ),\n",
" AIMessage(content=\"J'adore la programmation.\"),\n",
" HumanMessage(content=\"What did you just say?\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that by passing the previous conversation into a chain, it can use it as context to answer questions. This is the basic concept underpinning chatbot memory - the rest of the guide will demonstrate convenient techniques for passing or reformatting messages.\n",
"\n",
"## Chat history\n",
"\n",
"It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](/docs/modules/memory/chat_messages/) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n",
"\n",
"Here's an example of the API:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='Translate this sentence from English to French: I love programming.'),\n",
" AIMessage(content=\"J'adore la programmation.\")]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.memory import ChatMessageHistory\n",
"\n",
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\n",
" \"Translate this sentence from English to French: I love programming.\"\n",
")\n",
"\n",
"demo_ephemeral_chat_history.add_ai_message(\"J'adore la programmation.\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
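{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of what a persistent integration looks like, here's the same interface backed by Redis. This is only a sketch - it assumes the `langchain-community` and `redis` packages are installed and that a Redis server is reachable at the URL shown, none of which is set up in this guide. For the rest of the examples we'll stick with the ephemeral demo class:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: requires `pip install --upgrade redis langchain-community`\n",
"# and a Redis server running at the URL below.\n",
"from langchain_community.chat_message_histories import RedisChatMessageHistory\n",
"\n",
"persistent_chat_history = RedisChatMessageHistory(\n",
"    session_id=\"demo-session\", url=\"redis://localhost:6379/0\"\n",
")\n",
"\n",
"persistent_chat_history.add_user_message(\"hi!\")\n",
"\n",
"persistent_chat_history.messages"
]
},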
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use it directly to store conversation turns for our chain:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"input1 = \"Translate this sentence from English to French: I love programming.\"\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(input1)\n",
"\n",
"response = chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" }\n",
")\n",
"\n",
"demo_ephemeral_chat_history.add_ai_message(response)\n",
"\n",
"input2 = \"What did I just ask you?\"\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(input2)\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Automatic history management\n",
"\n",
"The previous examples pass messages to the chain explicitly. This is a completely acceptable approach, but it does require external management of new messages. LangChain also includes an wrapper for LCEL chains that can handle this process automatically called `RunnableWithMessageHistory`.\n",
"\n",
"To show how it works, let's slightly modify the above prompt to take a final `input` variable that populates a `HumanMessage` template after the chat history. This means that we will expect a `chat_history` parameter that contains all messages BEFORE the current messages instead of all messages:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" We'll pass the latest input to the conversation here and let the `RunnableWithMessageHistory` class wrap our chain and do the work of appending that `input` variable to the chat history.\n",
" \n",
" Next, let's declare our wrapped chain:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"demo_ephemeral_chat_history_for_chain = ChatMessageHistory()\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history_for_chain,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This class takes a few parameters in addition to the chain that we want to wrap:\n",
"\n",
"- A factory function that returns a message history for a given session id. This allows your chain to handle multiple users at once by loading different messages for different conversations.\n",
"- An `input_messages_key` that specifies which part of the input should be tracked and stored in the chat history. In this example, we want to track the string passed in as `input`.\n",
"- A `history_messages_key` that specifies what the previous messages should be injected into the prompt as. Our prompt has a `MessagesPlaceholder` named `chat_history`, so we specify this property to match.\n",
"- (For chains with multiple outputs) an `output_messages_key` which specifies which output to store as history. This is the inverse of `input_messages_key`.\n",
"\n",
"We can invoke this new chain as normal, with an additional `configurable` field that specifies the particular `session_id` to pass to the factory function. This is unused for the demo, but in real-world chains, you'll want to return a chat history corresponding to the passed session:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" in French is \"J\\'adore la programmation.\"')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_message_history.invoke(\n",
" {\"input\": \"Translate this sentence from English to French: I love programming.\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_message_history.invoke(\n",
" {\"input\": \"What did I just ask you?\"}, {\"configurable\": {\"session_id\": \"unused\"}}\n",
")"
]
},
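{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `output_messages_key` parameter mentioned above only comes into play when the wrapped chain returns a dictionary rather than a single message. Here's a minimal sketch of that case - the names `chain_with_dict_output`, `chain_with_dict_history`, and the `answer` output key are hypothetical, chosen just for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# Wrap the chain so its output is a dict with several keys instead of a bare message.\n",
"chain_with_dict_output = RunnablePassthrough.assign(answer=chain)\n",
"\n",
"chain_with_dict_history = RunnableWithMessageHistory(\n",
"    chain_with_dict_output,\n",
"    lambda session_id: demo_ephemeral_chat_history_for_chain,\n",
"    input_messages_key=\"input\",\n",
"    history_messages_key=\"chat_history\",\n",
"    # Store only the message under the \"answer\" key as chat history.\n",
"    output_messages_key=\"answer\",\n",
")\n",
"\n",
"chain_with_dict_history.invoke(\n",
"    {\"input\": \"Thanks!\"},\n",
"    {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},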
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Modifying chat history\n",
"\n",
"Modifying stored chat messages can help your chatbot handle a variety of situations. Here are some examples:\n",
"\n",
"### Trimming messages\n",
"\n",
"LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is to only load and store the most recent `n` messages. Let's use an example history with some preloaded messages:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n",
" AIMessage(content='Hello!'),\n",
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!')]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"Hey there! I'm Nemo.\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Hello!\")\n",
"demo_ephemeral_chat_history.add_user_message(\"How are you today?\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Fine thanks!\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's use this message history with the `RunnableWithMessageHistory` chain we declared above:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Nemo.')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")\n",
"\n",
"chain_with_message_history.invoke(\n",
" {\"input\": \"What's my name?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see the chain remembers the preloaded name.\n",
"\n",
"But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the `clear` method to remove messages and re-add them to the history. We don't have to, but let's put this method at the front of our chain to ensure it's always called:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"def trim_messages(chain_input):\n",
" stored_messages = demo_ephemeral_chat_history.messages\n",
" if len(stored_messages) <= 2:\n",
" return False\n",
"\n",
" demo_ephemeral_chat_history.clear()\n",
"\n",
" for message in stored_messages[-2:]:\n",
" demo_ephemeral_chat_history.add_message(message)\n",
"\n",
" return True\n",
"\n",
"\n",
"chain_with_trimming = (\n",
" RunnablePassthrough.assign(messages_trimmed=trim_messages)\n",
" | chain_with_message_history\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's call this new chain and check the messages afterwards:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
" {\"input\": \"Where does P. Sherman live?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.'),\n",
" HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\")]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we can see that our history has removed the two oldest messages while still adding the most recent conversation at the end. The next time the chain is called, `trim_messages` will be called again, and only the two most recent messages will be passed to the model. In this case, this means that the model will forget the name we gave it the next time we invoke it:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_trimming.invoke(\n",
" {\"input\": \"What is my name?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content=\"P. Sherman's address is 42 Wallaby Way, Sydney.\"),\n",
" HumanMessage(content='What is my name?'),\n",
" AIMessage(content=\"I'm sorry, I don't have access to your personal information.\")]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Summary memory\n",
"\n",
"We can use this same pattern in other ways too. For example, we could use an additional LLM call to generate a summary of the conversation before calling our chain. Let's recreate our chat history and chatbot chain:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n",
" AIMessage(content='Hello!'),\n",
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!')]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"Hey there! I'm Nemo.\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Hello!\")\n",
"demo_ephemeral_chat_history.add_user_message(\"How are you today?\")\n",
"demo_ephemeral_chat_history.add_ai_message(\"Fine thanks!\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll slightly modify the prompt to make the LLM aware that will receive a condensed summary instead of a chat history:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"user\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain_with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: demo_ephemeral_chat_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now, let's create a function that will distill previous interactions into a summary. We can add this one to the front of the chain too:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"def summarize_messages(chain_input):\n",
" stored_messages = demo_ephemeral_chat_history.messages\n",
" if len(stored_messages) == 0:\n",
" return False\n",
" summarization_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\n",
" \"user\",\n",
" \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n",
" ),\n",
" ]\n",
" )\n",
" summarization_chain = summarization_prompt | chat\n",
"\n",
" summary_message = summarization_chain.invoke({\"chat_history\": stored_messages})\n",
"\n",
" demo_ephemeral_chat_history.clear()\n",
"\n",
" demo_ephemeral_chat_history.add_message(summary_message)\n",
"\n",
" return True\n",
"\n",
"\n",
"chain_with_summarization = (\n",
" RunnablePassthrough.assign(messages_summarized=summarize_messages)\n",
" | chain_with_message_history\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see if it remembers the name we gave it:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='You introduced yourself as Nemo. How can I assist you today, Nemo?')"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_summarization.invoke(\n",
" {\"input\": \"What did I say my name was?\"},\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content='The conversation is between Nemo and an AI. Nemo introduces himself and the AI responds with a greeting. Nemo then asks the AI how it is doing, and the AI responds that it is fine.'),\n",
" HumanMessage(content='What did I say my name was?'),\n",
" AIMessage(content='You introduced yourself as Nemo. How can I assist you today, Nemo?')]"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that invoking the chain again will generate another summary generated from the initial summary plus new messages and so on. You could also design a hybrid approach where a certain number of messages are retained in chat history while others are summarized."
]
}
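{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a minimal sketch of that hybrid approach: keep the two most recent messages verbatim and condense everything older into a single summary message before each call. The helper `summarize_older_messages` and the chain name `chain_with_hybrid_memory` are hypothetical names chosen for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def summarize_older_messages(chain_input):\n",
"    stored_messages = demo_ephemeral_chat_history.messages\n",
"    # Nothing to condense until there are more than two stored messages.\n",
"    if len(stored_messages) <= 2:\n",
"        return False\n",
"    older_messages, recent_messages = stored_messages[:-2], stored_messages[-2:]\n",
"    summarization_prompt = ChatPromptTemplate.from_messages(\n",
"        [\n",
"            MessagesPlaceholder(variable_name=\"chat_history\"),\n",
"            (\n",
"                \"user\",\n",
"                \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n",
"            ),\n",
"        ]\n",
"    )\n",
"    summarization_chain = summarization_prompt | chat\n",
"    summary_message = summarization_chain.invoke({\"chat_history\": older_messages})\n",
"    # Replace the older messages with the summary, then re-add the recent ones.\n",
"    demo_ephemeral_chat_history.clear()\n",
"    demo_ephemeral_chat_history.add_message(summary_message)\n",
"    for message in recent_messages:\n",
"        demo_ephemeral_chat_history.add_message(message)\n",
"    return True\n",
"\n",
"\n",
"chain_with_hybrid_memory = (\n",
"    RunnablePassthrough.assign(messages_condensed=summarize_older_messages)\n",
"    | chain_with_message_history\n",
")"
]
}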
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -0,0 +1,936 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/chatbots.ipynb)\n",
"\n",
"# Quickstart"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"We'll go over an example of how to design and implement an LLM-powered chatbot. Here are a few of the high-level components we'll be working with:\n",
"\n",
"- `Chat Models`. The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs. See [here](/docs/integrations/chat) for a list of chat model integrations and [here](/docs/modules/model_io/chat) for documentation on the chat model interface in LangChain. You can use `LLMs` (see [here](/docs/modules/model_io/llms)) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.\n",
"- `Prompt Templates`, which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.\n",
"- `Chat History`, which allows a chatbot to \"remember\" past interactions and take them into account when responding to followup questions. [See here](/docs/modules/memory/chat_messages/) for more information.\n",
"- `Retrievers` (optional), which are useful if you want to build a chatbot that can use domain-specific, up-to-date knowledge as context to augment its responses. [See here](/docs/modules/data_connection/retrievers) for in-depth documentation on retrieval systems.\n",
"\n",
"We'll cover how to fit the above components together to create a powerful conversational chatbot.\n",
"\n",
"## Quickstart\n",
"\n",
"To start, let's install some dependencies and set the required credentials:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's initialize the chat model which will serve as the chatbot's brain:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we invoke our chat model, the output is an `AIMessage`:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime programmer.\")"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" )\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model on its own does not have any concept of state. For example, if you ask a followup question:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said: \"What did you just say?\"')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat.invoke([HumanMessage(content=\"What did you just say?\")])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that it doesn't take the previous conversation turn into context, and cannot answer the question.\n",
"\n",
"To get around this, we need to pass the entire conversation history into the model. Let's see what happens when we do that:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said \"J\\'adore la programmation\" which means \"I love programming\" in French.')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage\n",
"\n",
"chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" ),\n",
" AIMessage(content=\"J'adore la programmation.\"),\n",
" HumanMessage(content=\"What did you just say?\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we can see that we get a good response!\n",
"\n",
"This is the basic idea underpinning a chatbot's ability to interact conversationally.\n",
"\n",
"## Prompt templates\n",
"\n",
"Let's define a prompt template to make formatting a bit easier. We can create a chain by piping it into the model:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `MessagesPlaceholder` above inserts chat messages passed into the chain's input as `chat_history` directly into the prompt. Then, we can invoke the chain like this:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said \"J\\'adore la programmation,\" which means \"I love programming\" in French.')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French: I love programming.\"\n",
" ),\n",
" AIMessage(content=\"J'adore la programmation.\"),\n",
" HumanMessage(content=\"What did you just say?\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Message history\n",
"\n",
"As a shortcut for managing the chat history, we can use a [`MessageHistory`](/docs/modules/memory/chat_messages/) class, which is responsible for saving and loading chat messages. There are many built-in message history integrations that persist messages to a variety of databases, but for this quickstart we'll use a in-memory, demo message history called `ChatMessageHistory`.\n",
"\n",
"Here's an example of using it directly:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.memory import ChatMessageHistory\n",
"\n",
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"hi!\")\n",
"\n",
"demo_ephemeral_chat_history.add_ai_message(\"whats up?\")\n",
"\n",
"demo_ephemeral_chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we do that, we can pass the stored messages directly into our chain as a parameter:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='\"I love programming\" translates to \"J\\'adore programmer\" in French.')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.add_user_message(\n",
" \"Translate this sentence from English to French: I love programming.\"\n",
")\n",
"\n",
"response = chain.invoke({\"messages\": demo_ephemeral_chat_history.messages})\n",
"\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I said, \"I love programming\" translates to \"J\\'adore programmer\" in French.')"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.add_ai_message(response)\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"What did you just say?\")\n",
"\n",
"chain.invoke({\"messages\": demo_ephemeral_chat_history.messages})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we have a basic chatbot!\n",
"\n",
"While this chain can serve as a useful chatbot on its own with just the model's internal knowledge, it's often useful to introduce some form of `retrieval-augmented generation`, or RAG for short, over domain-specific knowledge to make our chatbot more focused. We'll cover this next.\n",
"\n",
"## Retrievers\n",
"\n",
"We can set up and use a [`Retriever`](/docs/modules/data_connection/retrievers/) to pull domain-specific knowledge for our chatbot. To show this, let's expand the simple chatbot we created above to be able to answer questions about LangSmith.\n",
"\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store it in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](https://python.langchain.com/docs/use_cases/question_answering/).\n",
"\n",
"Let's set up our retriever. First, we'll install some required deps:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet chromadb beautifulsoup4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll use a document loader to pull data from a webpage:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we split it into smaller chunks that the LLM's context window can handle and store it in a vector database:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we embed and store those chunks in a vector database:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, let's create a retriever from our initialized vectorstore:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='inputs, and see what happens. At some point though, our application is performing\\nwell and we want to be more rigorous about testing changes. We can use a dataset\\nthat weve constructed along the way (see above). Alternatively, we could spend some\\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default\\u200bAt LangChain, all of us have LangSmiths tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# k is the number of chunks to retrieve\n",
"retriever = vectorstore.as_retriever(k=4)\n",
"\n",
"docs = retriever.invoke(\"how can langsmith help with testing?\")\n",
"\n",
"docs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions.\n",
"\n",
"### Handling documents\n",
"\n",
"Let's modify our previous prompt to accept documents as context. We'll use a `create_stuff_documents_chain` helper function to \"stuff\" all of the input documents into the prompt, which also conveniently handles formatting. Other arguments (like `messages`) will be passed directly through into the prompt:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")\n",
"\n",
"question_answering_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"Answer the user's questions based on the below context:\\n\\n{context}\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")\n",
"\n",
"document_chain = create_stuff_documents_chain(chat, question_answering_prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can invoke this `document_chain` with the raw documents we retrieved above:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangSmith can assist with testing in several ways. Firstly, it simplifies the construction of datasets for testing and evaluation, allowing you to quickly edit examples and add them to datasets. This helps expand the surface area of your evaluation sets and fine-tune your model for improved quality or reduced costs.\\n\\nAdditionally, LangSmith can be used to monitor your application in much the same way as it is used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. This monitoring capability can be invaluable for testing the performance and reliability of your application as it moves towards production.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.memory import ChatMessageHistory\n",
"\n",
"demo_ephemeral_chat_history = ChatMessageHistory()\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"how can langsmith help with testing?\")\n",
"\n",
"document_chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" \"context\": docs,\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Awesome! We see an answer synthesized from information in the input documents.\n",
"\n",
"### Creating a retrieval chain\n",
"\n",
"Next, let's integrate our retriever into the chain. Our retriever should retrieve information relevant to the last message we pass in from the user, so we extract it and use that as input to fetch relevant docs, which we add to the current chain as `context`. We pass `context` plus the previous `messages` into our document chain to generate a final answer.\n",
"\n",
"We also use the `RunnablePassthrough.assign()` method to pass intermediate steps through at each invocation. Here's what it looks like:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"from typing import Dict\n",
"\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"def parse_retriever_input(params: Dict):\n",
" return params[\"messages\"][-1].content\n",
"\n",
"\n",
"retrieval_chain = RunnablePassthrough.assign(\n",
" context=parse_retriever_input | retriever,\n",
").assign(\n",
" answer=document_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='how can langsmith help with testing?')],\n",
" 'context': [Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='inputs, and see what happens. At some point though, our application is performing\\nwell and we want to be more rigorous about testing changes. We can use a dataset\\nthat weve constructed along the way (see above). Alternatively, we could spend some\\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default\\u200bAt LangChain, all of us have LangSmiths tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'LangSmith can aid in testing by providing the ability to quickly edit examples and add them to datasets, thereby expanding the range of evaluation sets. This feature enables you to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, offering an alternative approach for rigorous testing of changes. It also facilitates monitoring of application performance, allowing you to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = retrieval_chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" }\n",
")\n",
"\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='how can langsmith help with testing?'),\n",
" AIMessage(content='LangSmith can aid in testing by providing the ability to quickly edit examples and add them to datasets, thereby expanding the range of evaluation sets. This feature enables you to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, offering an alternative approach for rigorous testing of changes. It also facilitates monitoring of application performance, allowing you to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'),\n",
" HumanMessage(content='tell me more about that!')],\n",
" 'context': [Document(page_content='however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content=\"against these known issues.Why is this so impactful? When building LLM applications, its often common to start without a dataset of any kind. This is part of the power of LLMs! They are amazing zero-shot learners, making it possible to get started as easily as possible. But this can also be a curse -- as you adjust the prompt, you're wandering blind. You dont have any examples to benchmark your changes against.LangSmith addresses this problem by including an “Add to Dataset” button for each\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='environments through process.env1.The benefit here is that all calls to LLMs, chains, agents, tools, and retrievers are logged to LangSmith. Around 90% of the time we dont even look at the traces, but the 10% of the time that we do… its so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging\\u200bDebugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'LangSmith offers a feature that allows users to quickly edit examples and add them to datasets, which can help expand the surface area of evaluation sets or fine-tune a model for improved quality or reduced costs. This enables users to build small datasets by hand, providing a means for rigorous testing of changes or improvements to their application. Additionally, LangSmith facilitates the monitoring of application performance, allowing users to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. This comprehensive approach ensures thorough testing and monitoring of your application.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.add_ai_message(response[\"answer\"])\n",
"\n",
"demo_ephemeral_chat_history.add_user_message(\"tell me more about that!\")\n",
"\n",
"retrieval_chain.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Nice! Our chatbot can now answer domain-specific questions in a conversational way.\n",
"\n",
"As an aside, if you don't want to return all the intermediate steps, you can define your retrieval chain like this using a pipe directly into the document chain instead of the final `.assign()` call:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangSmith offers the capability to edit examples and add them to datasets, which is beneficial for expanding the range of evaluation sets. This feature allows for the fine-tuning of models, enabling improved quality and reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, providing an alternative approach for rigorous testing of changes. It also supports monitoring of application performance, including logging traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. This comprehensive functionality contributes to the effectiveness of testing and quality assurance processes.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieval_chain_with_only_answer = (\n",
" RunnablePassthrough.assign(\n",
" context=parse_retriever_input | retriever,\n",
" )\n",
" | document_chain\n",
")\n",
"\n",
"retrieval_chain_with_only_answer.invoke(\n",
" {\n",
" \"messages\": demo_ephemeral_chat_history.messages,\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query transformation\n",
"\n",
"There's more optimization we'll cover here - in the above example, when we asked a followup question, `tell me more about that!`, you might notice that the retrieved docs don't directly include information about testing. This is because we're passing `tell me more about that!` verbatim as a query to the retriever. The output in the retrieval chain is still okay because the document chain retrieval chain can generate an answer based on the chat history, but we could be retrieving more rich and informative documents:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='inputs, and see what happens. At some point though, our application is performing\\nwell and we want to be more rigorous about testing changes. We can use a dataset\\nthat weve constructed along the way (see above). Alternatively, we could spend some\\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default\\u200bAt LangChain, all of us have LangSmiths tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"how can langsmith help with testing?\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content=\"against these known issues.Why is this so impactful? When building LLM applications, its often common to start without a dataset of any kind. This is part of the power of LLMs! They are amazing zero-shot learners, making it possible to get started as easily as possible. But this can also be a curse -- as you adjust the prompt, you're wandering blind. You dont have any examples to benchmark your changes against.LangSmith addresses this problem by including an “Add to Dataset” button for each\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='environments through process.env1.The benefit here is that all calls to LLMs, chains, agents, tools, and retrievers are logged to LangSmith. Around 90% of the time we dont even look at the traces, but the 10% of the time that we do… its so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging\\u200bDebugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"tell me more about that!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get around this common problem, let's add a `query transformation` step that removes references from the input. We'll wrap our old retriever as follows:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableBranch\n",
"\n",
"# We need a prompt that we can pass into an LLM to generate a transformed search query\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")\n",
"\n",
"query_transform_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" (\n",
" \"user\",\n",
" \"Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.\",\n",
" ),\n",
" ]\n",
")\n",
"\n",
"query_transforming_retriever_chain = RunnableBranch(\n",
" (\n",
" # Both empty string and empty list evaluate to False\n",
" lambda x: not x.get(\"messages\", False),\n",
" # If no messages, then we just pass the last message's content to retriever\n",
" (lambda x: x[\"messages\"][-1].content) | retriever,\n",
" ),\n",
" # If messages, then we pass inputs to LLM chain to transform the query, then pass to retriever\n",
" prompt | chat | StrOutputParser() | retriever,\n",
").with_config(run_name=\"chat_retriever_chain\")"
]
},
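{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, you can run just the query transformation step on the current chat history to see the kind of standalone search query it generates. This is a quick sketch for illustration - the exact wording of the generated query will vary between runs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect only the query transformation step (prompt -> chat model -> string output)\n",
"query_transformation_chain = query_transform_prompt | chat | StrOutputParser()\n",
"\n",
"# The chat history currently ends with the followup \"tell me more about that!\",\n",
"# so the generated query should be rephrased to mention LangSmith and testing\n",
"query_transformation_chain.invoke({\"messages\": demo_ephemeral_chat_history.messages})"
]
},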
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's recreate our earlier chain with this new `query_transforming_retriever_chain`. Note that this new chain accepts a dict as input and parses a string to pass to the retriever, so we don't have to do additional parsing at the top level:"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"document_chain = create_stuff_documents_chain(chat, question_answering_prompt)\n",
"\n",
"conversational_retrieval_chain = RunnablePassthrough.assign(\n",
" context=query_transforming_retriever_chain,\n",
").assign(\n",
" answer=document_chain,\n",
")\n",
"\n",
"demo_ephemeral_chat_history = ChatMessageHistory()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, let's invoke it!"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='how can langsmith help with testing?'),\n",
" AIMessage(content='LangSmith can help with testing by allowing you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets. This means you can fine-tune a model for improved quality or reduced costs. Additionally, LangSmith provides monitoring capabilities to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise during testing. This allows you to monitor your application and ensure that it is performing as expected, making it easier to identify and address any issues that may arise during testing.')],\n",
" 'context': [Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default\\u200bAt LangChain, all of us have LangSmiths tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'LangSmith can help with testing by allowing you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets. This means you can fine-tune a model for improved quality or reduced costs. Additionally, LangSmith provides monitoring capabilities to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise during testing. This allows you to monitor your application and ensure that it is performing as expected, making it easier to identify and address any issues that may arise during testing.'}"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.add_user_message(\"how can langsmith help with testing?\")\n",
"\n",
"response = conversational_retrieval_chain.invoke(\n",
" {\"messages\": demo_ephemeral_chat_history.messages},\n",
")\n",
"\n",
"demo_ephemeral_chat_history.add_ai_message(response[\"answer\"])\n",
"\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='how can langsmith help with testing?'),\n",
" AIMessage(content='LangSmith can help with testing by allowing you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets. This means you can fine-tune a model for improved quality or reduced costs. Additionally, LangSmith provides monitoring capabilities to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise during testing. This allows you to monitor your application and ensure that it is performing as expected, making it easier to identify and address any issues that may arise during testing.'),\n",
" HumanMessage(content='tell me more about that!')],\n",
" 'context': [Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='inputs, and see what happens. At some point though, our application is performing\\nwell and we want to be more rigorous about testing changes. We can use a dataset\\nthat weve constructed along the way (see above). Alternatively, we could spend some\\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='datasets\\u200bLangSmith makes it easy to curate datasets. However, these arent just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime environments without access to the environment, check out the FAQs.↩PreviousLangSmithNextTracingOn by defaultDebuggingWhat was the exact input to the LLM?If I edit the prompt, how does', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'Certainly! LangSmith simplifies the process of constructing and curating datasets, which can be used for testing purposes. You can use the datasets constructed in LangSmith or spend some time manually constructing a small dataset by hand. These datasets can then be used to rigorously test changes in your application.\\n\\nFurthermore, LangSmith allows for manual review and annotation of runs through annotation queues. This feature enables you to select runs based on criteria like model type or automatic evaluation scores and queue them up for human review. As a reviewer, you can quickly step through the runs, view the input, output, and any existing tags, and add your own feedback. This is particularly useful for assessing subjective qualities that automatic evaluators may struggle with during testing.\\n\\nOverall, LangSmith provides a comprehensive set of tools to aid in testing, from dataset construction to manual review and annotation, making the testing process more efficient and effective.'}"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demo_ephemeral_chat_history.add_user_message(\"tell me more about that!\")\n",
"\n",
"conversational_retrieval_chain.invoke(\n",
" {\"messages\": demo_ephemeral_chat_history.messages}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To help you understand what's happening internally, [this LangSmith trace](https://smith.langchain.com/public/e8b88115-d8d8-4332-b987-7c3003234746/r) shows the first invocation. You can see that the user's initial query is passed directly to the retriever, which return suitable docs.\n",
"\n",
"The invocation for followup question, [illustrated by this LangSmith trace](https://smith.langchain.com/public/ebb566ad-b69d-496e-b307-66bf70224c31/r) rephrases the user's initial question to something more relevant to testing with LangSmith, resulting in higher quality docs.\n",
"\n",
"And we now have a chatbot capable of conversational retrieval!\n",
"\n",
"## Next steps\n",
"\n",
"You now know how to build a conversational chatbot that can integrate past messages and domain-specific knowledge into its generations. There are many other optimizations you can make around this - check out the following pages for more information:\n",
"\n",
"- [Memory management](/docs/use_cases/chatbots/memory_management): This includes a guide on automatically updating chat history, as well as trimming, summarizing, or otherwise modifying long conversations to keep your bot focused.\n",
"- [Retrieval](/docs/use_cases/chatbots/retrieval): A deeper dive into using different types of retrieval with your chatbot\n",
"- [Tool usage](/docs/use_cases/chatbots/tool_usage): How to allows your chatbots to use tools that interact with other APIs and systems."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -0,0 +1,766 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Retrieval\n",
"\n",
"Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/use_cases/question_answering/) that go into greater depth!\n",
"\n",
"## Setup\n",
"\n",
"You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai chromadb beautifulsoup4\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's also set up a chat model that we'll use for the below examples."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a retriever\n",
"\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](https://python.langchain.com/docs/use_cases/question_answering/).\n",
"\n",
"Let's use a document loader to pull text from the docs:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we split it into smaller chunks that the LLM's context window can handle and store it in a vector database:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we embed and store those chunks in a vector database:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, let's create a retriever from our initialized vectorstore:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# k is the number of chunks to retrieve\n",
"retriever = vectorstore.as_retriever(k=4)\n",
"\n",
"docs = retriever.invoke(\"Can LangSmith help test my LLM applications?\")\n",
"\n",
"docs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions. And now we've got a retriever that can return related data from the LangSmith docs!\n",
"\n",
"## Document chains\n",
"\n",
"Now that we have a retriever that can return LangChain docs, let's create a chain that can use them as context to answer questions. We'll use a `create_stuff_documents_chain` helper function to \"stuff\" all of the input documents into the prompt. It will also handle formatting the docs as strings.\n",
"\n",
"In addition to a chat model, the function also expects a prompt that has a `context` variables, as well as a placeholder for chat history messages named `messages`. We'll create an appropriate prompt and pass it as shown below:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"SYSTEM_TEMPLATE = \"\"\"\n",
"Answer the user's questions based on the below context. \n",
"If the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\"\"\"\n",
"\n",
"question_answering_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" SYSTEM_TEMPLATE,\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")\n",
"\n",
"document_chain = create_stuff_documents_chain(chat, question_answering_prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can invoke this `document_chain` by itself to answer questions. Let's use the docs we retrieved above and the same question, `how can langsmith help with testing?`:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Yes, LangSmith can help test and evaluate your LLM applications. It simplifies the initial setup, and you can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"document_chain.invoke(\n",
" {\n",
" \"context\": docs,\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good! For comparison, we can try it with no context docs and compare the result:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"I don't have enough information to answer your question.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"document_chain.invoke(\n",
" {\n",
" \"context\": [],\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that the LLM does not return any results.\n",
"\n",
"## Retrieval chains\n",
"\n",
"Let's combine this document chain with the retriever. Here's one way this can look:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from typing import Dict\n",
"\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"def parse_retriever_input(params: Dict):\n",
" return params[\"messages\"][-1].content\n",
"\n",
"\n",
"retrieval_chain = RunnablePassthrough.assign(\n",
" context=parse_retriever_input | retriever,\n",
").assign(\n",
" answer=document_chain,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. Then, we pass those documents as context to our document chain to generate a final response.\n",
"\n",
"Invoking this chain combines both steps outlined above:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],\n",
" 'context': [Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'Yes, LangSmith can help test and evaluate your LLM applications. It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retrieval_chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good!\n",
"\n",
"## Query transformation\n",
"\n",
"Our retrieval chain is capable of answering questions about LangSmith, but there's a problem - chatbots interact with users conversationally, and therefore have to deal with followup questions.\n",
"\n",
"The chain in its current form will struggle with this. Consider a followup question to our original question like `Tell me more!`. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='playground. Here, you can modify the prompt and re-run it to observe the resulting changes to the output - as many times as needed!Currently, this feature supports only OpenAI and Anthropic models and works for LLM and Chat Model calls. We plan to extend its functionality to more LLM types, chains, agents, and retrievers in the future.What is the exact sequence of events?\\u200bIn complicated chains and agents, it can often be hard to understand what is going on under the hood. What calls are being', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"Tell me more!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is because the retriever has no innate concept of state, and will only pull documents most similar to the query given. To solve this, we can transform the query into a standalone query without any external references an LLM.\n",
"\n",
"Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='\"LangSmith LLM application testing and evaluation\"')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"query_transform_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" (\n",
" \"user\",\n",
" \"Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.\",\n",
" ),\n",
" ]\n",
")\n",
"\n",
"query_transformation_chain = query_transform_prompt | chat\n",
"\n",
"query_transformation_chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n",
" AIMessage(\n",
" content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n",
" ),\n",
" HumanMessage(content=\"Tell me more!\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Awesome! That transformed query would pull up context documents related to LLM application testing.\n",
"\n",
"Let's add this to our retrieval chain. We can wrap our retriever as follows:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableBranch\n",
"\n",
"query_transforming_retriever_chain = RunnableBranch(\n",
" (\n",
" # Both empty string and empty list evaluate to False\n",
" lambda x: not x.get(\"messages\", False),\n",
" # If no messages, then we just pass the last message's content to retriever\n",
" (lambda x: x[\"messages\"][-1].content) | retriever,\n",
" ),\n",
" # If messages, then we pass inputs to LLM chain to transform the query, then pass to retriever\n",
" query_transform_prompt | chat | StrOutputParser() | retriever,\n",
").with_config(run_name=\"chat_retriever_chain\")"
]
},
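  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, optional sanity check (this illustrative snippet is not part of the original walkthrough), you can invoke the wrapped retriever directly: a single message is passed straight to the retriever, while a longer conversation is first rewritten into a standalone search query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A single message takes the first branch and goes straight to the retriever\n",
    "query_transforming_retriever_chain.invoke(\n",
    "    {\"messages\": [HumanMessage(content=\"Can LangSmith help test my LLM applications?\")]}\n",
    ")\n",
    "\n",
    "# Multiple messages take the second branch: the conversation is rewritten into a\n",
    "# standalone search query before being passed to the retriever\n",
    "query_transforming_retriever_chain.invoke(\n",
    "    {\n",
    "        \"messages\": [\n",
    "            HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n",
    "            AIMessage(content=\"Yes, LangSmith can help test and evaluate your LLM applications.\"),\n",
    "            HumanMessage(content=\"Tell me more!\"),\n",
    "        ],\n",
    "    }\n",
    ")"
   ]
  },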
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"SYSTEM_TEMPLATE = \"\"\"\n",
"Answer the user's questions based on the below context. \n",
"If the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n",
"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\"\"\"\n",
"\n",
"question_answering_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" SYSTEM_TEMPLATE,\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" ]\n",
")\n",
"\n",
"document_chain = create_stuff_documents_chain(chat, question_answering_prompt)\n",
"\n",
"conversational_retrieval_chain = RunnablePassthrough.assign(\n",
" context=query_transforming_retriever_chain,\n",
").assign(\n",
" answer=document_chain,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Awesome! Let's invoke this new chain with the same inputs as earlier:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],\n",
" 'context': [Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='datasets\\u200bLangSmith makes it easy to curate datasets. However, these arent just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime environments without access to the environment, check out the FAQs.↩PreviousLangSmithNextTracingOn by defaultDebuggingWhat was the exact input to the LLM?If I edit the prompt, how does', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'Yes, LangSmith offers testing and evaluation features to help you assess and improve the performance of your LLM applications. It can assist in monitoring your application, logging traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise.'}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_retrieval_chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n",
" ]\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?'),\n",
" AIMessage(content='Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'),\n",
" HumanMessage(content='Tell me more!')],\n",
" 'context': [Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n",
" Document(page_content='LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n",
" 'answer': 'LangSmith simplifies the initial setup for building reliable LLM applications, but it acknowledges that there is still work needed to bring the performance of prompts, chains, and agents up to the level where they are reliable enough to be used in production. It also provides the ability to manually review and annotate runs through annotation queues, allowing you to select runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can step through the runs, view the input, output, and any existing tags before adding your own feedback. This feature is particularly useful for assessing subjective qualities that automatic evaluators struggle with.'}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_retrieval_chain.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n",
" AIMessage(\n",
" content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n",
" ),\n",
" HumanMessage(content=\"Tell me more!\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can check out [this LangSmith trace](https://smith.langchain.com/public/8a4b8ab0-f2dc-4b50-9473-6c6c0c9c289d/r) to see the internal query transformation step for yourself.\n",
"\n",
"## Streaming\n",
"\n",
"Because this chain is constructed with LCEL, you can use familiar methods like `.stream()` with it:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?'), AIMessage(content='Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'), HumanMessage(content='Tell me more!')]}\n",
"{'context': [Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}), Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}), Document(page_content='Skip to main content🦜🛠 LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}), Document(page_content='LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]}\n",
"{'answer': ''}\n",
"{'answer': 'Lang'}\n",
"{'answer': 'Smith'}\n",
"{'answer': ' simpl'}\n",
"{'answer': 'ifies'}\n",
"{'answer': ' the'}\n",
"{'answer': ' initial'}\n",
"{'answer': ' setup'}\n",
"{'answer': ' for'}\n",
"{'answer': ' building'}\n",
"{'answer': ' reliable'}\n",
"{'answer': ' L'}\n",
"{'answer': 'LM'}\n",
"{'answer': ' applications'}\n",
"{'answer': ','}\n",
"{'answer': ' but'}\n",
"{'answer': ' there'}\n",
"{'answer': ' is'}\n",
"{'answer': ' still'}\n",
"{'answer': ' work'}\n",
"{'answer': ' needed'}\n",
"{'answer': ' to'}\n",
"{'answer': ' bring'}\n",
"{'answer': ' the'}\n",
"{'answer': ' performance'}\n",
"{'answer': ' of'}\n",
"{'answer': ' prompts'}\n",
"{'answer': ','}\n",
"{'answer': ' chains'}\n",
"{'answer': ','}\n",
"{'answer': ' and'}\n",
"{'answer': ' agents'}\n",
"{'answer': ' up'}\n",
"{'answer': ' to'}\n",
"{'answer': ' the'}\n",
"{'answer': ' level'}\n",
"{'answer': ' where'}\n",
"{'answer': ' they'}\n",
"{'answer': ' are'}\n",
"{'answer': ' reliable'}\n",
"{'answer': ' enough'}\n",
"{'answer': ' to'}\n",
"{'answer': ' be'}\n",
"{'answer': ' used'}\n",
"{'answer': ' in'}\n",
"{'answer': ' production'}\n",
"{'answer': '.'}\n",
"{'answer': ' Lang'}\n",
"{'answer': 'Smith'}\n",
"{'answer': ' also'}\n",
"{'answer': ' provides'}\n",
"{'answer': ' the'}\n",
"{'answer': ' capability'}\n",
"{'answer': ' to'}\n",
"{'answer': ' manually'}\n",
"{'answer': ' review'}\n",
"{'answer': ' and'}\n",
"{'answer': ' annotate'}\n",
"{'answer': ' runs'}\n",
"{'answer': ' through'}\n",
"{'answer': ' annotation'}\n",
"{'answer': ' queues'}\n",
"{'answer': ','}\n",
"{'answer': ' allowing'}\n",
"{'answer': ' you'}\n",
"{'answer': ' to'}\n",
"{'answer': ' select'}\n",
"{'answer': ' runs'}\n",
"{'answer': ' based'}\n",
"{'answer': ' on'}\n",
"{'answer': ' criteria'}\n",
"{'answer': ' like'}\n",
"{'answer': ' model'}\n",
"{'answer': ' type'}\n",
"{'answer': ' or'}\n",
"{'answer': ' automatic'}\n",
"{'answer': ' evaluation'}\n",
"{'answer': ' scores'}\n",
"{'answer': ' for'}\n",
"{'answer': ' human'}\n",
"{'answer': ' review'}\n",
"{'answer': '.'}\n",
"{'answer': ' This'}\n",
"{'answer': ' feature'}\n",
"{'answer': ' is'}\n",
"{'answer': ' useful'}\n",
"{'answer': ' for'}\n",
"{'answer': ' assessing'}\n",
"{'answer': ' subjective'}\n",
"{'answer': ' qualities'}\n",
"{'answer': ' that'}\n",
"{'answer': ' automatic'}\n",
"{'answer': ' evalu'}\n",
"{'answer': 'ators'}\n",
"{'answer': ' struggle'}\n",
"{'answer': ' with'}\n",
"{'answer': '.'}\n",
"{'answer': ''}\n"
]
}
],
"source": [
"stream = conversational_retrieval_chain.stream(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n",
" AIMessage(\n",
" content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n",
" ),\n",
" HumanMessage(content=\"Tell me more!\"),\n",
" ],\n",
" }\n",
")\n",
"\n",
"for chunk in stream:\n",
" print(chunk)"
]
},
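  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "LCEL runnables expose async methods as well, so in an async application you could stream with `.astream()` instead. Here's a minimal sketch (not executed above) that assumes an environment with top-level `await` support, such as a Jupyter cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# .astream() is the async counterpart of .stream() and yields the same incremental chunks\n",
    "async for chunk in conversational_retrieval_chain.astream(\n",
    "    {\n",
    "        \"messages\": [\n",
    "            HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n",
    "        ],\n",
    "    }\n",
    "):\n",
    "    print(chunk)"
   ]
  },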
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Further reading\n",
"\n",
"This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out [this section](/docs/modules/data_connection/) of the docs."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -0,0 +1,465 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tool usage\n",
"\n",
"This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
"\n",
"Before reading this guide, we recommend you read both [the chatbot quickstart](/docs/use_cases/chatbots/quickstart) in this section and be familiar with [the documentation on agents](/docs/modules/agents/).\n",
"\n",
"## Setup\n",
"\n",
"For this guide, we'll be using an [OpenAI tools agent](/docs/modules/agents/agent_types/openai_tools) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.\n",
"\n",
"You'll need to [sign up for an account](https://tavily.com/) on the Tavily website, and install the following packages:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%pip install --upgrade --quiet langchain-openai tavily-python\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will also need your OpenAI key set as `OPENAI_API_KEY` and your Tavily API key set as `TAVILY_API_KEY`."
]
},
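  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you'd rather not use a `.env` file, one alternative (shown here as an optional sketch) is to prompt for the keys and set them as environment variables directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "# Prompt for any keys that aren't already present in the environment\n",
    "for key in (\"OPENAI_API_KEY\", \"TAVILY_API_KEY\"):\n",
    "    if not os.environ.get(key):\n",
    "        os.environ[key] = getpass.getpass(f\"{key}: \")"
   ]
  },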
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating an agent\n",
"\n",
"Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.\n",
"\n",
"First, let's initialize Tavily and an OpenAI chat model capable of tool calling:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"tools = [TavilySearchResults(max_results=1)]\n",
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"# Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"messages\"),\n",
" MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! Now let's assemble our agent:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
"\n",
"agent = create_openai_tools_agent(chat, tools, prompt)\n",
"\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running the agent\n",
"\n",
"Now that we've set up our agent, let's try interacting with it! It can handle both trivial queries that require no lookup:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mHello Nemo! It's great to meet you. How can I assist you today?\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"I'm Nemo!\")],\n",
" 'output': \"Hello Nemo! It's great to meet you. How can I assist you today?\"}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"agent_executor.invoke({\"messages\": [HumanMessage(content=\"I'm Nemo!\")]})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or, it can use of the passed search tool to get up to date information if needed:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `tavily_search_results_json` with `{'query': 'current conservation status of the Great Barrier Reef'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://www.barrierreef.org/news/blog/this-is-the-critical-decade-for-coral-reef-survival', 'content': \"global coral reef conservation. © 2024 Great Barrier Reef Foundation. Website by bigfish.tv #Related News · 29 January 2024 290m more baby corals to help restore and protect the Great Barrier Reef Great Barrier Reef Foundation Managing Director Anna Marsden says its not too late if we act now.The Status of Coral Reefs of the World: 2020 report is the largest analysis of global coral reef health ever undertaken. It found that 14 per cent of the world's coral has been lost since 2009. The report also noted, however, that some of these corals recovered during the 10 years to 2019.\"}]\u001b[0m\u001b[32;1m\u001b[1;3mThe current conservation status of the Great Barrier Reef is a critical concern. According to the Great Barrier Reef Foundation, the Status of Coral Reefs of the World: 2020 report found that 14% of the world's coral has been lost since 2009. However, the report also noted that some of these corals recovered during the 10 years to 2019. For more information, you can visit the following link: [Great Barrier Reef Foundation - Conservation Status](https://www.barrierreef.org/news/blog/this-is-the-critical-decade-for-coral-reef-survival)\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?')],\n",
" 'output': \"The current conservation status of the Great Barrier Reef is a critical concern. According to the Great Barrier Reef Foundation, the Status of Coral Reefs of the World: 2020 report found that 14% of the world's coral has been lost since 2009. However, the report also noted that some of these corals recovered during the 10 years to 2019. For more information, you can visit the following link: [Great Barrier Reef Foundation - Conservation Status](https://www.barrierreef.org/news/blog/this-is-the-critical-decade-for-coral-reef-survival)\"}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" content=\"What is the current conservation status of the Great Barrier Reef?\"\n",
" )\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conversational responses\n",
"\n",
"Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mYour name is Nemo!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"I'm Nemo!\"),\n",
" AIMessage(content='Hello Nemo! How can I assist you today?'),\n",
" HumanMessage(content='What is my name?')],\n",
" 'output': 'Your name is Nemo!'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"agent_executor.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(content=\"I'm Nemo!\"),\n",
" AIMessage(content=\"Hello Nemo! How can I assist you today?\"),\n",
" HumanMessage(content=\"What is my name?\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If preferred, you can also wrap the agent executor in a `RunnableWithMessageHistory` class to internally manage history messages. First, we need to slightly modify the prompt to take a separate input variable so that the wrapper can parse which input value to store as history:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"# Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
" ]\n",
")\n",
"\n",
"agent = create_openai_tools_agent(chat, tools, prompt)\n",
"\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, because our agent executor has multiple outputs, we also have to set the `output_messages_key` property when initializing the wrapper:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"demo_ephemeral_chat_history_for_chain = ChatMessageHistory()\n",
"\n",
"conversational_agent_executor = RunnableWithMessageHistory(\n",
" agent_executor,\n",
" lambda session_id: demo_ephemeral_chat_history_for_chain,\n",
" input_messages_key=\"input\",\n",
" output_messages_key=\"output\",\n",
" history_messages_key=\"chat_history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mHi Nemo! It's great to meet you. How can I assist you today?\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"I'm Nemo!\",\n",
" 'chat_history': [],\n",
" 'output': \"Hi Nemo! It's great to meet you. How can I assist you today?\"}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_agent_executor.invoke(\n",
" {\n",
" \"input\": \"I'm Nemo!\",\n",
" },\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mYour name is Nemo! How can I assist you today, Nemo?\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'What is my name?',\n",
" 'chat_history': [HumanMessage(content=\"I'm Nemo!\"),\n",
" AIMessage(content=\"Hi Nemo! It's great to meet you. How can I assist you today?\")],\n",
" 'output': 'Your name is Nemo! How can I assist you today, Nemo?'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_agent_executor.invoke(\n",
" {\n",
" \"input\": \"What is my name?\",\n",
" },\n",
" {\"configurable\": {\"session_id\": \"unused\"}},\n",
")"
]
},
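  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the demo above returns the same in-memory history for every `session_id`. In a real application, you'd typically hand `RunnableWithMessageHistory` a factory that returns a separate history per session. Here's a sketch of what that could look like with a simple in-memory dict (you'd swap in a persistent message history store for production):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep one ChatMessageHistory per session id; a persistent store would work the same way\n",
    "session_histories = {}\n",
    "\n",
    "\n",
    "def get_session_history(session_id: str) -> ChatMessageHistory:\n",
    "    if session_id not in session_histories:\n",
    "        session_histories[session_id] = ChatMessageHistory()\n",
    "    return session_histories[session_id]\n",
    "\n",
    "\n",
    "conversational_agent_executor = RunnableWithMessageHistory(\n",
    "    agent_executor,\n",
    "    get_session_history,\n",
    "    input_messages_key=\"input\",\n",
    "    output_messages_key=\"output\",\n",
    "    history_messages_key=\"chat_history\",\n",
    ")"
   ]
  },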
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Further reading\n",
"\n",
"Other types agents can also support conversational responses too - for more, check out the [agents section](/docs/modules/agents).\n",
"\n",
"For more on tool usage, you can also check out [this use case section](/docs/use_cases/tool_use/)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}