{
"cells": [
{
"cell_type": "raw",
"id": "22fd28c9-9b48-476c-bca8-20efef7fdb14",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: Chatbots\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ee7f95e4",
"metadata": {},
"source": [
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/use_cases/chatbots.ipynb)\n",
|
|
"\n",
|
|
"## Use case\n",
|
|
"\n",
|
|
"Chatbots are one of the central LLM use-cases. The core features of chatbots are that they can have long-running conversations and have access to information that users want to know about.\n",
|
|
"\n",
|
|
"Aside from basic prompting and LLMs, memory and retrieval are the core components of a chatbot. Memory allows a chatbot to remember past interactions, and retrieval provides a chatbot with up-to-date, domain-specific information."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "56615b45",
|
|
"metadata": {},
|
|
"source": [
|
|
"![Image description](/img/chat_use_case.png)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "ff48f490",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Overview\n",
|
|
"\n",
|
|
"The chat model interface is based around messages rather than raw text. Several components are important to consider for chat:\n",
|
|
"\n",
|
|
"* `chat model`: See [here](/docs/integrations/chat) for a list of chat model integrations and [here](/docs/modules/model_io/models/chat) for documentation on the chat model interface in LangChain. You can use `LLMs` (see [here](/docs/modules/model_io/models/llms)) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.\n",
|
|
"* `prompt template`: Prompt templates make it easy to assemble prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.\n",
|
|
"* `memory`: [See here](/docs/modules/memory/) for in-depth documentation on memory types\n",
|
|
"* `retriever` (optional): [See here](/docs/modules/data_connection/retrievers) for in-depth documentation on retrieval systems. These are useful if you want to build a chatbot with domain-specific knowledge.\n",
|
|
"\n",
|
|
"## Quickstart\n",
|
|
"\n",
|
|
"Here's a quick preview of how we can create chatbot interfaces. First let's install some dependencies and set the required credentials:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "5070a1fd",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"!pip install langchain openai \n",
|
|
"\n",
|
|
"# Set env var OPENAI_API_KEY or load from a .env file:\n",
|
|
"# import dotenv\n",
|
|
"# dotenv.load_dotenv()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "88197b95",
|
|
"metadata": {},
|
|
"source": [
|
|
"With a plain chat model, we can get chat completions by [passing one or more messages](/docs/modules/model_io/models/chat) to the model.\n",
|
|
"\n",
|
|
"The chat model will respond with a message."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 17,
|
|
"id": "5b0d84ae",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)"
|
|
]
|
|
},
|
|
"execution_count": 17,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"from langchain.schema import (\n",
|
|
" AIMessage,\n",
|
|
" HumanMessage,\n",
|
|
" SystemMessage\n",
|
|
")\n",
|
|
"from langchain.chat_models import ChatOpenAI\n",
|
|
"\n",
|
|
"chat = ChatOpenAI()\n",
|
|
"chat([HumanMessage(content=\"Translate this sentence from English to French: I love programming.\")])"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7935d9a5",
|
|
"metadata": {},
|
|
"source": [
|
|
"And if we pass in a list of messages:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 16,
|
|
"id": "afd27a9f",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)"
|
|
]
|
|
},
|
|
"execution_count": 16,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"messages = [\n",
|
|
" SystemMessage(content=\"You are a helpful assistant that translates English to French.\"),\n",
|
|
" HumanMessage(content=\"I love programming.\")\n",
|
|
"]\n",
|
|
"chat(messages)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c7a1d169",
|
|
"metadata": {},
|
|
"source": [
|
|
"We can then wrap our chat model in a `ConversationChain`, which has built-in memory for remembering past user inputs and model outputs."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 20,
|
|
"id": "fdb05d74",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"'Je adore la programmation.'"
|
|
]
|
|
},
|
|
"execution_count": 20,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"from langchain.chains import ConversationChain \n",
|
|
" \n",
|
|
"conversation = ConversationChain(llm=chat) \n",
|
|
"conversation.run(\"Translate this sentence from English to French: I love programming.\") "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 21,
|
|
"id": "d801a173",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"'Ich liebe Programmieren.'"
|
|
]
|
|
},
|
|
"execution_count": 21,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"conversation.run(\"Translate it to German.\") "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "9e86788c",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Memory \n",
|
|
"\n",
|
|
"As we mentioned above, the core component of chatbots is the memory system. One of the simplest and most commonly used forms of memory is `ConversationBufferMemory`:\n",
|
|
"* This memory allows for storing of messages in a `buffer`\n",
|
|
"* When called in a chain, it returns all of the messages it has stored\n",
|
|
"\n",
|
|
"LangChain comes with many other types of memory, too. [See here](/docs/modules/memory/) for in-depth documentation on memory types.\n",
|
|
"\n",
|
|
"For now let's take a quick look at ConversationBufferMemory. We can manually add a few chat messages to the memory like so:"
|
|
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1380a4ea",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"memory = ConversationBufferMemory()\n",
"memory.chat_memory.add_user_message(\"hi!\")\n",
"memory.chat_memory.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "markdown",
"id": "a3d5d1f8",
"metadata": {},
"source": [
"And now we can load from our memory. The key method exposed by all `Memory` classes is `load_memory_variables`. This takes in any initial chain input and returns a list of memory variables which are added to the chain input. \n",
|
|
"\n",
|
|
"Since this simple memory type doesn't actually take into account the chain input when loading memory, we can pass in an empty input for now:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 3,
|
|
"id": "982467e7",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'history': 'Human: hi!\\nAI: whats up?'}"
|
|
]
|
|
},
|
|
"execution_count": 3,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"memory.load_memory_variables({})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7c1b20d4",
|
|
"metadata": {},
|
|
"source": [
|
|
"We can also keep a sliding window of the most recent `k` interactions using `ConversationBufferWindowMemory`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 5,
|
|
"id": "f72b9ff7",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'history': 'Human: not much you\\nAI: not much'}"
|
|
]
|
|
},
|
|
"execution_count": 5,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"from langchain.memory import ConversationBufferWindowMemory\n",
|
|
"\n",
|
|
"memory = ConversationBufferWindowMemory(k=1)\n",
|
|
"memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
|
|
"memory.save_context({\"input\": \"not much you\"}, {\"output\": \"not much\"})\n",
|
|
"memory.load_memory_variables({})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7b84f90a",
|
|
"metadata": {},
|
|
"source": [
|
|
"`ConversationSummaryMemory` is an extension of this theme.\n",
|
|
"\n",
|
|
"It creates a summary of the conversation over time. \n",
|
|
"\n",
|
|
"This memory is most useful for longer conversations where the full message history would consume many tokens."
|
|
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "ca2596ed",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationSummaryMemory\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"memory = ConversationSummaryMemory(llm=llm)\n",
"memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
"memory.save_context({\"input\": \"im working on better docs for chatbots\"}, {\"output\": \"oh, that sounds like a lot of work\"})\n",
"memory.save_context({\"input\": \"yes, but it's worth the effort\"}, {\"output\": \"agreed, good docs are important!\"})"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "060f69b7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': '\\nThe human greets the AI, to which the AI responds. The human then mentions they are working on better docs for chatbots, to which the AI responds that it sounds like a lot of work. The human agrees that it is worth the effort, and the AI agrees that good docs are important.'}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "4bf036f6",
"metadata": {},
"source": [
"`ConversationSummaryBufferMemory` extends this a bit further:\n",
|
|
"\n",
|
|
"It uses token length rather than number of interactions to determine when to flush interactions."
|
|
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "38b42728",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationSummaryBufferMemory\n",
"memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)\n",
"memory.save_context({\"input\": \"hi\"}, {\"output\": \"whats up\"})\n",
"memory.save_context({\"input\": \"not much you\"}, {\"output\": \"not much\"})"
]
},
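{
"cell_type": "markdown",
"id": "c1e7bba2",
"metadata": {},
"source": [
"To see the flushing in action, we can load the memory after saving these interactions. With `max_token_limit=10`, the older interactions should be condensed into a summary while the most recent messages are kept verbatim (the exact summary text will vary from run to run):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7f2b1c3",
"metadata": {},
"outputs": [],
"source": [
"# Older turns beyond the token limit are condensed into a summary,\n",
"# while the most recent messages are returned as-is.\n",
"memory.load_memory_variables({})"
]
},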
{
"cell_type": "markdown",
"id": "ff0db09f",
"metadata": {},
"source": [
"## Conversation\n",
"\n",
"We can unpack what goes under the hood with `ConversationChain`. \n",
|
|
"\n",
|
|
"We can specify our memory, `ConversationSummaryMemory` and we can specify the prompt. "
|
|
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "fccd6995",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a nice chatbot having a conversation with a human.\n",
"Human: hi\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'question': 'hi',\n",
" 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False),\n",
" AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False)],\n",
" 'text': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.prompts import (\n",
"    ChatPromptTemplate,\n",
"    MessagesPlaceholder,\n",
"    SystemMessagePromptTemplate,\n",
"    HumanMessagePromptTemplate,\n",
")\n",
"from langchain.chains import LLMChain\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI()\n",
"\n",
"# Prompt\n",
"prompt = ChatPromptTemplate(\n",
"    messages=[\n",
"        SystemMessagePromptTemplate.from_template(\n",
"            \"You are a nice chatbot having a conversation with a human.\"\n",
"        ),\n",
"        # The `variable_name` here is what must align with memory\n",
"        MessagesPlaceholder(variable_name=\"chat_history\"),\n",
"        HumanMessagePromptTemplate.from_template(\"{question}\")\n",
"    ]\n",
")\n",
"\n",
"# Notice that we `return_messages=True` to fit into the MessagesPlaceholder\n",
|
|
"# Notice that `\"chat_history\"` aligns with the MessagesPlaceholder name\n",
|
|
"memory = ConversationBufferMemory(memory_key=\"chat_history\",return_messages=True)\n",
|
|
"conversation = LLMChain(\n",
|
|
" llm=llm,\n",
|
|
" prompt=prompt,\n",
|
|
" verbose=True,\n",
|
|
" memory=memory\n",
|
|
")\n",
|
|
"\n",
|
|
"# Notice that we just pass in the `question` variables - `chat_history` gets populated by memory\n",
|
|
"conversation({\"question\": \"hi\"})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 25,
|
|
"id": "eb0cadfd",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\n",
|
|
"\n",
|
|
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
|
|
"Prompt after formatting:\n",
|
|
"\u001b[32;1m\u001b[1;3mSystem: You are a nice chatbot having a conversation with a human.\n",
|
|
"Human: hi\n",
|
|
"AI: Hello! How can I assist you today?\n",
|
|
"Human: Translate this sentence from English to French: I love programming.\u001b[0m\n",
|
|
"\n",
|
|
"\u001b[1m> Finished chain.\u001b[0m\n"
|
|
]
|
|
},
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'question': 'Translate this sentence from English to French: I love programming.',\n",
|
|
" 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False),\n",
|
|
" AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False),\n",
|
|
" HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False),\n",
|
|
" AIMessage(content='Sure! The translation of \"I love programming\" from English to French is \"J\\'adore programmer.\"', additional_kwargs={}, example=False)],\n",
|
|
" 'text': 'Sure! The translation of \"I love programming\" from English to French is \"J\\'adore programmer.\"'}"
|
|
]
|
|
},
|
|
"execution_count": 25,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"conversation({\"question\": \"Translate this sentence from English to French: I love programming.\"})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 26,
|
|
"id": "c56d6219",
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\n",
|
|
"\n",
|
|
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
|
|
"Prompt after formatting:\n",
|
|
"\u001b[32;1m\u001b[1;3mSystem: You are a nice chatbot having a conversation with a human.\n",
|
|
"Human: hi\n",
|
|
"AI: Hello! How can I assist you today?\n",
|
|
"Human: Translate this sentence from English to French: I love programming.\n",
|
|
"AI: Sure! The translation of \"I love programming\" from English to French is \"J'adore programmer.\"\n",
|
|
"Human: Now translate the sentence to German.\u001b[0m\n",
|
|
"\n",
|
|
"\u001b[1m> Finished chain.\u001b[0m\n"
|
|
]
|
|
},
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"{'question': 'Now translate the sentence to German.',\n",
|
|
" 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False),\n",
|
|
" AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False),\n",
|
|
" HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False),\n",
|
|
" AIMessage(content='Sure! The translation of \"I love programming\" from English to French is \"J\\'adore programmer.\"', additional_kwargs={}, example=False),\n",
|
|
" HumanMessage(content='Now translate the sentence to German.', additional_kwargs={}, example=False),\n",
|
|
" AIMessage(content='Certainly! The translation of \"I love programming\" from English to German is \"Ich liebe das Programmieren.\"', additional_kwargs={}, example=False)],\n",
|
|
" 'text': 'Certainly! The translation of \"I love programming\" from English to German is \"Ich liebe das Programmieren.\"'}"
|
|
]
|
|
},
|
|
"execution_count": 26,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"conversation({\"question\": \"Now translate the sentence to German.\"})"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "43858489",
|
|
"metadata": {},
|
|
"source": [
|
|
"We can see the chat history preserved in the prompt using the [LangSmith trace](https://smith.langchain.com/public/dce34c57-21ca-4283-9020-a8e0d78a59de/r).\n",
|
|
"\n",
|
|
"![Image description](/img/chat_use_case_2.png)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3f35cc16",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Chat Retrieval\n",
|
|
"\n",
|
|
"Now, suppose we want to [chat with documents](https://twitter.com/mayowaoshin/status/1640385062708424708?s=20) or some other source of knowledge.\n",
|
|
"\n",
|
|
"This is popular use case, combining chat with [document retrieval](/docs/use_cases/question_answering).\n",
|
|
"\n",
|
|
"It allows us to chat with specific information that the model was not trained on."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"id": "1a01e7b5",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"!pip install tiktoken chromadb"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "88e220de",
|
|
"metadata": {},
|
|
"source": [
|
|
"Load a blog post."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 31,
|
|
"id": "1b99b36c",
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from langchain.document_loaders import WebBaseLoader\n",
|
|
"\n",
|
|
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
|
|
"data = loader.load()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3662ce79",
|
|
"metadata": {},
|
|
"source": [
|
|
"Split and store this in a vector."
|
|
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "058f1541",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
]
},
{
"cell_type": "markdown",
"id": "603d9441",
"metadata": {},
"source": [
"Create our memory, as before, but's let's use `ConversationSummaryMemory`."
|
|
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "f89fd3f5",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationSummaryMemory(llm=llm, memory_key=\"chat_history\", return_messages=True)"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "28503423",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"llm = ChatOpenAI()\n",
"retriever = vectorstore.as_retriever()\n",
"qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "a9c3bd5e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'How do agents use Task decomposition?',\n",
" 'chat_history': [SystemMessage(content='', additional_kwargs={})],\n",
" 'answer': 'Agents can use task decomposition in several ways:\\n\\n1. Simple prompting: Agents can use Language Model based prompting to break down tasks into subgoals. For example, by providing prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\", the agent can generate a sequence of smaller steps that lead to the completion of the overall task.\\n\\n2. Task-specific instructions: Agents can be given task-specific instructions to guide their planning process. For example, if the task is to write a novel, the agent can be instructed to \"Write a story outline.\" This provides a high-level structure for the task and helps in breaking it down into smaller components.\\n\\n3. Human inputs: Agents can also take inputs from humans to decompose tasks. This can be done through direct communication or by leveraging human expertise. Humans can provide guidance and insights to help the agent break down complex tasks into manageable subgoals.\\n\\nOverall, task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.'}"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa(\"How do agents use Task decomposition?\")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "a29a7713",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'What are the various ways to implemet memory to support it?',\n",
|
|
" 'chat_history': [SystemMessage(content='The human asks how agents use task decomposition. The AI explains that agents can use task decomposition in several ways, including simple prompting, task-specific instructions, and human inputs. Task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.', additional_kwargs={})],\n",
|
|
" 'answer': 'There are several ways to implement memory to support task decomposition:\\n\\n1. Long-Term Memory Management: This involves storing and organizing information in a long-term memory system. The agent can retrieve past experiences, knowledge, and learned strategies to guide the task decomposition process.\\n\\n2. Internet Access: The agent can use internet access to search for relevant information and gather resources to aid in task decomposition. This allows the agent to access a vast amount of information and utilize it in the decomposition process.\\n\\n3. GPT-3.5 Powered Agents: The agent can delegate simple tasks to GPT-3.5 powered agents. These agents can perform specific tasks or provide assistance in task decomposition, allowing the main agent to focus on higher-level planning and decision-making.\\n\\n4. File Output: The agent can store the results of task decomposition in files or documents. This allows for easy retrieval and reference during the execution of the task.\\n\\nThese memory resources help the agent in organizing and managing information, making informed decisions, and effectively decomposing complex tasks into smaller, manageable subgoals.'}"
|
|
]
|
|
},
|
|
"execution_count": 40,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"qa(\"What are the various ways to implemet memory to support it?\")"
|
|
]
},
{
"cell_type": "markdown",
"id": "d5e8d5f4",
"metadata": {},
"source": [
"Again, we can use the [LangSmith trace](https://smith.langchain.com/public/18460363-0c70-4c72-81c7-3b57253bb58c/r) to explore the prompt structure.\n",
"\n",
"### Going deeper\n",
"\n",
"* Agents, such as the [conversational retrieval agent](/docs/use_cases/question_answering/how_to/conversational_retrieval_agents), can be used for retrieval when necessary while also holding a conversation.\n"
|
|
]
},
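{
"cell_type": "markdown",
"id": "9b6a1d2e",
"metadata": {},
"source": [
"As a minimal sketch of that approach (the tool name and description below are illustrative), we can wrap our retriever as a tool and hand it to an agent that decides for itself when to retrieve:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c4d8e0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.agent_toolkits import (\n",
"    create_conversational_retrieval_agent,\n",
"    create_retriever_tool,\n",
")\n",
"\n",
"# Wrap the retriever as a tool the agent can choose to call\n",
"tool = create_retriever_tool(\n",
"    retriever,\n",
"    \"search_agent_blog_post\",\n",
"    \"Searches and returns documents from the blog post on autonomous agents.\",\n",
")\n",
"\n",
"# The agent holds a conversation and only retrieves when it decides it needs to\n",
"agent_executor = create_conversational_retrieval_agent(llm, [tool], verbose=True)\n",
"agent_executor({\"input\": \"How do agents use task decomposition?\"})"
]
}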
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}