{
"cells": [
{
"cell_type": "markdown",
"id": "00695447",
"metadata": {
"tags": []
},
"source": [
"# Memory in LLMChain\n",
"\n",
"This notebook goes over how to use the Memory class with an `LLMChain`. \n",
"\n",
"We will add the [ConversationBufferMemory](https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html#langchain.memory.buffer.ConversationBufferMemory) class, although this can be any memory class."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9f1aaf47",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.prompts import PromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "4b066ced",
"metadata": {},
"source": [
"The most important step is setting up the prompt correctly. In the below prompt, we have two input keys: one for the actual input, another for the input from the Memory class. Importantly, we make sure the keys in the `PromptTemplate` and the `ConversationBufferMemory` match up (`chat_history`)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e5501eda",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = \"\"\"You are a chatbot having a conversation with a human.\n",
"\n",
"{chat_history}\n",
"Human: {human_input}\n",
"Chatbot:\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"chat_history\", \"human_input\"], template=template\n",
")\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f6566275",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = OpenAI()\n",
"llm_chain = LLMChain(\n",
" llm=llm,\n",
" prompt=prompt,\n",
" verbose=True,\n",
" memory=memory,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e2b189dc",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
"\n",
"\n",
"Human: Hi there my friend\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Hi there! How can I help you today?'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(human_input=\"Hi there my friend\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a902729f",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
"\n",
"Human: Hi there my friend\n",
"AI: Hi there! How can I help you today?\n",
"Human: Not too bad - how are you?\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" I'm doing great, thanks for asking! How are you doing?\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(human_input=\"Not too bad - how are you?\")"
]
},
{
"cell_type": "markdown",
"id": "33978824-0048-4e75-9431-1b2c02c169b0",
"metadata": {},
"source": [
"## Adding Memory to a chat model-based `LLMChain`\n",
"\n",
"The above works for completion-style LLMs, but if you are using a chat model, you will likely get better performance using structured chat messages. Below is an example."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ae5309bb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import SystemMessage\n",
"from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder"
]
},
{
"cell_type": "markdown",
"id": "a237bbb8-e448-4238-8420-004e046ef84e",
"metadata": {},
"source": [
"We will use the [ChatPromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html) class to set up the chat prompt.\n",
"\n",
"The [from_messages](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html#langchain.prompts.chat.ChatPromptTemplate.from_messages) method creates a `ChatPromptTemplate` from a list of messages (e.g., `SystemMessage`, `HumanMessage`, `AIMessage`, `ChatMessage`, etc.) or message templates, such as the [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html#langchain.prompts.chat.MessagesPlaceholder) below.\n",
"\n",
"The configuration below injects the memory into the middle of the chat prompt, under the `chat_history` key, and appends the user's input as a human message at the end of the chat prompt."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9bb8cde1-67c2-4133-b453-5c34fb36ff74",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages([\n",
" SystemMessage(content=\"You are a chatbot having a conversation with a human.\"), # The persistent system prompt\n",
" MessagesPlaceholder(variable_name=\"chat_history\"), # Where the memory will be stored.\n",
" HumanMessagePromptTemplate.from_template(\"{human_input}\"), # Where the human input will be injected\n",
"])\n",
" \n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9f77e466-a1a3-4c69-a001-ac5b7a40e219",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = ChatOpenAI()\n",
"\n",
"chat_llm_chain = LLMChain(\n",
" llm=llm,\n",
" prompt=prompt,\n",
" verbose=True,\n",
" memory=memory,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f9709647-be82-43d5-b076-2a7da344ce8a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a chatbot having a conversation with a human.\n",
"Human: Hi there my friend\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello! How can I assist you today, my friend?'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_llm_chain.predict(human_input=\"Hi there my friend\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "bdf04ebe-525a-4156-a3a7-65fd2df8d6fc",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a chatbot having a conversation with a human.\n",
"Human: Hi there my friend\n",
"AI: Hello! How can I assist you today, my friend?\n",
"Human: Not too bad - how are you?\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"I'm an AI chatbot, so I don't have feelings, but I'm here to help and chat with you! Is there something specific you would like to talk about or any questions I can assist you with?\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat_llm_chain.predict(human_input=\"Not too bad - how are you?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}