updates to vectorstore memory (#2875)

fix_agent_callbacks
Harrison Chase committed 1 year ago via GitHub
parent 203c0eb2ae
commit 0a38bbc750

@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "ff4be5f3",
"metadata": {},
@@ -10,7 +9,9 @@
"\n",
"`VectorStoreRetrieverMemory` stores memories in a VectorDB and queries the top-K most \"salient\" docs every time it is called.\n",
"\n",
"This differs from most of the other Memory classes in that it doesn't explicitly track the order of interactions."
"This differs from most of the other Memory classes in that it doesn't explicitly track the order of interactions.\n",
"\n",
"In this case, the \"docs\" are previous conversation snippets. This can be useful to refer to relevant pieces of information that the AI was told earlier in the conversation."
]
},
{
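For reference, the cell that initializes the vector store is not part of this diff; below is a minimal sketch of how the retriever-backed memory described above is typically wired up, assuming an empty FAISS index and OpenAI embeddings as used elsewhere in this notebook:

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import FAISS

embedding_size = 1536  # dimensionality of OpenAI embeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})

# k controls the "top-K" most salient snippets returned on every query
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)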
@@ -26,7 +27,8 @@
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import VectorStoreRetrieverMemory\n",
"from langchain.chains import ConversationChain"
"from langchain.chains import ConversationChain\n",
"from langchain.prompts import PromptTemplate"
]
},
{
@@ -41,7 +43,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 29,
"id": "eef56f65",
"metadata": {
"tags": []
@@ -61,7 +63,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8f4bdf92",
"metadata": {},
@@ -73,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 30,
"id": "e00d4938",
"metadata": {
"tags": []
@@ -86,14 +87,14 @@
"memory = VectorStoreRetrieverMemory(retriever=retriever)\n",
"\n",
"# When added to an agent, the memory object can save pertinent information from conversations or used tools\n",
"memory.save_context({\"input\": \"check the latest scores of the Warriors game\"}, {\"output\": \"the Warriors are up against the Astros 88 to 84\"})\n",
"memory.save_context({\"input\": \"I need help doing my taxes - what's the standard deduction this year?\"}, {\"output\": \"...\"})\n",
"memory.save_context({\"input\": \"What's the the time?\"}, {\"output\": f\"It's {datetime.now()}\"}) # "
"memory.save_context({\"input\": \"My favorite food is pizza\"}, {\"output\": \"thats good to know\"})\n",
"memory.save_context({\"input\": \"My favorite sport is soccer\"}, {\"output\": \"...\"})\n",
"memory.save_context({\"input\": \"I don't the Celtics\"}, {\"output\": \"ok\"}) # "
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 31,
"id": "2fe28a28",
"metadata": {
"tags": []
@@ -103,7 +104,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"input: I need help doing my taxes - what's the standard deduction this year?\n",
"input: My favorite sport is soccer\n",
"output: ...\n"
]
}
@@ -111,7 +112,7 @@
"source": [
"# Notice the first result returned is the memory pertaining to tax help, which the language model deems more semantically relevant\n",
"# to a 1099 than the other documents, despite them both containing numbers.\n",
"print(memory.load_memory_variables({\"prompt\": \"What's a 1099?\"})[\"history\"])"
"print(memory.load_memory_variables({\"prompt\": \"what sport should i watch?\"})[\"history\"])"
]
},
{
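For comparison, a hypothetical food-related query (not in the notebook) would be expected to surface the pizza snippet instead, since retrieval is driven purely by embedding similarity rather than by recency or order:

print(memory.load_memory_variables({"prompt": "what should I have for dinner?"})["history"])
# expected to return something like:
# input: My favorite food is pizza
# output: thats good to know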
@@ -125,7 +126,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 32,
"id": "ebd68c10",
"metadata": {
"tags": []
@@ -141,9 +142,13 @@
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Relevant pieces of previous conversation:\n",
"input: My favorite food is pizza\n",
"output: thats good to know\n",
"\n",
"(You do not need to use these pieces of information if not relevant)\n",
"\n",
"Current conversation:\n",
"input: I need help doing my taxes - what's the standard deduction this year?\n",
"output: ...\n",
"Human: Hi, my name is Perry, what's up?\n",
"AI:\u001b[0m\n",
"\n",
@@ -153,18 +158,32 @@
{
"data": {
"text/plain": [
"\" Hi Perry, my name is AI. I'm doing great, how about you? I understand you need help with your taxes. What specifically do you need help with?\""
"\" Hi Perry, I'm doing well. How about you?\""
]
},
"execution_count": 5,
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = OpenAI(temperature=0) # Can be any valid LLM\n",
"_DEFAULT_TEMPLATE = \"\"\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Relevant pieces of previous conversation:\n",
"{history}\n",
"\n",
"(You do not need to use these pieces of information if not relevant)\n",
"\n",
"Current conversation:\n",
"Human: {input}\n",
"AI:\"\"\"\n",
"PROMPT = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"], template=_DEFAULT_TEMPLATE\n",
")\n",
"conversation_with_summary = ConversationChain(\n",
" llm=llm, \n",
" prompt=PROMPT,\n",
" # We set a very low max_token_limit for the purposes of testing.\n",
" memory=memory,\n",
" verbose=True\n",
@@ -174,7 +193,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 33,
"id": "86207a61",
"metadata": {
"tags": []
@@ -190,10 +209,14 @@
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Relevant pieces of previous conversation:\n",
"input: My favorite sport is soccer\n",
"output: ...\n",
"\n",
"(You do not need to use these pieces of information if not relevant)\n",
"\n",
"Current conversation:\n",
"input: check the latest scores of the Warriors game\n",
"output: the Warriors are up against the Astros 88 to 84\n",
"Human: If the Cavaliers were to face off against the Warriers or the Astros, who would they most stand a chance to beat?\n",
"Human: what's my favorite sport?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -202,22 +225,22 @@
{
"data": {
"text/plain": [
"\" It's hard to say without knowing the current form of the teams. However, based on the current scores, it looks like the Cavaliers would have a better chance of beating the Astros than the Warriors.\""
"' You told me earlier that your favorite sport is soccer.'"
]
},
"execution_count": 6,
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Here, the basketball related content is surfaced\n",
"conversation_with_summary.predict(input=\"If the Cavaliers were to face off against the Warriers or the Astros, who would they most stand a chance to beat?\")"
"conversation_with_summary.predict(input=\"what's my favorite sport?\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 34,
"id": "8c669db1",
"metadata": {
"tags": []
@@ -233,10 +256,14 @@
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Relevant pieces of previous conversation:\n",
"input: My favorite food is pizza\n",
"output: thats good to know\n",
"\n",
"(You do not need to use these pieces of information if not relevant)\n",
"\n",
"Current conversation:\n",
"input: What's the the time?\n",
"output: It's 2023-04-13 09:18:55.623736\n",
"Human: What day is it tomorrow?\n",
"Human: Whats my favorite food\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -245,10 +272,10 @@
{
"data": {
"text/plain": [
"' Tomorrow is 2023-04-14.'"
"' You said your favorite food is pizza.'"
]
},
"execution_count": 7,
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
@@ -256,12 +283,12 @@
"source": [
"# Even though the language model is stateless, since relavent memory is fetched, it can \"reason\" about the time.\n",
"# Timestamping memories and data is useful in general to let the agent determine temporal relevance\n",
"conversation_with_summary.predict(input=\"What day is it tomorrow?\")"
"conversation_with_summary.predict(input=\"Whats my favorite food\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 35,
"id": "8c09a239",
"metadata": {
"tags": []
@@ -277,10 +304,14 @@
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"Relevant pieces of previous conversation:\n",
"input: Hi, my name is Perry, what's up?\n",
"response: Hi Perry, my name is AI. I'm doing great, how about you? I understand you need help with your taxes. What specifically do you need help with?\n",
"Human: What's your name?\n",
"response: Hi Perry, I'm doing well. How about you?\n",
"\n",
"(You do not need to use these pieces of information if not relevant)\n",
"\n",
"Current conversation:\n",
"Human: What's my name?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -289,10 +320,10 @@
{
"data": {
"text/plain": [
"\" My name is AI. It's nice to meet you, Perry.\""
"' Your name is Perry.'"
]
},
"execution_count": 8,
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
@@ -301,8 +332,16 @@
"# The memories from the conversation are automatically stored,\n",
"# since this query best matches the introduction chat above,\n",
"# the agent is able to 'remember' the user's name.\n",
"conversation_with_summary.predict(input=\"What's your name?\")"
"conversation_with_summary.predict(input=\"What's my name?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df27c7dc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -321,7 +360,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.9.1"
}
},
"nbformat": 4,
