{
"cells": [
{
"cell_type": "markdown",
"id": "70b333e6",
"metadata": {},
"source": [
"[![View Article](https://img.shields.io/badge/View%20Article-blue)](https://www.mongodb.com/developer/products/atlas/advanced-rag-langchain-mongodb/)\n"
]
},
{
"cell_type": "markdown",
"id": "d84a72ea",
"metadata": {},
"source": [
"# Adding Semantic Caching and Memory to your RAG Application using MongoDB and LangChain\n",
"\n",
"In this notebook, we will see how to use MongoDBAtlasSemanticCache and MongoDBChatMessageHistory in your RAG application.\n"
]
},
{
"cell_type": "markdown",
"id": "65527202",
"metadata": {},
"source": [
"## Step 1: Install required libraries\n",
"\n",
"- **datasets**: Python library to get access to datasets available on the Hugging Face Hub\n",
"\n",
"- **langchain**: Python toolkit for LangChain\n",
"\n",
"- **langchain-mongodb**: Python package to use MongoDB as a vector store, semantic cache, chat message history store, etc. in LangChain\n",
"\n",
"- **langchain-openai**: Python package to use OpenAI models with LangChain\n",
"\n",
"- **pymongo**: Python toolkit for MongoDB\n",
"\n",
"- **pandas**: Python library for data analysis, exploration, and manipulation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cbc22fa4",
"metadata": {},
"outputs": [],
"source": [
"! pip install -qU datasets langchain langchain-mongodb langchain-openai pymongo pandas"
]
},
{
"cell_type": "markdown",
"id": "39c41e87",
"metadata": {},
"source": [
"## Step 2: Set up prerequisites\n",
"\n",
"* Set the MongoDB connection string. Follow the steps [here](https://www.mongodb.com/docs/manual/reference/connection-string/) to get the connection string from the Atlas UI.\n",
"\n",
"* Set the OpenAI API key. Steps to obtain an API key are [here](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b56412ae",
"metadata": {},
"outputs": [],
"source": [
"import getpass"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "16a20d7a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your MongoDB connection string:········\n"
]
}
],
"source": [
"MONGODB_URI = getpass.getpass(\"Enter your MongoDB connection string:\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "978682d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your OpenAI API key:········\n"
]
}
],
"source": [
"OPENAI_API_KEY = getpass.getpass(\"Enter your OpenAI API key:\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "606081c5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"········\n"
]
}
],
"source": [
"# Optional: enable LangSmith tracing -- useful for debugging\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "f6b8302c",
"metadata": {},
"source": [
"## Step 3: Download the dataset\n",
"\n",
"We will be using MongoDB's [embedded_movies](https://huggingface.co/datasets/MongoDB/embedded_movies) dataset"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1a3433a6",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from datasets import load_dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aee5311b",
"metadata": {},
"outputs": [],
"source": [
"# Ensure you have an HF_TOKEN in your development environment:\n",
"# Access tokens can be created or copied from the Hugging Face platform (https://huggingface.co/docs/hub/en/security-tokens)\n",
"\n",
"# Load MongoDB's embedded_movies dataset from Hugging Face\n",
"# https://huggingface.co/datasets/MongoDB/embedded_movies\n",
"\n",
"data = load_dataset(\"MongoDB/embedded_movies\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1d630a26",
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(data[\"train\"])"
]
},
{
"cell_type": "markdown",
"id": "a1f94f43",
"metadata": {},
"source": [
"## Step 4: Data analysis\n",
"\n",
"Make sure the length of the dataset is what we expect, drop records with missing values, etc."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b276df71",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
"    .dataframe tbody tr th:only-of-type {\n",
"        vertical-align: middle;\n",
"    }\n",
"\n",
"    .dataframe tbody tr th {\n",
"        vertical-align: top;\n",
"    }\n",
"\n",
"    .dataframe thead th {\n",
"        text-align: right;\n",
"    }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\">\n",
"      <th></th>\n",
"      <th>fullplot</th>\n",
"      <th>type</th>\n",
"      <th>plot_embedding</th>\n",
"      <th>num_mflix_comments</th>\n",
"      <th>runtime</th>\n",
"      <th>writers</th>\n",
"      <th>imdb</th>\n",
"      <th>countries</th>\n",
"      <th>rated</th>\n",
"      <th>plot</th>\n",
"      <th>title</th>\n",
"      <th>languages</th>\n",
"      <th>metacritic</th>\n",
"      <th>directors</th>\n",
"      <th>awards</th>\n",
"      <th>genres</th>\n",
"      <th>poster</th>\n",
"      <th>cast</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <th>0</th>\n",
"      <td>Young Pauline is left a lot of money when her ...</td>\n",
"      <td>movie</td>\n",
"      <td>[0.00072939653, -0.026834568, 0.013515796, -0....</td>\n",
"      <td>0</td>\n",
"      <td>199.0</td>\n",
"      <td>[Charles W. Goddard (screenplay), Basil Dickey...</td>\n",
"      <td>{'id': 4465, 'rating': 7.6, 'votes': 744}</td>\n",
"      <td>[USA]</td>\n",
"      <td>None</td>\n",
"      <td>Young Pauline is left a lot of money when her ...</td>\n",
"      <td>The Perils of Pauline</td>\n",
"      <td>[English]</td>\n",
"      <td>NaN</td>\n",
"      <td>[Louis J. Gasnier, Donald MacKenzie]</td>\n",
"      <td>{'nominations': 0, 'text': '1 win.', 'wins': 1}</td>\n",
"      <td>[Action]</td>\n",
"      <td>https://m.media-amazon.com/images/M/MV5BMzgxOD...</td>\n",
"      <td>[Pearl White, Crane Wilbur, Paul Panzer, Edwar...</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" fullplot type \\\n",
"0 Young Pauline is left a lot of money when her ... movie \n",
"\n",
" plot_embedding num_mflix_comments \\\n",
"0 [0.00072939653, -0.026834568, 0.013515796, -0.... 0 \n",
"\n",
" runtime writers \\\n",
"0 199.0 [Charles W. Goddard (screenplay), Basil Dickey... \n",
"\n",
" imdb countries rated \\\n",
"0 {'id': 4465, 'rating': 7.6, 'votes': 744} [USA] None \n",
"\n",
" plot title \\\n",
"0 Young Pauline is left a lot of money when her ... The Perils of Pauline \n",
"\n",
" languages metacritic directors \\\n",
"0 [English] NaN [Louis J. Gasnier, Donald MacKenzie] \n",
"\n",
" awards genres \\\n",
"0 {'nominations': 0, 'text': '1 win.', 'wins': 1} [Action] \n",
"\n",
" poster \\\n",
"0 https://m.media-amazon.com/images/M/MV5BMzgxOD... \n",
"\n",
" cast \n",
"0 [Pearl White, Crane Wilbur, Paul Panzer, Edwar... "
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Previewing the contents of the data\n",
"df.head(1)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "22ab375d",
"metadata": {},
"outputs": [],
"source": [
"# Only keep records where the fullplot field is not null\n",
"df = df[df[\"fullplot\"].notna()]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "fceed99a",
"metadata": {},
"outputs": [],
"source": [
"# Renaming the embedding field to \"embedding\" -- required by LangChain\n",
"df.rename(columns={\"plot_embedding\": \"embedding\"}, inplace=True)"
]
},
{
"cell_type": "markdown",
"id": "aedec13a",
"metadata": {},
"source": [
"## Step 5: Create a simple RAG chain using MongoDB as the vector store"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "11d292f3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mongodb import MongoDBAtlasVectorSearch\n",
"from pymongo import MongoClient\n",
"\n",
"# Initialize the MongoDB Python client\n",
"client = MongoClient(MONGODB_URI, appname=\"devrel.content.python\")\n",
"\n",
"DB_NAME = \"langchain_chatbot\"\n",
"COLLECTION_NAME = \"data\"\n",
"ATLAS_VECTOR_SEARCH_INDEX_NAME = \"vector_index\"\n",
"collection = client[DB_NAME][COLLECTION_NAME]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d8292d53",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"DeleteResult({'n': 1000, 'electionId': ObjectId('7fffffff00000000000000f6'), 'opTime': {'ts': Timestamp(1710523288, 1033), 't': 246}, 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1710523288, 1042), 'signature': {'hash': b\"i\\xa8\\xe9'\\x1ed\\xf2u\\xf3L\\xff\\xb1\\xf5\\xbfA\\x90\\xabJ\\x12\\x83\", 'keyId': 7299545392000008318}}, 'operationTime': Timestamp(1710523288, 1033)}, acknowledged=True)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Delete any existing records in the collection\n",
"collection.delete_many({})"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "36c68914",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Data ingestion into MongoDB completed\n"
]
}
],
"source": [
"# Data Ingestion\n",
"records = df.to_dict(\"records\")\n",
"collection.insert_many(records)\n",
"\n",
"print(\"Data ingestion into MongoDB completed\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "cbfca0b8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# Using the text-embedding-ada-002 model since it was used to create the embeddings in the movies dataset\n",
"embeddings = OpenAIEmbeddings(\n",
"    openai_api_key=OPENAI_API_KEY, model=\"text-embedding-ada-002\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "798e176c",
"metadata": {},
"outputs": [],
"source": [
"# Vector Store Creation\n",
"vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n",
"    connection_string=MONGODB_URI,\n",
"    namespace=DB_NAME + \".\" + COLLECTION_NAME,\n",
"    embedding=embeddings,\n",
"    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
"    text_key=\"fullplot\",\n",
")"
]
},
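{
"cell_type": "markdown",
"id": "7f3b2a10",
"metadata": {},
"source": [
"**NOTE:** Vector search against this collection assumes an Atlas Vector Search index named `vector_index` (the value of `ATLAS_VECTOR_SEARCH_INDEX_NAME`) on the `embedding` field of the `langchain_chatbot.data` collection. If you have not created it yet, the next cell is a minimal sketch of what such a definition might look like, assuming the 1536-dimensional `text-embedding-ada-002` vectors used in this dataset and cosine similarity; create the index of type `vectorSearch` from the Atlas UI or the Atlas Administration API."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c4d5e21",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of the Atlas Vector Search index definition assumed by the cells above.\n",
"# Create it as a \"vectorSearch\" index named \"vector_index\" on the data collection,\n",
"# e.g. via the Atlas UI JSON editor.\n",
"vector_index_definition = {\n",
"    \"fields\": [\n",
"        {\n",
"            \"type\": \"vector\",\n",
"            \"path\": \"embedding\",  # the renamed plot_embedding field\n",
"            \"numDimensions\": 1536,  # dimensionality of text-embedding-ada-002\n",
"            \"similarity\": \"cosine\",  # one reasonable choice of similarity metric\n",
"        }\n",
"    ]\n",
"}\n",
"\n",
"print(vector_index_definition)"
]
},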
{
"cell_type": "code",
"execution_count": 49,
"id": "c71cd087",
"metadata": {},
"outputs": [],
"source": [
"# Using the MongoDB vector store as a retriever in a RAG chain\n",
"retriever = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 5})"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "b6588cd3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Generate context using the retriever, and pass the user question through\n",
"retrieve = {\n",
"    \"context\": retriever | (lambda docs: \"\\n\\n\".join([d.page_content for d in docs])),\n",
"    \"question\": RunnablePassthrough(),\n",
"}\n",
"template = \"\"\"Answer the question based only on the following context: \\\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"# Defining the chat prompt\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"# Defining the model to be used for chat completion\n",
"model = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)\n",
"# Parse output as a string\n",
"parse_output = StrOutputParser()\n",
"\n",
"# Naive RAG chain\n",
"naive_rag_chain = retrieve | prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "aaae21f5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "markdown",
"id": "75f929ef",
"metadata": {},
"source": [
"## Step 6: Create a RAG chain with chat history"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "94e7bd4a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "5bb30860",
"metadata": {},
"outputs": [],
"source": [
"def get_session_history(session_id: str) -> MongoDBChatMessageHistory:\n",
"    return MongoDBChatMessageHistory(\n",
"        MONGODB_URI, session_id, database_name=DB_NAME, collection_name=\"history\"\n",
"    )"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "f51d0f35",
"metadata": {},
"outputs": [],
"source": [
"# Given a follow-up question and history, create a standalone question\n",
"standalone_system_prompt = \"\"\"\n",
"Given a chat history and a follow-up question, rephrase the follow-up question to be a standalone question. \\\n",
"Do NOT answer the question, just reformulate it if needed, otherwise return it as is. \\\n",
"Only return the final standalone question. \\\n",
"\"\"\"\n",
"standalone_question_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", standalone_system_prompt),\n",
"        MessagesPlaceholder(variable_name=\"history\"),\n",
"        (\"human\", \"{question}\"),\n",
"    ]\n",
")\n",
"\n",
"question_chain = standalone_question_prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "f3ef3354",
"metadata": {},
"outputs": [],
"source": [
"# Generate context by passing the output of the question_chain, i.e. the standalone question, to the retriever\n",
"retriever_chain = RunnablePassthrough.assign(\n",
"    context=question_chain\n",
"    | retriever\n",
"    | (lambda docs: \"\\n\\n\".join([d.page_content for d in docs]))\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "5afb7345",
"metadata": {},
"outputs": [],
"source": [
"# Create a prompt that includes the context, history and the follow-up question\n",
"rag_system_prompt = \"\"\"Answer the question based only on the following context: \\\n",
"{context}\n",
"\"\"\"\n",
"rag_prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\"system\", rag_system_prompt),\n",
"        MessagesPlaceholder(variable_name=\"history\"),\n",
"        (\"human\", \"{question}\"),\n",
"    ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "f95f47d0",
"metadata": {},
"outputs": [],
"source": [
"# RAG chain\n",
"rag_chain = retriever_chain | rag_prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "9618d395",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The best movie to watch when feeling down could be \"Last Action Hero.\" It\\'s a fun and action-packed film that blends reality and fantasy, offering an escape from the real world and providing an entertaining distraction.'"
]
},
"execution_count": 57,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# RAG chain with history\n",
"with_message_history = RunnableWithMessageHistory(\n",
"    rag_chain,\n",
"    get_session_history,\n",
"    input_messages_key=\"question\",\n",
"    history_messages_key=\"history\",\n",
")\n",
"with_message_history.invoke(\n",
"    {\"question\": \"What is the best movie to watch when sad?\"},\n",
"    {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "6e3080d1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I apologize for the confusion. Another movie that might lift your spirits when you\\'re feeling sad is \"Smilla\\'s Sense of Snow.\" It\\'s a mystery thriller that could engage your mind and distract you from your sadness with its intriguing plot and suspenseful storyline.'"
]
},
"execution_count": 58,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
"    {\n",
"        \"question\": \"Hmmm..I don't want to watch that one. Can you suggest something else?\"\n",
"    },\n",
"    {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "daea2953",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'For a lighter movie option, you might enjoy \"Cousins.\" It\\'s a comedy film set in Barcelona with action and humor, offering a fun and entertaining escape from reality. The storyline is engaging and filled with comedic moments that could help lift your spirits.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
"    {\"question\": \"How about something more light?\"},\n",
"    {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
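{
"cell_type": "markdown",
"id": "9a1b2c3d",
"metadata": {},
"source": [
"Because the chat history is persisted in MongoDB (the `history` collection), you can inspect what has been stored for a session at any point. The cell below is a small sanity check, reusing the `get_session_history` helper defined above for session `\"1\"`; `message.type` and `message.content` come from LangChain's message objects."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d4e5f6a",
"metadata": {},
"outputs": [],
"source": [
"# Inspect the messages persisted for session \"1\" in the history collection\n",
"chat_history = get_session_history(\"1\")\n",
"for message in chat_history.messages:\n",
"    # Print the role (\"human\"/\"ai\") and a truncated preview of each stored message\n",
"    print(f\"{message.type}: {message.content[:100]}\")"
]
},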
{
"cell_type": "markdown",
"id": "0de23a88",
"metadata": {},
"source": [
"## Step 7: Get faster responses using Semantic Cache\n",
"\n",
"**NOTE:** The semantic cache only caches the input to the LLM. When using it in retrieval chains, remember that the documents retrieved can change between runs, resulting in cache misses for semantically similar queries."
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "5d6b6741",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.globals import set_llm_cache\n",
"from langchain_mongodb.cache import MongoDBAtlasSemanticCache\n",
"\n",
"set_llm_cache(\n",
"    MongoDBAtlasSemanticCache(\n",
"        connection_string=MONGODB_URI,\n",
"        embedding=embeddings,\n",
"        collection_name=\"semantic_cache\",\n",
"        database_name=DB_NAME,\n",
"        index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
"        wait_until_ready=True,  # Optional, waits until the cache is ready to be used\n",
"    )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 62,
"id": "9825bc7b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 87.8 ms, sys: 670 µs, total: 88.5 ms\n",
"Wall time: 1.24 s\n"
]
},
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 62,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "code",
"execution_count": 63,
"id": "a5e518cf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 43.5 ms, sys: 4.16 ms, total: 47.7 ms\n",
"Wall time: 255 ms\n"
]
},
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 63,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "code",
"execution_count": 64,
"id": "3d3d3ad3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 115 ms, sys: 171 µs, total: 115 ms\n",
"Wall time: 1.38 s\n"
]
},
{
"data": {
"text/plain": [
"'I would recommend watching \"Last Action Hero\" when sad, as it is a fun and action-packed film that can help lift your spirits.'"
]
},
"execution_count": 64,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"Which movie do I watch when sad?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_pytorch_p310",
"language": "python",
"name": "conda_pytorch_p310"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}