[docs]: update Redis (langchain-redis) documentation notebooks (vectorstore, llm caching, chat message history) (#25113)

- **Description:** Adds notebooks for Redis Partner Package
(langchain-redis)
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** `@bsbodden` and `@redis`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Brian Sam-Bodden 2024-08-22 08:53:02 -07:00, committed by GitHub
commit 29c873dd69
4 changed files with 1433 additions and 666 deletions

@@ -0,0 +1,424 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Redis Cache for LangChain\n",
"\n",
"This notebook demonstrates how to use the `RedisCache` and `RedisSemanticCache` classes from the langchain-redis package to implement caching for LLM responses."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, let's install the required dependencies and ensure we have a Redis instance running."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-core langchain-redis langchain-openai redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ensure you have a Redis server running. You can start one using Docker with:\n",
"\n",
"```\n",
"docker run -d -p 6379:6379 redis:latest\n",
"```\n",
"\n",
"Or install and run Redis locally according to your operating system's instructions."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connecting to Redis at: redis://redis:6379\n"
]
}
],
"source": [
"import os\n",
"\n",
"# Use the environment variable if set, otherwise default to localhost\n",
"REDIS_URL = os.getenv(\"REDIS_URL\", \"redis://localhost:6379\")\n",
"print(f\"Connecting to Redis at: {REDIS_URL}\")"
]
},
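{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, verify the connection before proceeding. This is a minimal sanity check using the `redis` Python client; `ping()` returns `True` when the server is reachable and raises an exception otherwise:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import redis\n",
"\n",
"# Optional sanity check: ping() returns True when the server is reachable\n",
"redis_client = redis.Redis.from_url(REDIS_URL)\n",
"print(f\"Redis ping: {redis_client.ping()}\")"
]
},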
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Importing Required Libraries"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"from langchain.globals import set_llm_cache\n",
"from langchain.schema import Generation\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_redis import RedisCache, RedisSemanticCache"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import langchain_core\n",
"import langchain_openai\n",
"import openai\n",
"import redis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set OpenAI API key"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API key not found in environment variables.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Please enter your OpenAI API key: ········\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API key has been set for this session.\n"
]
}
],
"source": [
"from getpass import getpass\n",
"\n",
"# Check if OPENAI_API_KEY is already set in the environment\n",
"openai_api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"\n",
"if not openai_api_key:\n",
" print(\"OpenAI API key not found in environment variables.\")\n",
" openai_api_key = getpass(\"Please enter your OpenAI API key: \")\n",
"\n",
" # Set the API key for the current session\n",
" os.environ[\"OPENAI_API_KEY\"] = openai_api_key\n",
" print(\"OpenAI API key has been set for this session.\")\n",
"else:\n",
" print(\"OpenAI API key found in environment variables.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using RedisCache"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First call (not cached):\n",
"Result: \n",
"\n",
"Caching is the process of storing frequently accessed data in a temporary storage location for faster retrieval. This helps to reduce the time and resources needed to access the data from its original source. Caching is commonly used in computer systems, web browsers, and databases to improve performance and efficiency.\n",
"Time: 1.16 seconds\n",
"\n",
"Second call (cached):\n",
"Result: \n",
"\n",
"Caching is the process of storing frequently accessed data in a temporary storage location for faster retrieval. This helps to reduce the time and resources needed to access the data from its original source. Caching is commonly used in computer systems, web browsers, and databases to improve performance and efficiency.\n",
"Time: 0.05 seconds\n",
"\n",
"Speed improvement: 25.40x faster\n",
"Cache cleared\n"
]
}
],
"source": [
"# Initialize RedisCache\n",
"redis_cache = RedisCache(redis_url=REDIS_URL)\n",
"\n",
"# Set the cache for LangChain to use\n",
"set_llm_cache(redis_cache)\n",
"\n",
"# Initialize the language model\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"\n",
"# Function to measure execution time\n",
"def timed_completion(prompt):\n",
" start_time = time.time()\n",
" result = llm.invoke(prompt)\n",
" end_time = time.time()\n",
" return result, end_time - start_time\n",
"\n",
"\n",
"# First call (not cached)\n",
"prompt = \"Explain the concept of caching in three sentences.\"\n",
"result1, time1 = timed_completion(prompt)\n",
"print(f\"First call (not cached):\\nResult: {result1}\\nTime: {time1:.2f} seconds\\n\")\n",
"\n",
"# Second call (should be cached)\n",
"result2, time2 = timed_completion(prompt)\n",
"print(f\"Second call (cached):\\nResult: {result2}\\nTime: {time2:.2f} seconds\\n\")\n",
"\n",
"print(f\"Speed improvement: {time1 / time2:.2f}x faster\")\n",
"\n",
"# Clear the cache\n",
"redis_cache.clear()\n",
"print(\"Cache cleared\")"
]
},
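{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `RedisCache` matches on the exact prompt string, so even a slight rewording results in a cache miss. The sketch below illustrates this (timings will vary); the semantic cache in the next section addresses exactly this limitation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A reworded prompt misses the exact-match cache (sketch; timings will vary)\n",
"rephrased_prompt = \"In three sentences, explain the concept of caching.\"\n",
"result3, time3 = timed_completion(rephrased_prompt)\n",
"print(f\"Reworded prompt (cache miss expected): {time3:.2f} seconds\")"
]
},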
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using RedisSemanticCache"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Original query:\n",
"Prompt: What is the capital of France?\n",
"Result: \n",
"\n",
"The capital of France is Paris.\n",
"Time: 1.52 seconds\n",
"\n",
"Similar query:\n",
"Prompt: Can you tell me the capital city of France?\n",
"Result: \n",
"\n",
"The capital of France is Paris.\n",
"Time: 0.29 seconds\n",
"\n",
"Speed improvement: 5.22x faster\n",
"Semantic cache cleared\n"
]
}
],
"source": [
"# Initialize RedisSemanticCache\n",
"embeddings = OpenAIEmbeddings()\n",
"semantic_cache = RedisSemanticCache(\n",
" redis_url=REDIS_URL, embeddings=embeddings, distance_threshold=0.2\n",
")\n",
"\n",
"# Set the cache for LangChain to use\n",
"set_llm_cache(semantic_cache)\n",
"\n",
"\n",
"# Function to test semantic cache\n",
"def test_semantic_cache(prompt):\n",
" start_time = time.time()\n",
" result = llm.invoke(prompt)\n",
" end_time = time.time()\n",
" return result, end_time - start_time\n",
"\n",
"\n",
"# Original query\n",
"original_prompt = \"What is the capital of France?\"\n",
"result1, time1 = test_semantic_cache(original_prompt)\n",
"print(\n",
" f\"Original query:\\nPrompt: {original_prompt}\\nResult: {result1}\\nTime: {time1:.2f} seconds\\n\"\n",
")\n",
"\n",
"# Semantically similar query\n",
"similar_prompt = \"Can you tell me the capital city of France?\"\n",
"result2, time2 = test_semantic_cache(similar_prompt)\n",
"print(\n",
" f\"Similar query:\\nPrompt: {similar_prompt}\\nResult: {result2}\\nTime: {time2:.2f} seconds\\n\"\n",
")\n",
"\n",
"print(f\"Speed improvement: {time1 / time2:.2f}x faster\")\n",
"\n",
"# Clear the semantic cache\n",
"semantic_cache.clear()\n",
"print(\"Semantic cache cleared\")"
]
},
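{
"cell_type": "markdown",
"metadata": {},
"source": [
"Conversely, a question that is not semantically close to anything cached should miss. A quick sketch (it re-primes the cache first, since the cell above cleared it; timings will vary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Re-prime the semantic cache, then ask an unrelated question (sketch)\n",
"test_semantic_cache(\"What is the capital of France?\")\n",
"\n",
"unrelated_prompt = \"What is the tallest mountain on Earth?\"\n",
"result3, time3 = test_semantic_cache(unrelated_prompt)\n",
"print(f\"Unrelated query (cache miss expected): {time3:.2f} seconds\")"
]
},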
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom TTL (Time-To-Live)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cached result: Cached response\n",
"Waiting for TTL to expire...\n",
"Result after TTL: Not found (expired)\n"
]
}
],
"source": [
"# Initialize RedisCache with custom TTL\n",
"ttl_cache = RedisCache(redis_url=REDIS_URL, ttl=5) # 60 seconds TTL\n",
"\n",
"# Update a cache entry\n",
"ttl_cache.update(\"test_prompt\", \"test_llm\", [Generation(text=\"Cached response\")])\n",
"\n",
"# Retrieve the cached entry\n",
"cached_result = ttl_cache.lookup(\"test_prompt\", \"test_llm\")\n",
"print(f\"Cached result: {cached_result[0].text if cached_result else 'Not found'}\")\n",
"\n",
"# Wait for TTL to expire\n",
"print(\"Waiting for TTL to expire...\")\n",
"time.sleep(6)\n",
"\n",
"# Try to retrieve the expired entry\n",
"expired_result = ttl_cache.lookup(\"test_prompt\", \"test_llm\")\n",
"print(\n",
" f\"Result after TTL: {expired_result[0].text if expired_result else 'Not found (expired)'}\"\n",
")"
]
},
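{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, the TTL behavior relies on standard Redis key expiry. The sketch below illustrates the expiry mechanism on a plain, throwaway key (`ttl_demo`) using the `redis` client directly; the cache's internal key naming is an implementation detail, so we deliberately avoid touching its keys:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Demonstrate Redis key expiry on an arbitrary throwaway key (sketch)\n",
"r = redis.Redis.from_url(REDIS_URL)\n",
"r.set(\"ttl_demo\", \"value\", ex=5)  # expire after 5 seconds\n",
"print(f\"Remaining TTL: {r.ttl('ttl_demo')} seconds\")\n",
"time.sleep(6)\n",
"print(f\"Value after expiry: {r.get('ttl_demo')}\")  # None once the key expires"
]
},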
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Customizing RedisSemanticCache"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Original result: \n",
"\n",
"The largest planet in our solar system is Jupiter.\n",
"Similar query result: \n",
"\n",
"The largest planet in our solar system is Jupiter.\n"
]
}
],
"source": [
"# Initialize RedisSemanticCache with custom settings\n",
"custom_semantic_cache = RedisSemanticCache(\n",
" redis_url=REDIS_URL,\n",
" embeddings=embeddings,\n",
" distance_threshold=0.1, # Stricter similarity threshold\n",
" ttl=3600, # 1 hour TTL\n",
" name=\"custom_cache\", # Custom cache name\n",
")\n",
"\n",
"# Test the custom semantic cache\n",
"set_llm_cache(custom_semantic_cache)\n",
"\n",
"test_prompt = \"What's the largest planet in our solar system?\"\n",
"result, _ = test_semantic_cache(test_prompt)\n",
"print(f\"Original result: {result}\")\n",
"\n",
"# Try a slightly different query\n",
"similar_test_prompt = \"Which planet is the biggest in the solar system?\"\n",
"similar_result, _ = test_semantic_cache(similar_test_prompt)\n",
"print(f\"Similar query result: {similar_result}\")\n",
"\n",
"# Clean up\n",
"custom_semantic_cache.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"This notebook demonstrated the usage of `RedisCache` and `RedisSemanticCache` from the langchain-redis package. These caching mechanisms can significantly improve the performance of LLM-based applications by reducing redundant API calls and leveraging semantic similarity for intelligent caching. The Redis-based implementation provides a fast, scalable, and flexible solution for caching in distributed systems."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@@ -457,7 +457,9 @@
"tags": []
},
"source": [
"## `Redis` Cache"
"## `Redis` Cache\n",
"\n",
"See the main [Redis cache docs](/docs/integrations/caches/redis_llm_caching/) for detail."
]
},
{

@@ -2,171 +2,347 @@
"cells": [
{
"cell_type": "markdown",
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Redis\n",
"# Redis Chat Message History\n",
"\n",
">[Redis (Remote Dictionary Server)](https://en.wikipedia.org/wiki/Redis) is an open-source in-memory storage, used as a distributed, in-memory keyvalue database, cache and message broker, with optional durability. Because it holds all data in memory and because of its design, `Redis` offers low-latency reads and writes, making it particularly suitable for use cases that require a cache. Redis is the most popular NoSQL database, and one of the most popular databases overall.\n",
">[Redis (Remote Dictionary Server)](https://en.wikipedia.org/wiki/Redis) is an open-source in-memory storage, used as a distributed, in-memory keyvalue database, cache and message broker, with optional durability. `Redis` offers low-latency reads and writes. Redis is the most popular NoSQL database, and one of the most popular databases overall.\n",
"\n",
"This notebook goes over how to use `Redis` to store chat message history."
"This notebook demonstrates how to use the `RedisChatMessageHistory` class from the langchain-redis package to store and manage chat message history using Redis."
]
},
{
"cell_type": "markdown",
"id": "897a4682-f9fc-488b-98f3-ae2acad84600",
"metadata": {},
"source": [
"## Setup\n",
"First we need to install dependencies, and start a redis instance using commands like: `redis-server`."
"\n",
"First, we need to install the required dependencies and ensure we have a Redis instance running."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cda8b56d-baf7-49a2-91a2-4d424a8519cb",
"metadata": {},
"outputs": [],
"source": [
"pip install -U langchain-community redis"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b11090e7-284b-4ed2-9790-ce0d35638717",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import RedisChatMessageHistory"
"%pip install -qU langchain-redis langchain-openai redis"
]
},
{
"cell_type": "markdown",
"id": "20b99474-75ea-422e-9809-fbdb9d103afc",
"metadata": {},
"source": [
"## Store and Retrieve Messages"
"Make sure you have a Redis server running. You can start one using Docker with the following command:\n",
"\n",
"```\n",
"docker run -d -p 6379:6379 redis:latest\n",
"```\n",
"\n",
"Or install and run Redis locally according to the instructions for your operating system."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connecting to Redis at: redis://redis:6379\n"
]
}
],
"source": [
"import os\n",
"\n",
"# Use the environment variable if set, otherwise default to localhost\n",
"REDIS_URL = os.getenv(\"REDIS_URL\", \"redis://localhost:6379\")\n",
"print(f\"Connecting to Redis at: {REDIS_URL}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Importing Required Libraries"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"history = RedisChatMessageHistory(\"foo\", url=\"redis://localhost:6379\")\n",
"\n",
"history.add_user_message(\"hi!\")\n",
"\n",
"history.add_ai_message(\"whats up?\")"
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_redis import RedisChatMessageHistory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage of RedisChatMessageHistory"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "64fc465e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!'), AIMessage(content='whats up?')]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Chat History:\n",
"HumanMessage: Hello, AI assistant!\n",
"AIMessage: Hello! How can I assist you today?\n"
]
}
],
"source": [
"history.messages"
"# Initialize RedisChatMessageHistory\n",
"history = RedisChatMessageHistory(session_id=\"user_123\", redis_url=REDIS_URL)\n",
"\n",
"# Add messages to the history\n",
"history.add_user_message(\"Hello, AI assistant!\")\n",
"history.add_ai_message(\"Hello! How can I assist you today?\")\n",
"\n",
"# Retrieve messages\n",
"print(\"Chat History:\")\n",
"for message in history.messages:\n",
" print(f\"{type(message).__name__}: {message.content}\")"
]
},
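{
"cell_type": "markdown",
"metadata": {},
"source": [
"Histories are keyed by `session_id`, so each session is stored independently. A quick sketch (`user_999` is an arbitrary example id):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A different session_id starts with its own, empty history (sketch)\n",
"other_history = RedisChatMessageHistory(session_id=\"user_999\", redis_url=REDIS_URL)\n",
"print(\"Messages for a fresh session:\", other_history.messages)"
]
},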
{
"cell_type": "markdown",
"id": "465fdd8c-b093-4d19-a55a-30f3b646432b",
"metadata": {},
"source": [
"## Using in the Chains"
"## Using RedisChatMessageHistory with Language Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "94d65d2f-e9bb-4b47-a86d-dd6b1b5e8247",
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"pip install -U langchain-openai"
"### Set OpenAI API key"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ace3e7b2-5e3e-4966-b549-04952a6a9a09",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API key not found in environment variables.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Please enter your OpenAI API key: ········\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API key has been set for this session.\n"
]
}
],
"source": [
"from typing import Optional\n",
"from getpass import getpass\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI"
"# Check if OPENAI_API_KEY is already set in the environment\n",
"openai_api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"\n",
"if not openai_api_key:\n",
" print(\"OpenAI API key not found in environment variables.\")\n",
" openai_api_key = getpass(\"Please enter your OpenAI API key: \")\n",
"\n",
" # Set the API key for the current session\n",
" os.environ[\"OPENAI_API_KEY\"] = openai_api_key\n",
" print(\"OpenAI API key has been set for this session.\")\n",
"else:\n",
" print(\"OpenAI API key found in environment variables.\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5c1fba0d-d06a-4695-ba14-c42a3461ada1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob, as you mentioned earlier. Is there anything specific you would like assistance with, Bob?')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"AI Response 1: Hello Alice! How can I assist you today?\n",
"AI Response 2: Your name is Alice.\n"
]
}
],
"source": [
"# Create a prompt template\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You're an assistant。\"),\n",
" (\"system\", \"You are a helpful AI assistant.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | ChatOpenAI()\n",
"# Initialize the language model\n",
"llm = ChatOpenAI()\n",
"\n",
"# Create the conversational chain\n",
"chain = prompt | llm\n",
"\n",
"\n",
"# Function to get or create a RedisChatMessageHistory instance\n",
"def get_redis_history(session_id: str) -> BaseChatMessageHistory:\n",
" return RedisChatMessageHistory(session_id, redis_url=REDIS_URL)\n",
"\n",
"\n",
"# Create a runnable with message history\n",
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: RedisChatMessageHistory(\n",
" session_id, url=\"redis://localhost:6379\"\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
" chain, get_redis_history, input_messages_key=\"input\", history_messages_key=\"history\"\n",
")\n",
"\n",
"config = {\"configurable\": {\"session_id\": \"foo\"}}\n",
"# Use the chain in a conversation\n",
"response1 = chain_with_history.invoke(\n",
" {\"input\": \"Hi, my name is Alice.\"},\n",
" config={\"configurable\": {\"session_id\": \"alice_123\"}},\n",
")\n",
"print(\"AI Response 1:\", response1.content)\n",
"\n",
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)\n",
"\n",
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
"response2 = chain_with_history.invoke(\n",
" {\"input\": \"What's my name?\"}, config={\"configurable\": {\"session_id\": \"alice_123\"}}\n",
")\n",
"print(\"AI Response 2:\", response2.content)"
]
},
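{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the history factory is keyed by `session_id`, each conversation is tracked independently. A short sketch (`bob_456` is an arbitrary example id):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A new session shares no memory with \"alice_123\" (sketch)\n",
"response3 = chain_with_history.invoke(\n",
"    {\"input\": \"What's my name?\"}, config={\"configurable\": {\"session_id\": \"bob_456\"}}\n",
")\n",
"print(\"AI Response (new session):\", response3.content)"
]
},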
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced Features"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Redis Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76ce3f6b-f4c7-4d27-8031-60f7dd756695",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Custom History: [HumanMessage(content='This is a message with custom configuration.')]\n"
]
}
],
"source": [
"# Initialize with custom Redis configuration\n",
"custom_history = RedisChatMessageHistory(\n",
" \"user_456\",\n",
" redis_url=REDIS_URL,\n",
" key_prefix=\"custom_prefix:\",\n",
" ttl=3600, # Set TTL to 1 hour\n",
" index_name=\"custom_index\",\n",
")\n",
"\n",
"custom_history.add_user_message(\"This is a message with custom configuration.\")\n",
"print(\"Custom History:\", custom_history.messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Searching Messages"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Search Results:\n",
"human: Tell me about artificial intelligence....\n",
"ai: Artificial Intelligence (AI) is a branch of comput...\n"
]
}
],
"source": [
"# Add more messages\n",
"history.add_user_message(\"Tell me about artificial intelligence.\")\n",
"history.add_ai_message(\n",
" \"Artificial Intelligence (AI) is a branch of computer science...\"\n",
")\n",
"\n",
"# Search for messages containing a specific term\n",
"search_results = history.search_messages(\"artificial intelligence\")\n",
"print(\"Search Results:\")\n",
"for result in search_results:\n",
" print(f\"{result['type']}: {result['content'][:50]}...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Clearing History"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Messages after clearing: []\n"
]
}
],
"source": [
"# Clear the chat history\n",
"history.clear()\n",
"print(\"Messages after clearing:\", history.messages)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"This notebook demonstrated the key features of `RedisChatMessageHistory` from the langchain-redis package. It showed how to initialize and use the chat history, integrate it with language models, and utilize advanced features like custom configurations and message searching. Redis provides a fast and scalable solution for managing chat history in AI applications."
]
}
],
"metadata": {
@@ -185,9 +361,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat_minor": 4
}

File diff suppressed because it is too large