
# Redis

This page covers how to use the Redis ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Redis wrappers.

## Installation and Setup

- Install the Redis Python SDK with `pip install redis`

## Wrappers

### Cache

The Cache wrapper allows for Redis to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.

#### Standard Cache

The standard cache is the bread and butter of Redis use cases in production, for both open source and enterprise users globally.

To import this cache:

```python
from langchain.cache import RedisCache
```

To use this cache with your LLMs:

```python
import langchain
import redis

redis_client = redis.Redis.from_url(...)
langchain.llm_cache = RedisCache(redis_client)
```
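
Once the cache is registered, repeated identical prompts are answered from Redis instead of calling the model again. A minimal end-to-end sketch, assuming the OpenAI LLM wrapper and a Redis instance at `redis://localhost:6379`:

```python
import langchain
import redis

from langchain.cache import RedisCache
from langchain.llms import OpenAI

# Connect to a local Redis instance and register it as the global LLM cache
redis_client = redis.Redis.from_url("redis://localhost:6379")
langchain.llm_cache = RedisCache(redis_client)

llm = OpenAI()
llm("Tell me a joke")  # first call goes to the provider and the result is stored in Redis
llm("Tell me a joke")  # an identical prompt is served from the cache
```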

#### Semantic Cache

Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached prompts. Under the hood, it uses Redis as both a cache and a vectorstore.

To import this cache:

```python
from langchain.cache import RedisSemanticCache
```

To use this cache with your LLMs:

```python
import langchain
import redis

# use any embedding provider...
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings

redis_url = "redis://localhost:6379"

langchain.llm_cache = RedisSemanticCache(
    embedding=FakeEmbeddings(),
    redis_url=redis_url
)
```
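
The `FakeEmbeddings` class above is only a test helper; in practice you would plug in a real embedding provider. A sketch assuming `OpenAIEmbeddings` (which requires an OpenAI API key in the environment):

```python
import langchain

from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings

# Cached prompts are matched by vector similarity rather than exact string equality
langchain.llm_cache = RedisSemanticCache(
    embedding=OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
)
```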

### VectorStore

The vectorstore wrapper turns Redis into a low-latency vector database for semantic search or LLM content retrieval.

To import this vectorstore:

```python
from langchain.vectorstores import Redis
```
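
As a rough sketch of typical usage (the texts, embedding model, index name, and Redis URL below are illustrative assumptions):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

texts = [
    "Redis is an in-memory data store",
    "LangChain integrates with Redis",
]

# Build an index in Redis and run a similarity search against it
rds = Redis.from_texts(
    texts,
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="example",
)
docs = rds.similarity_search("What is Redis?")
```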

For a more detailed walkthrough of the Redis vectorstore wrapper, see this notebook.

### Retriever

The Redis vector store retriever wrapper generalizes the vectorstore class to perform low-latency document retrieval. To create the retriever, simply call `.as_retriever()` on the base vectorstore class.
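
Continuing from the vectorstore sketch above, usage might look like this:

```python
# Wrap the vectorstore as a retriever and fetch documents relevant to a query
retriever = rds.as_retriever()
relevant_docs = retriever.get_relevant_documents("What is Redis?")
```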

### Memory

Redis can be used to persist LLM conversations.

#### Vector Store Retriever Memory

For a more detailed walkthrough of the VectorStoreRetrieverMemory wrapper, see this notebook.
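
Continuing from the vectorstore sketch above, a minimal sketch of Redis-backed retriever memory:

```python
from langchain.memory import VectorStoreRetrieverMemory

# Store and recall conversation snippets via the Redis-backed retriever
memory = VectorStoreRetrieverMemory(retriever=rds.as_retriever())
memory.save_context({"input": "My favorite color is blue"}, {"output": "Noted"})
memory.load_memory_variables({"prompt": "What is my favorite color?"})
```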

#### Chat Message History Memory

For a detailed example of using Redis to cache conversation message history, see this notebook.
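
As a brief sketch, assuming a local Redis instance and an illustrative session id:

```python
from langchain.memory import RedisChatMessageHistory

# Persist chat messages for a given session in Redis
history = RedisChatMessageHistory(session_id="my-session", url="redis://localhost:6379/0")
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help you?")
history.messages  # messages are read back from Redis
```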