# docs: `ecosystem/integrations` update 4

Added missed integrations. Fixed inconsistencies.

## Who can review?

@hwchase17 @dev2049
# Momento
Momento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero capability, and blazing-fast performance.

With Momento Cache, you grab the SDK, get an endpoint, add a few lines to your code, and you're off and running.

This page covers how to use the Momento ecosystem within LangChain.
## Installation and Setup
- Sign up for a free account here and get an auth token
- Install the Momento Python SDK with `pip install momento`
## Cache
The Cache wrapper allows Momento to be used as a serverless, distributed, low-latency cache for LLM prompts and responses.
The standard cache is the go-to use case for Momento users in any environment.
Import the cache as follows:
```python
from langchain.cache import MomentoCache
```
And set up like so:
```python
from datetime import timedelta

from momento import CacheClient, Configurations, CredentialProvider

import langchain

# Instantiate the Momento client
cache_client = CacheClient(
    Configurations.Laptop.v1(),
    CredentialProvider.from_environment_variable("MOMENTO_AUTH_TOKEN"),
    default_ttl=timedelta(days=1))

# Choose a Momento cache name of your choice
cache_name = "langchain"

# Instantiate the LLM cache
langchain.llm_cache = MomentoCache(cache_client, cache_name)
```
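Once `langchain.llm_cache` is set, repeated LLM calls with an identical prompt are served from Momento instead of going back to the model provider. A minimal sketch, assuming LangChain's `OpenAI` LLM wrapper and an `OPENAI_API_KEY` in the environment:

```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")

# First call misses the cache, hits the OpenAI API, and stores
# the response in Momento keyed by the prompt.
print(llm("Tell me a joke"))

# An identical prompt within the TTL (one day above) is answered
# straight from the Momento cache: faster and with no API cost.
print(llm("Tell me a joke"))
```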
## Memory
Momento can be used as a distributed memory store for LLMs.
### Chat Message History Memory
See this notebook for a walkthrough of how to use Momento as a memory store for chat message history.
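As a rough sketch of what the notebook walks through (assuming the `MomentoChatMessageHistory` class with its `from_client_params` constructor, and a `MOMENTO_AUTH_TOKEN` in the environment):

```python
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

# Reads MOMENTO_AUTH_TOKEN from the environment; creates the
# cache if it does not already exist.
history = MomentoChatMessageHistory.from_client_params(
    "my-session-id",    # key under which this conversation is stored
    "langchain",        # Momento cache name
    timedelta(days=1),  # TTL for the stored messages
)

history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help you?")

print(history.messages)  # full message history for the session
```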