This template demonstrates building a RAG conversation app using Zep.
Included in this template:
- Populating a [Zep Document Collection](https://docs.getzep.com/sdk/documents/) with a set of documents (a Collection is analogous to an index in other vector databases); see the first sketch after this list.
- Using Zep's [integrated embedding](https://docs.getzep.com/deployment/embeddings/) functionality to embed the documents as vectors.
- Configuring a LangChain [ZepVectorStore Retriever](https://docs.getzep.com/sdk/documents/) to retrieve documents using Zep's built-in, hardware-accelerated [Maximal Marginal Relevance](https://docs.getzep.com/sdk/search_query/) (MMR) re-ranking.
- Prompts, a simple chat history data structure, and other components required to build a RAG conversation app.
- The RAG conversation chain (see the second sketch below).
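
For orientation, here is a condensed sketch of the ingestion and retrieval setup. It assumes a Zep server reachable at `ZEP_API_URL` and LangChain's Zep integration; the collection name and source URL below are placeholders, not values from this template:

```python
import os

from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.zep import CollectionConfig, ZepVectorStore

# Placeholder connection details; point these at your Zep deployment.
ZEP_API_URL = os.environ.get("ZEP_API_URL", "http://localhost:8000")
ZEP_API_KEY = os.environ.get("ZEP_API_KEY")
ZEP_COLLECTION_NAME = "demo_collection"  # hypothetical collection name

# With is_auto_embedded=True, Zep embeds documents server-side,
# so no LangChain Embeddings object is needed.
config = CollectionConfig(
    name=ZEP_COLLECTION_NAME,
    description="Demo collection",
    metadata={},
    embedding_dimensions=1536,
    is_auto_embedded=True,
)

# Load and chunk a source document (URL is a placeholder).
docs = WebBaseLoader("https://example.com/source.html").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
splits = splitter.split_documents(docs)

# Populate the collection; embedding=None defers embedding to Zep.
vectorstore = ZepVectorStore.from_documents(
    documents=splits,
    embedding=None,
    collection_name=ZEP_COLLECTION_NAME,
    config=config,
    api_url=ZEP_API_URL,
    api_key=ZEP_API_KEY,
)

# Retrieve with Zep's MMR re-ranking.
retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs={"k": 5})
```

Because the collection is created with `is_auto_embedded=True`, the chunks are embedded by the Zep server on ingestion rather than in your application code.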
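And a minimal LCEL sketch of the conversation chain's retrieval-and-answer step, reusing `retriever` from above. The actual chain in this template also condenses chat history into a standalone question before retrieval, and `ChatOpenAI` is just one possible model choice:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

# A deliberately simple prompt; the template's prompts are richer.
template = """Answer the question based only on the following context:
{context}

Question: {question}"""
prompt = ChatPromptTemplate.from_template(template)

# Retrieved documents fill {context}; the raw input fills {question}.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

print(chain.invoke("What does the source document say about Zep?"))
```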
## About [Zep - Fast, scalable building blocks for LLM Apps](https://www.getzep.com/)
Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
Key Features:
- Fast! Zep’s async extractors operate independently of your chat loop, ensuring a snappy user experience.
- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window. A series of summaries is stored, providing flexibility for future summarization strategies.
- Hybrid search over memories and metadata, with messages automatically embedded on creation; a short memory sketch follows this list.
- An Entity Extractor that automatically identifies named entities in messages and stores them in the message metadata.
- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
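
As a quick illustration of the memory features above, the sketch below uses LangChain's `ZepChatMessageHistory`. The session id and server URL are placeholders, and the exact shape of the search results may vary with your Zep SDK version:

```python
from langchain.memory.chat_message_histories import ZepChatMessageHistory

# Placeholder session id and server URL.
history = ZepChatMessageHistory(
    session_id="demo-session",
    url="http://localhost:8000",
)

# Messages are persisted, auto-embedded, summarized, and entity-enriched
# asynchronously by the Zep server, outside the chat loop.
history.add_user_message("Who wrote Parable of the Sower?")
history.add_ai_message("Octavia Butler wrote Parable of the Sower.")

# Hybrid search over the session's memories and metadata.
for result in history.search("science fiction authors"):
    print(result.dist, result.message)
```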