rag-chroma-multi-modal

Multi-modal LLMs enable visual assistants that can perform question-answering about images.

This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.

It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.

Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.

[Diagram: workflow of the multi-modal visual assistant, with slide images embedded via OpenCLIP and retrieved slides passed to GPT-4V for question-answering.]

Input

Supply a slide deck as a PDF in the /docs directory.

By default, this template has a slide deck about Q3 earnings from Datadog, a public technology company.

Example questions to ask:

How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?

To create an index of the slide deck, run:

poetry install
python ingest.py
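
For reference, ingestion boils down to rendering each slide of the PDF as an image and adding the images to the Chroma vector store. A minimal sketch of that pattern, assuming pdf2image for page rendering (the file name below is illustrative, and the template's own ingest.py may differ in the details):

import os

from pdf2image import convert_from_path
from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings

# Render each PDF page to a JPEG (pdf2image requires poppler to be installed).
images = convert_from_path("docs/earnings_deck.pdf")  # illustrative file name
os.makedirs("docs/img", exist_ok=True)
paths = []
for i, image in enumerate(images):
    path = f"docs/img/slide-{i}.jpg"
    image.save(path)
    paths.append(path)

# Embed the slide images with OpenCLIP and persist them in Chroma.
vectorstore = Chroma(
    collection_name="multi-modal-rag",
    persist_directory="chroma_db",
    embedding_function=OpenCLIPEmbeddings(),
)
vectorstore.add_images(uris=paths)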

Storage

This template will use OpenCLIP multi-modal embeddings to embed the images.

You can select from different embedding model options; benchmark results for the available models are published in the OpenCLIP repository.

The first time you run the app, it will automatically download the multimodal embedding model.

By default, LangChain will use an embedding model with moderate performance but lower memory requirements, ViT-H-14.

You can choose alternative OpenCLIPEmbeddings models in rag_chroma_multi_modal/ingest.py:

from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings

# re_vectorstore_path (the on-disk location of the Chroma index) is defined
# earlier in ingest.py.
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
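
For example, to trade some retrieval quality for a smaller download and lower memory use, you could swap in a lighter model (ViT-B-32 with the laion2b_s34b_b79k checkpoint is one published combination; consult the OpenCLIP results before settling on one):

embedding_function=OpenCLIPEmbeddings(
    model_name="ViT-B-32", checkpoint="laion2b_s34b_b79k"
)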

LLM

The app retrieves images based on the similarity between the text input and the images, both of which are mapped into the same multi-modal embedding space. It then passes the retrieved images to GPT-4V for answer synthesis.
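
The GPT-4V call itself follows the standard multi-modal chat message format: the question goes in as text, and each retrieved slide goes in as a base64-encoded image. A simplified sketch of that step (the model name, prompt, and file path are illustrative; the template's actual chain wires this together with the retriever):

import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=1024)

# Base64-encode a retrieved slide image (illustrative path).
with open("docs/img/slide-3.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# A single message carrying both the question and the image.
message = HumanMessage(
    content=[
        {"type": "text", "text": "How many customers does Datadog have?"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
        },
    ]
)
answer = llm.invoke([message])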

Environment Setup

Set the OPENAI_API_KEY environment variable to access OpenAI GPT-4V.
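
For example (replace the placeholder with your own key):

export OPENAI_API_KEY=<your-openai-api-key>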

Usage

To use this package, you should first have the LangChain CLI installed:

pip install -U langchain-cli

To create a new LangChain project and install this as the only package, you can do:

langchain app new my-app --package rag-chroma-multi-modal

If you want to add this to an existing project, you can just run:

langchain app add rag-chroma-multi-modal

And add the following code to your server.py file:

from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain

add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
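
If you are not starting from the CLI-generated scaffold, a minimal server.py serving this chain looks roughly like the following (the FastAPI and uvicorn boilerplate is the standard LangServe pattern rather than anything specific to this template):

from fastapi import FastAPI
from langserve import add_routes

from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain

app = FastAPI()
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)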

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta; you can sign up here. If you don't have access, you can skip this section.

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"

If you are inside this directory, then you can spin up a LangServe instance directly by running:

langchain serve

This will start the FastAPI app with a server running locally at http://localhost:8000

We can see all templates at http://127.0.0.1:8000/docs

We can access the playground at http://127.0.0.1:8000/rag-chroma-multi-modal/playground

We can access the template from code with:

from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
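
You can then invoke it like any other runnable. The exact input schema is easiest to confirm in the playground; a plain question string is assumed here:

runnable.invoke("How many customers does Datadog have?")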