# rag-chroma-multi-modal

Multi-modal LLMs enable visual assistants that can perform question-answering about images.

This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.

It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.

Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.

![mm-mmembd](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200)

## Input

Supply a slide deck as a PDF in the `/docs` directory.

By default, this template contains a slide deck about Q3 earnings from Datadog, a public technology company.

Example questions to ask can be:

```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```

To create an index of the slide deck, run:

```
poetry install
python ingest.py
```

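For orientation, here is a rough sketch of what an ingest step like this does: render each PDF slide to an image, embed the images with OpenCLIP, and index them in Chroma. The `pypdfium2` rendering, file names, and paths below are illustrative assumptions, not the template's exact code.

```python
from pathlib import Path

import pypdfium2 as pdfium
from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings

pdf_path = Path("docs/DDOG_Q3_earnings_deck.pdf")  # hypothetical slide deck path
img_dir = Path("docs/img")
img_dir.mkdir(parents=True, exist_ok=True)

# Render each slide (PDF page) to a JPEG image.
pdf = pdfium.PdfDocument(str(pdf_path))
for i in range(len(pdf)):
    pdf[i].render(scale=1.0).to_pil().save(img_dir / f"slide_{i}.jpg")

# Embed the slide images with OpenCLIP and index them in Chroma.
vectorstore = Chroma(
    collection_name="multi-modal-rag",
    persist_directory="chroma_db",
    embedding_function=OpenCLIPEmbeddings(),
)
vectorstore.add_images(uris=sorted(str(p) for p in img_dir.glob("*.jpg")))
```
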
## Storage

This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.

You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).

The first time you run the app, it will automatically download the multi-modal embedding model.

By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.

You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:

```
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
```

## LLM

The app will retrieve images based on similarity between the text input and the images, both of which are mapped to the multi-modal embedding space. It will then pass the retrieved images to GPT-4V for answer synthesis.

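For illustration, here is a minimal sketch of that final step, passing a retrieved slide image to GPT-4V as a base64 data URL. The file path, model name, and question below are assumptions for the example, not the template's exact code.

```python
import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Assumption: slide_0.jpg is one of the slide images produced by ingest.py.
with open("docs/img/slide_0.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

llm = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=1024)
msg = HumanMessage(
    content=[
        {"type": "text", "text": "How many customers does Datadog have?"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
        },
    ]
)
print(llm.invoke([msg]).content)
```
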
## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access OpenAI GPT-4V.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-chroma-multi-modal
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-chroma-multi-modal
```

And add the following code to your `server.py` file:

```python
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain

add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor, and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal/playground](http://127.0.0.1:8000/rag-chroma-multi-modal/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
```
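
From there, a minimal usage sketch (assuming the chain accepts the question as a plain string, which may differ from the template's actual input schema):

```python
# Assumption: the chain takes the question directly as a string input.
answer = runnable.invoke("How many customers does Datadog have?")
print(answer)
```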