{
"cells": [
{
"cell_type": "markdown",
"id": "9fc3897d-176f-4729-8fd1-cfb4add53abd",
"metadata": {},
"source": [
"## VDMS multi-modal RAG\n",
"\n",
"Many documents contain a mixture of content types, including text and images.\n",
"\n",
"Yet, information captured in images is lost in most RAG applications.\n",
"\n",
"With the emergence of multimodal LLMs, like [GPT-4V](https://openai.com/research/gpt-4v-system-card), it is worth considering how to utilize images in RAG.\n",
"\n",
"This cookbook highlights:\n",
"* Use of [Unstructured](https://unstructured.io/) to parse images, text, and tables from documents (PDFs).\n",
"* Use of multimodal embeddings (such as [CLIP](https://openai.com/research/clip)) to embed images and text.\n",
"* Use of [VDMS](https://github.com/IntelLabs/vdms/blob/master/README.md) as a vector store with support for multi-modal data.\n",
"* Retrieval of both images and text using similarity search.\n",
"* Passing raw images and text chunks to a multimodal LLM for answer synthesis.\n",
"\n",
"\n",
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) installed on your system.\n",
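"\n",
"For example, on Debian/Ubuntu or macOS (Homebrew) the system dependencies can typically be installed with commands like the following (package names may vary by platform; see the linked instructions):\n",
"\n",
"```bash\n",
"# Debian/Ubuntu\n",
"sudo apt-get install -y poppler-utils tesseract-ocr\n",
"\n",
"# macOS (Homebrew)\n",
"brew install poppler tesseract\n",
"```"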
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"# (newest versions required for multi-modal)\n",
"! pip install --quiet -U vdms langchain-experimental\n",
"\n",
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"
]
},
{
"cell_type": "markdown",
"id": "6a6b6e73",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS Docker container on port 55559 instead of the default 55555.\n",
"Take note of the port and hostname, as the vector store uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5f483872",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"docker: Error response from daemon: Conflict. The container name \"/vdms_rag_nb\" is already in use by container \"0c19ed281463ac10d7efe07eb815643e3e534ddf24844357039453ad2b0c27e8\". You have to remove (or rename) that container to be able to reuse that name.\n",
"See 'docker run --help'.\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
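"\n",
"# If the container name is already in use from an earlier run (see the error output above), remove\n",
"# the old container and re-run this cell, e.g.: ! docker rm -f vdms_rag_nb\n",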
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_community.vectorstores.vdms import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78ac6543",
"metadata": {},
"outputs": [],
"source": [
"# from dotenv import load_dotenv, find_dotenv\n",
"# load_dotenv(find_dotenv(), override=True);"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
"metadata": {},
"source": [
"## Data Loading\n",
"\n",
"### Partition PDF text and images\n",
"\n",
"Let's look at an example PDF containing interesting images.\n",
"\n",
"Famous photographs from the Library of Congress:\n",
"\n",
"* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n",
"* We'll use this as an example below.\n",
"\n",
"We can use `partition_pdf` from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) below to extract text and images."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9646b524-71a7-4b2a-bdc8-0b81f77e968f",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"import requests\n",
"\n",
"# Folder with pdf and extracted images\n",
"datapath = Path(\"./multimodal_files\").resolve()\n",
"datapath.mkdir(parents=True, exist_ok=True)\n",
"\n",
"pdf_url = \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\"\n",
"pdf_path = str(datapath / pdf_url.split(\"/\")[-1])\n",
"with open(pdf_path, \"wb\") as f:\n",
"    f.write(requests.get(pdf_url).content)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bc4839c0-8773-4a07-ba59-5364501269b2",
"metadata": {},
"outputs": [],
"source": [
"# Extract images, tables, and chunk text\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
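"# The \"by_title\" chunking strategy below groups elements by section: new_after_n_chars starts a new\n",
"# chunk once ~3800 characters are reached, max_characters caps a chunk at 4000, and\n",
"# combine_text_under_n_chars merges small sections (behavior as documented by unstructured).\n",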
"raw_pdf_elements = partition_pdf(\n",
"    filename=pdf_path,\n",
"    extract_images_in_pdf=True,\n",
"    infer_table_structure=True,\n",
"    chunking_strategy=\"by_title\",\n",
"    max_characters=4000,\n",
"    new_after_n_chars=3800,\n",
"    combine_text_under_n_chars=2000,\n",
"    image_output_dir_path=datapath,\n",
")\n",
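"\n",
"# raw_pdf_elements is a list of unstructured Element objects (e.g., CompositeElement text chunks and\n",
"# Table elements); the next cell sorts them by type.\n",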
"\n",
"datapath = str(datapath)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "969545ad",
"metadata": {},
"outputs": [],
"source": [
"# Categorize text elements by type\n",
"tables = []\n",
"texts = []\n",
"for element in raw_pdf_elements:\n",
"    if \"unstructured.documents.elements.Table\" in str(type(element)):\n",
"        tables.append(str(element))\n",
"    elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):\n",
"        texts.append(str(element))"
]
},
{
"cell_type": "markdown",
"id": "5d8e6349-1547-4cbf-9c6f-491d8610ec10",
"metadata": {},
"source": [
"## Multi-modal embeddings with our document\n",
"\n",
"We will use [OpenCLIP multimodal embeddings](https://python.langchain.com/docs/integrations/text_embedding/open_clip).\n",
"\n",
"We use a larger model for better performance (passed to `OpenCLIPEmbeddings` below):\n",
"\n",
"```\n",
"model_name = \"ViT-g-14\"\n",
"checkpoint = \"laion2b_s34b_b88k\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4bc15842-cb95-4f84-9eb5-656b0282a800",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.vectorstores import VDMS\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"\n",
"# Create VDMS\n",
"vectorstore = VDMS(\n",
"    client=vdms_client,\n",
"    collection_name=\"mm_rag_clip_photos\",\n",
"    embedding_function=OpenCLIPEmbeddings(\n",
"        model_name=\"ViT-g-14\", checkpoint=\"laion2b_s34b_b88k\"\n",
"    ),\n",
")\n",
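"\n",
"# Note: open_clip downloads the ViT-g-14 / laion2b_s34b_b88k weights on first use; the checkpoint is\n",
"# large, so a smaller OpenCLIP model and checkpoint can be substituted here if resources are limited.\n",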
"\n",
"# Get image URIs with .jpg extension only\n",
"image_uris = sorted(\n",
"    [\n",
"        os.path.join(datapath, image_name)\n",
"        for image_name in os.listdir(datapath)\n",
"        if image_name.endswith(\".jpg\")\n",
"    ]\n",
")\n",
"\n",
"# Add images\n",
"if image_uris:\n",
"    vectorstore.add_images(uris=image_uris)\n",
"\n",
"# Add documents\n",
"if texts:\n",
"    vectorstore.add_texts(texts=texts)\n",
"\n",
"# Make retriever\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "02a186d0-27e0-4820-8092-63b5349dd25d",
"metadata": {},
"source": [
"## RAG\n",
"\n",
"`vectorstore.add_images` will store / retrieve images as base64 encoded strings."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "344f56a8-0dc3-433e-851c-3f7600c7a72b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"from io import BytesIO\n",
"\n",
"from PIL import Image\n",
"\n",
"\n",
"def resize_base64_image(base64_string, size=(128, 128)):\n",
"    \"\"\"\n",
"    Resize an image encoded as a Base64 string.\n",
"\n",
"    Args:\n",
"    base64_string (str): Base64 string of the original image.\n",
"    size (tuple): Desired size of the image as (width, height).\n",
"\n",
"    Returns:\n",
"    str: Base64 string of the resized image.\n",
"    \"\"\"\n",
"    # Decode the Base64 string\n",
"    img_data = base64.b64decode(base64_string)\n",
"    img = Image.open(BytesIO(img_data))\n",
"\n",
"    # Resize the image\n",
"    resized_img = img.resize(size, Image.LANCZOS)\n",
"\n",
"    # Save the resized image to a bytes buffer\n",
"    buffered = BytesIO()\n",
"    resized_img.save(buffered, format=img.format)\n",
"\n",
"    # Encode the resized image to Base64\n",
"    return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
"\n",
"\n",
"def is_base64(s):\n",
"    \"\"\"Check if a string is Base64 encoded\"\"\"\n",
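"    # Heuristic: decode and re-encode; if the round trip reproduces the input, treat it as base64.\n",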
" try:\n",
"        return base64.b64encode(base64.b64decode(s)) == s.encode()\n",
"    except Exception:\n",
"        return False\n",
"\n",
"\n",
"def split_image_text_types(docs):\n",
"    \"\"\"Split base64-encoded images and texts\"\"\"\n",
"    images = []\n",
"    text = []\n",
"    for doc in docs:\n",
"        doc = doc.page_content  # Extract Document contents\n",
"        if is_base64(doc):\n",
"            # Resize image to keep the base64 payload small for the multimodal LLM\n",
"            images.append(\n",
"                resize_base64_image(doc, size=(250, 250))\n",
"            )  # base64 encoded str\n",
"        else:\n",
"            text.append(doc)\n",
"    return {\"images\": images, \"texts\": text}"
]
},
{
"cell_type": "markdown",
"id": "23a2c1d8-fea6-4152-b184-3172dd46c735",
"metadata": {},
"source": [
"Currently, we format the inputs using a `RunnableLambda`, while image support is being added to `ChatPromptTemplates`.\n",
"\n",
"Our runnable follows the classic RAG flow:\n",
"\n",
"* We first compute the context (both \"texts\" and \"images\" in this case) and the question (just a `RunnablePassthrough` here).\n",
"* Then we pass this into our prompt template, a custom function that formats the message for the LLaVA model.\n",
"* Finally, we parse the output as a string.\n",
"\n",
"Here we use Ollama to serve the LLaVA model. Please see [Ollama](https://python.langchain.com/docs/integrations/llms/ollama) for setup instructions."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "4c93fab3-74c4-4f1d-958a-0bc4cdd0797e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.ollama import Ollama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
"    # Joining the context texts into a single string\n",
"    formatted_texts = \"\\n\".join(data_dict[\"context\"][\"texts\"])\n",
"    messages = []\n",
"\n",
"    # Adding image(s) to the messages if present\n",
"    if data_dict[\"context\"][\"images\"]:\n",
"        image_message = {\n",
"            \"type\": \"image_url\",\n",
"            \"image_url\": {\n",
"                \"url\": f\"data:image/jpeg;base64,{data_dict['context']['images'][0]}\"\n",
"            },\n",
"        }\n",
"        messages.append(image_message)\n",
"\n",
"    # Adding the text message for analysis\n",
"    text_message = {\n",
"        \"type\": \"text\",\n",
"        \"text\": (\n",
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please convert answers to english and use your extensive knowledge \"\n",
" \"and analytical skills to provide a comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
" \"- Connections between the image and the related text.\\n\\n\"\n",
" f\"User-provided keywords: {data_dict['question']}\\n\\n\"\n",
"            \"Text and / or tables:\\n\"\n",
"            f\"{formatted_texts}\"\n",
"        ),\n",
"    }\n",
"    messages.append(text_message)\n",
"    return [HumanMessage(content=messages)]\n",
"\n",
"\n",
"def multi_modal_rag_chain(retriever):\n",
"    \"\"\"Multi-modal RAG chain\"\"\"\n",
"\n",
"    # Multi-modal LLM\n",
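"    # Assumes a local Ollama server at http://localhost:11434 with the llava model available\n",
"    # (e.g., pulled beforehand with `ollama pull llava`).\n",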
" llm_model = Ollama(\n",
"        verbose=True, temperature=0.5, model=\"llava\", base_url=\"http://localhost:11434\"\n",
"    )\n",
"\n",
"    # RAG pipeline\n",
"    chain = (\n",
"        {\n",
"            \"context\": retriever | RunnableLambda(split_image_text_types),\n",
"            \"question\": RunnablePassthrough(),\n",
"        }\n",
"        | RunnableLambda(prompt_func)\n",
"        | llm_model\n",
"        | StrOutputParser()\n",
"    )\n",
"\n",
"    return chain"
]
},
{
"cell_type": "markdown",
"id": "1566096d-97c2-4ddc-ba4a-6ef88c525e4e",
"metadata": {},
"source": [
"## Test retrieval and run RAG"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "90121e56-674b-473b-871d-6e4753fd0c45",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GREAT PHOTOGRAPHS\n",
"The subject of the photo, Florence Owens Thompson, a Cherokee from Oklahoma, initially regretted that Lange ever made this photograph. “She was a very strong woman. She was a leader,” her daughter Katherine later said. “I think that's one of the reasons she resented the photo — because it didn't show her in that light.”\n",
"\n",
"DOROTHEA LANGE. “DESTITUTE PEA PICKERS IN CALIFORNIA. MOTHER OF SEVEN CHILDREN. AGE THIRTY-TWO. NIPOMO, CALIFORNIA.” MARCH 1936. NITRATE NEGATIVE. FARM SECURITY ADMINISTRATION-OFFICE OF WAR INFORMATION COLLECTION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"—Helena Zinkham\n",
"\n",
"—Helena Zinkham\n",
"\n",
"NOVEMBER/DECEMBER 2020 LOC.GOV/LCM\n"
]
},
{
"data": {
"text/html": [
"<img src=\"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAVhBFADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3wcEDI57UfnXzr8dvEGr6T4xt4bG/nt4WtlYojYBOTzXlg8ZeJsf8hi7/AO+6APt4HHXNGfrXxD/wmHiY/wDMYu/++6D4t8SsCDq93g/7dAH27jHOM0vB7c18QL4r8Rr01i6H/A6lHi7xOBka1d/990AfbXP0or4wsfH/AIrs2Zk1ed84zvO79K6O3+MPiqB0Vb21kTuZIGB/9CoA+rPw/Wl/Cvl8fHLxNC2DHYyj18sj/wBmrRtvjzq4iLXFvZg9gIm5/HdQB9HH34o49a+cl/aD1Veuk27e4JFWYf2hLwuvm6TCFzzhucUAfQnHrS5FeBP+0LtPyaPv/wCB4/pQP2iGyo/sMgdz53T9KAPfcjtRmvBJ/wBogxsPL0dWB/6bD/CrFt+0ValD9p0aRWxxtl/+tQB7nmjNeHn9omy7aPL/AN/P/rUjftE2QUY0iXP/AF0/+tQB7jmjNeFN+0Xb7fl0SUntmXH9Kr/8NFyA86F8v/Xbn+VAHvuaM14Z/wANF2nlr/xJZd/cecP8KqTftGSBj5Whcdt0v/1qAPfs0Zr5/tv2i5zN/pGhgp/sS/8A1qu/8NF2eTjRZT7ecP8A4mgD3PNGa8LP7Rdt20ST/v8AD/Cqcn7Rc3mnytEHl54zLz/KgD6AzRmvBR+0WDH/AMgRt/tLx/KqK/tE6iZmzpMQTsN1AH0RmjNfO/8Aw0VqG7/kDw4/66Ux/wBojVt526Tbhe2WoA+i80bhXzl/w0PrB/5hVt/31TW/aF1ntpVsPxNAH0fuFG4V83/8NDa1/wBAq1/M0p/aG1nbn+yrUH1JNAH0cSD3o49a+cIf2htXLnzdLtnHYISMU4/tC6t5hxpNvt9NxzQB9G8etHHrXzrN+0NqJjHl6TArf7TZpq/tC6qEO7Sbbd6gnFAH0ZkDvRuFfNf/AA0Hr/VdOssfQ/40h/aE8QkcadZf98n/ABoA+lePWlBHrXzKP2gvEuT/AKDYf98H/GhP2gfEwcFrKwK+gQ/40AfTW4UbhXziP2hdZ6HSrUn6mk/4aF1nP/IKtvzNAH0fuFG4V83P+0Jrh+7plqv1yf602P8AaE13DbtOtCe3B/xoA+kiQe9HHrXzX/w0Jr2P+QdafiD/AI1G37QHiI9LCzH/AAE/40AfTO4UZr5pX9oHxFvGbCyxj+6f8ajX9oDxMshY2tltPRSp4/WgD6azRmvmdv2g/EasG+yWRXuAp/xp5/aF8Qb8ixs9pHTB/wAaAPpXNGa+aT+0H4jPSxsx+B/xpjftA+JccWdmP+An/GgD6ZyaTcPUZr5n/wCGgfEhTBsbMn12n/GoP+F9eJA2WtbP2+U/40AfT+T7Uua+XW+PXicn/U2Q/wCAH/GkPx68V7SFWzAH+wf8aAPqPNGa+Vh8dvGB5DWv/fB/xqVfjx4uC4K2pP8A1zP+NAH1JmjNfKjfHHxkW3ebbKPTYf8AGnr8dvGAGd9qQP8Apmf8aAPqjNGa+UJPjf4xecut1CFI+6EOB+tOPxy8YhcCe3/FD/jQB9W5FGRXycfjf4yMqn7Xb4xyBGf8akX43+MSpzNBn12H/GgD6tyDSEj1r5Qb41+My2ftUAA5I8s8/rVSX4t+NpdxGqlRJyNq9PpQB9d/hRj2r5Ft/i142h66j5v/AF0X/wCvT5/i943lIP2wRAf3EOD+tAH1t+lH4mvkuP4veNAQzXKsvvGf8aWT4weM2bKTIAPSM/40AfWn4UZ/Cvks/GDxqy/8fKr/ANsz/jTU+MPjZAQ1+vPTMZ4/WgD62yPWjI9a+SV+M3jRPvX6E+8Z/wAaH+M/jKSIwnUY42JyHVDn6daAPrbPtRivkQ/GLxhwTqIBAxjb1/Wmx/GDxjGGB1IPu6Er0/WgD685o6dTXyCvxc8XqTnUw5914/nTbj4r+Lpwv/EyEZB/5Zqf8aAPsDr3pelfHw+K/i5V2nUufXb/APXqM/FLxeTk6q35f/XoA+xTz1FJxXx0fih4tPXVX/AUn/CzPFp/5i0n5UAfY3FA54Br45HxP8Wqu3+1WPuQc/zqOP4j+K/NZ/7XmLHselAH2SSc4xSnjuK+Of8AhZ3i4nB1ZwR7YqGb4keK5nVm1iVdpB+Q9aAPszjvQPbpXxy3xN8WMR/xNZfypF+Jvi1eP7TYj3H/ANegD7I5pOfWvjn/AIWf4t5/4mR/I/404fE7xZt/5CJz/n3oA+xOe5FHHYivjpviZ4rbrqGPpn/Gox8R/FOc/wBpy/h0oA+ysH1FJ07ivjsfE3xVjH9pN+IP+NQ/8LJ8VF8/2q/HbH/16APsrJ9aOvevjv8A4Wh4rB41EH8D/jT/APha/i9QAuoAD/dP+NAH2DnbR05xXx3L8UvF8p/5CrLjn5RQfip4uYjGpFce3/16APsXnuKTI9a+Oj8U/F5P/IVP5f8A16T/AIWf4t3bhqjfl/8AXoA+xsk9sUZ96+QB8WfF6/8AMUwP93/69M/4Wx4xPP8Aa/6f/XoA+w/wpOa+Px8W/GeP+Quf++f/AK9Mb4q+MpBn+1259BQB9h/Sl+tfG4+KHi8Ek6zJ+VA+J/i5jzrEv5UAfY+ecYo5r46PxR8Ysvl/2xJtNRf8LG8XqNi6xNj2oA+yuKM59fxr41T4jeMEfeus3WfrVl/ih41nUK2rTgD2oA+wMjIB6mgc18bn4heMmcN/a1ySOmBT/wDh
YPjQEn+1bo5oA+xSecCgsDXx5H8QPG6Fiup3ZDDByKIvHHjiJy0epXpJ6hloA+xce9FfID/ELx0XAOoXat2Xb1pP+E68eHJF/esT1+Q8UAfYHWk/GvkFfHXj2P5mv73HqUNOHjv4gEqVvrxsnshoA+vfxNH4V8ljx58RWPFzd8df3RpT8Q/iGhVDdXI5/wCeRoA+sj+VB+Xqa+TT4/8AiP0+03JA5z5ZprePfiJIRm5uRj0jNAH1rn2pM+1fKX/CefEgjAuLj67DSjxp8TGXAuZyPUoc0AfVmM980ua+UD4n+JTcm6uR/wABNKviT4ldric/VTQB9W56D1pM84r5WbXfiTM677ibge9Vk1T4hQSu/nzkvwTk0AfWQORwRxRn0r5LgvviFCXK3dx8/OS1TRa18RI84vZ8Y5BagD6v4HYmgH6ivju78beMLG4MVxqcsT4zhTUD/ELxW4x/bE+3/eoA+zOaQHNfFZ8aeIySTrF0Sf8AbqI+L/En8Os3Q/4HQB9sfjThnHNfEEHiXXW1C
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from IPython.display import HTML, display\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
"    # Create an HTML img tag with the base64 string as the source\n",
"    image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"\n",
"    # Display the image by rendering the HTML\n",
"    display(HTML(image_html))\n",
"\n",
"\n",
"query = \"Woman with children\"\n",
"docs = retriever.get_relevant_documents(query, k=10)\n",
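"\n",
"# Retrieved documents may be base64-encoded images (added via add_images) or plain text chunks,\n",
"# so render images and print text separately.\n",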
"\n",
"for doc in docs:\n",
"    if is_base64(doc.page_content):\n",
"        plt_img_base64(doc.page_content)\n",
"    else:\n",
"        print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "69fb15fd-76fc-49b4-806d-c4db2990027d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. Detailed description of the visual elements in the image: The image features a woman with children, likely a mother and her family, standing together outside. They appear to be poor or struggling financially, as indicated by their attire and surroundings.\n",
"2. Historical and cultural context of the image: The photo was taken in 1936 during the Great Depression, when many families struggled to make ends meet. Dorothea Lange, a renowned American photographer, took this iconic photograph that became an emblem of poverty and hardship experienced by many Americans at that time.\n",
"3. Interpretation of the image's symbolism and meaning: The image conveys a sense of unity and resilience despite adversity. The woman and her children are standing together, displaying their strength as a family unit in the face of economic challenges. The photograph also serves as a reminder of the importance of empathy and support for those who are struggling.\n",
"4. Connections between the image and the related text: The text provided offers additional context about the woman in the photo, her background, and her feelings towards the photograph. It highlights the historical backdrop of the Great Depression and emphasizes the significance of this particular image as a representation of that time period.\n"
]
}
],
"source": [
"chain = multi_modal_rag_chain(retriever)\n",
"response = chain.invoke(query)\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "ec2ea7e6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vdms_rag_nb\n"
]
}
],
"source": [
"! docker kill vdms_rag_nb"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ba652da",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".langchain-venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}