{
"cells": [
{
"attachments": {
"1920fda3-1808-407c-9820-f518c9c6f566.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABbcAAAKfCAYAAABdWfWvAAAMQGlDQ1BJQ0MgUHJvZmlsZQAASImVVwdYU8kWnluSkJAQIICAlNCbIFIDSAmhBZBeBBshCRBKjIGgYkcXFVy7iIANXRVR7IDYETuLYu+LBRVlXSzYlTcpoOu+8r35vrnz33/O/OfMuTP33gGAfpwnkeSimgDkiQukcaGBzNEpqUzSU0AEdEAFVkCLx8+XsGNiIgEsA+3fy7vrAJG3VxzlWv/s/69FSyDM5wOAxECcLsjn50G8HwC8mi+RFgBAlPMWkwskcgwr0JHCACFeIMeZSlwtx+lKvFthkxDHgbgVADUqjyfNBEDjEuSZhfxMqKHRC7GzWCASA0BnQuyXlzdRAHEaxLbQRgKxXJ+V/oNO5t800wc1ebzMQayci6KoBYnyJbm8qf9nOv53ycuVDfiwhpWaJQ2Lk88Z5u1mzsQIOaZC3CNOj4qGWBviDyKBwh5ilJIlC0tU2qNG/HwOzBnQg9hZwAuKgNgI4hBxblSkik/PEIVwIYYrBJ0iKuAmQKwP8QJhfnC8ymaDdGKcyhfakCHlsFX8WZ5U4Vfu674sJ5Gt0n+dJeSq9DGNoqyEZIgpEFsWipKiINaA2Ck/Jz5CZTOyKIsTNWAjlcXJ47eEOE4oDg1U6mOFGdKQOJV9aV7+wHyxDVkibpQK7y3ISghT5gdr5fMU8cO5YJeEYnbigI4wf3TkwFwEwqBg5dyxZ0JxYrxK54OkIDBOORanSHJjVPa4uTA3VM6bQ+yWXxivGosnFcAFqdTHMyQFMQnKOPGibF54jDIefCmIBBwQBJhABms6mAiygai9p7EH3il7QgAPSEEmEAJHFTMwIlnRI4bXeFAE/oRICPIHxwUqeoWgEPJfB1nl1RFkKHoLFSNywBOI80AEyIX3MsUo8aC3JPAYMqJ/eOfByofx5sIq7//3/AD7nWFDJlLFyAY8MukDlsRgYhAxjBhCtMMNcT/cB4+E1wBYXXAW7jUwj+/2hCeEDsJDwjVCJ+HWBFGx9KcoR4FOqB+iykX6j7nAraGmOx6I+0J1qIzr4YbAEXeDfti4P/TsDlmOKm55Vpg/af9tBj88DZUd2ZmMkoeQA8i2P4/UsNdwH1SR5/rH/ChjTR/MN2ew52f/nB+yL4BtxM+W2AJsH3YGO4Gdww5jjYCJHcOasDbsiBwPrq7HitU14C1OEU8O1BH9w9/Ak5VnMt+5zrnb+Yuyr0A4Rf6OBpyJkqlSUWZWAZMNvwhCJlfMdxrGdHF2cQVA/n1Rvr7exCq+G4he23du7h8A+B7r7+8/9J0LPwbAHk+4/Q9+52xZ8NOhDsDZg3yZtFDJ4fILAb4l6HCnGQATYAFs4XxcgAfwAQEgGISDaJAAUsB4GH0WXOdSMBlMB3NACSgDS8EqUAnWg01gG9gJ9oJGcBicAKfBBXAJXAN34OrpAi9AL3gHPiMIQkJoCAMxQEwRK8QBcUFYiB8SjEQicUgKkoZkImJEhkxH5iJlyHKkEtmI1CJ7kIPICeQc0oHcQh4g3chr5BOKoVRUBzVGrdHhKAtloxFoAjoOzUQnoUXoPHQxWoHWoDvQBvQEegG9hnaiL9A+DGDqmB5mhjliLIyDRWOpWAYmxWZipVg5VoPVY83wOV/BOrEe7CNOxBk4E3eEKzgMT8T5+CR8Jr4Ir8S34Q14K34Ff4D34t8INIIRwYHgTeASRhMyCZMJJYRywhbCAcIpuJe6CO+IRKIe0YboCfdiCjGbOI24iLiWuIt4nNhBfETsI5FIBiQHki8pmsQjFZBKSGtIO0jHSJdJXaQPaupqpmouaiFqqWpitWK1crXtakfVLqs9VftM1iRbkb3J0WQBeSp5CXkzuZl8kdxF/kzRothQfCkJlGzKHEoFpZ5yinKX8kZdXd1c3Us9Vl2kPlu9Qn23+ln1B+ofqdpUeyqHOpYqoy6mbqUep96ivqHRaNa0AFoqrYC2mFZLO0m7T/ugwdBw0uBqCDRmaVRpNGhc1nhJJ9Ot6Gz6eHoRvZy+j36R3qNJ1rTW5GjyNGdqVmke1Lyh2afF0BqhFa2Vp7VIa7vWOa1n2iRta+1gbYH2PO1N2ie1HzEwhgWDw+Az5jI2M04xunSIOjY6XJ1snTKdnTrtOr262rpuukm6U3SrdI/oduphetZ6XL1cvSV6e/Wu630aYjyEPUQ4ZOGQ+iGXh7zXH6ofoC/UL9XfpX9N/5MB0yDYIMdgmUGjwT1D3NDeMNZwsuE6w1OGPUN1hvoM5Q8tHbp36G0j1MjeKM5omtEmozajPmMT41BjifEa45PGPSZ6JgEm2SYrTY6adJsyTP1MRaYrTY+ZPmfqMtnMXGYFs5XZa2ZkFmYmM9to1m722dzGPNG82HyX+T0LigXLIsNipUWLRa+lqeUoy+mWdZa3rchWLKssq9VWZ6zeW9tYJ1vPt260fmajb8O1KbKps7lrS7P1t51kW2N71Y5ox7LLsVtrd8ketXe3z7Kvsr/ogDp4OIgc1jp0DCMM8xomHlYz7IYj1ZHtWOhY5/jASc8p0qnYqdHp5XDL4anDlw0/M/ybs7tzrvNm5zsjtEeEjyge0TzitYu9C9+lyuWqK801xHWWa5PrKzcHN6HbOreb7gz3Ue7z3Vvcv3p4ekg96j26PS090zyrPW+wdFgxrEWss14Er0CvWV6HvT56e3gXeO/1/svH0SfHZ7vPs5E2I4UjN4985Gvuy/Pd6Nvpx/RL89vg1+lv5s/zr/F/GGARIAjYEvCUbcfOZu9gvwx0DpQGHgh8z/HmzOAcD8KCQoNKg9qDtYMTgyuD74eYh2SG1IX0hrqHTgs9HkYIiwhbFnaDa8zlc2u5veGe4TPCWyOoEfERlREPI+0jpZHNo9BR4aNWjLobZRUljmqMBtHc6BXR92JsYibFHIolxsbEVsU+iRsRNz3uTDwjfkL89vh3CYEJSxLuJNomyhJbkuhJY5Nqk94nByUvT+4cPXz0jNEXUgxTRClNqaTUpNQtqX1jgsesGtM11n1sydjr42zGTRl3brzh+NzxRybQJ/Am7EsjpCWnbU/7wovm1fD60rnp1em9fA5/Nf+FIECwUtAt9BUuFz7N8M1YnvEs0zdzRWZ3ln9WeVaPiCOqFL3KDsten/0+Jzpna05/bnLurjy1vLS8g2JtcY64daLJxCkTOyQOkhJJ5yTvSasm9UojpFvykfxx+U0FOvBHvk1mK/tF9qDQr7Cq8MPkpMn7pmhNEU9pm2o/deHUp0UhRb9Nw6fxp7VMN5s+Z/qDGewZG2ciM9NntsyymDVvVtfs0Nnb5lDm5Mz5vdi5eHnx27nJc5vnGc+bPe/RL6G/1JVolEhLbsz3mb9+Ab5AtKB9oevCNQu/lQpKz5c5l5WXfVnEX3T+1xG/VvzavzhjcfsSjyXrlhKXipdeX+a/bNtyreVFyx+tGLWiYSVzZenKt6smrDpX7la+fjVltWx1Z0VkRdMayzVL13ypzKq8VhVYtavaqHph9fu1grWX1wWsq19vvL5s/
acNog03N4ZubKixrinfRNxUuOnJ5qTNZ35j/Va7xXBL2ZavW8VbO7fFbWut9ayt3W60fUkdWier694xdselnUE7m+od6zfu0ttVthvslu1+vidtz/W9EXtb9rH21e+32l99gHGgtAFpmNrQ25jV2NmU0tRxMPxgS7NP84FDToe2HjY7XHVE98iSo5Sj8472Hys61ndccrznROaJRy0TWu6cHH3yamtsa/upiFNnT4ecPnmGfebYWd+zh895nzt4nnW+8YLHhYY297YDv7v/fqDdo73houfFpktel5o7RnYcvex/+cSVoCunr3KvXrgWda3jeuL1mzfG3ui8Kbj57FburVe3C29/vjP7LuFu6T3Ne+X3je7X/GH3x65Oj84jD4IetD2Mf3jnEf/Ri8f5j790zXtCe1L+1PRp7TOXZ4e7Q7ovPR/zvOuF5MXnnpI/tf6sfmn7cv9fAX+19Y7u7XolfdX/etEbgzdb37q9bemL6bv/Lu/d5/elHww+bPvI+njmU/Knp58nfyF9qfhq97X5
}
},
"cell_type": "markdown",
"id": "9fc3897d-176f-4729-8fd1-cfb4add53abd",
"metadata": {},
"source": [
"## Chroma multi-modal RAG\n",
"\n",
"Many documents contain a mixture of content types, including text and images. \n",
"\n",
"Yet, information captured in images is lost in most RAG applications.\n",
"\n",
"With the emergence of multimodal LLMs, like [GPT-4V](https://openai.com/research/gpt-4v-system-card), it is worth considering how to utilize images in RAG:\n",
"\n",
"`Option 1:` (Shown) \n",
"\n",
"* Use multimodal embeddings (such as [CLIP](https://openai.com/research/clip)) to embed images and text\n",
"* Retrieve both using similarity search\n",
"* Pass raw images and text chunks to a multimodal LLM for answer synthesis \n",
"\n",
"`Option 2:` \n",
"\n",
"* Use a multimodal LLM (such as [GPT-4V](https://openai.com/research/gpt-4v-system-card), [LLaVA](https://llava.hliu.cc/), or [FUYU-8b](https://www.adept.ai/blog/fuyu-8b)) to produce text summaries from images\n",
"* Embed and retrieve text \n",
"* Pass text chunks to an LLM for answer synthesis \n",
"\n",
"`Option 3` \n",
"\n",
"* Use a multimodal LLM (such as [GPT-4V](https://openai.com/research/gpt-4v-system-card), [LLaVA](https://llava.hliu.cc/), or [FUYU-8b](https://www.adept.ai/blog/fuyu-8b)) to produce text summaries from images\n",
"* Embed and retrieve image summaries with a reference to the raw image \n",
"* Pass raw images and text chunks to a multimodal LLM for answer synthesis \n",
"\n",
"This cookbook highlights `Option 1`: \n",
"\n",
"* We will use [Unstructured](https://unstructured.io/) to parse images, text, and tables from documents (PDFs).\n",
"* We will use Open Clip multi-modal embeddings.\n",
"* We will use [Chroma](https://www.trychroma.com/) with support for multi-modal.\n",
"\n",
"A separate cookbook highlights `Options 2 and 3` [here](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb).\n",
"\n",
"![chroma_multimodal.png](attachment:1920fda3-1808-407c-9820-f518c9c6f566.png)\n",
"\n",
"## Packages\n",
"\n",
"For `unstructured`, you will also need `poppler` ([installation instructions](https://pdf2image.readthedocs.io/en/latest/installation.html)) and `tesseract` ([installation instructions](https://tesseract-ocr.github.io/tessdoc/Installation.html)) in your system."
]
},
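{
"cell_type": "markdown",
"id": "system-deps-note",
"metadata": {},
"source": [
"The exact install steps for `poppler` and `tesseract` depend on your platform; the cell below is only a sketch for macOS with Homebrew (on Debian/Ubuntu the packages are typically `poppler-utils` and `tesseract-ocr`; see the links above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "system-deps-install",
"metadata": {},
"outputs": [],
"source": [
"# System dependencies for unstructured's PDF parsing (macOS / Homebrew example)\n",
"# ! brew install poppler tesseract"
]
},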
{
"cell_type": "code",
"execution_count": null,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain openai chromadb langchain-experimental # (newest versions required for multi-modal)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acbdc603-39e2-4a5f-836c-2bbaecd46b0b",
"metadata": {},
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken open_clip_torch torch"
]
},
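{
"cell_type": "markdown",
"id": "api-key-note",
"metadata": {},
"source": [
"The answer-synthesis step later in this notebook calls GPT-4V via `ChatOpenAI`, so an OpenAI API key is required. Below is a minimal setup sketch that prompts for the key only if it is not already set; the commented-out lines are an optional way to enable LangSmith tracing, which produces traces like the one linked at the end of this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "api-key-setup",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Required for the GPT-4V calls below\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
"    os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API key: \")\n",
"\n",
"# Optional: enable LangSmith tracing\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"LangSmith API key: \")"
]
},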
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
"metadata": {},
"source": [
"## Data Loading\n",
"\n",
"### Partition PDF text and images\n",
" \n",
"Let's look at an example pdfs containing interesting images.\n",
"\n",
"1/ Art from the J Paul Getty museum:\n",
"\n",
" * Here is a [zip file](https://drive.google.com/file/d/18kRKbq2dqAhhJ3DfZRnYcTBEUfYxe1YR/view?usp=sharing) with the PDF and the already extracted images. \n",
"* https://www.getty.edu/publications/resources/virtuallibrary/0892360224.pdf\n",
"\n",
"2/ Famous photographs from library of congress:\n",
"\n",
"* https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\n",
"* We'll use this as an example below\n",
"\n",
"We can use `partition_pdf` below from [Unstructured](https://unstructured-io.github.io/unstructured/introduction.html#key-concepts) to extract text and images.\n",
"\n",
"To supply this to extract the images:\n",
"```\n",
"extract_images_in_pdf=True\n",
"```\n",
"\n",
"\n",
"\n",
"If using this zip file, then you can simply process the text only with:\n",
"```\n",
"extract_images_in_pdf=False\n",
"```"
]
},
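{
"cell_type": "markdown",
"id": "download-example-pdf-note",
"metadata": {},
"source": [
"If you don't already have the example PDF locally, the sketch below downloads the Library of Congress magazine linked above into a local folder (`./photos/` is just a hypothetical location); you can then point `path` in the next cell at that folder, or simply download the file manually."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "download-example-pdf",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import urllib.request\n",
"\n",
"# Hypothetical local folder for the PDF and the images extracted from it\n",
"pdf_dir = \"./photos/\"\n",
"os.makedirs(pdf_dir, exist_ok=True)\n",
"\n",
"# Download the Library of Congress magazine used in this example\n",
"urllib.request.urlretrieve(\n",
"    \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\",\n",
"    os.path.join(pdf_dir, \"photos.pdf\"),\n",
")"
]
},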
{
"cell_type": "code",
"execution_count": 8,
"id": "9646b524-71a7-4b2a-bdc8-0b81f77e968f",
"metadata": {},
"outputs": [],
"source": [
"# Folder with pdf and extracted images\n",
"path = \"/Users/rlm/Desktop/photos/\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bc4839c0-8773-4a07-ba59-5364501269b2",
"metadata": {},
"outputs": [],
"source": [
"# Extract images, tables, and chunk text\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"raw_pdf_elements = partition_pdf(\n",
" filename=path + \"photos.pdf\",\n",
" extract_images_in_pdf=True,\n",
" infer_table_structure=True,\n",
" chunking_strategy=\"by_title\",\n",
" max_characters=4000,\n",
" new_after_n_chars=3800,\n",
" combine_text_under_n_chars=2000,\n",
" image_output_dir_path=path,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "969545ad",
"metadata": {},
"outputs": [],
"source": [
"# Categorize text elements by type\n",
"tables = []\n",
"texts = []\n",
"for element in raw_pdf_elements:\n",
" if \"unstructured.documents.elements.Table\" in str(type(element)):\n",
" tables.append(str(element))\n",
" elif \"unstructured.documents.elements.CompositeElement\" in str(type(element)):\n",
" texts.append(str(element))"
]
},
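{
"cell_type": "markdown",
"id": "element-count-check-note",
"metadata": {},
"source": [
"As a quick sanity check, the sketch below just prints how many text chunks and tables were extracted (the exact counts depend on the document and the chunking settings above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "element-count-check",
"metadata": {},
"outputs": [],
"source": [
"# Quick sanity check on what was extracted\n",
"print(f\"{len(texts)} text chunks and {len(tables)} tables\")"
]
},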
{
"cell_type": "markdown",
"id": "5d8e6349-1547-4cbf-9c6f-491d8610ec10",
"metadata": {},
"source": [
"## Multi-modal embeddings with our document\n",
"\n",
"We will use [OpenClip multimodal embeddings](https://python.langchain.com/docs/integrations/text_embedding/open_clip).\n",
"\n",
"We use a larger model for better performance (set in `langchain_experimental.open_clip.py`).\n",
"\n",
"```\n",
"model_name = \"ViT-g-14\"\n",
"checkpoint = \"laion2b_s34b_b88k\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4bc15842-cb95-4f84-9eb5-656b0282a800",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import uuid\n",
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",
"# Create chroma\n",
"vectorstore = Chroma(\n",
" collection_name=\"mm_rag_clip_photos\", embedding_function=OpenCLIPEmbeddings()\n",
")\n",
"\n",
"# Get image URIs with .jpg extension only\n",
"image_uris = sorted(\n",
" [\n",
" os.path.join(path, image_name)\n",
" for image_name in os.listdir(path)\n",
" if image_name.endswith(\".jpg\")\n",
" ]\n",
")\n",
"\n",
"# Add images\n",
"vectorstore.add_images(uris=image_uris)\n",
"\n",
"# Add documents\n",
"vectorstore.add_texts(texts=texts)\n",
"\n",
"# Make retriever\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "02a186d0-27e0-4820-8092-63b5349dd25d",
"metadata": {},
"source": [
"## RAG\n",
"\n",
"`vectorstore.add_images` will store / retrieve images as base64 encoded strings.\n",
"\n",
"These can be passed to [GPT-4V](https://platform.openai.com/docs/guides/vision)."
]
},
{
"cell_type": "code",
"execution_count": 70,
"id": "344f56a8-0dc3-433e-851c-3f7600c7a72b",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import io\n",
"from io import BytesIO\n",
"\n",
"import numpy as np\n",
"from PIL import Image\n",
"\n",
"\n",
"def resize_base64_image(base64_string, size=(128, 128)):\n",
" \"\"\"\n",
" Resize an image encoded as a Base64 string.\n",
"\n",
" Args:\n",
" base64_string (str): Base64 string of the original image.\n",
" size (tuple): Desired size of the image as (width, height).\n",
"\n",
" Returns:\n",
" str: Base64 string of the resized image.\n",
" \"\"\"\n",
" # Decode the Base64 string\n",
" img_data = base64.b64decode(base64_string)\n",
" img = Image.open(io.BytesIO(img_data))\n",
"\n",
" # Resize the image\n",
" resized_img = img.resize(size, Image.LANCZOS)\n",
"\n",
" # Save the resized image to a bytes buffer\n",
" buffered = io.BytesIO()\n",
" resized_img.save(buffered, format=img.format)\n",
"\n",
" # Encode the resized image to Base64\n",
" return base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
"\n",
"\n",
"def is_base64(s):\n",
" \"\"\"Check if a string is Base64 encoded\"\"\"\n",
" try:\n",
" return base64.b64encode(base64.b64decode(s)) == s.encode()\n",
" except Exception:\n",
" return False\n",
"\n",
"\n",
"def split_image_text_types(docs):\n",
" \"\"\"Split numpy array images and texts\"\"\"\n",
" images = []\n",
" text = []\n",
" for doc in docs:\n",
" doc = doc.page_content # Extract Document contents\n",
" if is_base64(doc):\n",
" # Resize image to avoid OAI server error\n",
" images.append(\n",
" resize_base64_image(doc, size=(250, 250))\n",
" ) # base64 encoded str\n",
" else:\n",
" text.append(doc)\n",
" return {\"images\": images, \"texts\": text}"
]
},
{
"cell_type": "markdown",
"id": "23a2c1d8-fea6-4152-b184-3172dd46c735",
"metadata": {},
"source": [
"Currently, we format the inputs using a `RunnableLambda` while we add image support to `ChatPromptTemplates`.\n",
"\n",
"Our runnable follows the classic RAG flow - \n",
"\n",
"* We first compute the context (both \"texts\" and \"images\" in this case) and the question (just a RunnablePassthrough here) \n",
"* Then we pass this into our prompt template, which is a custom function that formats the message for the gpt-4-vision-preview model. \n",
"* And finally we parse the output as a string."
]
},
{
"cell_type": "code",
"execution_count": 75,
"id": "4c93fab3-74c4-4f1d-958a-0bc4cdd0797e",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
" # Joining the context texts into a single string\n",
" formatted_texts = \"\\n\".join(data_dict[\"context\"][\"texts\"])\n",
" messages = []\n",
"\n",
" # Adding image(s) to the messages if present\n",
" if data_dict[\"context\"][\"images\"]:\n",
" image_message = {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": f\"data:image/jpeg;base64,{data_dict['context']['images'][0]}\"\n",
" },\n",
" }\n",
" messages.append(image_message)\n",
"\n",
" # Adding the text message for analysis\n",
" text_message = {\n",
" \"type\": \"text\",\n",
" \"text\": (\n",
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
" \"- Connections between the image and the related text.\\n\\n\"\n",
" f\"User-provided keywords: {data_dict['question']}\\n\\n\"\n",
" \"Text and / or tables:\\n\"\n",
" f\"{formatted_texts}\"\n",
" ),\n",
" }\n",
" messages.append(text_message)\n",
"\n",
" return [HumanMessage(content=messages)]\n",
"\n",
"\n",
"model = ChatOpenAI(temperature=0, model=\"gpt-4-vision-preview\", max_tokens=1024)\n",
"\n",
"# RAG pipeline\n",
"chain = (\n",
" {\n",
" \"context\": retriever | RunnableLambda(split_image_text_types),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | RunnableLambda(prompt_func)\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1566096d-97c2-4ddc-ba4a-6ef88c525e4e",
"metadata": {},
"source": [
"## Test retrieval and run RAG"
]
},
{
"cell_type": "code",
"execution_count": 76,
"id": "90121e56-674b-473b-871d-6e4753fd0c45",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GREAT PHOTOGRAPHS\n",
"The subject of the photo, Florence Owens Thompson, a Cherokee from Oklahoma, initially regretted that Lange ever made this photograph. “She was a very strong woman. She was a leader,” her daughter Katherine later said. “I think that's one of the reasons she resented the photo — because it didn't show her in that light.”\n",
"\n",
"DOROTHEA LANGE. “DESTITUTE PEA PICKERS IN CALIFORNIA. MOTHER OF SEVEN CHILDREN. AGE THIRTY-TWO. NIPOMO, CALIFORNIA.” MARCH 1936. NITRATE NEGATIVE. FARM SECURITY ADMINISTRATION-OFFICE OF WAR INFORMATION COLLECTION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"—Helena Zinkham\n",
"\n",
"—Helena Zinkham\n",
"\n",
"NOVEMBER/DECEMBER 2020 LOC.GOV/LCM\n"
]
},
{
"data": {
"text/html": [
"<img src=\"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAVhBFADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3wcEDI57UfnXzr8dvEGr6T4xt4bG/nt4WtlYojYBOTzXlg8ZeJsf8hi7/AO+6APt4HHXNGfrXxD/wmHiY/wDMYu/++6D4t8SsCDq93g/7dAH27jHOM0vB7c18QL4r8Rr01i6H/A6lHi7xOBka1d/990AfbXP0or4wsfH/AIrs2Zk1ed84zvO79K6O3+MPiqB0Vb21kTuZIGB/9CoA+rPw/Wl/Cvl8fHLxNC2DHYyj18sj/wBmrRtvjzq4iLXFvZg9gIm5/HdQB9HH34o49a+cl/aD1Veuk27e4JFWYf2hLwuvm6TCFzzhucUAfQnHrS5FeBP+0LtPyaPv/wCB4/pQP2iGyo/sMgdz53T9KAPfcjtRmvBJ/wBogxsPL0dWB/6bD/CrFt+0ValD9p0aRWxxtl/+tQB7nmjNeHn9omy7aPL/AN/P/rUjftE2QUY0iXP/AF0/+tQB7jmjNeFN+0Xb7fl0SUntmXH9Kr/8NFyA86F8v/Xbn+VAHvuaM14Z/wANF2nlr/xJZd/cecP8KqTftGSBj5Whcdt0v/1qAPfs0Zr5/tv2i5zN/pGhgp/sS/8A1qu/8NF2eTjRZT7ecP8A4mgD3PNGa8LP7Rdt20ST/v8AD/Cqcn7Rc3mnytEHl54zLz/KgD6AzRmvBR+0WDH/AMgRt/tLx/KqK/tE6iZmzpMQTsN1AH0RmjNfO/8Aw0VqG7/kDw4/66Ux/wBojVt526Tbhe2WoA+i80bhXzl/w0PrB/5hVt/31TW/aF1ntpVsPxNAH0fuFG4V83/8NDa1/wBAq1/M0p/aG1nbn+yrUH1JNAH0cSD3o49a+cIf2htXLnzdLtnHYISMU4/tC6t5hxpNvt9NxzQB9G8etHHrXzrN+0NqJjHl6TArf7TZpq/tC6qEO7Sbbd6gnFAH0ZkDvRuFfNf/AA0Hr/VdOssfQ/40h/aE8QkcadZf98n/ABoA+lePWlBHrXzKP2gvEuT/AKDYf98H/GhP2gfEwcFrKwK+gQ/40AfTW4UbhXziP2hdZ6HSrUn6mk/4aF1nP/IKtvzNAH0fuFG4V83P+0Jrh+7plqv1yf602P8AaE13DbtOtCe3B/xoA+kiQe9HHrXzX/w0Jr2P+QdafiD/AI1G37QHiI9LCzH/AAE/40AfTO4UZr5pX9oHxFvGbCyxj+6f8ajX9oDxMshY2tltPRSp4/WgD6azRmvmdv2g/EasG+yWRXuAp/xp5/aF8Qb8ixs9pHTB/wAaAPpXNGa+aT+0H4jPSxsx+B/xpjftA+JccWdmP+An/GgD6ZyaTcPUZr5n/wCGgfEhTBsbMn12n/GoP+F9eJA2WtbP2+U/40AfT+T7Uua+XW+PXicn/U2Q/wCAH/GkPx68V7SFWzAH+wf8aAPqPNGa+Vh8dvGB5DWv/fB/xqVfjx4uC4K2pP8A1zP+NAH1JmjNfKjfHHxkW3ebbKPTYf8AGnr8dvGAGd9qQP8Apmf8aAPqjNGa+UJPjf4xecut1CFI+6EOB+tOPxy8YhcCe3/FD/jQB9W5FGRXycfjf4yMqn7Xb4xyBGf8akX43+MSpzNBn12H/GgD6tyDSEj1r5Qb41+My2ftUAA5I8s8/rVSX4t+NpdxGqlRJyNq9PpQB9d/hRj2r5Ft/i142h66j5v/AF0X/wCvT5/i943lIP2wRAf3EOD+tAH1t+lH4mvkuP4veNAQzXKsvvGf8aWT4weM2bKTIAPSM/40AfWn4UZ/Cvks/GDxqy/8fKr/ANsz/jTU+MPjZAQ1+vPTMZ4/WgD62yPWjI9a+SV+M3jRPvX6E+8Z/wAaH+M/jKSIwnUY42JyHVDn6daAPrbPtRivkQ/GLxhwTqIBAxjb1/Wmx/GDxjGGB1IPu6Er0/WgD685o6dTXyCvxc8XqTnUw5914/nTbj4r+Lpwv/EyEZB/5Zqf8aAPsDr3pelfHw+K/i5V2nUufXb/APXqM/FLxeTk6q35f/XoA+xTz1FJxXx0fih4tPXVX/AUn/CzPFp/5i0n5UAfY3FA54Br45HxP8Wqu3+1WPuQc/zqOP4j+K/NZ/7XmLHselAH2SSc4xSnjuK+Of8AhZ3i4nB1ZwR7YqGb4keK5nVm1iVdpB+Q9aAPszjvQPbpXxy3xN8WMR/xNZfypF+Jvi1eP7TYj3H/ANegD7I5pOfWvjn/AIWf4t5/4mR/I/404fE7xZt/5CJz/n3oA+xOe5FHHYivjpviZ4rbrqGPpn/Gox8R/FOc/wBpy/h0oA+ysH1FJ07ivjsfE3xVjH9pN+IP+NQ/8LJ8VF8/2q/HbH/16APsrJ9aOvevjv8A4Wh4rB41EH8D/jT/APha/i9QAuoAD/dP+NAH2DnbR05xXx3L8UvF8p/5CrLjn5RQfip4uYjGpFce3/16APsXnuKTI9a+Oj8U/F5P/IVP5f8A16T/AIWf4t3bhqjfl/8AXoA+xsk9sUZ96+QB8WfF6/8AMUwP93/69M/4Wx4xPP8Aa/6f/XoA+w/wpOa+Px8W/GeP+Quf++f/AK9Mb4q+MpBn+1259BQB9h/Sl+tfG4+KHi8Ek6zJ+VA+J/i5jzrEv5UAfY+ecYo5r46PxR8Ysvl/2xJtNRf8LG8XqNi6xNj2oA+yuKM59fxr41T4jeMEfeus3WfrVl/ih41nUK2rTgD2oA+wMjIB6mgc18bn4heMmcN/a1ySOmBT/wDh
YPjQEn+1bo5oA+xSecCgsDXx5H8QPG6Fiup3ZDDByKIvHHjiJy0epXpJ6hloA+xce9FfID/ELx0XAOoXat2Xb1pP+E68eHJF/esT1+Q8UAfYHWk/GvkFfHXj2P5mv73HqUNOHjv4gEqVvrxsnshoA+vfxNH4V8ljx58RWPFzd8df3RpT8Q/iGhVDdXI5/wCeRoA+sj+VB+Xqa+TT4/8AiP0+03JA5z5ZprePfiJIRm5uRj0jNAH1rn2pM+1fKX/CefEgjAuLj67DSjxp8TGXAuZyPUoc0AfVmM980ua+UD4n+JTcm6uR/wABNKviT4ldric/VTQB9W56D1pM84r5WbXfiTM677ibge9Vk1T4hQSu/nzkvwTk0AfWQORwRxRn0r5LgvviFCXK3dx8/OS1TRa18RI84vZ8Y5BagD6v4HYmgH6ivju78beMLG4MVxqcsT4zhTUD/ELxW4x/bE+3/eoA+zOaQHNfFZ8aeIySTrF0Sf8AbqI+L/En8Os3Q/4HQB9sfjThnHNfEEHiXXW1C
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"THEYRE WILLING TO HAVE MEENTERTAIN THEM DURING THE DAY,BUT AS SOON AS IT STARTSGETTING DARK, THEY ALLGO OFF, AND LEAVE ME!\n"
]
}
],
"source": [
"from IPython.display import HTML, display\n",
"\n",
"\n",
"def plt_img_base64(img_base64):\n",
" # Create an HTML img tag with the base64 string as the source\n",
" image_html = f'<img src=\"data:image/jpeg;base64,{img_base64}\" />'\n",
"\n",
" # Display the image by rendering the HTML\n",
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = retriever.get_relevant_documents(\"Woman with children\", k=10)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",
" else:\n",
" print(doc.page_content)"
]
},
{
"cell_type": "code",
"execution_count": 77,
"id": "69fb15fd-76fc-49b4-806d-c4db2990027d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Visual Elements:\\nThe image is a black and white photograph depicting a woman with two children. The woman is positioned centrally and appears to be in her thirties. She has a look of concern or contemplation on her face, with her hand resting on her chin. Her gaze is directed away from the camera, suggesting introspection or worry. The children are turned away from the camera, with their heads leaning against the woman, seeking comfort or protection. The clothing of the subjects is simple and worn, indicating a lack of wealth. The background is out of focus, drawing attention to the expressions and posture of the subjects.\\n\\nHistorical and Cultural Context:\\nThe photograph was taken by Dorothea Lange in March 1936 and is titled \"Destitute pea pickers in California. Mother of seven children. Age thirty-two. Nipomo, California.\" It was taken during the Great Depression in the United States, a period of severe economic hardship. The woman in the photo, Florence Owens Thompson, was a Cherokee from Oklahoma. The image is part of the Farm Security Administration-Office of War Information Collection, which aimed to document and bring attention to the plight of impoverished farmers and workers during this era.\\n\\nInterpretation and Symbolism:\\nThe photograph, often referred to as \"Migrant Mother,\" has become an iconic symbol of the Great Depression. The woman\\'s expression and posture convey a sense of worry and determination, reflecting the resilience and strength required to endure such difficult times. The children\\'s reliance on their mother for comfort underscores the family\\'s vulnerability and the burdens placed upon the woman. Despite the hardship conveyed, the image also suggests a sense of dignity and maternal protectiveness.\\n\\nThe text provided indicates that Florence Owens Thompson was a strong and leading figure within her community, which contrasts with the vulnerability shown in the photograph. This dichotomy highlights the complexity of Thompson\\'s character and the circumstances of the time, where even the strongest individuals faced moments of hardship that could overshadow their usual demeanor.\\n\\nConnections Between Image and Text:\\nThe text complements the image by providing personal insight into the subject\\'s feelings about the photograph. It reveals that Thompson resented the photo because it did not reflect her strength and leadership qualities. This adds depth to our understanding of the image, as it suggests that the moment captured by Lange is not fully representative of Thompson\\'s character. The photograph, while powerful, is a snapshot that may not encompass the entirety of the subject\\'s identity and life experiences.\\n\\nThe final line of the text, \"They\\'re willing to have me entertain them during the day, but as soon as it starts getting dark, they all go off, and leave me!\" could be interpreted as a metaphor for the transient sympathy of society towards the impoverished during the Great Depression. People may have shown interest or concern during the crisis, but ultimately, those suffering, like Thompson and her family, were left to face their struggles alone when the attention faded. This line underscores the isolation and abandonment felt by many during this period, which is poignantly captured in the photograph\\'s portrayal of the mother and her children.'"
]
},
"execution_count": 77,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"Woman with children\")"
]
},
{
"cell_type": "markdown",
"id": "227f08b8-e732-4089-b65c-6eb6f9e48f15",
"metadata": {},
"source": [
"We can see the images retrieved in the LangSmith trace:\n",
"\n",
"LangSmith [trace](https://smith.langchain.com/public/69c558a5-49dc-4c60-a49b-3adbb70f74c5/r/e872c2c8-528c-468f-aefd-8b5cd730a673)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}