{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "w9w5JBaUL-lO"
},
"source": [
"# Multimodal RAG with CLIP Embeddings and GPT-4 Vision\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3CCjcFSiMbvf"
},
"source": [
"Multimodal RAG integrates additional modalities, such as images, into traditional text-based RAG, giving the LLM extra context that grounds its answers and improves question-answering.\n",
"\n",
"Adopting the approach from the [clothing matchmaker cookbook](https://cookbook.openai.com/examples/how_to_combine_gpt4v_with_rag_outfit_assistant), we embed images directly for similarity search, bypassing the lossy process of text captioning, to boost retrieval accuracy.\n",
"\n",
"CLIP-based embeddings also allow fine-tuning on domain-specific data, and the knowledge base can be updated with previously unseen images.\n",
"\n",
"We showcase the technique by searching an enterprise knowledge base with user-provided images of tech products to deliver pertinent information."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "T-Mpdxit4x49"
},
"source": [
"# Installations"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nbt3evfHUJTZ"
},
"source": [
"First let's install the relevant packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7hgrcVEl0Ma1"
},
"outputs": [],
"source": [
"# installations\n",
"%pip install torch\n",
"%pip install pillow\n",
"%pip install faiss-cpu\n",
"%pip install numpy\n",
"%pip install git+https://github.com/openai/CLIP.git\n",
"%pip install openai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GgrlBLTpT0si"
},
"source": [
"Then let's import all the needed packages.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "pN1cWF-iyLUg"
},
"outputs": [],
"source": [
"# model imports\n",
"import faiss\n",
"import json\n",
"import torch\n",
"from openai import OpenAI\n",
"import torch.nn as nn\n",
"from torch.utils.data import DataLoader\n",
"import clip\n",
"client = OpenAI()\n",
"\n",
"# helper imports\n",
"from tqdm import tqdm\n",
"import os\n",
"import numpy as np\n",
"import pickle\n",
"from typing import List, Union, Tuple\n",
"\n",
"# visualisation imports\n",
"from PIL import Image\n",
"import matplotlib.pyplot as plt\n",
"import base64"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9fONcWxRqll8"
},
"source": [
"Now let's load the CLIP model."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "_Ua9y98NRk70"
},
"outputs": [],
"source": [
"# Load the model onto a device. We run on the CPU here; set device to \"cuda\" if you have a GPU available (and move image tensors to the same device).\n",
"device = \"cpu\"\n",
"model, preprocess = clip.load(\"ViT-B/32\", device=device)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Dev-zjfJ774W"
},
"source": [
"\n",
"We will now:\n",
"1. Create the image embedding database\n",
"2. Set up a query to the vision model\n",
"3. Perform the semantic search\n",
"4. Pass the user query and the retrieved image to the vision model\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5Y1v2jkS42TS"
},
"source": [
"# Create image embedding database"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wVBAMyhesAyi"
},
"source": [
"Next we will create our image embeddings knowledge base from a directory of images. This is the knowledge base of technology that we search through to provide information to the user for the image they upload.\n",
"\n",
"We pass in the directory in which we store our images (as JPEGs) and loop through each to create our embeddings.\n",
"\n",
"We also have a description.json file with an entry for every single image in our knowledge base. Each entry has two keys: 'image_path' and 'description'. It maps each image to a useful description of that image to aid in answering the user question. A sketch of the expected format is shown below."
]
},
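{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here is a minimal sketch of the format `description.json` is assumed to follow: one JSON object per line, each with an `image_path` and a `description`. The file names and descriptions below are hypothetical placeholders, not entries from the real knowledge base."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the assumed description.json format (one JSON object per line).\n",
"# These entries are hypothetical placeholders for illustration only.\n",
"example_entries = [\n",
"    {'image_path': 'image_database/example_product_1.jpeg', 'description': 'A short description of this product.'},\n",
"    {'image_path': 'image_database/example_product_2.jpeg', 'description': 'A short description of another product.'},\n",
"]\n",
"for entry in example_entries:\n",
"    print(json.dumps(entry))"
]
},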
{
"cell_type": "markdown",
"metadata": {
"id": "fDCz76gr8yAu"
},
"source": [
"First let's write a function to get all the image paths in a given directory. We will then get all the JPEGs from a directory called 'image_database'."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "vE9i3zLuRk5c"
},
"outputs": [],
"source": [
"def get_image_paths(directory: str, number: int = None) -> List[str]:\n",
"    # When number is given, return a single-element list containing the JPEG at\n",
"    # that (0-based) position in the directory listing.\n",
"    image_paths = []\n",
"    count = 0\n",
"    for filename in os.listdir(directory):\n",
"        if filename.endswith('.jpeg'):\n",
"            image_paths.append(os.path.join(directory, filename))\n",
"            if number is not None and count == number:\n",
"                return [image_paths[-1]]\n",
"            count += 1\n",
"    return image_paths\n",
"direc = 'image_database/'\n",
"image_paths = get_image_paths(direc)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hMldfjn189vC"
},
"source": [
"Next we will write a function to get the image embeddings from the CLIP model given a series of paths.\n",
"\n",
"We first preprocess each image using the preprocess function we got earlier. This ensures the input to the CLIP model has the right format and dimensionality, handling resizing, normalization, colour-channel adjustment and so on.\n",
"\n",
"We then stack these preprocessed images together so we can pass them into the model in one call rather than in a loop, and finally return the model output, which is an array of embeddings."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "fd3I_fPh8qvi"
},
"outputs": [],
"source": [
"def get_features_from_image_path(image_paths):\n",
"    images = [preprocess(Image.open(image_path).convert(\"RGB\")) for image_path in image_paths]\n",
"    image_input = torch.tensor(np.stack(images))\n",
"    with torch.no_grad():\n",
"        image_features = model.encode_image(image_input).float()\n",
"    return image_features\n",
"image_features = get_features_from_image_path(image_paths)\n"
]
},
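{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small directory, embedding everything in one call is fine. If the knowledge base grows larger, a batched variant keeps memory bounded. The sketch below is optional and reuses the same `model` and `preprocess` objects loaded above; `batch_size` is an arbitrary illustrative value."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: embed images in batches so memory stays bounded on larger\n",
"# directories. Reuses the model/preprocess objects loaded earlier.\n",
"def get_features_batched(image_paths, batch_size=32):\n",
"    features = []\n",
"    for start in tqdm(range(0, len(image_paths), batch_size)):\n",
"        batch_paths = image_paths[start:start + batch_size]\n",
"        batch = torch.stack([preprocess(Image.open(p).convert(\"RGB\")) for p in batch_paths])\n",
"        with torch.no_grad():\n",
"            features.append(model.encode_image(batch).float())\n",
"    return torch.cat(features)"
]
},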
{
"cell_type": "markdown",
"metadata": {
"id": "UH_kyZAE-kHe"
},
"source": [
"We can now create our vector database."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "TIeqpndF8tZk"
},
"outputs": [],
"source": [
"index = faiss.IndexFlatIP(image_features.shape[1])\n",
"index.add(image_features)\n"
]
},
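{
"cell_type": "markdown",
"metadata": {},
"source": [
"`IndexFlatIP` scores by raw inner product. As an optional variant (the rest of this notebook keeps using the index built above), you can L2-normalize the embeddings before adding them, which makes the returned scores cosine similarities."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: build a cosine-similarity index by L2-normalizing first.\n",
"# faiss.normalize_L2 operates in place on a float32 numpy array.\n",
"embeddings = image_features.numpy().astype('float32')\n",
"faiss.normalize_L2(embeddings)\n",
"cosine_index = faiss.IndexFlatIP(embeddings.shape[1])\n",
"cosine_index.add(embeddings)"
]
},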
{
"cell_type": "markdown",
"metadata": {
"id": "swDe1c4v-mbz"
},
"source": [
"We also ingest our JSON for the image-description mapping and create a list of its entries, plus a helper function that searches this list for a given image so we can obtain that image's description."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"id": "tdjlXQqC8uNE"
},
"outputs": [],
"source": [
"data = []\n",
"image_path = 'train1.jpeg'\n",
"with open('description.json', 'r') as file:\n",
"    for line in file:\n",
"        data.append(json.loads(line))\n",
"def find_entry(data, key, value):\n",
"    for entry in data:\n",
"        if entry.get(key) == value:\n",
"            return entry\n",
"    return None"
]
},
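{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, we can look up the description of an image that is already in the knowledge base. This assumes `description.json` stores paths in the same `image_database/...` form that `get_image_paths` returns."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: should print a dict with 'image_path' and 'description'\n",
"# for an image listed in description.json, or None if the path is not found.\n",
"print(find_entry(data, 'image_path', image_paths[0]))"
]
},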
{
"cell_type": "markdown",
"metadata": {
"id": "fJXfCtPD5_63"
},
"source": [
"Let us display an example image; this will be the user-uploaded image. It is a piece of tech that was unveiled at CES 2024: the DELTA Pro Ultra Whole House Battery Generator."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RtkZ7W3g5sED"
},
"outputs": [],
"source": [
"im = Image.open(image_path)\n",
"plt.imshow(im)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5ivECCKSdbBy"
},
"source": [
"![Delta Pro](../images/train1.jpeg)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Sidjylki7Kye"
},
"source": [
"# Querying the vision model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H8O7X6ml7t38"
},
"source": [
"Now let's have a look at what GPT-4 Vision (which wouldn't have seen this technology before) will label it as.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "r4uDjS-gQAqm"
},
"source": [
"First we will need to write a function to encode our image in base64 as this is the format we will pass into the vision model. Then we will create a generic image_query function to allow us to query the LLM with an image input."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"id": "87gf6_xO8Y4i",
"outputId": "99be865f-12e8-4ef0-c2f5-5fd6e5c787f3"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'Autonomous Delivery Robot'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def encode_image(image_path):\n",
"    with open(image_path, 'rb') as image_file:\n",
"        encoded_image = base64.b64encode(image_file.read())\n",
"    return encoded_image.decode('utf-8')\n",
"\n",
"def image_query(query, image_path):\n",
"    response = client.chat.completions.create(\n",
"        model='gpt-4-vision-preview',\n",
"        messages=[\n",
"            {\n",
"                \"role\": \"user\",\n",
"                \"content\": [\n",
"                    {\n",
"                        \"type\": \"text\",\n",
"                        \"text\": query,\n",
"                    },\n",
"                    {\n",
"                        \"type\": \"image_url\",\n",
"                        \"image_url\": {\n",
"                            \"url\": f\"data:image/jpeg;base64,{encode_image(image_path)}\",\n",
"                        },\n",
"                    }\n",
"                ],\n",
"            }\n",
"        ],\n",
"        max_tokens=300,\n",
"    )\n",
"    # Extract the model's text reply from the response\n",
"    return response.choices[0].message.content\n",
"image_query('Write a short label of what is shown in this image?', image_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yfG_7c-jQAqm"
},
"source": [
"As we can see, it tries its best with the information it has been trained on, but it makes a mistake because it has not seen anything similar in its training data: the image is ambiguous, making it difficult to extrapolate and deduce what it shows."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "szWZqTqf7SrA"
},
"source": [
"# Performing semantic search"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eV8LaOncGH3j"
},
"source": [
"Now let's perform a similarity search to find the two most similar images in our knowledge base. We do this by embedding the user-provided image_path and retrieving the indices and scores of the closest images in our database (the code calls these scores `distances`). Because `IndexFlatIP` scores by inner product, a larger score means more similar, so we sort the results in descending order to put the best match first."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "GzNEhKJ04D-F"
},
"outputs": [],
"source": [
"image_search_embedding = get_features_from_image_path([image_path])\n",
"distances, indices = index.search(image_search_embedding.reshape(1, -1), 2) #2 signifies the number of topmost similar images to bring back\n",
"distances = distances[0]\n",
"indices = indices[0]\n",
"indices_distances = list(zip(indices, distances))\n",
"indices_distances.sort(key=lambda x: x[1], reverse=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0O-GYQ-1QAqm"
},
"source": [
"We need the indices because we use them to search through our image directory, selecting the image at each returned index to feed into the vision model for RAG. An equivalent direct lookup is sketched below."
]
},
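{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `image_paths` from the first `get_image_paths` call is in the same order the index was built in, an equivalent (optional) lookup is to map each returned index straight back to that list."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Equivalent lookup sketch: FAISS indices line up with the order of image_paths\n",
"# used when the index was built.\n",
"retrieved_paths = [image_paths[idx] for idx, _ in indices_distances]\n",
"print(retrieved_paths)"
]
},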
{
"cell_type": "markdown",
"metadata": {
"id": "9-6SVzwSJVuT"
},
"source": [
"And let's see what it brought back (we display these in order of similarity):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Lt1ZYuKDFeww"
},
"outputs": [],
"source": [
"# display similar images\n",
"for idx, distance in indices_distances:\n",
"    print(idx)\n",
"    path = get_image_paths(direc, idx)[0]\n",
"    im = Image.open(path)\n",
"    plt.imshow(im)\n",
"    plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GPTvKIUJ2tgz"
},
"source": [
"![Delta Pro2](../images/train2.jpeg)\n",
"\n",
"![Delta Pro3](../images/train17.jpeg)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "x4kF2-MJQAqm"
},
"source": [
"We can see that it brought back two images which contain the DELTA Pro Ultra Whole House Battery Generator. One of the images also has some background clutter that could be distracting, yet the search still finds the right image."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Qc2sOKzY7yv3"
},
"source": [
"# User querying the most similar image"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8Sio6OR4MDjI"
},
"source": [
"Now we want to pass the most similar image, along with its description, to GPT-4 Vision together with a user query so the user can ask about the technology they may have bought. This is where the power of the vision model comes in: you can ask general questions the model hasn't been explicitly trained on, and it responds with high accuracy."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uPzsRk66QAqn"
},
"source": [
"In our example below, we will inquire as to the capacity of the item in question."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 87
},
"id": "-_5W_xwitbr3",
"outputId": "99a40617-0153-492a-d8b0-6782b8421e40"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'The portable home battery DELTA Pro has a base capacity of 3.6kWh. This capacity can be expanded up to 25kWh with additional batteries. The image showcases the DELTA Pro, which has an impressive 3600W power capacity for AC output as well.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"similar_path = get_image_paths(direc, indices_distances[0][0])[0]\n",
"element = find_entry(data, 'image_path', similar_path)\n",
"\n",
"user_query = 'What is the capacity of this item?'\n",
"prompt = f\"\"\"\n",
"Below is a user query, I want you to answer the query using the description and image provided.\n",
"\n",
"user query:\n",
"{user_query}\n",
"\n",
"description:\n",
"{element['description']}\n",
"\"\"\"\n",
"image_query(prompt, similar_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VIInamGaAG9L"
},
"source": [
"And we see it is able to answer the question. This was only possible by matching images directly and from there gathering the relevant description as context."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ljrf0VKR_2q9"
},
"source": [
"# Conclusion"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PexvxTF5_7ay"
},
"source": [
"In this notebook, we have gone through how to use the CLIP model, created an image embedding database with it, performed semantic search, and finally answered a user query using the retrieved image and its description."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gOgRBeh6eMiq"
},
"source": [
"This pattern of usage applies across many different domains and is easy to improve further: for example, you may fine-tune CLIP on your own data, refine the retrieval process just as in text-based RAG, or prompt-engineer GPT-4 Vision.\n"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 0
}