From 98c48f303a924ea1c47f867bf9b59e2123a3ba63 Mon Sep 17 00:00:00 2001 From: Bagatur <22008038+baskaryan@users.noreply.github.com> Date: Mon, 17 Jul 2023 07:53:11 -0700 Subject: [PATCH] fix (#7838) --- .../chains/additional/extraction.ipynb | 1106 ++++++------- .../vectorstores/integrations/qdrant.ipynb | 1459 +++++++++-------- .../vectorstores/integrations/redis.ipynb | 16 +- 3 files changed, 1290 insertions(+), 1291 deletions(-) diff --git a/docs/extras/modules/chains/additional/extraction.ipynb b/docs/extras/modules/chains/additional/extraction.ipynb index 8e0db268e2..a57c12f9c9 100644 --- a/docs/extras/modules/chains/additional/extraction.ipynb +++ b/docs/extras/modules/chains/additional/extraction.ipynb @@ -1,566 +1,566 @@ { - "cells": [ - { - "cell_type": "markdown", - "id": "6605e7f7", - "metadata": {}, - "source": [ - "# Extraction\n", - "\n", - "The extraction chain uses the OpenAI `functions` parameter to specify a schema to extract entities from a document. This helps us make sure that the model outputs exactly the schema of entities and properties that we want, with their appropriate types.\n", - "\n", - "The extraction chain is to be used when we want to extract several entities with their properties from the same passage (i.e. what people were mentioned in this passage?)" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "34f04daf", - "metadata": {}, - "outputs": [ + "cells": [ { - "name": "stderr", - "output_type": "stream", - "text": [ - "/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. 
It's recommended that you update to the latest version using `pip install -U deeplake`.\n", - " warnings.warn(\n" - ] - } - ], - "source": [ - "from langchain.chat_models import ChatOpenAI\n", - "from langchain.chains import create_extraction_chain, create_extraction_chain_pydantic\n", - "from langchain.prompts import ChatPromptTemplate" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "a2648974", - "metadata": {}, - "outputs": [], - "source": [ - "llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")" - ] - }, - { - "cell_type": "markdown", - "id": "5ef034ce", - "metadata": {}, - "source": [ - "## Extracting entities" - ] - }, - { - "cell_type": "markdown", - "id": "78ff9df9", - "metadata": {}, - "source": [ - "To extract entities, we need to create a schema where we specify all the properties we want to find and the type we expect them to have. We can also specify which of these properties are required and which are optional." - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "4ac43eba", - "metadata": {}, - "outputs": [], - "source": [ - "schema = {\n", - " \"properties\": {\n", - " \"name\": {\"type\": \"string\"},\n", - " \"height\": {\"type\": \"integer\"},\n", - " \"hair_color\": {\"type\": \"string\"},\n", - " },\n", - " \"required\": [\"name\", \"height\"],\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "640bd005", - "metadata": {}, - "outputs": [], - "source": [ - "inp = \"\"\"\n", - "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.\n", - " \"\"\"" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "64313214", - "metadata": {}, - "outputs": [], - "source": [ - "chain = create_extraction_chain(schema, llm)" - ] - }, - { - "cell_type": "markdown", - "id": "17c48adb", - "metadata": {}, - "source": [ - "As we can see, we extracted the required entities and their properties in the required format (it even calculated Claudia's height before returning!)" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "cc5436ed", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},\n", - " {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]" + "cell_type": "markdown", + "id": "6605e7f7", + "metadata": {}, + "source": [ + "# Extraction\n", + "\n", + "The extraction chain uses the OpenAI `functions` parameter to specify a schema to extract entities from a document. This helps us make sure that the model outputs exactly the schema of entities and properties that we want, with their appropriate types.\n", + "\n", + "The extraction chain is to be used when we want to extract several entities with their properties from the same passage (i.e. 
what people were mentioned in this passage?)" ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(inp)" - ] - }, - { - "cell_type": "markdown", - "id": "8d51fcdc", - "metadata": {}, - "source": [ - "## Several entity types" - ] - }, - { - "cell_type": "markdown", - "id": "5813affe", - "metadata": {}, - "source": [ - "Notice that we are using OpenAI functions under the hood and thus the model can only call one function per request (with one, unique schema)" - ] - }, - { - "cell_type": "markdown", - "id": "511b9838", - "metadata": {}, - "source": [ - "If we want to extract more than one entity type, we need to introduce a little hack - we will define our properties with an included entity type. \n", - "\n", - "Following we have an example where we also want to extract dog attributes from the passage. Notice the 'person_' and 'dog_' prefixes we use for each property; this tells the model which entity type the property refers to. In this way, the model can return properties from several entity types in one single call." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "cf243a26", - "metadata": {}, - "outputs": [], - "source": [ - "schema = {\n", - " \"properties\": {\n", - " \"person_name\": {\"type\": \"string\"},\n", - " \"person_height\": {\"type\": \"integer\"},\n", - " \"person_hair_color\": {\"type\": \"string\"},\n", - " \"dog_name\": {\"type\": \"string\"},\n", - " \"dog_breed\": {\"type\": \"string\"},\n", - " },\n", - " \"required\": [\"person_name\", \"person_height\"],\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "640bd005", - "metadata": {}, - "outputs": [], - "source": [ - "inp = \"\"\"\n", - "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.\n", - "Alex's dog Frosty is a labrador and likes to play hide and seek.\n", - " \"\"\"" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "64313214", - "metadata": {}, - "outputs": [], - "source": [ - "chain = create_extraction_chain(schema, llm)" - ] - }, - { - "cell_type": "markdown", - "id": "eb074f7b", - "metadata": {}, - "source": [ - "People attributes and dog attributes were correctly extracted from the text in the same call" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "cc5436ed", - "metadata": {}, - "outputs": [ + }, { - "data": { - "text/plain": [ - "[{'person_name': 'Alex',\n", - " 'person_height': 5,\n", - " 'person_hair_color': 'blonde',\n", - " 'dog_name': 'Frosty',\n", - " 'dog_breed': 'labrador'},\n", - " {'person_name': 'Claudia',\n", - " 'person_height': 6,\n", - " 'person_hair_color': 'brunette'}]" + "cell_type": "code", + "execution_count": 2, + "id": "34f04daf", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.4) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n", + " warnings.warn(\n" + ] + } + ], + "source": [ + "from langchain.chat_models import ChatOpenAI\n", + "from langchain.chains import create_extraction_chain, create_extraction_chain_pydantic\n", + "from langchain.prompts import ChatPromptTemplate" ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(inp)" - ] - }, - { - "cell_type": "markdown", - "id": "0273e0e2", - "metadata": {}, - "source": [ - "## Unrelated entities" - ] - }, - { - "cell_type": "markdown", - "id": "c07b3480", - "metadata": {}, - "source": [ - "What if our entities are unrelated? 
In that case, the model will return the unrelated entities in different dictionaries, allowing us to successfully extract several unrelated entity types in the same call." - ] - }, - { - "cell_type": "markdown", - "id": "01d98af0", - "metadata": {}, - "source": [ - "Notice that we use `required: []`: we need to allow the model to return **only** person attributes or **only** dog attributes for a single entity (person or dog)" - ] - }, - { - "cell_type": "code", - "execution_count": 48, - "id": "e584c993", - "metadata": {}, - "outputs": [], - "source": [ - "schema = {\n", - " \"properties\": {\n", - " \"person_name\": {\"type\": \"string\"},\n", - " \"person_height\": {\"type\": \"integer\"},\n", - " \"person_hair_color\": {\"type\": \"string\"},\n", - " \"dog_name\": {\"type\": \"string\"},\n", - " \"dog_breed\": {\"type\": \"string\"},\n", - " },\n", - " \"required\": [],\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": 49, - "id": "ad6b105f", - "metadata": {}, - "outputs": [], - "source": [ - "inp = \"\"\"\n", - "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.\n", - "\n", - "Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by.\n", - "\"\"\"" - ] - }, - { - "cell_type": "code", - "execution_count": 50, - "id": "6bfe5a33", - "metadata": {}, - "outputs": [], - "source": [ - "chain = create_extraction_chain(schema, llm)" - ] - }, - { - "cell_type": "markdown", - "id": "24fe09af", - "metadata": {}, - "source": [ - "We have each entity in its own separate dictionary, with only the appropriate attributes being returned" - ] - }, - { - "cell_type": "code", - "execution_count": 51, - "id": "f6e1fd89", - "metadata": {}, - "outputs": [ + }, { - "data": { - "text/plain": [ - "[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},\n", - " {'person_name': 'Claudia',\n", - " 'person_height': 6,\n", - " 'person_hair_color': 'brunette'},\n", - " {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'},\n", - " {'dog_name': 'Milo', 'dog_breed': 'border collie'}]" + "cell_type": "code", + "execution_count": 3, + "id": "a2648974", + "metadata": {}, + "outputs": [], + "source": [ + "llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")" ] - }, - "execution_count": 51, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(inp)" - ] - }, - { - "cell_type": "markdown", - "id": "0ac466d1", - "metadata": {}, - "source": [ - "## Extra info for an entity" - ] - }, - { - "cell_type": "markdown", - "id": "d240ffc1", - "metadata": {}, - "source": [ - "What if.. _we don't know what we want?_ More specifically, say we know a few properties we want to extract for a given entity but we also want to know if there's any extra information in the passage. Fortunately, we don't need to structure everything - we can have unstructured extraction as well. \n", - "\n", - "We can do this by introducing another hack, namely the *extra_info* attribute - let's see an example." 
- ] - }, - { - "cell_type": "code", - "execution_count": 68, - "id": "f19685f6", - "metadata": {}, - "outputs": [], - "source": [ - "schema = {\n", - " \"properties\": {\n", - " \"person_name\": {\"type\": \"string\"},\n", - " \"person_height\": {\"type\": \"integer\"},\n", - " \"person_hair_color\": {\"type\": \"string\"},\n", - " \"dog_name\": {\"type\": \"string\"},\n", - " \"dog_breed\": {\"type\": \"string\"},\n", - " \"dog_extra_info\": {\"type\": \"string\"},\n", - " },\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": 81, - "id": "200c3477", - "metadata": {}, - "outputs": [], - "source": [ - "inp = \"\"\"\n", - "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n", - "\n", - "Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by.\n", - "\"\"\"" - ] - }, - { - "cell_type": "code", - "execution_count": 82, - "id": "ddad7dc6", - "metadata": {}, - "outputs": [], - "source": [ - "chain = create_extraction_chain(schema, llm)" - ] - }, - { - "cell_type": "markdown", - "id": "e5c0dbbc", - "metadata": {}, - "source": [ - "It is nice to know more about Willow and Milo!" 
- ] - }, - { - "cell_type": "code", - "execution_count": 83, - "id": "c22cfd30", - "metadata": {}, - "outputs": [ + }, { - "data": { - "text/plain": [ - "[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},\n", - " {'person_name': 'Claudia',\n", - " 'person_height': 6,\n", - " 'person_hair_color': 'brunette'},\n", - " {'dog_name': 'Willow',\n", - " 'dog_breed': 'German Shepherd',\n", - " 'dog_extra_information': 'likes to play with other dogs'},\n", - " {'dog_name': 'Milo',\n", - " 'dog_breed': 'border collie',\n", - " 'dog_extra_information': 'lives close by'}]" + "cell_type": "markdown", + "id": "5ef034ce", + "metadata": {}, + "source": [ + "## Extracting entities" ] - }, - "execution_count": 83, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(inp)" - ] - }, - { - "cell_type": "markdown", - "id": "698b4c4d", - "metadata": {}, - "source": [ - "## Pydantic example" - ] - }, - { - "cell_type": "markdown", - "id": "6504a6d9", - "metadata": {}, - "source": [ - "We can also use a Pydantic schema to choose the required properties and types and we will set as 'Optional' those that are not strictly required.\n", - "\n", - "By using the `create_extraction_chain_pydantic` function, we can send a Pydantic schema as input and the output will be an instantiated object that respects our desired schema. \n", - "\n", - "In this way, we can specify our schema in the same manner that we would a new class or function in Python - with purely Pythonic types." 
- ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "6792866b", - "metadata": {}, - "outputs": [], - "source": [ - "from typing import Optional, List\n", - "from pydantic import BaseModel, Field" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "36a63761", - "metadata": {}, - "outputs": [], - "source": [ - "class Properties(BaseModel):\n", - " person_name: str\n", - " person_height: int\n", - " person_hair_color: str\n", - " dog_breed: Optional[str]\n", - " dog_name: Optional[str]" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "8ffd1e57", - "metadata": {}, - "outputs": [], - "source": [ - "chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "24baa954", - "metadata": { - "scrolled": false - }, - "outputs": [], - "source": [ - "inp = \"\"\"\n", - "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n", - "Alex's dog Frosty is a labrador and likes to play hide and seek.\n", - " \"\"\"" - ] - }, - { - "cell_type": "markdown", - "id": "84e0a241", - "metadata": {}, - "source": [ - "As we can see, we extracted the required entities and their properties in the required format:" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "id": "f771df58", - "metadata": {}, - "outputs": [ + }, { - "data": { - "text/plain": [ - "[Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed='labrador', dog_name='Frosty'),\n", - " Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]" + "cell_type": "markdown", + "id": "78ff9df9", + "metadata": {}, + "source": [ + "To extract entities, we need to create a schema where we specify all the properties we want to find and the type we expect them to have. 
We can also specify which of these properties are required and which are optional." ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "4ac43eba", + "metadata": {}, + "outputs": [], + "source": [ + "schema = {\n", + " \"properties\": {\n", + " \"name\": {\"type\": \"string\"},\n", + " \"height\": {\"type\": \"integer\"},\n", + " \"hair_color\": {\"type\": \"string\"},\n", + " },\n", + " \"required\": [\"name\", \"height\"],\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "640bd005", + "metadata": {}, + "outputs": [], + "source": [ + "inp = \"\"\"\n", + "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n", + " \"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "64313214", + "metadata": {}, + "outputs": [], + "source": [ + "chain = create_extraction_chain(schema, llm)" + ] + }, + { + "cell_type": "markdown", + "id": "17c48adb", + "metadata": {}, + "source": [ + "As we can see, we extracted the required entities and their properties in the required format (it even calculated Claudia's height before returning!)" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "cc5436ed", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},\n", + " {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "chain.run(inp)" + ] + }, + { + "cell_type": "markdown", + "id": "8d51fcdc", + "metadata": {}, + "source": [ + "## Several entity types" + ] + }, + { + "cell_type": "markdown", + "id": "5813affe", + "metadata": {}, + "source": [ + "Notice that we are using OpenAI functions under the hood and thus the model can only call one function per request (with one, unique 
schema)"
        ]
      },
      {
        "cell_type": "markdown",
        "id": "511b9838",
        "metadata": {},
        "source": [
            "If we want to extract more than one entity type, we need to introduce a little hack - we will define our properties with an included entity type. \n",
            "\n",
            "Below is an example where we also want to extract dog attributes from the passage. Notice the 'person_' and 'dog_' prefixes we use for each property; this tells the model which entity type the property refers to. In this way, the model can return properties from several entity types in a single call."
        ]
      },
      {
        "cell_type": "code",
        "execution_count": 8,
        "id": "cf243a26",
        "metadata": {},
        "outputs": [],
        "source": [
            "schema = {\n",
            "    \"properties\": {\n",
            "        \"person_name\": {\"type\": \"string\"},\n",
            "        \"person_height\": {\"type\": \"integer\"},\n",
            "        \"person_hair_color\": {\"type\": \"string\"},\n",
            "        \"dog_name\": {\"type\": \"string\"},\n",
            "        \"dog_breed\": {\"type\": \"string\"},\n",
            "    },\n",
            "    \"required\": [\"person_name\", \"person_height\"],\n",
            "}"
        ]
      },
      {
        "cell_type": "code",
        "execution_count": 4,
        "id": "52841fb3",
        "metadata": {},
        "outputs": [],
        "source": [
            "inp = \"\"\"\n",
            "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.\n", + "Alex's dog Frosty is a labrador and likes to play hide and seek.\n", + " \"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "93f904ab", + "metadata": {}, + "outputs": [], + "source": [ + "chain = create_extraction_chain(schema, llm)" + ] + }, + { + "cell_type": "markdown", + "id": "eb074f7b", + "metadata": {}, + "source": [ + "People attributes and dog attributes were correctly extracted from the text in the same call" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "db3e9e17", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'person_name': 'Alex',\n", + " 'person_height': 5,\n", + " 'person_hair_color': 'blonde',\n", + " 'dog_name': 'Frosty',\n", + " 'dog_breed': 'labrador'},\n", + " {'person_name': 'Claudia',\n", + " 'person_height': 6,\n", + " 'person_hair_color': 'brunette'}]" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "chain.run(inp)" + ] + }, + { + "cell_type": "markdown", + "id": "0273e0e2", + "metadata": {}, + "source": [ + "## Unrelated entities" + ] + }, + { + "cell_type": "markdown", + "id": "c07b3480", + "metadata": {}, + "source": [ + "What if our entities are unrelated? In that case, the model will return the unrelated entities in different dictionaries, allowing us to successfully extract several unrelated entity types in the same call." 
+ ] + }, + { + "cell_type": "markdown", + "id": "01d98af0", + "metadata": {}, + "source": [ + "Notice that we use `required: []`: we need to allow the model to return **only** person attributes or **only** dog attributes for a single entity (person or dog)" + ] + }, + { + "cell_type": "code", + "execution_count": 48, + "id": "e584c993", + "metadata": {}, + "outputs": [], + "source": [ + "schema = {\n", + " \"properties\": {\n", + " \"person_name\": {\"type\": \"string\"},\n", + " \"person_height\": {\"type\": \"integer\"},\n", + " \"person_hair_color\": {\"type\": \"string\"},\n", + " \"dog_name\": {\"type\": \"string\"},\n", + " \"dog_breed\": {\"type\": \"string\"},\n", + " },\n", + " \"required\": [],\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "id": "ad6b105f", + "metadata": {}, + "outputs": [], + "source": [ + "inp = \"\"\"\n", + "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n", + "\n", + "Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by.\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "id": "6bfe5a33", + "metadata": {}, + "outputs": [], + "source": [ + "chain = create_extraction_chain(schema, llm)" + ] + }, + { + "cell_type": "markdown", + "id": "24fe09af", + "metadata": {}, + "source": [ + "We have each entity in its own separate dictionary, with only the appropriate attributes being returned" + ] + }, + { + "cell_type": "code", + "execution_count": 51, + "id": "f6e1fd89", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},\n", + " {'person_name': 'Claudia',\n", + " 'person_height': 6,\n", + " 'person_hair_color': 'brunette'},\n", + " {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'},\n", + " {'dog_name': 'Milo', 'dog_breed': 'border 
collie'}]"
                    ]
                },
                "execution_count": 51,
                "metadata": {},
                "output_type": "execute_result"
            }
        ],
        "source": [
            "chain.run(inp)"
        ]
      },
      {
        "cell_type": "markdown",
        "id": "0ac466d1",
        "metadata": {},
        "source": [
            "## Extra info for an entity"
        ]
      },
      {
        "cell_type": "markdown",
        "id": "d240ffc1",
        "metadata": {},
        "source": [
            "What if... _we don't know what we want?_ More specifically, say we know a few properties we want to extract for a given entity but we also want to know if there's any extra information in the passage. Fortunately, we don't need to structure everything - we can have unstructured extraction as well. \n",
            "\n",
            "We can do this by introducing another hack, namely the *extra_info* attribute - let's see an example."
        ]
      },
      {
        "cell_type": "code",
        "execution_count": 68,
        "id": "f19685f6",
        "metadata": {},
        "outputs": [],
        "source": [
            "schema = {\n",
            "    \"properties\": {\n",
            "        \"person_name\": {\"type\": \"string\"},\n",
            "        \"person_height\": {\"type\": \"integer\"},\n",
            "        \"person_hair_color\": {\"type\": \"string\"},\n",
            "        \"dog_name\": {\"type\": \"string\"},\n",
            "        \"dog_breed\": {\"type\": \"string\"},\n",
            "        \"dog_extra_info\": {\"type\": \"string\"},\n",
            "    },\n",
            "}"
        ]
      },
      {
        "cell_type": "code",
        "execution_count": 81,
        "id": "200c3477",
        "metadata": {},
        "outputs": [],
        "source": [
            "inp = \"\"\"\n",
            "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.\n", + "\n", + "Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by.\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 82, + "id": "ddad7dc6", + "metadata": {}, + "outputs": [], + "source": [ + "chain = create_extraction_chain(schema, llm)" + ] + }, + { + "cell_type": "markdown", + "id": "e5c0dbbc", + "metadata": {}, + "source": [ + "It is nice to know more about Willow and Milo!" + ] + }, + { + "cell_type": "code", + "execution_count": 83, + "id": "c22cfd30", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},\n", + " {'person_name': 'Claudia',\n", + " 'person_height': 6,\n", + " 'person_hair_color': 'brunette'},\n", + " {'dog_name': 'Willow',\n", + " 'dog_breed': 'German Shepherd',\n", + " 'dog_extra_information': 'likes to play with other dogs'},\n", + " {'dog_name': 'Milo',\n", + " 'dog_breed': 'border collie',\n", + " 'dog_extra_information': 'lives close by'}]" + ] + }, + "execution_count": 83, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "chain.run(inp)" + ] + }, + { + "cell_type": "markdown", + "id": "698b4c4d", + "metadata": {}, + "source": [ + "## Pydantic example" + ] + }, + { + "cell_type": "markdown", + "id": "6504a6d9", + "metadata": {}, + "source": [ + "We can also use a Pydantic schema to choose the required properties and types and we will set as 'Optional' those that are not strictly required.\n", + "\n", + "By using the `create_extraction_chain_pydantic` function, we can send a Pydantic schema as input and the output will be an instantiated object that respects our desired schema. \n", + "\n", + "In this way, we can specify our schema in the same manner that we would a new class or function in Python - with purely Pythonic types." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "6792866b", + "metadata": {}, + "outputs": [], + "source": [ + "from typing import Optional, List\n", + "from pydantic import BaseModel, Field" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "36a63761", + "metadata": {}, + "outputs": [], + "source": [ + "class Properties(BaseModel):\n", + " person_name: str\n", + " person_height: int\n", + " person_hair_color: str\n", + " dog_breed: Optional[str]\n", + " dog_name: Optional[str]" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "8ffd1e57", + "metadata": {}, + "outputs": [], + "source": [ + "chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "24baa954", + "metadata": { + "scrolled": false + }, + "outputs": [], + "source": [ + "inp = \"\"\"\n", + "Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n", + "Alex's dog Frosty is a labrador and likes to play hide and seek.\n", + " \"\"\"" + ] + }, + { + "cell_type": "markdown", + "id": "84e0a241", + "metadata": {}, + "source": [ + "As we can see, we extracted the required entities and their properties in the required format:" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "f771df58", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed='labrador', dog_name='Frosty'),\n", + " Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "chain.run(inp)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0df61283", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + 
"kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.1" } - ], - "source": [ - "chain.run(inp)" - ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "0df61283", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.1" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb b/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb index e95c33ce8c..ad81582ee9 100644 --- a/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb +++ b/docs/extras/modules/data_connection/vectorstores/integrations/qdrant.ipynb @@ -1,739 +1,742 @@ { - "cells": [ - { - "attachments": {}, - "cell_type": "markdown", - "id": "683953b3", - "metadata": {}, - "source": [ - "# Qdrant\n", - "\n", - ">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. 
This makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n",
    "\n",
    "\n",
    "This notebook shows how to use functionality related to the `Qdrant` vector database. \n",
    "\n",
    "There are various ways to run `Qdrant`, and depending on the one you choose, there will be some subtle differences. The options include:\n",
    "- Local mode, no server required\n",
    "- On-premise server deployment\n",
    "- Qdrant Cloud\n",
    "\n",
    "See the [installation instructions](https://qdrant.tech/documentation/install/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e03e8460-8f32-4d1f-bb93-4f7636a476fa",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "!pip install qdrant-client"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5",
   "metadata": {},
   "source": [
    "We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
- ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "082e7e8b-ac52-430c-98d6-8f0924457642", - "metadata": { - "tags": [] - }, - "outputs": [ + "cells": [ { - "name": "stdout", - "output_type": "stream", - "text": [ - "OpenAI API Key: ········\n" - ] - } - ], - "source": [ - "import os\n", - "import getpass\n", - "\n", - "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "aac9563e", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:22.282884Z", - "start_time": "2023-04-04T10:51:21.408077Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "from langchain.embeddings.openai import OpenAIEmbeddings\n", - "from langchain.text_splitter import CharacterTextSplitter\n", - "from langchain.vectorstores import Qdrant\n", - "from langchain.document_loaders import TextLoader" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "a3c3999a", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:22.520144Z", - "start_time": "2023-04-04T10:51:22.285826Z" - }, - "tags": [] - }, - "outputs": [], - "source": [ - "loader = TextLoader(\"../../../state_of_the_union.txt\")\n", - "documents = loader.load()\n", - "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", - "docs = text_splitter.split_documents(documents)\n", - "\n", - "embeddings = OpenAIEmbeddings()" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "eeead681", - "metadata": {}, - "source": [ - "## Connecting to Qdrant from LangChain\n", - "\n", - "### Local mode\n", - "\n", - "Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. 
The embeddings might be fully kept in memory or persisted on disk.\n",
- "\n",
- "#### In-memory\n",
- "\n",
- "For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "8429667e",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:22.525091Z",
- "start_time": "2023-04-04T10:51:22.522015Z"
- },
- "tags": []
- },
- "outputs": [],
- "source": [
- "qdrant = Qdrant.from_documents(\n",
- " docs,\n",
- " embeddings,\n",
- " location=\":memory:\", # Local mode with in-memory storage only\n",
- " collection_name=\"my_documents\",\n",
- ")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "59f0b954",
- "metadata": {},
- "source": [
- "#### On-disk storage\n",
- "\n",
- "Local mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "24b370e2",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:24.827567Z",
- "start_time": "2023-04-04T10:51:22.529080Z"
- },
- "tags": []
- },
- "outputs": [],
- "source": [
- "qdrant = Qdrant.from_documents(\n",
- " docs,\n",
- " embeddings,\n",
- " path=\"/tmp/local_qdrant\",\n",
- " collection_name=\"my_documents\",\n",
- ")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "749658ce",
- "metadata": {},
- "source": [
- "### On-premise server deployment\n",
- "\n",
- "No matter if you choose to launch Qdrant locally with [a Docker container](https://qdrant.tech/documentation/install/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service."
- ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "91e7f5ce", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.832708Z", - "start_time": "2023-04-04T10:51:24.829905Z" - } - }, - "outputs": [], - "source": [ - "url = \"<---qdrant url here --->\"\n", - "qdrant = Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " url,\n", - " prefer_grpc=True,\n", - " collection_name=\"my_documents\",\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "c9e21ce9", - "metadata": {}, - "source": [ - "### Qdrant Cloud\n", - "\n", - "If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on [Qdrant Cloud](https://cloud.qdrant.io/). There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly." - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "dcf88bdf", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:24.837599Z", - "start_time": "2023-04-04T10:51:24.834690Z" - } - }, - "outputs": [], - "source": [ - "url = \"<---qdrant cloud cluster url here --->\"\n", - "api_key = \"<---api key here--->\"\n", - "qdrant = Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " url,\n", - " prefer_grpc=True,\n", - " api_key=api_key,\n", - " collection_name=\"my_documents\",\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "93540013", - "metadata": {}, - "source": [ - "## Recreating the collection\n", - "\n", - "Both `Qdrant.from_texts` and `Qdrant.from_documents` methods are great to start using Qdrant with Langchain. In the previous versions the collection was recreated every time you called any of them. That behaviour has changed. Currently, the collection is going to be reused if it already exists. 
Setting `force_recreate` to `True` allows you to remove the old collection and start from scratch."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "30a87570",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:24.854117Z",
- "start_time": "2023-04-04T10:51:24.845385Z"
- }
- },
- "outputs": [],
- "source": [
- "url = \"<---qdrant url here --->\"\n",
- "qdrant = Qdrant.from_documents(\n",
- " docs,\n",
- " embeddings,\n",
- " url,\n",
- " prefer_grpc=True,\n",
- " collection_name=\"my_documents\",\n",
- " force_recreate=True,\n",
- ")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "1f9215c8",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T09:27:29.920258Z",
- "start_time": "2023-04-04T09:27:29.913714Z"
- }
- },
- "source": [
- "## Similarity search\n",
- "\n",
- "The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the `embedding_function` and used to find similar documents in the Qdrant collection."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "a8c513ab",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:25.204469Z",
- "start_time": "2023-04-04T10:51:24.855618Z"
- },
- "tags": []
- },
- "outputs": [],
- "source": [
- "query = \"What did the president say about Ketanji Brown Jackson\"\n",
- "found_docs = qdrant.similarity_search(query)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "fc516993",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:25.220984Z",
- "start_time": "2023-04-04T10:51:25.213943Z"
- },
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
\n",
- "\n",
- "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
- "\n",
- "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
- "\n",
- "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n"
- ]
- }
- ],
- "source": [
- "print(found_docs[0].page_content)"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "1bda9bf5",
- "metadata": {},
- "source": [
- "## Similarity search with score\n",
- "\n",
- "Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is. \n",
- "The returned distance score is cosine distance. Therefore, a lower score is better."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "id": "8804a21d",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:25.631585Z",
- "start_time": "2023-04-04T10:51:25.227384Z"
- }
- },
- "outputs": [],
- "source": [
- "query = \"What did the president say about Ketanji Brown Jackson\"\n",
- "found_docs = qdrant.similarity_search_with_score(query)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "id": "756a6887",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:25.642282Z",
- "start_time": "2023-04-04T10:51:25.635947Z"
- }
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
\n", - "\n", - "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", - "\n", - "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", - "\n", - "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n", - "\n", - "Score: 0.8153784913324512\n" - ] - } - ], - "source": [ - "document, score = found_docs[0]\n", - "print(document.page_content)\n", - "print(f\"\\nScore: {score}\")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "525e3582", - "metadata": {}, - "source": [ - "### Metadata filtering\n", - "\n", - "Qdrant has an [extensive filtering system](https://qdrant.tech/documentation/concepts/filtering/) with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the `similarity_search_with_score` and `similarity_search` methods." 
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "1c2c58dc",
- "metadata": {},
- "source": [
- "```python\n",
- "from qdrant_client.http import models as rest\n",
- "\n",
- "query = \"What did the president say about Ketanji Brown Jackson\"\n",
- "found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\n",
- "```"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "c58c30bf",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:39:53.032744Z",
- "start_time": "2023-04-04T10:39:53.028673Z"
- }
- },
- "source": [
- "## Maximum marginal relevance search (MMR)\n",
- "\n",
- "If you'd like to look up some similar documents, but you'd also like to receive diverse results, MMR is a method you should consider. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "id": "76810fb6",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:26.010947Z",
- "start_time": "2023-04-04T10:51:25.647687Z"
- }
- },
- "outputs": [],
- "source": [
- "query = \"What did the president say about Ketanji Brown Jackson\"\n",
- "found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "id": "80c6db11",
- "metadata": {
- "ExecuteTime": {
- "end_time": "2023-04-04T10:51:26.016979Z",
- "start_time": "2023-04-04T10:51:26.013329Z"
- }
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. 
\n", - "\n", - "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", - "\n", - "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", - "\n", - "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. \n", - "\n", - "2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n", - "\n", - "I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n", - "\n", - "They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n", - "\n", - "Officer Mora was 27 years old. \n", - "\n", - "Officer Rivera was 22. \n", - "\n", - "Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n", - "\n", - "I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n", - "\n", - "I’ve worked on these issues a long time. \n", - "\n", - "I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. 
\n", - "\n" - ] - } - ], - "source": [ - "for i, doc in enumerate(found_docs):\n", - " print(f\"{i + 1}.\", doc.page_content, \"\\n\")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "691a82d6", - "metadata": {}, - "source": [ - "## Qdrant as a Retriever\n", - "\n", - "Qdrant, as all the other vector stores, is a LangChain Retriever, by using cosine similarity. " - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "id": "9427195f", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.031451Z", - "start_time": "2023-04-04T10:51:26.018763Z" - } - }, - "outputs": [ - { - "data": { - "text/plain": [ - "VectorStoreRetriever(vectorstore=, search_type='similarity', search_kwargs={})" + "attachments": {}, + "cell_type": "markdown", + "id": "683953b3", + "metadata": {}, + "source": [ + "# Qdrant\n", + "\n", + ">[Qdrant](https://qdrant.tech/documentation/) (read: quadrant ) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support. It makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n", + "\n", + "\n", + "This notebook shows how to use functionality related to the `Qdrant` vector database. \n", + "\n", + "There are various modes of how to run `Qdrant`, and depending on the chosen one, there will be some subtle differences. The options include:\n", + "- Local mode, no server required\n", + "- On-premise server deployment\n", + "- Qdrant Cloud\n", + "\n", + "See the [installation instructions](https://qdrant.tech/documentation/install/)." 
] - }, - "execution_count": 15, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "retriever = qdrant.as_retriever()\n", - "retriever" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "0c851b4f", - "metadata": {}, - "source": [ - "It might be also specified to use MMR as a search strategy, instead of similarity." - ] - }, - { - "cell_type": "code", - "execution_count": 16, - "id": "64348f1b", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.043909Z", - "start_time": "2023-04-04T10:51:26.034284Z" - } - }, - "outputs": [ + }, { - "data": { - "text/plain": [ - "VectorStoreRetriever(vectorstore=, search_type='mmr', search_kwargs={})" + "cell_type": "code", + "execution_count": null, + "id": "e03e8460-8f32-4d1f-bb93-4f7636a476fa", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "!pip install qdrant-client" ] - }, - "execution_count": 16, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "retriever = qdrant.as_retriever(search_type=\"mmr\")\n", - "retriever" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "id": "f3c70c31", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T10:51:26.495652Z", - "start_time": "2023-04-04T10:51:26.046407Z" - } - }, - "outputs": [ + }, { - "data": { - "text/plain": [ - "Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. 
\\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})" + "attachments": {}, + "cell_type": "markdown", + "id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5", + "metadata": {}, + "source": [ + "We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key." ] - }, - "execution_count": 17, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "retriever.get_relevant_documents(query)[0]" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "0358ecde", - "metadata": {}, - "source": [ - "## Customizing Qdrant\n", - "\n", - "There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map Qdrant point into the Langchain `Document`.\n", - "\n", - "### Named vectors\n", - "\n", - "Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) by named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. However, if you work with a collection created externally or want to have the named vector used, you can configure it by providing its name.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "outputs": [], - "source": [ - "Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " location=\":memory:\",\n", - " collection_name=\"my_documents_2\",\n", - " vector_name=\"custom_vector\",\n", - ")" - ], - "metadata": { - "collapsed": false - } - }, - { - "cell_type": "markdown", - "source": [ - "As a Langchain user, you won't see any difference whether you use named vectors or not. Qdrant integration will handle the conversion under the hood." 
- ], - "metadata": { - "collapsed": false - } - }, - { - "cell_type": "markdown", - "source": [ - "### Metadata\n", - "\n", - "Qdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.\n", - "\n", - "By default, your document is going to be stored in the following payload structure:\n", - "\n", - "```json\n", - "{\n", - " \"page_content\": \"Lorem ipsum dolor sit amet\",\n", - " \"metadata\": {\n", - " \"foo\": \"bar\"\n", - " }\n", - "}\n", - "```\n", - "\n", - "You can, however, decide to use different keys for the page content and metadata. That's useful if you already have a collection that you'd like to reuse." - ], - "metadata": { - "collapsed": false - } - }, - { - "cell_type": "code", - "execution_count": 19, - "id": "e4d6baf9", - "metadata": { - "ExecuteTime": { - "end_time": "2023-04-04T11:08:31.739141Z", - "start_time": "2023-04-04T11:08:30.229748Z" - } - }, - "outputs": [ + }, { - "data": { - "text/plain": [ - "" + "cell_type": "code", + "execution_count": 2, + "id": "082e7e8b-ac52-430c-98d6-8f0924457642", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key: \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n" + ] + } + ], + "source": [ + "import os\n", + "import getpass\n", + "\n", + "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" ] - }, - "execution_count": 19, - "metadata": {}, - "output_type": "execute_result" + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "aac9563e", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:22.282884Z", + "start_time": "2023-04-04T10:51:21.408077Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "from langchain.embeddings.openai import OpenAIEmbeddings\n", + "from langchain.text_splitter 
import CharacterTextSplitter\n",
+ "from langchain.vectorstores import Qdrant\n",
+ "from langchain.document_loaders import TextLoader"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "a3c3999a",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:22.520144Z",
+ "start_time": "2023-04-04T10:51:22.285826Z"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "loader = TextLoader(\"../../../state_of_the_union.txt\")\n",
+ "documents = loader.load()\n",
+ "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
+ "docs = text_splitter.split_documents(documents)\n",
+ "\n",
+ "embeddings = OpenAIEmbeddings()"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "eeead681",
+ "metadata": {},
+ "source": [
+ "## Connecting to Qdrant from LangChain\n",
+ "\n",
+ "### Local mode\n",
+ "\n",
+ "Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk.\n",
+ "\n",
+ "#### In-memory\n",
+ "\n",
+ "For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook."
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "8429667e", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:22.525091Z", + "start_time": "2023-04-04T10:51:22.522015Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " location=\":memory:\", # Local mode with in-memory storage only\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "59f0b954", + "metadata": {}, + "source": [ + "#### On-disk storage\n", + "\n", + "Local mode, without using the Qdrant server, may also store your vectors on disk so they're persisted between runs." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "24b370e2", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.827567Z", + "start_time": "2023-04-04T10:51:22.529080Z" + }, + "tags": [] + }, + "outputs": [], + "source": [ + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " path=\"/tmp/local_qdrant\",\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "749658ce", + "metadata": {}, + "source": [ + "### On-premise server deployment\n", + "\n", + "No matter if you choose to launch Qdrant locally with [a Docker container](https://qdrant.tech/documentation/install/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "91e7f5ce", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.832708Z", + "start_time": "2023-04-04T10:51:24.829905Z" + } + }, + "outputs": [], + "source": [ + "url = \"<---qdrant url here --->\"\n", + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " url,\n", + " prefer_grpc=True,\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "c9e21ce9", + "metadata": {}, + "source": [ + "### Qdrant Cloud\n", + "\n", + "If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on [Qdrant Cloud](https://cloud.qdrant.io/). There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "dcf88bdf", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:24.837599Z", + "start_time": "2023-04-04T10:51:24.834690Z" + } + }, + "outputs": [], + "source": [ + "url = \"<---qdrant cloud cluster url here --->\"\n", + "api_key = \"<---api key here--->\"\n", + "qdrant = Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " url,\n", + " prefer_grpc=True,\n", + " api_key=api_key,\n", + " collection_name=\"my_documents\",\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "93540013", + "metadata": {}, + "source": [ + "## Recreating the collection\n", + "\n", + "Both `Qdrant.from_texts` and `Qdrant.from_documents` methods are great to start using Qdrant with Langchain. In the previous versions the collection was recreated every time you called any of them. That behaviour has changed. Currently, the collection is going to be reused if it already exists. 
Setting `force_recreate` to `True` allows you to remove the old collection and start from scratch."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "30a87570",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:24.854117Z",
+ "start_time": "2023-04-04T10:51:24.845385Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "url = \"<---qdrant url here --->\"\n",
+ "qdrant = Qdrant.from_documents(\n",
+ " docs,\n",
+ " embeddings,\n",
+ " url,\n",
+ " prefer_grpc=True,\n",
+ " collection_name=\"my_documents\",\n",
+ " force_recreate=True,\n",
+ ")"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "1f9215c8",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T09:27:29.920258Z",
+ "start_time": "2023-04-04T09:27:29.913714Z"
+ }
+ },
+ "source": [
+ "## Similarity search\n",
+ "\n",
+ "The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the `embedding_function` and used to find similar documents in the Qdrant collection."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "a8c513ab",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:25.204469Z",
+ "start_time": "2023-04-04T10:51:24.855618Z"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "query = \"What did the president say about Ketanji Brown Jackson\"\n",
+ "found_docs = qdrant.similarity_search(query)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "fc516993",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:25.220984Z",
+ "start_time": "2023-04-04T10:51:25.213943Z"
+ },
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. 
\n",
+ "\n",
+ "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
+ "\n",
+ "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
+ "\n",
+ "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(found_docs[0].page_content)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "1bda9bf5",
+ "metadata": {},
+ "source": [
+ "## Similarity search with score\n",
+ "\n",
+ "Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is. \n",
+ "The returned distance score is cosine distance. Therefore, a lower score is better."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "8804a21d",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:25.631585Z",
+ "start_time": "2023-04-04T10:51:25.227384Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "query = \"What did the president say about Ketanji Brown Jackson\"\n",
+ "found_docs = qdrant.similarity_search_with_score(query)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "756a6887",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:25.642282Z",
+ "start_time": "2023-04-04T10:51:25.635947Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. 
\n", + "\n", + "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", + "\n", + "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", + "\n", + "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.\n", + "\n", + "Score: 0.8153784913324512\n" + ] + } + ], + "source": [ + "document, score = found_docs[0]\n", + "print(document.page_content)\n", + "print(f\"\\nScore: {score}\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "525e3582", + "metadata": {}, + "source": [ + "### Metadata filtering\n", + "\n", + "Qdrant has an [extensive filtering system](https://qdrant.tech/documentation/concepts/filtering/) with rich type support. It is also possible to use the filters in Langchain, by passing an additional param to both the `similarity_search_with_score` and `similarity_search` methods." 
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "1c2c58dc",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "from qdrant_client.http import models as rest\n",
+ "\n",
+ "query = \"What did the president say about Ketanji Brown Jackson\"\n",
+ "found_docs = qdrant.similarity_search_with_score(query, filter=rest.Filter(...))\n",
+ "```"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "c58c30bf",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:39:53.032744Z",
+ "start_time": "2023-04-04T10:39:53.028673Z"
+ }
+ },
+ "source": [
+ "## Maximum marginal relevance search (MMR)\n",
+ "\n",
+ "If you'd like to look up some similar documents, but you'd also like to receive diverse results, MMR is a method you should consider. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "76810fb6",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:26.010947Z",
+ "start_time": "2023-04-04T10:51:25.647687Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "query = \"What did the president say about Ketanji Brown Jackson\"\n",
+ "found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "80c6db11",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2023-04-04T10:51:26.016979Z",
+ "start_time": "2023-04-04T10:51:26.013329Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. 
\n", + "\n", + "Tonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", + "\n", + "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", + "\n", + "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence. \n", + "\n", + "2. We can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n", + "\n", + "I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n", + "\n", + "They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n", + "\n", + "Officer Mora was 27 years old. \n", + "\n", + "Officer Rivera was 22. \n", + "\n", + "Both Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n", + "\n", + "I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n", + "\n", + "I\u2019ve worked on these issues a long time. \n", + "\n", + "I know what works: Investing in crime prevention and community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. 
\n", + "\n" + ] + } + ], + "source": [ + "for i, doc in enumerate(found_docs):\n", + " print(f\"{i + 1}.\", doc.page_content, \"\\n\")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "691a82d6", + "metadata": {}, + "source": [ + "## Qdrant as a Retriever\n", + "\n", + "Qdrant, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity. " + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "9427195f", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.031451Z", + "start_time": "2023-04-04T10:51:26.018763Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "VectorStoreRetriever(vectorstore=, search_type='similarity', search_kwargs={})" + ] + }, + "execution_count": 15, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "retriever = qdrant.as_retriever()\n", + "retriever" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "0c851b4f", + "metadata": {}, + "source": [ + "You can also specify MMR as the search strategy, instead of similarity." + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "64348f1b", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.043909Z", + "start_time": "2023-04-04T10:51:26.034284Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "VectorStoreRetriever(vectorstore=, search_type='mmr', search_kwargs={})" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "retriever = qdrant.as_retriever(search_type=\"mmr\")\n", + "retriever" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "f3c70c31", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T10:51:26.495652Z", + "start_time": "2023-04-04T10:51:26.046407Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. 
Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "query = \"What did the president say about Ketanji Brown Jackson\"\n", + "retriever.get_relevant_documents(query)[0]" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "0358ecde", + "metadata": {}, + "source": [ + "## Customizing Qdrant\n", + "\n", + "There are some options to use an existing Qdrant collection within your Langchain application. In such cases you may need to define how to map a Qdrant point into the Langchain `Document`.\n", + "\n", + "### Named vectors\n", + "\n", + "Qdrant supports [multiple vectors per point](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors) via named vectors. Langchain requires just a single embedding per document and, by default, uses a single vector. 
However, if you work with a collection created externally, or want a named vector to be used, you can configure it by providing its name.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "outputs": [], + "source": [ + "Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " location=\":memory:\",\n", + " collection_name=\"my_documents_2\",\n", + " vector_name=\"custom_vector\",\n", + ")" + ], + "metadata": { + "collapsed": false + }, + "id": "1f11adf8" + }, + { + "cell_type": "markdown", + "source": [ + "As a Langchain user, you won't see any difference whether you use named vectors or not. The Qdrant integration will handle the conversion under the hood." + ], + "metadata": { + "collapsed": false + }, + "id": "b34f5230" + }, + { + "cell_type": "markdown", + "source": [ + "### Metadata\n", + "\n", + "Qdrant stores your vector embeddings along with an optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well.\n", + "\n", + "By default, your document is going to be stored in the following payload structure:\n", + "\n", + "```json\n", + "{\n", + " \"page_content\": \"Lorem ipsum dolor sit amet\",\n", + " \"metadata\": {\n", + " \"foo\": \"bar\"\n", + " }\n", + "}\n", + "```\n", + "\n", + "You can, however, decide to use different keys for the page content and metadata. That's useful if you already have a collection that you'd like to reuse." 
+ ], + "metadata": { + "collapsed": false + }, + "id": "b2350093" + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "e4d6baf9", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-04T11:08:31.739141Z", + "start_time": "2023-04-04T11:08:30.229748Z" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "Qdrant.from_documents(\n", + " docs,\n", + " embeddings,\n", + " location=\":memory:\",\n", + " collection_name=\"my_documents_2\",\n", + " content_payload_key=\"my_page_content_key\",\n", + " metadata_payload_key=\"my_meta\",\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2300e785", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.3" } - ], - "source": [ - "Qdrant.from_documents(\n", - " docs,\n", - " embeddings,\n", - " location=\":memory:\",\n", - " collection_name=\"my_documents_2\",\n", - " content_payload_key=\"my_page_content_key\",\n", - " metadata_payload_key=\"my_meta\",\n", - ")" - ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "2300e785", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.3" - } - }, - "nbformat": 
4, - "nbformat_minor": 5 -} + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/docs/extras/modules/data_connection/vectorstores/integrations/redis.ipynb b/docs/extras/modules/data_connection/vectorstores/integrations/redis.ipynb index 7edfd71cef..83c17f60d5 100644 --- a/docs/extras/modules/data_connection/vectorstores/integrations/redis.ipynb +++ b/docs/extras/modules/data_connection/vectorstores/integrations/redis.ipynb @@ -12,7 +12,7 @@ "\n", "As database either Redis standalone server or Redis Sentinel HA setups are supported for connections with the \"redis_url\"\n", "parameter. More information about the different formats of the redis connection url can be found in the LangChain\n", - "[Redis Readme](../../../../integrations/redis.md) file" + "[Redis Readme](/docs/modules/data_connection/vectorstores/integrations/redis) file" ] }, { @@ -265,6 +265,7 @@ }, { "cell_type": "markdown", + "metadata": {}, "source": [ "### Redis connection Url examples\n", "\n", @@ -275,14 +276,12 @@ "4. `rediss+sentinel://` - Connection to Redis server via Redis Sentinel, booth connections with TLS encryption\n", "\n", "More information about additional connection parameter can be found in the redis-py documentation at https://redis-py.readthedocs.io/en/stable/connections.html" - ], - "metadata": { - "collapsed": false - } + ] }, { "cell_type": "code", "execution_count": null, + "metadata": {}, "outputs": [], "source": [ "# connection to redis standalone at localhost, db 0, no password\n", @@ -304,10 +303,7 @@ "# connection to redis sentinel at localhost and default port, db 0, no password\n", "# but with TLS support for booth Sentinel and Redis server\n", "redis_url=\"rediss+sentinel://localhost\"\n" - ], - "metadata": { - "collapsed": false - } + ] } ], "metadata": { @@ -326,7 +322,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.6" + "version": "3.11.3" } }, "nbformat": 4,