langchain/docs/extras/integrations/retrievers/pinecone_hybrid_search.ipynb

{
"cells": [
{
"cell_type": "markdown",
"id": "ab66dd43",
"metadata": {},
"source": [
"# Pinecone Hybrid Search\n",
"\n",
">[Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.\n",
"\n",
"This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.\n",
"\n",
"The logic of this retriever is taken from [this documentaion](https://docs.pinecone.io/docs/hybrid-search)\n",
"\n",
"To use Pinecone, you must have an API key and an Environment. \n",
"Here are the [installation instructions](https://docs.pinecone.io/docs/quickstart)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab4ab62-9bb2-4ecf-9fbf-1af7f0be558b",
"metadata": {},
"outputs": [],
"source": [
"#!pip install pinecone-client pinecone-text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bf0cf405-451d-4f87-94b1-2b7d65f1e1be",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"os.environ[\"PINECONE_API_KEY\"] = getpass.getpass(\"Pinecone API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 75,
"id": "393ac030",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import PineconeHybridSearchRetriever"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4577fea1-05e7-47a0-8173-56b0ddaa22bf",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"PINECONE_ENVIRONMENT\"] = getpass.getpass(\"Pinecone Environment:\")"
]
},
{
"cell_type": "markdown",
"id": "80e2e8e3-0fb5-4bd9-9196-9eada3439a61",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "314a7ee5-f498-45f6-8fdb-81428730083e",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "markdown",
"id": "aaf80e7f",
"metadata": {},
"source": [
"## Setup Pinecone"
]
},
{
"cell_type": "markdown",
"id": "95d5d7f9",
"metadata": {},
"source": [
"You should only have to do this part once.\n",
"\n",
"Note: it's important to make sure that the \"context\" field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information checkout Pinecone's [docs](https://docs.pinecone.io/docs/manage-indexes#selective-metadata-indexing)."
]
},
{
"cell_type": "code",
"execution_count": 76,
"id": "3b8f7697",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"WhoAmIResponse(username='load', user_label='label', projectname='load-test')"
]
},
"execution_count": 76,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"import pinecone\n",
"\n",
"api_key = os.getenv(\"PINECONE_API_KEY\") or \"PINECONE_API_KEY\"\n",
"# find environment next to your API key in the Pinecone console\n",
"env = os.getenv(\"PINECONE_ENVIRONMENT\") or \"PINECONE_ENVIRONMENT\"\n",
"\n",
"index_name = \"langchain-pinecone-hybrid-search\"\n",
"\n",
"pinecone.init(api_key=api_key, environment=env)\n",
"pinecone.whoami()"
]
},
{
"cell_type": "code",
"execution_count": 77,
"id": "cfa3a8d8",
"metadata": {},
"outputs": [],
"source": [
"# create the index\n",
"pinecone.create_index(\n",
" name=index_name,\n",
" dimension=1536, # dimensionality of dense model\n",
" metric=\"dotproduct\", # sparse values supported only for dotproduct\n",
" pod_type=\"s1\",\n",
" metadata_config={\"indexed\": []}, # see explaination above\n",
")"
]
},
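{
"cell_type": "markdown",
"id": "wait-for-index-ready",
"metadata": {},
"source": [
"Index creation can take a minute or two. A minimal sketch for waiting until the index is ready (assuming the `status` fields returned by `pinecone.describe_index`, as used in the Pinecone quickstart):\n",
"\n",
"```python\n",
"import time\n",
"\n",
"# poll until the new index reports ready\n",
"while not pinecone.describe_index(index_name).status[\"ready\"]:\n",
"    time.sleep(1)\n",
"```"
]
},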
{
"cell_type": "markdown",
"id": "e01549af",
"metadata": {},
"source": [
"Now that its created, we can use it"
]
},
{
"cell_type": "code",
"execution_count": 78,
"id": "bcb3c8c2",
"metadata": {},
"outputs": [],
"source": [
"index = pinecone.Index(index_name)"
]
},
{
"cell_type": "markdown",
"id": "dbc025d6",
"metadata": {},
"source": [
"## Get embeddings and sparse encoders\n",
"\n",
"Embeddings are used for the dense vectors, tokenizer is used for the sparse vector"
]
},
{
"cell_type": "code",
"execution_count": 79,
"id": "2f63c911",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "markdown",
"id": "96bf8879",
"metadata": {},
"source": [
"To encode the text to sparse values you can either choose SPLADE or BM25. For out of domain tasks we recommend using BM25.\n",
"\n",
"For more information about the sparse encoders you can checkout pinecone-text library [docs](https://pinecone-io.github.io/pinecone-text/pinecone_text.html)."
]
},
{
"cell_type": "code",
"execution_count": 80,
"id": "c3f030e5",
"metadata": {},
"outputs": [],
"source": [
"from pinecone_text.sparse import BM25Encoder\n",
"\n",
"# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE\n",
"\n",
"# use default tf-idf values\n",
"bm25_encoder = BM25Encoder().default()"
]
},
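{
"cell_type": "markdown",
"id": "splade-encoder-sketch",
"metadata": {},
"source": [
"If you prefer SPLADE, a minimal sketch (assuming the `SpladeEncoder` class from `pinecone-text`; it downloads a model on first use and may require extra dependencies such as `torch`):\n",
"\n",
"```python\n",
"from pinecone_text.sparse import SpladeEncoder\n",
"\n",
"# SPLADE produces learned sparse vectors instead of BM25 term weights\n",
"splade_encoder = SpladeEncoder()\n",
"```\n",
"\n",
"Either encoder can be passed as the `sparse_encoder` when constructing the retriever below."
]
},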
{
"cell_type": "markdown",
"id": "23601ddb",
"metadata": {},
"source": [
"The above code is using default tfids values. It's highly recommended to fit the tf-idf values to your own corpus. You can do it as follow:\n",
"\n",
"```python\n",
"corpus = [\"foo\", \"bar\", \"world\", \"hello\"]\n",
"\n",
"# fit tf-idf values on your corpus\n",
"bm25_encoder.fit(corpus)\n",
"\n",
"# store the values to a json file\n",
"bm25_encoder.dump(\"bm25_values.json\")\n",
"\n",
"# load to your BM25Encoder object\n",
"bm25_encoder = BM25Encoder().load(\"bm25_values.json\")\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "5462801e",
"metadata": {},
"source": [
"## Load Retriever\n",
"\n",
"We can now construct the retriever!"
]
},
{
"cell_type": "code",
"execution_count": 81,
"id": "ac77d835",
"metadata": {},
"outputs": [],
"source": [
"retriever = PineconeHybridSearchRetriever(\n",
" embeddings=embeddings, sparse_encoder=bm25_encoder, index=index\n",
")"
]
},
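{
"cell_type": "markdown",
"id": "retriever-params-sketch",
"metadata": {},
"source": [
"The retriever also exposes `top_k` (how many documents to return) and `alpha` (the dense/sparse weighting described in the hybrid search docs, where `alpha=1` is pure dense and `alpha=0` is pure sparse). A sketch, assuming those field names:\n",
"\n",
"```python\n",
"retriever = PineconeHybridSearchRetriever(\n",
"    embeddings=embeddings,\n",
"    sparse_encoder=bm25_encoder,\n",
"    index=index,\n",
"    top_k=2,  # number of documents to return\n",
"    alpha=0.5,  # 0 = pure sparse (BM25), 1 = pure dense (embeddings)\n",
")\n",
"```"
]
},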
{
"cell_type": "markdown",
"id": "1c518c42",
"metadata": {},
"source": [
"## Add texts (if necessary)\n",
"\n",
"We can optionally add texts to the retriever (if they aren't already in there)"
]
},
{
"cell_type": "code",
"execution_count": 82,
"id": "98b1c017",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 1/1 [00:02<00:00, 2.27s/it]\n"
]
}
],
"source": [
"retriever.add_texts([\"foo\", \"bar\", \"world\", \"hello\"])"
]
},
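{
"cell_type": "markdown",
"id": "add-texts-metadata-sketch",
"metadata": {},
"source": [
"You can also attach metadata to each text. A sketch, assuming `add_texts` accepts a `metadatas` keyword (check the method signature if this errors):\n",
"\n",
"```python\n",
"# hypothetical metadata; assumes add_texts supports a `metadatas` kwarg\n",
"retriever.add_texts(\n",
"    [\"foo is the title of a document\"],\n",
"    metadatas=[{\"source\": \"example\"}],\n",
")\n",
"```"
]
},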
{
"cell_type": "markdown",
"id": "08437fa2",
"metadata": {},
"source": [
"## Use Retriever\n",
"\n",
"We can now use the retriever!"
]
},
{
"cell_type": "code",
"execution_count": 83,
"id": "c0455218",
"metadata": {},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
]
},
{
"cell_type": "code",
"execution_count": 84,
"id": "7dfa5c29",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='foo', metadata={})"
]
},
"execution_count": 84,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[0]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
},
"vscode": {
"interpreter": {
"hash": "7ec0d8babd8cabf695a1d94b1e586d626e046c9df609f6bad065d15d49f67f54"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}