{
"cells": [
{
"cell_type": "markdown",
"id": "d9172545",
"metadata": {},
"source": [
"# MultiVector Retriever\n",
"\n",
"It can often be beneficial to store multiple vectors per document. LangChain has a base `MultiVectorRetriever` which makes querying this type of setup easy. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and how to use the `MultiVectorRetriever`.\n",
"\n",
"The methods to create multiple vectors per document include:\n",
"\n",
"- Smaller chunks: split a document into smaller chunks, and embed those (this is what the `ParentDocumentRetriever` does).\n",
"- Summary: create a summary for each document, and embed that along with (or instead of) the document.\n",
"- Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, and embed those along with (or instead of) the document.\n",
"\n",
"Note that this also enables another method of adding embeddings: adding them manually. You can explicitly add questions or queries that should lead to a document being retrieved, giving you more control; a short sketch of this appears at the end of the notebook."
]
},
|
||
{
"cell_type": "code",
"execution_count": 1,
"id": "eed469be",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.multi_vector import MultiVectorRetriever"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "18c1421a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import Chroma\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6d869496",
"metadata": {},
"outputs": [],
"source": [
"loaders = [\n",
"    TextLoader('../../paul_graham_essay.txt'),\n",
"    TextLoader('../../state_of_the_union.txt'),\n",
"]\n",
"docs = []\n",
"for loader in loaders:\n",
"    docs.extend(loader.load())\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000)\n",
"docs = text_splitter.split_documents(docs)"
]
},
|
||
{
"cell_type": "markdown",
"id": "fa17beda",
"metadata": {},
"source": [
"## Smaller chunks\n",
"\n",
"It can often be useful to retrieve larger chunks of information, but embed smaller chunks. This lets the embeddings capture the semantic meaning as closely as possible, while still passing as much context as possible downstream. Note that this is exactly what the `ParentDocumentRetriever` does; here we show what is going on under the hood, with a sketch of the packaged equivalent after this section."
]
},
|
||
{
"cell_type": "code",
"execution_count": 4,
"id": "0e7b6b45",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
"    collection_name=\"full_documents\",\n",
"    embedding_function=OpenAIEmbeddings()\n",
")\n",
"# The storage layer for the parent documents\n",
"store = InMemoryStore()\n",
"id_key = \"doc_id\"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
"    vectorstore=vectorstore,\n",
"    docstore=store,\n",
"    id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "72a36491",
"metadata": {},
"outputs": [],
"source": [
"# The splitter to use to create smaller chunks\n",
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5d23247d",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = []\n",
"for i, doc in enumerate(docs):\n",
"    _id = doc_ids[i]\n",
"    _sub_docs = child_text_splitter.split_documents([doc])\n",
"    for _doc in _sub_docs:\n",
"        _doc.metadata[id_key] = _id\n",
"    sub_docs.extend(_sub_docs)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "92ed5861",
"metadata": {},
"outputs": [],
"source": [
"retriever.vectorstore.add_documents(sub_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8afed60c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.', metadata={'doc_id': '10e9cbc0-4ba5-4d79-a09b-c033d1ba7b01', 'source': '../../state_of_the_union.txt'})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Vectorstore alone retrieves the small chunks\n",
"retriever.vectorstore.similarity_search(\"justice breyer\")[0]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3c9017f1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9874"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Retriever returns larger chunks\n",
"len(retriever.get_relevant_documents(\"justice breyer\")[0].page_content)"
]
},
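{
"cell_type": "markdown",
"id": "b7a2f3c1",
"metadata": {},
"source": [
"For comparison, here is a minimal sketch of the same setup using the packaged `ParentDocumentRetriever`, which performs the splitting, id tagging, and docstore population in a single `add_documents` call. The collection name `full_documents_pdr` and the variable name `parent_retriever` are just illustrative choices:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4d5e6f7",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import ParentDocumentRetriever\n",
"\n",
"# One class wraps the steps above: it splits each document with\n",
"# child_splitter, tags the chunks with the parent document's id,\n",
"# and stores the parent documents in the docstore.\n",
"parent_retriever = ParentDocumentRetriever(\n",
"    vectorstore=Chroma(\n",
"        collection_name=\"full_documents_pdr\",\n",
"        embedding_function=OpenAIEmbeddings(),\n",
"    ),\n",
"    docstore=InMemoryStore(),\n",
"    child_splitter=child_text_splitter,\n",
")\n",
"parent_retriever.add_documents(docs)"
]
},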
|
||
{
"cell_type": "markdown",
"id": "d6a7ae0d",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"Oftentimes a summary can more accurately distill what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed those."
]
},
|
||
{
"cell_type": "code",
"execution_count": 10,
"id": "1433dff4",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.document import Document\n",
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "35b30390",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
"    {\"doc\": lambda x: x.page_content}\n",
"    | ChatPromptTemplate.from_template(\"Summarize the following document:\\n\\n{doc}\")\n",
"    | ChatOpenAI(max_retries=0)\n",
"    | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "41a2a738",
"metadata": {},
"outputs": [],
"source": [
"summaries = chain.batch(docs, {\"max_concurrency\": 5})"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "7ac5e4b1",
"metadata": {},
"outputs": [],
"source": [
"# The vectorstore to use to index the summaries\n",
"vectorstore = Chroma(\n",
"    collection_name=\"summaries\",\n",
"    embedding_function=OpenAIEmbeddings()\n",
")\n",
"# The storage layer for the parent documents\n",
"store = InMemoryStore()\n",
"id_key = \"doc_id\"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
"    vectorstore=vectorstore,\n",
"    docstore=store,\n",
"    id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "0d93309f",
"metadata": {},
"outputs": [],
"source": [
"summary_docs = [\n",
"    Document(page_content=s, metadata={id_key: doc_ids[i]})\n",
"    for i, s in enumerate(summaries)\n",
"]"
]
},
|
||
{
"cell_type": "code",
"execution_count": 16,
"id": "6d5edf0d",
"metadata": {},
"outputs": [],
"source": [
"retriever.vectorstore.add_documents(summary_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "862ae920",
"metadata": {},
"outputs": [],
"source": [
"# We could also add the original chunks to the vectorstore if we wanted to:\n",
"# for i, doc in enumerate(docs):\n",
"#     doc.metadata[id_key] = doc_ids[i]\n",
"# retriever.vectorstore.add_documents(docs)"
]
},
|
||
{
"cell_type": "code",
"execution_count": 18,
"id": "299232d6",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "10e404c0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"The document is a transcript of a speech given by the President of the United States. The President discusses several important issues and initiatives, including the nomination of a Supreme Court Justice, border security and immigration reform, protecting women's rights, advancing LGBTQ+ equality, bipartisan legislation, addressing the opioid epidemic and mental health, supporting veterans, investigating the health effects of burn pits on military personnel, ending cancer, and the strength and resilience of the American people.\", metadata={'doc_id': '79fa2e9f-28d9-4372-8af3-2caf4f1de312'})"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e4cce5c2",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "c8570dbb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9194"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(retrieved_docs[0].page_content)"
]
},
|
||
{
"cell_type": "markdown",
"id": "097a5396",
"metadata": {},
"source": [
"## Hypothetical Queries\n",
"\n",
"An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded."
]
},
|
||
{
"cell_type": "code",
"execution_count": 26,
"id": "5219b085",
"metadata": {},
"outputs": [],
"source": [
"functions = [\n",
"    {\n",
"        \"name\": \"hypothetical_questions\",\n",
"        \"description\": \"Generate hypothetical questions\",\n",
"        \"parameters\": {\n",
"            \"type\": \"object\",\n",
"            \"properties\": {\n",
"                \"questions\": {\n",
"                    \"type\": \"array\",\n",
"                    \"items\": {\"type\": \"string\"},\n",
"                },\n",
"            },\n",
"            \"required\": [\"questions\"]\n",
"        }\n",
"    }\n",
"]"
]
},
|
||
{
"cell_type": "code",
"execution_count": 32,
"id": "523deb92",
"metadata": {},
"outputs": [],
"source": [
"from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
"\n",
"chain = (\n",
"    {\"doc\": lambda x: x.page_content}\n",
"    # Only asking for 3 hypothetical questions, but this could be adjusted\n",
"    | ChatPromptTemplate.from_template(\n",
"        \"Generate a list of 3 hypothetical questions that the below document could be used to answer:\\n\\n{doc}\"\n",
"    )\n",
"    | ChatOpenAI(max_retries=0, model=\"gpt-4\").bind(\n",
"        functions=functions, function_call={\"name\": \"hypothetical_questions\"}\n",
"    )\n",
"    | JsonKeyOutputFunctionsParser(key_name=\"questions\")\n",
")"
]
},
|
||
{
"cell_type": "code",
"execution_count": 33,
"id": "11d30554",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\"What was the author's initial impression of philosophy as a field of study, and how did it change when they got to college?\",\n",
" 'Why did the author decide to switch their focus to Artificial Intelligence (AI)?',\n",
" \"What led to the author's disillusionment with the field of AI as it was practiced at the time?\"]"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(docs[0])"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "3eb2e48c",
"metadata": {},
"outputs": [],
"source": [
"hypothetical_questions = chain.batch(docs, {\"max_concurrency\": 5})"
]
},
{
"cell_type": "code",
"execution_count": 67,
"id": "b2cd6e75",
"metadata": {},
"outputs": [],
"source": [
"# The vectorstore to use to index the hypothetical questions\n",
"vectorstore = Chroma(\n",
"    collection_name=\"hypo-questions\",\n",
"    embedding_function=OpenAIEmbeddings()\n",
")\n",
"# The storage layer for the parent documents\n",
"store = InMemoryStore()\n",
"id_key = \"doc_id\"\n",
"# The retriever (empty to start)\n",
"retriever = MultiVectorRetriever(\n",
"    vectorstore=vectorstore,\n",
"    docstore=store,\n",
"    id_key=id_key,\n",
")\n",
"doc_ids = [str(uuid.uuid4()) for _ in docs]"
]
},
|
||
{
"cell_type": "code",
"execution_count": 68,
"id": "18831b3b",
"metadata": {},
"outputs": [],
"source": [
"question_docs = []\n",
"for i, question_list in enumerate(hypothetical_questions):\n",
"    question_docs.extend(\n",
"        [Document(page_content=s, metadata={id_key: doc_ids[i]}) for s in question_list]\n",
"    )"
]
},
{
"cell_type": "code",
"execution_count": 69,
"id": "224b24c5",
"metadata": {},
"outputs": [],
"source": [
"retriever.vectorstore.add_documents(question_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, docs)))"
]
},
|
||
{
"cell_type": "code",
"execution_count": 70,
"id": "7b442b90",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 71,
"id": "089b5ad0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"What is the President's stance on immigration reform?\", metadata={'doc_id': '505d73e3-8350-46ec-a58e-3af032f04ab3'}),\n",
" Document(page_content=\"What is the President's stance on immigration reform?\", metadata={'doc_id': '1c9618f0-7660-4b4f-a37c-509cbbbf6dba'}),\n",
" Document(page_content=\"What is the President's stance on immigration reform?\", metadata={'doc_id': '82c08209-b904-46a8-9532-edd2380950b7'}),\n",
" Document(page_content='What measures is the President proposing to protect the rights of LGBTQ+ Americans?', metadata={'doc_id': '82c08209-b904-46a8-9532-edd2380950b7'})]"
]
},
"execution_count": 71,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sub_docs"
]
},
{
"cell_type": "code",
"execution_count": 72,
"id": "7594b24e",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 73,
"id": "4c120c65",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9194"
]
},
"execution_count": 73,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(retrieved_docs[0].page_content)"
]
},
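{
"cell_type": "markdown",
"id": "f8a9b0c1",
"metadata": {},
"source": [
"Finally, as noted in the introduction, embeddings can also be added manually: any text we add to the vectorstore with the right `doc_id` in its metadata will route back to its parent document at retrieval time. A minimal sketch, assuming the hypothetical-questions retriever from above is still in scope (the query text and the choice of `doc_ids[0]` are purely illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e2f3a4b5",
"metadata": {},
"outputs": [],
"source": [
"# Manually add a query that should surface a specific parent document.\n",
"# Here we attach it to the first document's id purely for illustration.\n",
"manual_query = Document(\n",
"    page_content=\"What did the President say about the Supreme Court?\",\n",
"    metadata={id_key: doc_ids[0]},\n",
")\n",
"retriever.vectorstore.add_documents([manual_query])"
]
},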
|
||
{
"cell_type": "code",
"execution_count": null,
"id": "616cfeeb",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
|