{
"cells": [
{
"cell_type": "markdown",
"id": "34883374",
"metadata": {},
"source": [
"# Parent Document Retriever\n",
"\n",
"When splitting documents for retrieval, there are often conflicting desires:\n",
"\n",
"1. You may want to have small documents, so that their embeddings can most\n",
" accurately reflect their meaning. If too long, then the embeddings can\n",
" lose meaning.\n",
"2. You want to have long enough documents that the context of each chunk is\n",
" retained.\n",
"\n",
"The `ParentDocumentRetriever` strikes that balance by splitting and storing\n",
"small chunks of data. During retrieval, it first fetches the small chunks\n",
"but then looks up the parent ids for those chunks and returns those larger\n",
"documents.\n",
"\n",
"Note that \"parent document\" refers to the document that a small chunk\n",
"originated from. This can either be the whole raw document OR a larger\n",
"chunk."
]
},
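{
"cell_type": "markdown",
"id": "9c2f1e7d",
"metadata": {},
"source": [
"Under the hood, the idea is: index the small chunks in a vector store, keep a pointer from each chunk to its parent's id, and keep the parents in a separate document store. The toy sketch below illustrates the query-time flow with plain Python dictionaries - it is only a conceptual illustration with made-up data, not LangChain's actual implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4b8d0a2e",
"metadata": {},
"outputs": [],
"source": [
"# Conceptual sketch only - plain-Python stand-ins for the vector store and docstore.\n",
"parent_docstore = {\n",
"    \"doc-1\": \"A long parent document about retrievers ...\",\n",
"    \"doc-2\": \"A long parent document about embeddings ...\",\n",
"}\n",
"# Each small chunk remembers which parent it came from.\n",
"child_chunks = [\n",
"    {\"text\": \"about retrievers\", \"parent_id\": \"doc-1\"},\n",
"    {\"text\": \"about embeddings\", \"parent_id\": \"doc-2\"},\n",
"]\n",
"\n",
"def toy_retrieve(query):\n",
"    # 1. \"Search\" the child chunks (a real retriever uses embedding similarity).\n",
"    hits = [c for c in child_chunks if query in c[\"text\"]]\n",
"    # 2. Collect the parent ids of the matching chunks, de-duplicated in order.\n",
"    parent_ids = list(dict.fromkeys(c[\"parent_id\"] for c in hits))\n",
"    # 3. Return the full parent documents from the docstore.\n",
"    return [parent_docstore[pid] for pid in parent_ids]\n",
"\n",
"toy_retrieve(\"retrievers\")"
]
},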
{
"cell_type": "code",
"execution_count": 1,
"id": "8b6e74b2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import ParentDocumentRetriever"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1d17af96",
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import Chroma\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.storage import InMemoryStore\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "604ff981",
"metadata": {},
"outputs": [],
"source": [
"loaders = [\n",
" TextLoader('../../paul_graham_essay.txt'),\n",
" TextLoader('../../state_of_the_union.txt'),\n",
"]\n",
"docs = []\n",
"for l in loaders:\n",
" docs.extend(l.load())"
]
},
{
"cell_type": "markdown",
"id": "d3943f72",
"metadata": {},
"source": [
"## Retrieving full documents\n",
"\n",
"In this mode, we want to retrieve the full documents. Therefore, we only specify a child splitter."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1a8b2e5f",
"metadata": {},
"outputs": [],
"source": [
"# This text splitter is used to create the child documents\n",
"child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(\n",
" collection_name=\"full_documents\",\n",
" embedding_function=OpenAIEmbeddings()\n",
")\n",
"# The storage layer for the parent documents\n",
"store = InMemoryStore()\n",
"retriever = ParentDocumentRetriever(\n",
" vectorstore=vectorstore, \n",
" docstore=store, \n",
" child_splitter=child_splitter,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2b107935",
"metadata": {},
"outputs": [],
"source": [
"retriever.add_documents(docs, ids=None)"
]
},
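{
"cell_type": "markdown",
"id": "c6f2a9b1",
"metadata": {},
"source": [
"A quick note on the `ids` argument: with `ids=None` (as above), the retriever generates a UUID key for each parent document. If you would rather control the keys yourself, you can presumably pass your own list of ids, one per document; the hypothetical snippet below derives them from each document's source path. The `add_documents` call is left commented out so the two auto-generated keys shown next stay intact."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3d7c5f4",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sketch: supply explicit parent ids instead of auto-generated UUIDs.\n",
"doc_ids = [doc.metadata[\"source\"] for doc in docs]\n",
"# retriever.add_documents(docs, ids=doc_ids)  # not run here, to keep the example above unchanged\n",
"doc_ids"
]
},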
{
"cell_type": "markdown",
"id": "d05b97b7",
"metadata": {},
"source": [
"This should yield two keys, because we added two documents."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "30e3812b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['05fe8d8a-bf60-4f87-b576-4351b23df266',\n",
" '571cc9e5-9ef7-4f6c-b800-835c83a1858b']"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list(store.yield_keys())"
]
},
{
"cell_type": "markdown",
"id": "f895d62b",
"metadata": {},
"source": [
"Let's now call the vector store search functionality - we should see that it returns small chunks (since we're storing the small chunks)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "b261c02c",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5108222f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.\n"
]
}
],
"source": [
"print(sub_docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "bda8ed5a",
"metadata": {},
"source": [
"Let's now retrieve from the overall retriever. This should return large documents - since it returns the documents where the smaller chunks are located."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "419a91c4",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "cf10d250",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"38539"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "14f813a5",
"metadata": {},
"source": [
"## Retrieving larger chunks\n",
"\n",
"Sometimes, the full documents can be too big to want to retrieve them as is. In that case, what we really want to do is to first split the raw documents into larger chunks, and then split it into smaller chunks. We then index the smaller chunks, but on retrieval we retrieve the larger chunks (but still not the full documents)."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "b6f9a4f0",
"metadata": {},
"outputs": [],
"source": [
"# This text splitter is used to create the parent documents\n",
"parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)\n",
"# This text splitter is used to create the child documents\n",
"# It should create documents smaller than the parent\n",
"child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"split_parents\", embedding_function=OpenAIEmbeddings())\n",
"# The storage layer for the parent documents\n",
"store = InMemoryStore()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "19478ff3",
"metadata": {},
"outputs": [],
"source": [
"retriever = ParentDocumentRetriever(\n",
" vectorstore=vectorstore, \n",
" docstore=store, \n",
" child_splitter=child_splitter,\n",
" parent_splitter=parent_splitter,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "fe16e620",
"metadata": {},
"outputs": [],
"source": [
"retriever.add_documents(docs)"
]
},
{
"cell_type": "markdown",
"id": "64ad3c8c",
"metadata": {},
"source": [
"We can see that there are much more than two documents now - these are the larger chunks."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "24d81886",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"66"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(list(store.yield_keys()))"
]
},
{
"cell_type": "markdown",
"id": "baaef673",
"metadata": {},
"source": [
"Let's make sure the underlying vector store still retrieves the small chunks."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "b1c859de",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = vectorstore.similarity_search(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6fffa2eb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.\n"
]
}
],
"source": [
"print(sub_docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "3a3202df",
"metadata": {},
"outputs": [],
"source": [
"retrieved_docs = retriever.get_relevant_documents(\"justice breyer\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "684fdb2c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1849"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(retrieved_docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "9f17f662",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
"\n",
"We can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
"\n",
"Weve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n",
"\n",
"Were putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n",
"\n",
"Were securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n"
]
}
],
"source": [
"print(retrieved_docs[0].page_content)"
]
},
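{
"cell_type": "markdown",
"id": "7f5b3d28",
"metadata": {},
"source": [
"Finally, the retriever can be plugged in anywhere a standard retriever is expected. As a quick usage sketch (assuming an OpenAI API key is configured; `RetrievalQA` and `ChatOpenAI` are just example consumers, not part of the parent-document setup), you could use it in a question-answering chain:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a9e6c40",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"# Example only: any chain or agent that accepts a retriever works the same way.\n",
"qa_chain = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=retriever)\n",
"qa_chain.run(\"What did the president say about Justice Breyer?\")"
]
},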
{
"cell_type": "code",
"execution_count": null,
"id": "facfdacb",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}