add LCEL to retriever doc (#11888)
This commit is contained in:
parent d62369f478
commit efa9ef75c0
184
docs/docs/modules/data_connection/retrievers/index.ipynb
Normal file
@@ -0,0 +1,184 @@
{
 "cells": [
  {
   "cell_type": "raw",
   "id": "dbb38c29-59a4-43a0-87d1-8a09796f8ed8",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_position: 4\n",
    "title: Retrievers\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f1d4b55d-d8ef-4b3c-852f-837b1a217227",
   "metadata": {},
   "source": [
    ":::info\n",
    "Head to [Integrations](/docs/integrations/retrievers/) for documentation on built-in retriever integrations with 3rd-party tools.\n",
    ":::\n",
    "\n",
    "A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.\n",
    "A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used\n",
    "as the backbone of a retriever, but there are other types of retrievers as well.\n",
    "\n",
    "Retrievers implement the [Runnable interface](/docs/expression_language/interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/expression_language/). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, and `astream_log` calls.\n",
    "\n",
    "Retrievers accept a string query as input and return a list of `Document` objects as output, as the sketch below shows."
   ]
  },
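  {
   "cell_type": "markdown",
   "id": "runnable-methods-sketch",
   "metadata": {},
   "source": [
    "A minimal sketch of those calls (assuming a `retriever` like the one constructed in the next section; `await` works as written inside a notebook):\n",
    "\n",
    "```python\n",
    "docs = retriever.invoke(\"query\")                # one query -> list of Documents\n",
    "docs = await retriever.ainvoke(\"query\")         # async variant\n",
    "many = retriever.batch([\"query 1\", \"query 2\"])  # one list of Documents per input query\n",
    "```"
   ]
  },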
  {
   "cell_type": "markdown",
   "id": "9bf5d37b-20ae-4b70-ae9d-4c0a3fcc9f77",
   "metadata": {},
   "source": [
    "## Get started\n",
    "\n",
    "In this example we'll use a `Chroma` vector store-backed retriever. To get set up, we'll need to run:\n",
    "\n",
    "```bash\n",
    "pip install chromadb\n",
    "```\n",
    "\n",
    "And download the state_of_the_union.txt file [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/state_of_the_union.txt)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "8cf15d4a-613b-4d2f-b1e6-5e9302bfac66",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings import OpenAIEmbeddings\n",
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "from langchain.vectorstores import Chroma\n",
    "\n",
    "\n",
    "# Split the document into overlapping chunks, embed them, and index them in Chroma\n",
    "full_text = open(\"state_of_the_union.txt\", \"r\").read()\n",
    "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n",
    "texts = text_splitter.split_text(full_text)\n",
    "\n",
    "embeddings = OpenAIEmbeddings()\n",
    "db = Chroma.from_texts(texts, embeddings)\n",
    "retriever = db.as_retriever()"
   ]
  },
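  {
   "cell_type": "markdown",
   "id": "as-retriever-options-note",
   "metadata": {},
   "source": [
    "As a side note, `as_retriever` also accepts search options; a minimal sketch (the value of `k` here is illustrative, not required):\n",
    "\n",
    "```python\n",
    "retriever = db.as_retriever(search_kwargs={\"k\": 4})  # return the 4 most similar chunks\n",
    "```"
   ]
  },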
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "3275187b-4a21-45a1-8419-d14c9a54646f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
      "\n",
      "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. \n",
      "\n",
      "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
      "\n",
      "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
      "\n",
      "We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
      "\n",
      "We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.\n"
     ]
    }
   ],
   "source": [
    "retrieved_docs = retriever.invoke(\"What did the president say about Ketanji Brown Jackson?\")\n",
    "print(retrieved_docs[0].page_content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbeeda8b-a828-415e-9de4-0343696e40af",
   "metadata": {},
   "source": [
    "## LCEL\n",
    "\n",
    "Since retrievers are `Runnable`s, we can easily compose them with other `Runnable` objects:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "0164dcc1-4734-4a30-ab94-9c035add008d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.prompts import ChatPromptTemplate\n",
    "from langchain.schema import StrOutputParser\n",
    "from langchain.schema.runnable import RunnablePassthrough\n",
    "\n",
    "\n",
    "template = \"\"\"Answer the question based only on the following context:\n",
    "\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\"\"\"\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "model = ChatOpenAI()\n",
    "\n",
    "\n",
    "def format_docs(docs):\n",
    "    # Join the retrieved documents into a single context string for the prompt\n",
    "    return \"\\n\\n\".join([d.page_content for d in docs])\n",
    "\n",
    "\n",
    "chain = (\n",
    "    {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
    "    | prompt\n",
    "    | model\n",
    "    | StrOutputParser()\n",
    ")"
   ]
  },
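  {
   "cell_type": "markdown",
   "id": "chain-streaming-note",
   "metadata": {},
   "source": [
    "Because the composed `chain` is itself a `Runnable`, the same interface applies to it. For instance, a minimal streaming sketch:\n",
    "\n",
    "```python\n",
    "for chunk in chain.stream(\"What did the president say about technology?\"):\n",
    "    print(chunk, end=\"\", flush=True)  # chunks of the answer arrive as the model generates\n",
    "```"
   ]
  },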
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "b8ce3176-aadd-4dfe-bfc5-7fe8a1d6d9e2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'The president said that technology plays a crucial role in the future and that passing the Bipartisan Innovation Act will make record investments in emerging technologies and American manufacturing. The president also mentioned Intel\\'s plans to build a semiconductor \"mega site\" and increase their investment from $20 billion to $100 billion, which would be one of the biggest investments in manufacturing in American history.'"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke(\"What did the president say about technology?\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -1,286 +0,0 @@
---
sidebar_position: 4
---

# Retrievers

:::info
Head to [Integrations](/docs/integrations/retrievers/) for documentation on built-in retriever integrations with 3rd-party tools.
:::

A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
as the backbone of a retriever, but there are other types of retrievers as well.

## Get started

The public API of the `BaseRetriever` class in LangChain is as follows:

```python
from abc import ABC, abstractmethod
from typing import Any, List
from langchain.schema import Document
from langchain.callbacks.manager import Callbacks


class BaseRetriever(ABC):
    ...

    def get_relevant_documents(
        self, query: str, *, callbacks: Callbacks = None, **kwargs: Any
    ) -> List[Document]:
        """Retrieve documents relevant to a query.

        Args:
            query: string to find relevant documents for
            callbacks: Callback manager or list of callbacks

        Returns:
            List of relevant documents
        """
        ...

    async def aget_relevant_documents(
        self, query: str, *, callbacks: Callbacks = None, **kwargs: Any
    ) -> List[Document]:
        """Asynchronously get documents relevant to a query.

        Args:
            query: string to find relevant documents for
            callbacks: Callback manager or list of callbacks

        Returns:
            List of relevant documents
        """
        ...
```

It's that simple! You can call `get_relevant_documents` or the async `aget_relevant_documents` method to retrieve documents relevant to a query, where "relevance" is defined by the specific retriever object you are calling.
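
For example, a minimal usage sketch (assuming a concrete `retriever` instance, such as the vector store retriever built later in this guide):

```python
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson?")
print(docs[0].page_content)  # the chunk this retriever judged most relevant
```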

Of course, we also help construct what we think are useful retrievers. The main type of retriever we focus on is the vector store retriever, and it is what the rest of this guide covers.

In order to understand what a vector store retriever is, it's important to understand what a vector store is. So let's look at that.

By default, LangChain uses [Chroma](/docs/ecosystem/integrations/chroma.html) as the vector store to index and search embeddings. To walk through this tutorial, we'll first need to install `chromadb`.

```bash
pip install chromadb
```

This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (text splitters, embeddings, vector stores) and then also shows how to use them in a chain.

Question answering over documents consists of four steps:

1. Create an index
2. Create a retriever from that index
3. Create a question answering chain
4. Ask questions!

Each of the steps has multiple substeps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.

First, let's import some common classes we'll use no matter what.

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
```

Next, in the generic setup, let's specify the document loader we want to use. You can download the `state_of_the_union.txt` file [here](https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/state_of_the_union.txt).

```python
from langchain.document_loaders import TextLoader

loader = TextLoader('../state_of_the_union.txt', encoding='utf8')
```

## One Line Index Creation

To get started as quickly as possible, we can use the `VectorstoreIndexCreator`.

```python
from langchain.indexes import VectorstoreIndexCreator
```

```python
index = VectorstoreIndexCreator().from_loaders([loader])
```

<CodeOutputBlock lang="python">

```
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
```

</CodeOutputBlock>

Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.

```python
query = "What did the president say about Ketanji Brown Jackson?"
index.query(query)
```

<CodeOutputBlock lang="python">

```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
```

</CodeOutputBlock>

```python
query = "What did the president say about Ketanji Brown Jackson?"
index.query_with_sources(query)
```

<CodeOutputBlock lang="python">

```
{'question': 'What did the president say about Ketanji Brown Jackson?',
 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
 'sources': '../state_of_the_union.txt'}
```

</CodeOutputBlock>

What is returned from the `VectorstoreIndexCreator` is a `VectorStoreIndexWrapper`, which provides these nice `query` and `query_with_sources` functionalities. If we just want to access the vector store directly, we can also do that.

```python
index.vectorstore
```

<CodeOutputBlock lang="python">

```
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
```

</CodeOutputBlock>

If we then want to access the `VectorStoreRetriever`, we can do that with:

```python
index.vectorstore.as_retriever()
```

<CodeOutputBlock lang="python">

```
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
```

</CodeOutputBlock>

It can also be convenient to filter the vector store by the metadata associated with documents, particularly when your vector store has multiple sources. This can be done using the `query` method, like this:

```python
index.query("Summarize the general content of this document.", retriever_kwargs={"search_kwargs": {"filter": {"source": "../state_of_the_union.txt"}}})
```

<CodeOutputBlock lang="python">

```
" The document is a speech given by President Trump to the nation on the occasion of his 245th birthday. The speech highlights the importance of American values and the challenges facing the country, including the ongoing conflict in Ukraine, the ongoing trade war with China, and the ongoing conflict in Syria. The speech also discusses the importance of investing in emerging technologies and American manufacturing, and calls on Congress to pass the Bipartisan Innovation Act and other important legislation."
```

</CodeOutputBlock>

## Walkthrough

Okay, so what's actually going on? How is this index getting created?

A lot of the magic is hidden in this `VectorstoreIndexCreator`. What is it doing?

There are three main steps going on after the documents are loaded:

1. Splitting documents into chunks
2. Creating embeddings for each document
3. Storing documents and embeddings in a vector store

Let's walk through this in code.

```python
documents = loader.load()
```

Next, we will split the documents into chunks.

```python
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
```

We will then select which embeddings we want to use.

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```

We now create the vector store to use as the index.

```python
from langchain.vectorstores import Chroma

db = Chroma.from_documents(texts, embeddings)
```

<CodeOutputBlock lang="python">

```
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
```

</CodeOutputBlock>

So that's creating the index. Then, we expose this index in a retriever interface.

```python
retriever = db.as_retriever()
```

Then, as before, we create a chain and use it to answer questions!

```python
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
```

```python
query = "What did the president say about Ketanji Brown Jackson?"
qa.run(query)
```

<CodeOutputBlock lang="python">

```
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
```

</CodeOutputBlock>

`VectorstoreIndexCreator` is just a wrapper around all of this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vector store it uses. For example, you can configure it as below:

```python
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
```

Hopefully this highlights what is going on under the hood of `VectorstoreIndexCreator`. While we think it's important to have a simple way to create indexes, we also think it's important to understand what's going on under the hood.