more query analysis docs (#18358)


@ -0,0 +1,190 @@
{
"cells": [
{
"cell_type": "raw",
"id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 6\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# Construct Filters\n",
"\n",
"We may want to do query analysis to extract filters to pass into retrievers. One way we ask the LLM to represent these filters is as a Pydantic model. There is then the issue of converting that Pydantic model into a filter that can be passed into a retriever. \n",
"\n",
"This can be done manually, but LangChain also provides some \"Translators\" that are able to translate from a common syntax into filters specific to each retriever. Here, we will cover how to use those translators."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "8ca446a0",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain.chains.query_constructor.ir import (\n",
" Comparator,\n",
" Comparison,\n",
" Operation,\n",
" Operator,\n",
" StructuredQuery,\n",
")\n",
"from langchain.retrievers.self_query.chroma import ChromaTranslator\n",
"from langchain.retrievers.self_query.elasticsearch import ElasticsearchTranslator\n",
"from langchain_core.pydantic_v1 import BaseModel"
]
},
{
"cell_type": "markdown",
"id": "bc1302ff",
"metadata": {},
"source": [
"In this example, `year` and `author` are both attributes to filter on."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "64055006",
"metadata": {},
"outputs": [],
"source": [
"class Search(BaseModel):\n",
" query: str\n",
" start_year: Optional[int]\n",
" author: Optional[str]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "44eb6d98",
"metadata": {},
"outputs": [],
"source": [
"search_query = Search(query=\"RAG\", start_year=2022, author=\"LangChain\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e8ba6705",
"metadata": {},
"outputs": [],
"source": [
"def construct_comparisons(query: Search):\n",
" comparisons = []\n",
" if query.start_year is not None:\n",
" comparisons.append(\n",
" Comparison(\n",
" comparator=Comparator.GT,\n",
" attribute=\"start_year\",\n",
" value=query.start_year,\n",
" )\n",
" )\n",
" if query.author is not None:\n",
" comparisons.append(\n",
" Comparison(\n",
" comparator=Comparator.EQ,\n",
" attribute=\"author\",\n",
" value=query.author,\n",
" )\n",
" )\n",
" return comparisons"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6a79c9da",
"metadata": {},
"outputs": [],
"source": [
"comparisons = construct_comparisons(search_query)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "2d0e9689",
"metadata": {},
"outputs": [],
"source": [
"_filter = Operation(operator=Operator.AND, arguments=comparisons)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "e4c0b2ce",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'bool': {'must': [{'range': {'metadata.start_year': {'gt': 2022}}},\n",
" {'term': {'metadata.author.keyword': 'LangChain'}}]}}"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ElasticsearchTranslator().visit_operation(_filter)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d75455ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'$and': [{'start_year': {'$gt': 2022}}, {'author': {'$eq': 'LangChain'}}]}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ChromaTranslator().visit_operation(_filter)"
]
}
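,
{
"cell_type": "markdown",
"id": "a8f5c3d2",
"metadata": {},
"source": [
"We can also wrap the query and filter together in the `StructuredQuery` class we imported above. As a minimal sketch, the translator's `visit_structured_query` method should return the query string along with the retriever-specific search kwargs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2e7d4f1",
"metadata": {},
"outputs": [],
"source": [
"structured_query = StructuredQuery(query=\"RAG\", filter=_filter, limit=None)\n",
"# Returns a (query, search_kwargs) tuple that can be passed on to the retriever\n",
"ChromaTranslator().visit_structured_query(structured_query)"
]
}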
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -15,9 +15,9 @@
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# Adding examples to the prompt\n",
"# Add Examples to the Prompt\n",
"\n",
"As our query analysis becomes more complex, adding examples to the prompt can meaningfully improve performance.\n",
"As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.\n",
"\n",
"Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [Quickstart](/docs/use_cases/query_analysis/quickstart)."
]
@ -377,7 +377,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -0,0 +1,585 @@
{
"cells": [
{
"cell_type": "raw",
"id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 7\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# High Cardinality\n",
"\n",
"You may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easy with prompting when there are only a few values that are valid. When there are a high number of valid values then it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.\n",
"\n",
"In this notebook we take a look at how to approach this."
]
},
{
"cell_type": "markdown",
"id": "a4079b57-4369-49c9-b2ad-c809b5408d7e",
"metadata": {},
"source": [
"## Setup\n",
"#### Install dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain langchain-community langchain-openai faker"
]
},
{
"cell_type": "markdown",
"id": "79d66a45-a05c-4d22-b011-b1cdbdfc8f9c",
"metadata": {},
"source": [
"#### Set environment variables\n",
"\n",
"We'll use OpenAI in this example:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "d8d47f4b",
"metadata": {},
"source": [
"#### Set up data\n",
"\n",
"We will generate a bunch of fake names"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e5ba65c2",
"metadata": {},
"outputs": [],
"source": [
"from faker import Faker\n",
"\n",
"fake = Faker()\n",
"\n",
"names = [fake.name() for _ in range(10000)]"
]
},
{
"cell_type": "markdown",
"id": "41133694",
"metadata": {},
"source": [
"Let's look at some of the names"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c901ea97",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hayley Gonzalez'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"names[0]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b0d42ae2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Jesse Knight'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"names[567]"
]
},
{
"cell_type": "markdown",
"id": "1725883d",
"metadata": {},
"source": [
"## Query Analysis\n",
"\n",
"We can now set up a baseline query analysis"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0ae69afc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6c9485ce",
"metadata": {},
"outputs": [],
"source": [
"class Search(BaseModel):\n",
" query: str\n",
" author: str"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "aebd704a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.\n",
" warn_beta(\n"
]
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"system = \"\"\"Generate a relevant search query for a library system\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "41709a2e",
"metadata": {},
"source": [
"We can see that if we spell the name exactly correctly, it knows how to handle it"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "cc0d344b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"what are books about aliens by Jesse Knight\")"
]
},
{
"cell_type": "markdown",
"id": "a1b57eab",
"metadata": {},
"source": [
"The issue is that the values you want to filter on may NOT be spelled exactly correctly"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "82b6b2ad",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jess Knight')"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"what are books about aliens by jess knight\")"
]
},
{
"cell_type": "markdown",
"id": "0b60b7c2",
"metadata": {},
"source": [
"### Add in all values\n",
"\n",
"One way around this is to add ALL possible values to the prompt. That will generally guide the query in the right direction"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "98788a94",
"metadata": {},
"outputs": [],
"source": [
"system = \"\"\"Generate a relevant search query for a library system.\n",
"\n",
"`author` attribute MUST be one of:\n",
"\n",
"{authors}\n",
"\n",
"Do NOT hallucinate author name!\"\"\"\n",
"base_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"prompt = base_prompt.partial(authors=\", \".join(names))"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "e65412f5",
"metadata": {},
"outputs": [],
"source": [
"query_analyzer_all = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "e639285a",
"metadata": {},
"source": [
"However... if the list of categoricals is long enough, it may error!"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "696b000f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Error code: 400 - {'error': {'message': \"This model's maximum context length is 16385 tokens. However, your messages resulted in 33885 tokens (33855 in the messages, 30 in the functions). Please reduce the length of the messages or functions.\", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}\n"
]
}
],
"source": [
"try:\n",
" res = query_analyzer_all.invoke(\"what are books about aliens by jess knight\")\n",
"except Exception as e:\n",
" print(e)"
]
},
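{
"cell_type": "markdown",
"id": "f3a9b0c1",
"metadata": {},
"source": [
"To see why, we can count roughly how many tokens all the names take up. This is a minimal sketch assuming the `tiktoken` package is installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0b1c2d3",
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"enc = tiktoken.encoding_for_model(\"gpt-3.5-turbo\")\n",
"# All 10,000 names joined into the system prompt - far more than the 16k context window\n",
"len(enc.encode(\", \".join(names)))"
]
},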
{
"cell_type": "markdown",
"id": "1d5d7891",
"metadata": {},
"source": [
"We can try to use a longer context window... but with so much information in there, it is not garunteed to pick it up reliably"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "0f0d0757",
"metadata": {},
"outputs": [],
"source": [
"llm_long = ChatOpenAI(model=\"gpt-4-turbo-preview\", temperature=0)\n",
"structured_llm_long = llm_long.with_structured_output(Search)\n",
"query_analyzer_all = {\"question\": RunnablePassthrough()} | prompt | structured_llm_long"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "03e5b7b2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='aliens', author='Kevin Knight')"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer_all.invoke(\"what are books about aliens by jess knight\")"
]
},
{
"cell_type": "markdown",
"id": "73ecf52b",
"metadata": {},
"source": [
"### Find and all relevant values\n",
"\n",
"Instead, what we can do is create an index over the relevant values and then query that for the N most relevant values,"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "32b19e07",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"vectorstore = Chroma.from_texts(names, embeddings, collection_name=\"author_names\")"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "774cb7b0",
"metadata": {},
"outputs": [],
"source": [
"def select_names(question):\n",
" _docs = vectorstore.similarity_search(question, k=10)\n",
" _names = [d.page_content for d in _docs]\n",
" return \", \".join(_names)"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "1173159c",
"metadata": {},
"outputs": [],
"source": [
"create_prompt = {\n",
" \"question\": RunnablePassthrough(),\n",
" \"authors\": select_names,\n",
"} | base_prompt"
]
},
{
"cell_type": "code",
"execution_count": 53,
"id": "0a892607",
"metadata": {},
"outputs": [],
"source": [
"query_analyzer_select = create_prompt | structured_llm"
]
},
{
"cell_type": "code",
"execution_count": 54,
"id": "8195d7cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[SystemMessage(content='Generate a relevant search query for a library system.\\n\\n`author` attribute MUST be one of:\\n\\nJesse Knight, Kelly Knight, Scott Knight, Richard Knight, Andrew Knight, Katherine Knight, Erica Knight, Ashley Knight, Becky Knight, Kevin Knight\\n\\nDo NOT hallucinate author name!'), HumanMessage(content='what are books by jess knight')])"
]
},
"execution_count": 54,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"create_prompt.invoke(\"what are books by jess knight\")"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "d3228b4e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 55,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer_select.invoke(\"what are books about aliens by jess knight\")"
]
},
{
"cell_type": "markdown",
"id": "46ef88bb",
"metadata": {},
"source": [
"### Replace after selection\n",
"\n",
"Another method is to let the LLM fill in whatever value, but then convert that value to a valid value.\n",
"This can actually be done with the Pydantic class itself!"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "a2e8b434",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import validator\n",
"\n",
"\n",
"class Search(BaseModel):\n",
" query: str\n",
" author: str\n",
"\n",
" @validator(\"author\")\n",
" def double(cls, v: str) -> str:\n",
" return vectorstore.similarity_search(v, k=1)[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "919c0601",
"metadata": {},
"outputs": [],
"source": [
"system = \"\"\"Generate a relevant search query for a library system\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"corrective_structure_llm = llm.with_structured_output(Search)\n",
"corrective_query_analyzer = (\n",
" {\"question\": RunnablePassthrough()} | prompt | corrective_structure_llm\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "6c4f3e9a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"corrective_query_analyzer.invoke(\"what are books about aliens by jes knight\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a309cb11",
"metadata": {},
"outputs": [],
"source": [
"# TODO: show trigram similarity"
]
}
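,
{
"cell_type": "markdown",
"id": "c4d8e9a0",
"metadata": {},
"source": [
"As a rough sketch of the trigram-similarity idea (plain Python, no extra dependencies, hypothetical helper names): instead of an embedding lookup, we can compare character trigrams between the generated value and each valid name."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5f0a1b2",
"metadata": {},
"outputs": [],
"source": [
"def trigrams(s: str) -> set:\n",
"    s = s.lower()\n",
"    return {s[i : i + 3] for i in range(len(s) - 2)}\n",
"\n",
"\n",
"def trigram_similarity(a: str, b: str) -> float:\n",
"    # Jaccard similarity over character trigrams\n",
"    ta, tb = trigrams(a), trigrams(b)\n",
"    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0\n",
"\n",
"\n",
"def correct_name(name: str) -> str:\n",
"    # Pick the valid name with the highest trigram overlap\n",
"    return max(names, key=lambda n: trigram_similarity(name, n))\n",
"\n",
"\n",
"correct_name(\"jes knight\")"
]
}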
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,329 @@
{
"cells": [
{
"cell_type": "raw",
"id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 4\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# Handle Multiple Queries\n",
"\n",
"Sometimes, a query analysis technique may allow for multiple queries to be generated. In these cases, we need to remember to run all queries and then to combine the results. We will show a simple example (using mock data) of how to do that."
]
},
{
"cell_type": "markdown",
"id": "a4079b57-4369-49c9-b2ad-c809b5408d7e",
"metadata": {},
"source": [
"## Setup\n",
"#### Install dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain langchain-community langchain-openai chromadb"
]
},
{
"cell_type": "markdown",
"id": "79d66a45-a05c-4d22-b011-b1cdbdfc8f9c",
"metadata": {},
"source": [
"#### Set environment variables\n",
"\n",
"We'll use OpenAI in this example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "c20b48b8-16d7-4089-bc17-f2d240b3935a",
"metadata": {},
"source": [
"### Create Index\n",
"\n",
"We will create a vectorstore over fake information."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1f621694",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"texts = [\"Harrison worked at Kensho\", \"Ankush worked at Facebook\"]\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"vectorstore = Chroma.from_texts(\n",
" texts,\n",
" embeddings,\n",
")\n",
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": 1})"
]
},
{
"cell_type": "markdown",
"id": "57396e23-c192-4d97-846b-5eacea4d6b8d",
"metadata": {},
"source": [
"## Query analysis\n",
"\n",
"We will use function calling to structure the output. We will let it return multiple queries."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Search(BaseModel):\n",
" \"\"\"Search over a database of job records.\"\"\"\n",
"\n",
" queries: List[str] = Field(\n",
" ...,\n",
" description=\"Distinct queries to search for\",\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.\n",
" warn_beta(\n"
]
}
],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"output_parser = PydanticToolsParser(tools=[Search])\n",
"\n",
"system = \"\"\"You have the ability to issue search queries to get information to help answer user information.\n",
"\n",
"If you need to look up two distinct pieces of information, you are allowed to do that!\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "b9564078",
"metadata": {},
"source": [
"We can see that this allows for creating multiple queries"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bc1d3863",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(queries=['Harrison work location'])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"where did Harrison Work\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "af62af17-4f90-4dbd-a8b4-dfff51f1db95",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(queries=['Harrison work place', 'Ankush work place'])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"where did Harrison and ankush Work\")"
]
},
{
"cell_type": "markdown",
"id": "c7c65b2f-7881-45fc-a47b-a4eaaf48245f",
"metadata": {},
"source": [
"## Retrieval with query analysis\n",
"\n",
"So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asyncronously - this will let us loop over the queries and not get blocked on the response time."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1e047d87",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import chain"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "8dac7866",
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
"async def custom_chain(question):\n",
" response = await query_analyzer.ainvoke(question)\n",
" docs = []\n",
" for query in response.queries:\n",
" new_docs = await retriever.ainvoke(query)\n",
" docs.extend(new_docs)\n",
" # You probably want to think about reranking or deduplicating documents here\n",
" # But that is a separate topic\n",
" return docs"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "232ad8a7-7990-4066-9228-d35a555f7293",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Harrison worked at Kensho')]"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await custom_chain.ainvoke(\"where did Harrison Work\")"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "28e14ba5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Harrison worked at Kensho'),\n",
" Document(page_content='Ankush worked at Facebook')]"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await custom_chain.ainvoke(\"where did Harrison and ankush Work\")"
]
},
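{
"cell_type": "markdown",
"id": "e6a1b2c3",
"metadata": {},
"source": [
"Note that `custom_chain` still awaits each retrieval one at a time, so the queries run sequentially. Below is a minimal sketch of a truly concurrent variant (hypothetical `custom_chain_concurrent` name, reusing the `query_analyzer` and `retriever` from above) that also deduplicates the combined results by page content:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f7b2c3d4",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"\n",
"@chain\n",
"async def custom_chain_concurrent(question):\n",
"    response = await query_analyzer.ainvoke(question)\n",
"    # Run all retrievals concurrently instead of awaiting them one by one\n",
"    doc_lists = await asyncio.gather(\n",
"        *(retriever.ainvoke(query) for query in response.queries)\n",
"    )\n",
"    # Flatten and deduplicate by page content, preserving order\n",
"    seen = set()\n",
"    docs = []\n",
"    for doc in (d for sublist in doc_lists for d in sublist):\n",
"        if doc.page_content not in seen:\n",
"            seen.add(doc.page_content)\n",
"            docs.append(doc)\n",
"    return docs"
]
},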
{
"cell_type": "code",
"execution_count": null,
"id": "88de5a36",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,331 @@
{
"cells": [
{
"cell_type": "raw",
"id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 5\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# Handle Multiple Retrievers\n",
"\n",
"Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select the retriever to do. We will show a simple example (using mock data) of how to do that."
]
},
{
"cell_type": "markdown",
"id": "a4079b57-4369-49c9-b2ad-c809b5408d7e",
"metadata": {},
"source": [
"## Setup\n",
"#### Install dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain langchain-community langchain-openai chromadb"
]
},
{
"cell_type": "markdown",
"id": "79d66a45-a05c-4d22-b011-b1cdbdfc8f9c",
"metadata": {},
"source": [
"#### Set environment variables\n",
"\n",
"We'll use OpenAI in this example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "c20b48b8-16d7-4089-bc17-f2d240b3935a",
"metadata": {},
"source": [
"### Create Index\n",
"\n",
"We will create a vectorstore over fake information."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "1f621694",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"texts = [\"Harrison worked at Kensho\"]\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"vectorstore = Chroma.from_texts(texts, embeddings, collection_name=\"harrison\")\n",
"retriever_harrison = vectorstore.as_retriever(search_kwargs={\"k\": 1})\n",
"\n",
"texts = [\"Ankush worked at Facebook\"]\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"vectorstore = Chroma.from_texts(texts, embeddings, collection_name=\"ankush\")\n",
"retriever_ankush = vectorstore.as_retriever(search_kwargs={\"k\": 1})"
]
},
{
"cell_type": "markdown",
"id": "57396e23-c192-4d97-846b-5eacea4d6b8d",
"metadata": {},
"source": [
"## Query analysis\n",
"\n",
"We will use function calling to structure the output. We will let it return multiple queries."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Search(BaseModel):\n",
" \"\"\"Search for information about a person.\"\"\"\n",
"\n",
" query: str = Field(\n",
" ...,\n",
" description=\"Query to look up\",\n",
" )\n",
" person: str = Field(\n",
" ...,\n",
" description=\"Person to look things up for. Should be `HARRISON` or `ANKUSH`.\",\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"output_parser = PydanticToolsParser(tools=[Search])\n",
"\n",
"system = \"\"\"You have the ability to issue search queries to get information to help answer user information.\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "b9564078",
"metadata": {},
"source": [
"We can see that this allows for routing between retrievers"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "bc1d3863",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='workplace', person='HARRISON')"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"where did Harrison Work\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "af62af17-4f90-4dbd-a8b4-dfff51f1db95",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='workplace', person='ANKUSH')"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"where did ankush Work\")"
]
},
{
"cell_type": "markdown",
"id": "c7c65b2f-7881-45fc-a47b-a4eaaf48245f",
"metadata": {},
"source": [
"## Retrieval with query analysis\n",
"\n",
"So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "1e047d87",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import chain"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "4ec0c7fe",
"metadata": {},
"outputs": [],
"source": [
"retrievers = {\n",
" \"HARRISON\": retriever_harrison,\n",
" \"ANKUSH\": retriever_ankush,\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "8dac7866",
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
"def custom_chain(question):\n",
" response = query_analyzer.invoke(question)\n",
" retriever = retrievers[response.person]\n",
" return retriever.invoke(response.query)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "232ad8a7-7990-4066-9228-d35a555f7293",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Harrison worked at Kensho')]"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"custom_chain.invoke(\"where did Harrison Work\")"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "28e14ba5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Ankush worked at Facebook')]"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"custom_chain.invoke(\"where did ankush Work\")"
]
},
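{
"cell_type": "markdown",
"id": "a1c2d3e4",
"metadata": {},
"source": [
"Since the `person` value comes from the LLM, in practice you may want to guard against values that aren't keys in the `retrievers` dict. A minimal defensive sketch (hypothetical `safe_chain` name):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3d4e5f6",
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
"def safe_chain(question):\n",
"    response = query_analyzer.invoke(question)\n",
"    retriever = retrievers.get(response.person)\n",
"    if retriever is None:\n",
"        # The model produced a person we don't have an index for\n",
"        raise ValueError(f\"Unknown person: {response.person!r}\")\n",
"    return retriever.invoke(response.query)"
]
},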
{
"cell_type": "code",
"execution_count": null,
"id": "33338d4f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,328 @@
{
"cells": [
{
"cell_type": "raw",
"id": "df7d42b9-58a6-434c-a2d7-0b61142f6d3e",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# Handle Cases Where No Queries are Generated\n",
"\n",
"Sometimes, a query analysis technique may allow for any number of queries to be generated - including no queries! In this case, our overall chain will need to inspect the result of the query analysis before deciding whether to call the retriever or not.\n",
"\n",
"We will use mock data for this example."
]
},
{
"cell_type": "markdown",
"id": "a4079b57-4369-49c9-b2ad-c809b5408d7e",
"metadata": {},
"source": [
"## Setup\n",
"#### Install dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain langchain-community langchain-openai chromadb"
]
},
{
"cell_type": "markdown",
"id": "79d66a45-a05c-4d22-b011-b1cdbdfc8f9c",
"metadata": {},
"source": [
"#### Set environment variables\n",
"\n",
"We'll use OpenAI in this example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "c20b48b8-16d7-4089-bc17-f2d240b3935a",
"metadata": {},
"source": [
"### Create Index\n",
"\n",
"We will create a vectorstore over fake information."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1f621694",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"texts = [\"Harrison worked at Kensho\"]\n",
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"vectorstore = Chroma.from_texts(\n",
" texts,\n",
" embeddings,\n",
")\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "57396e23-c192-4d97-846b-5eacea4d6b8d",
"metadata": {},
"source": [
"## Query analysis\n",
"\n",
"We will use function calling to structure the output. However, we will configure the LLM such that is doesn't NEED to call the function representing a search query (should it decide not to). We will also then use a prompt to do query analysis that explicitly lays when it should and shouldn't make a search."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Search(BaseModel):\n",
" \"\"\"Search over a database of job records.\"\"\"\n",
"\n",
" query: str = Field(\n",
" ...,\n",
" description=\"Similarity search query applied to job record.\",\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"system = \"\"\"You have the ability to issue search queries to get information to help answer user information.\n",
"\n",
"You do not NEED to look things up. If you don't need to, then just respond normally.\"\"\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.bind_tools([Search])\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"id": "b9564078",
"metadata": {},
"source": [
"We can see that by invoking this we get an message that sometimes - but not always - returns a tool call."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bc1d3863",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ZnoVX4j9Mn8wgChaORyd1cvq', 'function': {'arguments': '{\"query\":\"Harrison\"}', 'name': 'Search'}, 'type': 'function'}]})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"where did Harrison Work\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "af62af17-4f90-4dbd-a8b4-dfff51f1db95",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_analyzer.invoke(\"hi!\")"
]
},
{
"cell_type": "markdown",
"id": "c7c65b2f-7881-45fc-a47b-a4eaaf48245f",
"metadata": {},
"source": [
"## Retrieval with query analysis\n",
"\n",
"So how would we include this in a chain? Let's look at an example below."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1e047d87",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.runnables import chain\n",
"\n",
"output_parser = PydanticToolsParser(tools=[Search])"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "8dac7866",
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
"def custom_chain(question):\n",
" response = query_analyzer.invoke(question)\n",
" if \"tool_calls\" in response.additional_kwargs:\n",
" query = output_parser.invoke(response)\n",
" docs = retriever.invoke(query[0].query)\n",
" # Could add more logic - like another LLM call - here\n",
" return docs\n",
" else:\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "232ad8a7-7990-4066-9228-d35a555f7293",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1\n"
]
},
{
"data": {
"text/plain": [
"[Document(page_content='Harrison worked at Kensho')]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"custom_chain.invoke(\"where did Harrison Work\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "28e14ba5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"custom_chain.invoke(\"hi!\")"
]
},
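{
"cell_type": "markdown",
"id": "c5e6f7a8",
"metadata": {},
"source": [
"As the comment in `custom_chain` suggests, we could add another LLM call after retrieval to synthesize an answer from the documents. A minimal sketch of that (hypothetical `answer_prompt` and `qa_chain` names, reusing the `llm`, `retriever`, and `output_parser` from above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7f8a9b0",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"answer_prompt = ChatPromptTemplate.from_template(\n",
"    \"\"\"Answer the question using only the provided context.\n",
"\n",
"Context: {context}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"\n",
"\n",
"@chain\n",
"def qa_chain(question):\n",
"    response = query_analyzer.invoke(question)\n",
"    if \"tool_calls\" not in response.additional_kwargs:\n",
"        # No search was needed - just return the model's direct reply\n",
"        return response.content\n",
"    query = output_parser.invoke(response)\n",
"    docs = retriever.invoke(query[0].query)\n",
"    context = \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"    return (answer_prompt | llm | StrOutputParser()).invoke(\n",
"        {\"context\": context, \"question\": question}\n",
"    )"
]
},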
{
"cell_type": "code",
"execution_count": null,
"id": "33338d4f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -17,29 +17,26 @@
"source": [
"# Query analysis\n",
"\n",
"In any question answering application we need to retrieve information based on a user question. The simplest way to do this involves passing the user question directly to a retriever. However, in many cases it can improve performance by \"optimizing\" the query in some way. This is typically done by an LLM. Specifically, this involves passing the raw question (or list of messages) into an LLM and returning one or more optimized queries, which typically contain a string and optionally other structured information.\n",
"\"Search\" powers many use cases - including the \"retrieval\" part of Retrieval Augmented Generation. The simplest way to do this involves passing the user question directly to a retriever. In order to improve performance, you can also \"optimize\" the query in some way using *query analysis*. This is traditionally done by rule-based techniques, but with the rise of LLMs it is becoming more popular and more feasible to use an LLM for this. Specifically, this involves passing the raw question (or list of messages) into an LLM and returning one or more optimized queries, which typically contain a string and optionally other structured information.\n",
"\n",
"![Query Analysis](../../../static/img/query_analysis.png)\n",
"\n",
"## Background Information\n",
"\n",
"This guide assumes familiarity with the basic building blocks of a simple RAG application outlined in the [Q&A with RAG Quickstart](/docs/use_cases/question_answering/quickstart). Please read and understand that before diving in here.\n",
"\n",
"## Problems Solved\n",
"\n",
"Query analysis helps solves problems where the user question is not optimal to pass into the retriever. This can be the case when:\n",
"Query analysis helps to optimize the search query to send to the retriever. This can be the case when:\n",
"\n",
"* The retriever supports searches and filters against specific fields of the data, and user input could be referring to any of these fields,\n",
"* The user input contains multiple distinct questions in it,\n",
"* To get the relevant information multiple queries are needed,\n",
"* To retrieve relevant information multiple queries are needed,\n",
"* Search quality is sensitive to phrasing,\n",
"* There are multiple retrievers that could be searched over, and the user input could be reffering to any of them.\n",
"\n",
"Note that different problems will require different solutions. In order to determine what query analysis technique you should use, you will want to understand exactly what the problem with your current retrieval system is. This is best done by looking at failure data points of your current application and identifying common themes. Only once you know what your problems are can you begin to solve them.\n",
"Note that different problems will require different solutions. In order to determine what query analysis technique you should use, you will want to understand exactly what is the problem with your current retrieval system. This is best done by looking at failure data points of your current application and identifying common themes. Only once you know what your problems are can you begin to solve them.\n",
"\n",
"## Quickstart\n",
"\n",
"Head to the [quickstart](/docs/use_cases/query_analysis/quickstart) to see how to use query analysis in a basic end-to-end example. This will cover creating a simple index, showing a failure mode that occur when passing a raw user question to that index, and then an example of how query analysis can help address that issue. There are MANY different query analysis techniques (see below) and this end-to-end example will not show all of them.\n",
"Head to the [quickstart](/docs/use_cases/query_analysis/quickstart) to see how to use query analysis in a basic end-to-end example. This will cover creating a search engine over the content of LangChain YouTube videos, showing a failure mode that occurs when passing a raw user question to that index, and then an example of how query analysis can help address that issue. The quickstart focuses on **query structuring**. Below are additional query analysis techniques that may be relevant based on your data and use case\n",
"\n",
"\n",
"## Techniques\n",
@ -57,13 +54,21 @@
"\n",
"* [Add examples to prompt](/docs/use_cases/query_analysis/few_shot): As our query analysis becomes more complex, adding examples to the prompt can meaningfully improve performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4581f8b1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-2"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -75,7 +80,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -17,7 +17,7 @@
"source": [
"# Quickstart\n",
"\n",
"This example will show how to use query analysis in a basic end-to-end example. This will cover creating a simple index, showing a failure mode that occur when passing a raw user question to that index, and then an example of how query analysis can help address that issue. There are MANY different query analysis techniques and this end-to-end example will not show all of them.\n",
"This page will show how to use query analysis in a basic end-to-end example. This will cover creating a simple search engine, showing a failure mode that occurs when passing a raw user question to that search, and then an example of how query analysis can help address that issue. There are MANY different query analysis techniques and this end-to-end example will not show all of them.\n",
"\n",
"For the purpose of this example, we will do retrieval over the LangChain YouTube videos."
]
@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain langchain-community langchain-openai youtube-transcript-api pytube faiss-cpu"
"# %pip install -qU langchain langchain-community langchain-openai youtube-transcript-api pytube chromadb"
]
},
{
@ -337,7 +337,7 @@
"id": "4790e2db-3c6e-440b-b6e8-ebdd6600fda5",
"metadata": {},
"source": [
"Our first result is from 2024, and not very relevant to the input. Since we're just searching against document contents, there's no way for the results to be filtered on any document attributes.\n",
"Our first result is from 2024 (despite us asking for videos from 2023), and not very relevant to the input. Since we're just searching against document contents, there's no way for the results to be filtered on any document attributes.\n",
"\n",
"This is just one failure mode that can arise. Let's now take a look at how a basic form of query analysis can fix it!"
]
@ -349,7 +349,7 @@
"source": [
"## Query analysis\n",
"\n",
"To handle these failure modes we'll do some query structuring. This will involve defining a **query schema** that contains some date filters and use a function-calling model to convert a user question into a structured queries. \n",
"We can use query analysis to improve the results of retrieval. This will involve defining a **query schema** that contains some date filters and use a function-calling model to convert a user question into a structured queries. \n",
"\n",
"### Query schema\n",
"In this case we'll have explicit min and max attributes for publication date so that it can be filtered on."
@ -384,7 +384,7 @@
"source": [
"### Query generation\n",
"\n",
"To convert user questions to structured queries we'll make use of OpenAI's function-calling API. Specifically we'll use the new [ChatModel.with_structured_output()](/docs/guides/structured_output) constructor to handle passing the schema to the model and parsing the output."
"To convert user questions to structured queries we'll make use of OpenAI's tool-calling API. Specifically we'll use the new [ChatModel.with_structured_output()](/docs/guides/structured_output) constructor to handle passing the schema to the model and parsing the output."
]
},
{
@ -482,7 +482,7 @@
"\n",
"Our query analysis looks pretty good; now let's try using our generated queries to actually perform retrieval. \n",
"\n",
"**Note:** in our example, we specified `tool_choice=\"Search\"`. This will force the LLM to call one - and only one - function, meaning that we will always have one optimized query to look up. Note that this is not always the case - see other guides for how to deal with situations when no - or multiple - optmized queries are returned."
"**Note:** in our example, we specified `tool_choice=\"Search\"`. This will force the LLM to call one - and only one - tool, meaning that we will always have one optimized query to look up. Note that this is not always the case - see other guides for how to deal with situations when no - or multiple - optmized queries are returned."
]
},
{
@ -583,7 +583,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -15,7 +15,7 @@
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# HyDE\n",
"# Hypothetical Document Embeddings\n",
"\n",
"If we're working with a similarity search-based index, like a vector store, then searching on raw questions may not work well because their embeddings may not be very similar to those of the relevant documents. Instead it might help to have the model generate a hypothetical relevant document, and then use that to perform similarity search. This is the key idea behind [Hypothetical Document Embedding, or HyDE](https://arxiv.org/pdf/2212.10496.pdf).\n",
"\n",
@ -252,9 +252,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-2"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -266,7 +266,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -15,7 +15,7 @@
"id": "f2195672-0cab-4967-ba8a-c6544635547d",
"metadata": {},
"source": [
"# Step back prompting\n",
"# Step Back Prompting\n",
"\n",
"Sometimes search quality and model generations can be tripped up by the specifics of a question. One way to handle this is to first generate a more abstract, \"step back\" question and to query based on both the original and step back question.\n",
"\n",
@ -222,9 +222,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-2"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -236,7 +236,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,
