{ "cells": [ { "cell_type": "markdown", "id": "efc5be67", "metadata": {}, "source": [ "# Retrieval Question Answering with Sources\n", "\n", "This notebook goes over how to do question-answering with sources over an Index. It does this by using the `RetrievalQAWithSourcesChain`, which does the lookup of the documents from an Index. " ] }, { "cell_type": "code", "execution_count": 1, "id": "1c613960", "metadata": {}, "outputs": [], "source": [ "from langchain.embeddings.openai import OpenAIEmbeddings\n", "from langchain.embeddings.cohere import CohereEmbeddings\n", "from langchain.text_splitter import CharacterTextSplitter\n", "from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n", "from langchain.vectorstores import Chroma" ] }, { "cell_type": "code", "execution_count": 2, "id": "17d1306e", "metadata": {}, "outputs": [], "source": [ "with open('../../state_of_the_union.txt') as f:\n", " state_of_the_union = f.read()\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "texts = text_splitter.split_text(state_of_the_union)\n", "\n", "embeddings = OpenAIEmbeddings()" ] }, { "cell_type": "code", "execution_count": 3, "id": "0e745d99", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running Chroma using direct local API.\n", "Using DuckDB in-memory for database. Data will be transient.\n" ] } ], "source": [ "docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{\"source\": f\"{i}-pl\"} for i in range(len(texts))])" ] }, { "cell_type": "code", "execution_count": 4, "id": "8aa571ae", "metadata": {}, "outputs": [], "source": [ "from langchain.chains import RetrievalQAWithSourcesChain" ] }, { "cell_type": "code", "execution_count": 5, "id": "aa859d4c", "metadata": {}, "outputs": [], "source": [ "from langchain import OpenAI\n", "\n", "chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type=\"stuff\", retriever=docsearch.as_retriever())" ] }, { "cell_type": "code", "execution_count": 6, "id": "8ba36fa7", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\\n',\n", " 'sources': '31-pl'}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain({\"question\": \"What did the president say about Justice Breyer\"}, return_only_outputs=True)" ] }, { "cell_type": "markdown", "id": "718ecbda", "metadata": {}, "source": [ "## Chain Type\n", "You can easily specify different chain types to load and use in the RetrievalQAWithSourcesChain chain. For a more detailed walkthrough of these types, please see [this notebook](qa_with_sources.ipynb).\n", "\n", "There are two ways to load different chain types. First, you can specify the chain type argument in the `from_chain_type` method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to `map_reduce`." ] }, { "cell_type": "code", "execution_count": 7, "id": "8b35b30a", "metadata": {}, "outputs": [], "source": [ "chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type=\"map_reduce\", retriever=docsearch.as_retriever())" ] }, { "cell_type": "code", "execution_count": 8, "id": "58bd424f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'answer': ' The president said \"Justice Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. 
, { "cell_type": "markdown", "id": "21e14eed", "metadata": {}, "source": [ "The above approach makes it very simple to change the chain_type, but it doesn't provide much flexibility over the parameters of that chain type. If you want to control those parameters, you can load the chain directly (as shown in [this notebook](qa_with_sources.ipynb)) and then pass it to the `RetrievalQAWithSourcesChain` via the `combine_documents_chain` parameter. For example:" ] },
{ "cell_type": "code", "execution_count": 10, "id": "af35f0c6", "metadata": {}, "outputs": [], "source": [ "from langchain.chains.qa_with_sources import load_qa_with_sources_chain\n", "\n", "# Load the combine-documents chain directly so its parameters can be customized\n", "qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"stuff\")\n", "qa = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())" ] },
{ "cell_type": "code", "execution_count": 11, "id": "c91fdc8a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'answer': ' The president honored Justice Breyer for his service and mentioned his legacy of excellence.\\n',\n", " 'sources': '31-pl'}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "qa({\"question\": \"What did the president say about Justice Breyer\"}, return_only_outputs=True)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" }, "vscode": { "interpreter": { "hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103" } } }, "nbformat": 4, "nbformat_minor": 5 }