{
"cells": [
{
"cell_type": "markdown",
"id": "cf13f702",
"metadata": {},
"source": [
"# Summarization\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/use_cases/summarization.ipynb)\n",
"\n",
"## Use case\n",
"\n",
"Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. \n",
"\n",
"LLMs are a great tool for this given their proficiency in understanding and synthesizing text.\n",
"\n",
"In this walkthrough we'll go over how to perform document summarization using LLMs."
]
},
{
"cell_type": "markdown",
"id": "8e233997",
"metadata": {},
"source": [
"![Image description](/img/summarization_use_case_1.png)"
]
},
{
"cell_type": "markdown",
"id": "4715b4ff",
"metadata": {
"jp-MarkdownHeadingCollapsed": true
},
"source": [
"## Overview\n",
"\n",
"A central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:\n",
"\n",
"1. `Stuff`: Simply \"stuff\" all your documents into a single prompt. This is the simplest approach (see [here](/docs/modules/chains/document/stuff) for more on the `StuffDocumentsChains`, which is used for this method).\n",
"\n",
"2. `Map-reduce`: Summarize each document on it's own in a \"map\" step and then \"reduce\" the summaries into a final summary (see [here](/docs/modules/chains/document/map_reduce) for more on the `MapReduceDocumentsChain`, which is used for this method)."
]
},
{
"cell_type": "markdown",
"id": "08ec66bc",
"metadata": {},
"source": [
"![Image description](/img/summarization_use_case_2.png)"
]
},
{
"cell_type": "markdown",
"id": "bea785ac",
"metadata": {},
"source": [
"## Quickstart\n",
"\n",
"To give you a sneak preview, either pipeline can be wrapped in a single object: `load_summarize_chain`. \n",
"\n",
"Suppose we want to summarize a blog post. We can create this in a few lines of code.\n",
"\n",
"First set environment variables and install packages:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "578d6a90",
"metadata": {},
"outputs": [],
"source": [
"!pip install openai tiktoken chromadb langchain\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file\n",
"# import dotenv\n",
"\n",
"# dotenv.load_env()"
]
},
{
"cell_type": "markdown",
"id": "36138740",
"metadata": {},
"source": [
"We can use `chain_type=\"stuff\"`, especially if using larger context window models such as:\n",
"\n",
"* 16k token OpenAI `gpt-3.5-turbo-16k` \n",
"* 100k token Anthropic [Claude-2](https://www.anthropic.com/index/claude-2)\n",
"\n",
"We can also supply `chain_type=\"map_reduce\"` or `chain_type=\"refine\"` (read more [here](/docs/modules/chains/document/refine))."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fd271681",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and proof-of-concept examples of LLM-powered agents in various domains. It also highlights the challenges and limitations of using LLMs in agent systems.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.document_loaders import WebBaseLoader\n",
"from langchain.chains.summarize import load_summarize_chain\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"docs = loader.load()\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo-16k\")\n",
"chain = load_summarize_chain(llm, chain_type=\"stuff\")\n",
"\n",
"chain.run(docs)"
]
},
{
"cell_type": "markdown",
"id": "615b36e1",
"metadata": {},
"source": [
"## Option 1. Stuff\n",
"\n",
"When we use `load_summarize_chain` with `chain_type=\"stuff\"`, we will use the [StuffDocumentsChain](/docs/modules/chains/document/stuff).\n",
"\n",
"The chain will take a list of documents, inserts them all into a prompt, and passes that prompt to an LLM:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ef45585d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and examples of proof-of-concept demos, highlighting the challenges and limitations of LLM-powered agents. It also includes references to related research papers and provides a citation for the article.\n"
]
}
],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n",
"\n",
"# Define prompt\n",
"prompt_template = \"\"\"Write a concise summary of the following:\n",
"\"{text}\"\n",
"CONCISE SUMMARY:\"\"\"\n",
"prompt = PromptTemplate.from_template(prompt_template)\n",
"\n",
"# Define LLM chain\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo-16k\")\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"# Define StuffDocumentsChain\n",
"stuff_chain = StuffDocumentsChain(\n",
" llm_chain=llm_chain, document_variable_name=\"text\"\n",
")\n",
"\n",
"docs = loader.load()\n",
"print(stuff_chain.run(docs))"
]
},
{
"cell_type": "markdown",
"id": "4e4e4a43",
"metadata": {},
"source": [
"Great! We can see that we reproduce the earlier result using the `load_summarize_chain`.\n",
"\n",
"### Go deeper\n",
"\n",
"* You can easily customize the prompt. \n",
"* You can easily try different LLMs, (e.g., [Claude](/docs/integrations/chat/anthropic)) via the `llm` parameter."
]
},
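{
"cell_type": "markdown",
"id": "7c3b1a9e",
"metadata": {},
"source": [
"For example, here is a minimal sketch of both customizations at once: a bullet-point prompt passed to `load_summarize_chain` and Anthropic's chat model swapped in via the `llm` parameter. The prompt wording and model choice are illustrative; this assumes the `anthropic` package is installed and `ANTHROPIC_API_KEY` is set."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d4f2b61",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch (not run here): a custom prompt plus a different LLM.\n",
"# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.\n",
"from langchain.chat_models import ChatAnthropic\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains.summarize import load_summarize_chain\n",
"\n",
"bullet_template = \"\"\"Write a bullet point summary of the following:\n",
"\"{text}\"\n",
"BULLET POINT SUMMARY:\"\"\"\n",
"bullet_prompt = PromptTemplate.from_template(bullet_template)\n",
"\n",
"claude = ChatAnthropic(temperature=0)\n",
"chain = load_summarize_chain(claude, chain_type=\"stuff\", prompt=bullet_prompt)\n",
"# chain.run(docs)"
]
},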
{
"cell_type": "markdown",
"id": "ad6cabee",
"metadata": {},
"source": [
"## Option 2. Map-Reduce\n",
"\n",
"Let's unpack the map reduce approach. For this, we'll first map each document to an individual summary using an `LLMChain`. Then we'll use a `ReduceDocumentsChain` to combine those summaries into a single global summary.\n",
" \n",
"First, we specfy the LLMChain to use for mapping each document to an individual summary:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a1e6773c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.mapreduce import MapReduceChain\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains import ReduceDocumentsChain, MapReduceDocumentsChain\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"# Map\n",
"map_template = \"\"\"The following is a set of documents\n",
"{docs}\n",
"Based on this list of docs, please identify the main themes \n",
"Helpful Answer:\"\"\"\n",
"map_prompt = PromptTemplate.from_template(map_template)\n",
"map_chain = LLMChain(llm=llm, prompt=map_prompt)"
]
},
{
"cell_type": "markdown",
"id": "bee3c331",
"metadata": {},
"source": [
"The `ReduceDocumentsChain` handles taking the document mapping results and reducing them into a single output. It wraps a generic `CombineDocumentsChain` (like `StuffDocumentsChain`) but adds the ability to collapse documents before passing it to the `CombineDocumentsChain` if their cumulative size exceeds `token_max`. In this example, we can actually re-use our chain for combining our docs to also collapse our docs.\n",
"\n",
"So if the cumulative number of tokens in our mapped documents exceeds 4000 tokens, then we'll recursively pass in the documents in batches of < 4000 tokens to our `StuffDocumentsChain` to create batched summaries. And once those batched summaries are cumulatively less than 4000 tokens, we'll pass them all one last time to the `StuffDocumentsChain` to create the final summary."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1edb1b0d",
"metadata": {},
"outputs": [],
"source": [
"# Reduce\n",
"reduce_template = \"\"\"The following is set of summaries:\n",
"{doc_summaries}\n",
"Take these and distill it into a final, consolidated summary of the main themes. \n",
"Helpful Answer:\"\"\"\n",
"reduce_prompt = PromptTemplate.from_template(reduce_template)\n",
"reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)\n",
"\n",
"# Takes a list of documents, combines them into a single string, and passes this to an LLMChain\n",
"combine_documents_chain = StuffDocumentsChain(\n",
" llm_chain=reduce_chain, document_variable_name=\"doc_summaries\"\n",
")\n",
"\n",
"# Combines and iteravely reduces the mapped documents\n",
"reduce_documents_chain = ReduceDocumentsChain(\n",
" # This is final chain that is called.\n",
" combine_documents_chain=combine_documents_chain,\n",
" # If documents exceed context for `StuffDocumentsChain`\n",
" collapse_documents_chain=combine_documents_chain,\n",
" # The maximum number of tokens to group documents into.\n",
" token_max=4000,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "fdb5ae1a",
"metadata": {},
"source": [
"Combining our map and reduce chains into one:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "22f1cdc2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 1003, which is longer than the specified 1000\n"
]
}
],
"source": [
"# Combining documents by mapping a chain over them, then combining results\n",
"map_reduce_chain = MapReduceDocumentsChain(\n",
" # Map chain\n",
" llm_chain=map_chain,\n",
" # Reduce chain\n",
" reduce_documents_chain=reduce_documents_chain,\n",
" # The variable name in the llm_chain to put the documents in\n",
" document_variable_name=\"docs\",\n",
" # Return the results of the map steps in the output\n",
" return_intermediate_steps=False,\n",
")\n",
"\n",
"text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
" chunk_size=1000, chunk_overlap=0\n",
")\n",
"split_docs = text_splitter.split_documents(docs)"
]
},
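{
"cell_type": "markdown",
"id": "3e8a7c55",
"metadata": {},
"source": [
"As an optional sanity check (a small sketch, not part of the original pipeline), we can count the chunks and their approximate token total to get a feel for whether the collapse step in `ReduceDocumentsChain` is likely to be triggered:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2d0e4a7",
"metadata": {},
"outputs": [],
"source": [
"# Optional: inspect the split documents before running the map-reduce chain.\n",
"# `get_num_tokens` gives an approximate per-chunk token count for the chat model.\n",
"print(f\"{len(split_docs)} chunks\")\n",
"print(sum(llm.get_num_tokens(doc.page_content) for doc in split_docs), \"tokens in total (approx.)\")"
]
},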
{
"cell_type": "code",
"execution_count": 7,
"id": "c7afb8c3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The main themes identified in the provided set of documents are:\n",
"\n",
"1. LLM-powered autonomous agent systems: The documents discuss the concept of building autonomous agents with large language models (LLMs) as the core controller. They explore the potential of LLMs beyond content generation and present them as powerful problem solvers.\n",
"\n",
"2. Components of the agent system: The documents outline the key components of LLM-powered agent systems, including planning, memory, and tool use. Each component is described in detail, highlighting its role in enhancing the agent's capabilities.\n",
"\n",
"3. Planning and task decomposition: The planning component focuses on task decomposition and self-reflection. The agent breaks down complex tasks into smaller subgoals and learns from past actions to improve future results.\n",
"\n",
"4. Memory and learning: The memory component includes short-term memory for in-context learning and long-term memory for retaining and recalling information over extended periods. The use of external vector stores for fast retrieval is also mentioned.\n",
"\n",
"5. Tool use and external APIs: The agent learns to utilize external APIs for accessing additional information, code execution, and proprietary sources. This enhances the agent's knowledge and problem-solving abilities.\n",
"\n",
"6. Case studies and proof-of-concept examples: The documents provide case studies and examples to demonstrate the application of LLM-powered agents in scientific discovery, generative simulations, and other domains. These examples serve as proof-of-concept for the effectiveness of the agent system.\n",
"\n",
"7. Challenges and limitations: The documents mention challenges associated with building LLM-powered autonomous agents, such as the limitations of finite context length, difficulties in long-term planning, and reliability issues with natural language interfaces.\n",
"\n",
"8. Citation and references: The documents include a citation and reference section for acknowledging the sources and inspirations for the concepts discussed.\n",
"\n",
"Overall, the main themes revolve around the development and capabilities of LLM-powered autonomous agent systems, including their components, planning and task decomposition, memory and learning mechanisms, tool use and external APIs, case studies and proof-of-concept examples, challenges and limitations, and the importance of proper citation and references.\n"
]
}
],
"source": [
"print(map_reduce_chain.run(split_docs))"
]
},
{
"cell_type": "markdown",
"id": "e62c21cf",
"metadata": {},
"source": [
"### Go deeper\n",
" \n",
"**Customization** \n",
"\n",
"* As shown above, you can customize the LLMs and prompts for map and reduce stages.\n",
"\n",
"**Real-world use-case**\n",
"\n",
"* See [this blog post](https://blog.langchain.dev/llms-to-improve-documentation/) case-study on analyzing user interactions (questions about LangChain documentation)! \n",
"* The blog post and associated [repo](https://github.com/mendableai/QA_clustering) also introduce clustering as a means of summarization.\n",
"* This opens up a third path beyond the `stuff` or `map-reduce` approaches that is worth considering.\n",
"\n",
"![Image description](/img/summarization_use_case_3.png)"
]
},
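{
"cell_type": "markdown",
"id": "c1f5d8b2",
"metadata": {},
"source": [
"Below is a rough sketch of that clustering idea (illustrative only, not the exact pipeline from the linked repo): embed each chunk, cluster the embeddings, and summarize only one representative chunk per cluster. It assumes `scikit-learn` is installed, and the number of clusters (here 5) is arbitrary."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7a91c0d",
"metadata": {},
"outputs": [],
"source": [
"# A rough sketch of summarization via clustering (illustrative only).\n",
"# Requires: pip install scikit-learn\n",
"import numpy as np\n",
"from sklearn.cluster import KMeans\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"\n",
"# Embed each chunk\n",
"vectors = np.array(\n",
"    OpenAIEmbeddings().embed_documents([d.page_content for d in split_docs])\n",
")\n",
"\n",
"# Cluster the chunks and pick the chunk closest to each cluster center\n",
"num_clusters = 5\n",
"kmeans = KMeans(n_clusters=num_clusters, random_state=42, n_init=10).fit(vectors)\n",
"representative_idxs = sorted(\n",
"    int(np.argmin(np.linalg.norm(vectors - center, axis=1)))\n",
"    for center in kmeans.cluster_centers_\n",
")\n",
"representative_docs = [split_docs[i] for i in representative_idxs]\n",
"\n",
"# Summarize only the representative chunks with a stuff chain\n",
"summarize_llm = ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo-16k\")\n",
"print(load_summarize_chain(summarize_llm, chain_type=\"stuff\").run(representative_docs))"
]
},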
{
"cell_type": "markdown",
"id": "f08ff365",
"metadata": {},
"source": [
"## Option 3. Refine\n",
" \n",
"[Refine](/docs/modules/chains/document/refine) is similar to map-reduce:\n",
"\n",
"> The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.\n",
"\n",
"This can be easily run with the `chain_type=\"refine\"` specified."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "de1dc10e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The GPT-Engineer project aims to create a repository of code for specific tasks specified in natural language. It involves breaking down tasks into smaller components and seeking clarification from the user when needed. The project emphasizes the importance of implementing every detail of the architecture as code and provides guidelines for file organization, code structure, and dependencies. However, there are challenges in long-term planning and task decomposition, as well as the reliability of the natural language interface. The system has limited communication bandwidth and struggles to adjust plans when faced with unexpected errors. The reliability of model outputs is questionable, as formatting errors and rebellious behavior can occur. The conversation also includes instructions for writing the code, including laying out the core classes, functions, and methods, and providing the code in a markdown code block format. The user is reminded to ensure that the code is fully functional and follows best practices for file naming, imports, and types. The project is powered by LLM (Large Language Models) and incorporates prompting techniques from various research papers.'"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = load_summarize_chain(llm, chain_type=\"refine\")\n",
"chain.run(split_docs)"
]
},
{
"cell_type": "markdown",
"id": "5b46f44d",
"metadata": {},
"source": [
"It's also possible to supply a prompt and return intermediate steps."
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "f86c8072",
"metadata": {},
"outputs": [],
"source": [
"prompt_template = \"\"\"Write a concise summary of the following:\n",
"{text}\n",
"CONCISE SUMMARY:\"\"\"\n",
"prompt = PromptTemplate.from_template(prompt_template)\n",
"\n",
"refine_template = (\n",
" \"Your job is to produce a final summary\\n\"\n",
" \"We have provided an existing summary up to a certain point: {existing_answer}\\n\"\n",
" \"We have the opportunity to refine the existing summary\"\n",
" \"(only if needed) with some more context below.\\n\"\n",
" \"------------\\n\"\n",
" \"{text}\\n\"\n",
" \"------------\\n\"\n",
" \"Given the new context, refine the original summary in Italian\"\n",
" \"If the context isn't useful, return the original summary.\"\n",
")\n",
"refine_prompt = PromptTemplate.from_template(refine_template)\n",
"chain = load_summarize_chain(\n",
" llm=llm,\n",
" chain_type=\"refine\",\n",
" question_prompt=prompt,\n",
" refine_prompt=refine_prompt,\n",
" return_intermediate_steps=True,\n",
" input_key=\"input_documents\",\n",
" output_key=\"output_text\",\n",
")\n",
"result = chain({\"input_documents\": split_docs}, return_only_outputs=True)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "d9600b67-79d4-4f85-aba2-9fe81fa29f49",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"L'articolo discute il concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. Esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso di strumenti. Dimostrazioni di concetto come AutoGPT mostrano la possibilità di creare agenti autonomi con LLM come controller principale. Approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Tuttavia, ci sono sfide legate alla lunghezza del contesto, alla pianificazione a lungo termine e alla decomposizione delle attività. Inoltre, l'affidabilità dell'interfaccia di linguaggio naturale tra LLM e componenti esterni come la memoria e gli strumenti è incerta. Nonostante ciò, l'uso di LLM come router per indirizzare le richieste ai moduli esperti più adatti è stato proposto come architettura neuro-simbolica per agenti autonomi nel sistema MRKL. L'articolo fa riferimento a diverse pubblicazioni che approfondiscono l'argomento, tra cui Chain of Thought, Tree of Thoughts, LLM+P, ReAct, Reflexion, e MRKL Systems.\n"
]
}
],
"source": [
"print(result[\"output_text\"])"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "5f91a8eb-daa5-4191-ace4-01765801db3e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This article discusses the concept of building autonomous agents using LLM (large language model) as the core controller. The article explores the different components of an LLM-powered agent system, including planning, memory, and tool use. It also provides examples of proof-of-concept demos and highlights the potential of LLM as a general problem solver.\n",
"\n",
"Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono anche forniti esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente.\n",
"\n",
"Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono anche forniti esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Il nuovo contesto riguarda l'approccio Chain of Hindsight (CoH) che permette al modello di migliorare autonomamente i propri output attraverso un processo di apprendimento supervisionato. Viene anche presentato l'approccio Algorithm Distillation (AD) che applica lo stesso concetto alle traiettorie di apprendimento per compiti di reinforcement learning.\n"
]
}
],
"source": [
"print(\"\\n\\n\".join(result[\"intermediate_steps\"][:3]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ddd522e-30dc-4f6a-b993-c4f97e656c4f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}