mirror of
https://github.com/openai/openai-cookbook
synced 2024-11-08 01:10:29 +00:00
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Long Document Content Extraction\n",
"\n",
"GPT-3 can help us extract key figures, dates, or other bits of important content from documents that are too big to fit into the context window. One approach to solving this is to chunk the document up and process each chunk separately, before combining the answers into one list.\n",
"\n",
"In this notebook we'll run through this approach:\n",
"- Load in a long PDF and pull the text out\n",
"- Create a prompt to be used to extract key bits of information\n",
"- Chunk up our document and process each chunk to pull any answers out\n",
"- Combine them at the end\n",
"\n",
"We'll then extend this simple approach to three more difficult questions.\n",
"\n",
"## Approach\n",
"\n",
"- **Setup**: Take a PDF, a Formula 1 Financial Regulation document on Power Units, and extract the text from it for entity extraction. We'll use this to try to extract answers that are buried in the content.\n",
"- **Simple Entity Extraction**: Extract key bits of information from chunks of a document by:\n",
"    - Creating a template prompt with our questions and an example of the format it expects\n",
"    - Creating a function to take a chunk of text as input, combine it with the prompt and get a response\n",
"    - Running a script to chunk the text, extract answers and output them for parsing\n",
"- **Complex Entity Extraction**: Ask some more difficult questions which require tougher reasoning to work out"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install textract\n",
"!pip install tiktoken\n",
"!pip install openai"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import textract\n",
"import os\n",
"import openai\n",
"import tiktoken\n",
"\n",
"# Extract the raw text from the PDF using textract\n",
"text = textract.process('data/fia_f1_power_unit_financial_regulations_issue_1_-_2022-08-16.pdf', method='pdfminer').decode('utf-8')\n",
"\n",
"# Collapse double spaces and strip out newlines so the text is one clean string\n",
"clean_text = text.replace(\"  \", \" \").replace(\"\\n\", \"; \").replace(';',' ')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Simple Entity Extraction"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extract key pieces of information from this regulation document.\n",
"If a particular piece of information is not present, output \"Not specified\".\n",
"When you extract a key piece of information, include the closest page number.\n",
"Use the following format:\n",
"0. Who is the author\n",
"1. What is the amount of the \"Power Unit Cost Cap\" in USD, GBP and EUR\n",
"2. What is the value of External Manufacturing Costs in USD\n",
"3. What is the Capital Expenditure Limit in USD\n",
"\n",
"Document: \"\"\"<document>\"\"\"\n",
"\n",
"0. Who is the author: Tom Anderson (Page 1)\n",
"1.\n"
]
}
],
"source": [
"# Example prompt - the <document> placeholder will be swapped for each chunk's text\n",
"document = '<document>'\n",
"template_prompt = f'''Extract key pieces of information from this regulation document.\n",
"If a particular piece of information is not present, output \\\"Not specified\\\".\n",
"When you extract a key piece of information, include the closest page number.\n",
"Use the following format:\\n0. Who is the author\\n1. What is the amount of the \"Power Unit Cost Cap\" in USD, GBP and EUR\\n2. What is the value of External Manufacturing Costs in USD\\n3. What is the Capital Expenditure Limit in USD\\n\\nDocument: \\\"\\\"\\\"{document}\\\"\\\"\\\"\\n\\n0. Who is the author: Tom Anderson (Page 1)\\n1.'''\n",
"print(template_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"def create_chunks(text, n, tokenizer):\n",
"    \"\"\"Split text into chunks of roughly n tokens, preferring to end each chunk at a sentence boundary.\"\"\"\n",
"    tokens = tokenizer.encode(text)\n",
"    i = 0\n",
"    while i < len(tokens):\n",
"        # Find the nearest end of sentence within a range of 0.5 * n and 1.5 * n tokens\n",
"        j = min(i + int(1.5 * n), len(tokens))\n",
"        while j > i + int(0.5 * n):\n",
"            # Decode the tokens and check for full stop or newline\n",
"            chunk = tokenizer.decode(tokens[i:j])\n",
"            if chunk.endswith(\".\") or chunk.endswith(\"\\n\"):\n",
"                break\n",
"            j -= 1\n",
"        # If no end of sentence found, use n tokens as the chunk size\n",
"        if j == i + int(0.5 * n):\n",
"            j = min(i + n, len(tokens))\n",
"        yield tokens[i:j]\n",
"        i = j\n",
"\n",
"def extract_chunk(document, template_prompt):\n",
"    # Substitute the chunk's text into the template and query the model\n",
"    prompt = template_prompt.replace('<document>', document)\n",
"\n",
"    response = openai.Completion.create(\n",
"        model='text-davinci-003',\n",
"        prompt=prompt,\n",
"        temperature=0,\n",
"        max_tokens=1500,\n",
"        top_p=1,\n",
"        frequency_penalty=0,\n",
"        presence_penalty=0\n",
"    )\n",
"    # The prompt ends with \"1.\", so prepend it to reconstruct the first answer line\n",
"    return \"1.\" + response['choices'][0]['text']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initialise tokenizer\n",
"tokenizer = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"results = []\n",
"\n",
"chunks = create_chunks(clean_text, 1000, tokenizer)\n",
"text_chunks = [tokenizer.decode(chunk) for chunk in chunks]\n",
"\n",
"for chunk in text_chunks:\n",
"    results.append(extract_chunk(chunk, template_prompt))\n",
"    print(results[-1])\n"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"['1. What is the amount of the \"Power Unit Cost Cap\" in USD, GBP and EUR: USD 95,000,000 (Page 2); GBP 76,459,000 (Page 2); EUR 90,210,000 (Page 2)',\n",
" '2. What is the value of External Manufacturing Costs in USD: US Dollars 20,000,000 in respect of each of the Full Year Reporting Periods ending on 31 December 2023, 31 December 2024 and 31 December 2025, adjusted for Indexation (Page 10)',\n",
" '3. What is the Capital Expenditure Limit in USD: US Dollars 30,000,000 (Page 32)']"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"groups = [r.split('\\n') for r in results]\n",
"\n",
"# zip the groups together\n",
"zipped = list(zip(*groups))\n",
"zipped = [x for y in zipped for x in y if \"Not specified\" not in x and \"__\" not in x]\n",
"zipped"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Complex Entity Extraction"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extract key pieces of information from this regulation document.\n",
"If a particular piece of information is not present, output \"Not specified\".\n",
"When you extract a key piece of information, include the closest page number.\n",
"Use the following format:\n",
"0. Who is the author\n",
"1. How is a Minor Overspend Breach calculated\n",
"2. How is a Major Overspend Breach calculated\n",
"3. Which years do these financial regulations apply to\n",
"\n",
"Document: \"\"\"<document>\"\"\"\n",
"\n",
"0. Who is the author: Tom Anderson (Page 1)\n",
"1.\n"
]
}
],
"source": [
"# Example prompt - same structure as before, with harder questions\n",
"template_prompt = f'''Extract key pieces of information from this regulation document.\n",
"If a particular piece of information is not present, output \\\"Not specified\\\".\n",
"When you extract a key piece of information, include the closest page number.\n",
"Use the following format:\\n0. Who is the author\\n1. How is a Minor Overspend Breach calculated\\n2. How is a Major Overspend Breach calculated\\n3. Which years do these financial regulations apply to\\n\\nDocument: \\\"\\\"\\\"{document}\\\"\\\"\\\"\\n\\n0. Who is the author: Tom Anderson (Page 1)\\n1.'''\n",
"print(template_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['1. How is a Minor Overspend Breach calculated: A Minor Overspend Breach arises when a Power Unit Manufacturer submits its Full Year Reporting Documentation and Relevant Costs reported therein exceed the Power Unit Cost Cap by less than 5% (Page 24)',\n",
" '2. How is a Major Overspend Breach calculated: A Material Overspend Breach arises when a Power Unit Manufacturer submits its Full Year Reporting Documentation and Relevant Costs reported therein exceed the Power Unit Cost Cap by 5% or more (Page 25)',\n",
" '3. Which years do these financial regulations apply to: 2026 onwards (Page 1)',\n",
" '3. Which years do these financial regulations apply to: 2023, 2024, 2025, 2026 and subsequent Full Year Reporting Periods (Page 2)',\n",
" '3. Which years do these financial regulations apply to: 2022-2025 (Page 6)',\n",
" '3. Which years do these financial regulations apply to: 2023, 2024, 2025, 2026 and subsequent Full Year Reporting Periods (Page 10)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 14)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 16)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 19)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 21)',\n",
" '3. Which years do these financial regulations apply to: 2026 onwards (Page 26)',\n",
" '3. Which years do these financial regulations apply to: 2026 (Page 2)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 30)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 32)',\n",
" '3. Which years do these financial regulations apply to: 2023, 2024 and 2025 (Page 1)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 37)',\n",
" '3. Which years do these financial regulations apply to: 2026 onwards (Page 40)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 1)',\n",
" '3. Which years do these financial regulations apply to: 2026 to 2030 seasons (Page 46)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 47)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 1)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 1)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 56)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 1)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 16)',\n",
" '3. Which years do these financial regulations apply to: 2022 (Page 16)']"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"results = []\n",
"\n",
"for chunk in text_chunks:\n",
"    results.append(extract_chunk(chunk, template_prompt))\n",
"\n",
"groups = [r.split('\\n') for r in results]\n",
"\n",
"# zip the groups together\n",
"zipped = list(zip(*groups))\n",
"zipped = [x for y in zipped for x in y if \"Not specified\" not in x and \"__\" not in x]\n",
"zipped"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Consolidation\n",
"\n",
"We've been able to extract the first two answers safely, while the third was confounded by the date that appeared on every page, though the correct answer is in there as well.\n",
"\n",
"To tune this further you can consider experimenting with:\n",
"- A more descriptive or specific prompt\n",
"- If you have sufficient training data, fine-tuning a model to extract this set of outputs more reliably\n",
"- The way you chunk your data - we went for 1000 tokens with no overlap, but more intelligent chunking that splits the document into sections, overlaps chunks, or cuts at natural boundaries may get better results\n",
"\n",
"However, with minimal tuning we have now answered 6 questions of varying difficulty using the contents of a long document, and have a reusable approach that we can apply to any long document requiring entity extraction. We look forward to seeing what you can do with this!"
]
}
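,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As one example of the chunking experiments suggested above, a sliding window with overlap keeps the text near a chunk boundary visible in two consecutive chunks, so an answer split across a boundary is less likely to be missed. This is a minimal sketch, not tuned for this document - the function name and the 100-token overlap are arbitrary starting points:\n",
"\n",
"```python\n",
"# Sketch: fixed-size chunks where neighbouring chunks share `overlap` tokens\n",
"def create_overlapping_chunks(text, n, overlap, tokenizer):\n",
"    tokens = tokenizer.encode(text)\n",
"    i = 0\n",
"    while i < len(tokens):\n",
"        j = min(i + n, len(tokens))\n",
"        yield tokenizer.decode(tokens[i:j])\n",
"        if j == len(tokens):\n",
"            break\n",
"        i = j - overlap  # step back so the next chunk repeats the tail of this one\n",
"\n",
"# e.g. text_chunks = list(create_overlapping_chunks(clean_text, 1000, 100, tokenizer))\n",
"```"
]
}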
],
"metadata": {
"kernelspec": {
"display_name": "embed_retrieve",
"language": "python",
"name": "embed_retrieve"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"vscode": {
"interpreter": {
"hash": "5997d090960a54cd76552f75eca12ec3b416cf9d01a1a5af08ae48cf90878791"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}