"This notebook shows how we prepared a dataset of Wikipedia articles for search, used in [Question_answering_using_embeddings.ipynb](Question_answering_using_embeddings.ipynb).\n",
"\n",
"Procedure:\n",
"\n",
"0. Prerequisites: Import libraries, set API key (if needed)\n",
"1. Collect: We download a few hundred Wikipedia articles about the 2022 Olympics\n",
"2. Chunk: Documents are split into short, semi-self-contained sections to be embedded\n",
"3. Embed: Each section is embedded with the OpenAI API\n",
"4. Store: Embeddings are saved in a CSV file (for large datasets, use a vector database)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 0. Prerequisites\n",
"\n",
"### Import libraries"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import mwclient # for downloading example Wikipedia articles\n",
"import mwparserfromhell # for splitting Wikipedia articles into sections\n",
"import openai # for generating embeddings\n",
"import pandas as pd # for DataFrames to store article sections and embeddings\n",
"import re # for cutting <ref> links out of Wikipedia articles\n",
"import tiktoken # for counting tokens\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Install any missing libraries with `pip install` in your terminal, e.g., `pip install openai`.\n",
"(You can also do this in a notebook cell with `!pip install openai`.)\n",
"\n",
"If you install any libraries, be sure to restart the notebook kernel."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set API key (if needed)\n",
"\n",
"Note that the OpenAI library will try to read your API key from the `OPENAI_API_KEY` environment variable. If you haven't already, set this environment variable by following [these instructions](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Collect documents\n",
"\n",
"In this example, we'll download a few hundred Wikipedia articles related to the 2022 Winter Olympics."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found 731 article titles in Category:2022 Winter Olympics.\n"
]
}
],
"source": [
"# get Wikipedia pages about the 2022 Winter Olympics\n",
"CATEGORY_TITLE = \"Category:2022 Winter Olympics\"\n",
"WIKI_SITE = \"en.wikipedia.org\"\n",
"\n",
"def titles_from_category(category: mwclient.listing.Category, max_depth: int) -> set[str]:\n",
"    \"\"\"Return a set of page titles in a given Wiki category and its subcategories.\"\"\"\n",
"    titles = set()\n",
"    for cm in category.members():\n",
"        if type(cm) == mwclient.page.Page:\n",
"            titles.add(cm.name)\n",
"        elif isinstance(cm, mwclient.listing.Category) and max_depth > 0:\n",
"            titles.update(titles_from_category(cm, max_depth=max_depth - 1))\n",
"    return titles\n",
"\n",
"site = mwclient.Site(WIKI_SITE)\n",
"category_page = site.pages[CATEGORY_TITLE]\n",
"titles = titles_from_category(category_page, max_depth=1)  # max_depth=1 means we go one level deep in the category tree\n",
"print(f\"Found {len(titles)} article titles in {CATEGORY_TITLE}.\")\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Chunk documents\n",
"\n",
"Now that we have our reference documents, we need to prepare them for search.\n",
"\n",
"Because GPT can only read a limited amount of text at once, we'll split each document into chunks short enough to be read.\n",
"\n",
"For this specific example on Wikipedia articles, we'll:\n",
"- Discard less relevant-looking sections like External Links and Footnotes\n",
"- Clean up the text by removing reference tags (e.g., <ref>), whitespace, and super short sections\n",
"- Split each article into sections\n",
"- Prepend titles and subtitles to each section's text, to help GPT understand the context\n",
"- If a section is long (say, > 1,600 tokens), we'll recursively split it into smaller sections, trying to split along semantic boundaries like paragraphs"
]
},
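{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration of the cleanup step, stripping `<ref>` tags with `re` might look like this (a simplified sketch; `clean_section_text` is a hypothetical helper name, not the notebook's exact code):\n",
"\n",
"```python\n",
"import re\n",
"\n",
"def clean_section_text(text: str) -> str:\n",
"    \"\"\"Strip <ref>...</ref> citation tags and trim surrounding whitespace.\"\"\"\n",
"    text = re.sub(r\"<ref.*?</ref>\", \"\", text, flags=re.DOTALL)  # paired <ref> tags\n",
"    text = re.sub(r\"<ref[^>]*/>\", \"\", text)  # self-closing <ref name=... /> tags\n",
"    return text.strip()\n",
"\n",
"clean_section_text(\"Snow fell.<ref>cite</ref> Ice froze.\")  # 'Snow fell. Ice froze.'\n",
"```"
]
},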
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll recursively split long sections into smaller sections. There's no perfect recipe for splitting text into sections; here, we'll use a simple approach and limit sections to 1,600 tokens each, recursively halving any sections that are too long. To avoid cutting in the middle of useful sentences, we'll split along paragraph boundaries when possible."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# split sections into chunks of at most MAX_TOKENS tokens each\n",
"MAX_TOKENS = 1600\n",
"wikipedia_strings = []\n",
"for section in wikipedia_sections:\n",
"    wikipedia_strings.extend(split_strings_from_subsection(section, max_tokens=MAX_TOKENS))\n",
"\n",
"print(f\"{len(wikipedia_sections)} Wikipedia sections split into {len(wikipedia_strings)} strings.\")\n"
]
},
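{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The recursive halving strategy can be sketched as follows (a minimal illustration, not the notebook's full implementation; `split_to_fit` and the paragraph delimiter are assumptions):\n",
"\n",
"```python\n",
"import tiktoken\n",
"\n",
"encoding = tiktoken.get_encoding(\"cl100k_base\")  # tokenizer used by text-embedding-ada-002\n",
"\n",
"def num_tokens(text: str) -> int:\n",
"    \"\"\"Count tokens in a string.\"\"\"\n",
"    return len(encoding.encode(text))\n",
"\n",
"def split_to_fit(text: str, max_tokens: int = 1600) -> list[str]:\n",
"    \"\"\"Recursively halve text, preferring paragraph boundaries, until each piece fits.\"\"\"\n",
"    if num_tokens(text) <= max_tokens:\n",
"        return [text]\n",
"    paragraphs = text.split(\"\\n\\n\")\n",
"    if len(paragraphs) > 1:  # split at the middle paragraph boundary\n",
"        mid = len(paragraphs) // 2\n",
"        left = \"\\n\\n\".join(paragraphs[:mid])\n",
"        right = \"\\n\\n\".join(paragraphs[mid:])\n",
"    else:  # no paragraph boundary: fall back to halving by characters\n",
"        mid = len(text) // 2\n",
"        left, right = text[:mid], text[mid:]\n",
"    return split_to_fit(left, max_tokens) + split_to_fit(right, max_tokens)\n",
"```"
]
},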
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Lviv bid for the 2022 Winter Olympics\n",
"\n",
"==History==\n",
"\n",
"[[Image:Lwów - Rynek 01.JPG|thumb|right|200px|View of Rynok Square in Lviv]]\n",
"\n",
"On 27 May 2010, [[President of Ukraine]] [[Viktor Yanukovych]] stated during a visit to [[Lviv]] that Ukraine \"will start working on the official nomination of our country as the holder of the Winter Olympic Games in [[Carpathian Mountains|Carpathians]]\".\n",
"\n",
"In September 2012, [[government of Ukraine]] approved a document about the technical-economic substantiation of the national project \"Olympic Hope 2022\". This was announced by Vladyslav Kaskiv, the head of Ukraine´s Derzhinvestproekt (State investment project). The organizers announced on their website venue plans featuring Lviv as the host city and location for the \"ice sport\" venues, [[Volovets]] (around {{convert|185|km|mi|abbr=on}} from Lviv) as venue for the [[Alpine skiing]] competitions and [[Tysovets, Skole Raion|Tysovets]] (around {{convert|130|km|mi|abbr=on}} from Lviv) as venue for all other \"snow sport\" competitions. By March 2013 no other preparations than the feasibility study had been approved.\n",
"\n",
"On 24 October 2013, session of the Lviv City Council adopted a resolution \"About submission to the International Olympic Committee for nomination of city to participate in the procedure for determining the host city of Olympic and Paralympic Winter Games in 2022\".\n",
"\n",
"On 5 November 2013, it was confirmed that Lviv was bidding to host the [[2022 Winter Olympics]]. Lviv would host the ice sport events, while the skiing events would be held in the [[Carpathian]] mountains. This was the first bid Ukraine had ever submitted for an Olympic Games.\n",
"\n",
"On 30 June 2014, the International Olympic Committee announced \"Lviv will turn its attention to an Olympic bid for 2026, and not continue with its application for 2022. The decision comes as a result of the present political and economic circumstances in Ukraine.\"\n",
"\n",
"Ukraine's Deputy Prime Minister Oleksandr Vilkul said that the Winter Games \"will be an impetus not just for promotion of sports and tourism in Ukraine, but a very important component in the economic development of Ukraine, the attraction of the investments, the creation of new jobs, opening Ukraine to the world, returning Ukrainians working abroad to their motherland.\"\n",
"\n",
"Lviv was one of the host cities of [[UEFA Euro 2012]].\n"
]
}
],
"source": [
"# print example data\n",
"print(wikipedia_strings[1])\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Embed document chunks\n",
"\n",
"Now that we've split our library into shorter self-contained strings, we can compute embeddings for each.\n",
"\n",
"(For large embedding jobs, use a script like [api_request_parallel_processor.py](api_request_parallel_processor.py) to parallelize requests while throttling to stay under rate limits.)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Batch 0 to 999\n",
"Batch 1000 to 1999\n",
"Batch 2000 to 2999\n",
"Batch 3000 to 3999\n",
"Batch 4000 to 4999\n",
"Batch 5000 to 5999\n",
"Batch 6000 to 6999\n"
]
}
],
"source": [
"# calculate embeddings\n",
"EMBEDDING_MODEL = \"text-embedding-ada-002\" # OpenAI's best embeddings as of Apr 2023\n",
"BATCH_SIZE = 1000 # you can submit up to 2048 embedding inputs per request\n",
"\n",
"embeddings = []\n",
"for batch_start in range(0, len(wikipedia_strings), BATCH_SIZE):\n",