"<span style=\"color:orange; font-weight:bold\">Note: To answer questions based on text documents, we recommend the procedure in <a href=\"https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb\">Question Answering using Embeddings</a>. Some of the code below may rely on <a href=\"https://github.com/openai/openai-cookbook/tree/main/transition_guides_for_deprecated_API_endpoints\">deprecated API endpoints</a>.</span>"
"# 1. Collect Wikipedia data about Olympic Games 2020\n",
"\n",
"The idea of this project is to create a question answering model, based on a few paragraphs of provided text. Base GPT-3 models do a good job at answering questions when the answer is contained within the paragraph, however if the answer isn't contained, the base models tend to try their best to answer anyway, often leading to confabulated answers. \n",
"\n",
"To create a model which answers questions only if there is sufficient context for doing so, we first create a dataset of questions and answers based on paragraphs of text. In order to train the model to answer only when the answer is present, we also add adversarial examples, where the question doesn't match the context. In those cases, we ask the model to output \"No sufficient context for answering the question\". \n",
"\n",
"We will perform this task in three notebooks:\n",
"1. The first (this) notebook focuses on collecting recent data, which GPT-3 didn't see during it's pre-training. We picked the topic of Olympic Games 2020 (which actually took place in the summer of 2021), and downloaded 713 unique pages. We organized the dataset by individual sections, which will serve as context for asking and answering the questions.\n",
"2. The [second notebook](olympics-2-create-qa.ipynb) will utilize Davinci-instruct to ask a few questions based on a Wikipedia section, as well as answer those questions, based on that section.\n",
"3. The [third notebook](olympics-3-train-qa.ipynb) will utilize the dataset of context, question and answer pairs to additionally create adversarial questions and context pairs, where the question was not generated on that context. In those cases the model will be prompted to answer \"No sufficient context for answering the question\". We will also train a discriminator model, which predicts whether the question can be answered based on the context or not."
]
},
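The adversarial-example construction described above can be sketched as follows. This is an illustrative outline, not the notebooks' actual code: the function name `make_adversarial_pairs` and the triple-based data layout are assumptions for the sketch; only the fixed completion string comes from the project itself.

```python
import random

def make_adversarial_pairs(examples, seed=0):
    """Pair each question with a context it was NOT generated from.

    `examples` is a list of (context, question, answer) triples.
    Mismatched pairs get the fixed completion used in this project:
    "No sufficient context for answering the question".
    """
    rng = random.Random(seed)
    adversarial = []
    for i, (_, question, _) in enumerate(examples):
        # Pick a different example's context for this question
        j = rng.choice([k for k in range(len(examples)) if k != i])
        wrong_context = examples[j][0]
        adversarial.append(
            (wrong_context, question, "No sufficient context for answering the question")
        )
    return adversarial

examples = [
    ("Context about rowing.", "Who won the rowing final?", "Crew A"),
    ("Context about judo.", "Which judoka took gold?", "Athlete B"),
]
pairs = make_adversarial_pairs(examples)
```

In the actual pipeline, mixing such mismatched pairs into the fine-tuning data is what teaches the model to decline rather than confabulate.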
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.1 Data extraction using the wikipedia API\n",
"Extracting the data will take about half an hour, and processing will likely take about as much."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"909"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"import wikipedia\n",
"\n",
"\n",
"def filter_olympic_2020_titles(titles):\n",
" \"\"\"\n",
" Get the titles which are related to Olympic games hosted in 2020, given a list of titles\n",
" \"\"\"\n",
" titles = [title for title in titles if '2020' in title and 'olympi' in title.lower()]\n",
" \n",
" return titles\n",
"\n",
"def get_wiki_page(title):\n",
" \"\"\"\n",
" Get the wikipedia page given a title\n",
" \"\"\"\n",
" try:\n",
" return wikipedia.page(title)\n",
" except wikipedia.exceptions.DisambiguationError as e:\n",
"## 1.2 Filtering the Wikipedia pages and splitting them into sections by headings\n",
"We remove sections unlikely to contain textual information, and ensure that each section is not longer than the token limit"
]
},
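A minimal sketch of the heading-based splitting this section describes. The real notebook counts GPT-2 tokens (hence the tokenizer warning later in this notebook); here we approximate token counts with word counts, and the function names, the discard list, and the `MAX_TOKENS` value are illustrative assumptions rather than the notebook's actual code:

```python
import re

# Headings whose sections rarely contain useful prose (illustrative subset)
DISCARD_HEADINGS = {"See also", "References", "External links", "Further reading", "Bibliography"}
MAX_TOKENS = 1024  # illustrative cap on section length

def split_into_sections(page_content, title):
    """Split raw Wikipedia text on '== Heading ==' markers into
    (title, heading, text) triples, dropping non-textual sections
    and truncating each section to MAX_TOKENS words."""
    parts = re.split(r"==+ (.*?) ==+", page_content)
    sections = []
    heading = "Summary"  # text before the first heading
    for i in range(0, len(parts), 2):
        text = parts[i].strip()
        if text and heading not in DISCARD_HEADINGS:
            words = text.split()  # approximation of tokenization
            sections.append((title, heading, " ".join(words[:MAX_TOKENS])))
        if i + 1 < len(parts):
            heading = parts[i + 1].strip()
    return sections

demo = "Intro text.\n== History ==\nSome history.\n== See also ==\nLinks."
secs = split_into_sections(demo, "Demo page")
```

Splitting on headings keeps each context a self-contained unit, which matters later when questions are generated per section.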
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('Bermuda at the 2020 Summer Olympics',\n",
" 'Equestrian',\n",
" \"Bermuda entered one dressage rider into the Olympic competition by finishing in the top four, outside the group selection, of the individual FEI Olympic Rankings for Groups D and E (North, Central, and South America), marking the country's recurrence to the sport after an eight-year absence. The quota was later withdrawn, following an injury of Annabelle Collins' main horse Joyero and a failure to obtain minimum eligibility requirements (MER) aboard a new horse Chuppy Checker.\",\n",
"### 1.2.1 We create a dataset and filter out any sections with fewer than 40 tokens, as those are unlikely to contain enough context to ask a good question."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Token indices sequence length is longer than the specified maximum sequence length for this model (1060 > 1024). Running this sequence through the model will result in indexing errors\n"
"Concerns and controversies at the 2020 Summer Olympics 51\n",
"United States at the 2020 Summer Olympics 46\n",
"Great Britain at the 2020 Summer Olympics 42\n",
"Canada at the 2020 Summer Olympics 39\n",
"Olympic Games 39\n",
"Name: title, dtype: int64"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df.title.value_counts().head()"
]
},
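The 40-token filter from section 1.2.1 can be sketched with pandas. The toy DataFrame below mirrors the dataset's described columns (title, heading, content), but its rows, the `tokens` column name, and the word-count approximation are assumptions for illustration; the notebook itself counts GPT-2 tokens, which is what triggers the tokenizer warning in the output above:

```python
import pandas as pd

# Toy stand-in for the sections dataset (title, heading, content)
df = pd.DataFrame({
    "title": ["A", "B", "C"],
    "heading": ["Summary", "History", "Results"],
    "content": ["too short", "word " * 50, "word " * 45],
})

# Approximate token counts with word counts (the notebook uses the GPT-2 tokenizer)
df["tokens"] = df.content.str.split().str.len()

# Keep only sections likely to contain enough context for a good question
df = df[df.tokens >= 40].reset_index(drop=True)
```

Sections below the threshold are usually stubs or single-sentence notes, which would yield trivial or unanswerable questions.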
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There appear to be winter and summer Olympics 2020. We chose to leave a little ambiguity and noise in the dataset, even though we were interested in only Summer Olympics 2020."