mirror of https://github.com/openai/openai-cookbook
synced 2024-11-04 06:00:33 +00:00
Merge pull request #77 from Pankaj-Baranwal/patch-1
Replaced non-existent model with the new version.
commit f90d5785c7
@@ -12,7 +12,7 @@
    "metadata": {},
    "source": [
     "# 2. Creating a synthetic Q&A dataset\n",
-    "We use [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n",
+    "We use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n",
     "\n",
     "This is expensive, and will also take a long time, as we call the davinci engine for each section. You can simply download the final dataset instead.\n",
     "\n",
@@ -175,7 +175,7 @@
    "def get_questions(context):\n",
    "    try:\n",
    "        response = openai.Completion.create(\n",
-   "            engine=\"davinci-instruct-beta-v2\",\n",
+   "            engine=\"davinci-instruct-beta-v3\",\n",
    "            prompt=f\"Write questions based on the text below\\n\\nText: {context}\\n\\nQuestions:\\n1.\",\n",
    "            temperature=0,\n",
    "            max_tokens=257,\n",
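For context, here is a standalone sketch of the question-generation function this hunk updates, assembled to be runnable. The helper name `build_question_prompt` is introduced here for clarity (it is not in the notebook); the API call uses the legacy `openai.Completion` interface shown in the diff, with the `openai` import deferred into the function so the sketch loads without the package or an API key:

```python
def build_question_prompt(context: str) -> str:
    # Mirrors the prompt in the diff: the trailing "1." nudges the model
    # to continue a numbered list of questions about the context.
    return f"Write questions based on the text below\n\nText: {context}\n\nQuestions:\n1."

def get_questions(context: str) -> str:
    import openai  # legacy openai-python (<1.0) interface, as in the notebook

    try:
        response = openai.Completion.create(
            engine="davinci-instruct-beta-v3",  # engine name updated by this PR
            prompt=build_question_prompt(context),
            temperature=0,
            max_tokens=257,
        )
        return response["choices"][0]["text"]
    except Exception:
        return ""
```

With `temperature=0` the sampling is greedy, so repeated calls on the same context return the same questions, which keeps the synthetic dataset reproducible.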
@@ -255,7 +255,7 @@
    "def get_answers(row):\n",
    "    try:\n",
    "        response = openai.Completion.create(\n",
-   "            engine=\"davinci-instruct-beta-v2\",\n",
+   "            engine=\"davinci-instruct-beta-v3\",\n",
    "            prompt=f\"Write questions based on the text below\\n\\nText: {row.context}\\n\\nQuestions:\\n{row.questions}\\n\\nAnswers:\\n1.\",\n",
    "            temperature=0,\n",
    "            max_tokens=257,\n",
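The answer step takes the same shape: the prompt repeats the context, appends the previously generated questions, and ends with "Answers:\n1." so the model continues a numbered answer list. A sketch of the updated function (again, `build_answer_prompt` is a name introduced here, and the legacy `openai.Completion` call is deferred and guarded so the sketch runs without a key):

```python
def build_answer_prompt(context: str, questions: str) -> str:
    # Mirrors the prompt in the diff: context, then the generated
    # questions, then "Answers:\n1." to start the numbered answer list.
    return (
        f"Write questions based on the text below\n\nText: {context}"
        f"\n\nQuestions:\n{questions}\n\nAnswers:\n1."
    )

def get_answers(row) -> str:
    # `row` is expected to carry `.context` and `.questions` attributes,
    # e.g. a pandas row from the synthetic-questions DataFrame.
    import openai  # legacy openai-python (<1.0) interface, as in the notebook

    try:
        response = openai.Completion.create(
            engine="davinci-instruct-beta-v3",  # engine name updated by this PR
            prompt=build_answer_prompt(row.context, row.questions),
            temperature=0,
            max_tokens=257,
        )
        return response["choices"][0]["text"]
    except Exception:
        return ""
```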
@@ -385,7 +385,7 @@
    }
   ],
   "source": [
-   "answer_question(olympics_search_fileid, \"davinci-instruct-beta-v2\", \n",
+   "answer_question(olympics_search_fileid, \"davinci-instruct-beta-v3\", \n",
    " \"Where did women's 4 x 100 metres relay event take place during the 2020 Summer Olympics?\")"
   ]
  },
@@ -393,7 +393,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. (Note the second question is asking about a future event, set in 2024.)"
+   "After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. (Note the second question is asking about a future event, set in 2024.)"
   ]
  },
  {
@@ -413,7 +413,7 @@
    }
   ],
   "source": [
-   "answer_question(olympics_search_fileid, \"davinci-instruct-beta-v2\", \n",
+   "answer_question(olympics_search_fileid, \"davinci-instruct-beta-v3\", \n",
    " \"Where did women's 4 x 100 metres relay event take place during the 2048 Summer Olympics?\", max_len=1000)"
   ]
  },
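Since both prompts in this commit end with a literal "1.", the model's completion is the remainder of item 1 plus any further "2.", "3." items. A small helper (introduced here for illustration, not part of the notebook) shows how such a completion can be turned back into a list by re-attaching the leading "1." and splitting on the numbering:

```python
import re

def parse_numbered_completion(completion: str) -> list:
    # The prompt already contains "1.", so prepend it before splitting
    # the text on subsequent "\n2.", "\n3.", ... item markers.
    text = "1." + completion
    items = re.split(r"\n\d+\.", text)
    items[0] = items[0][2:]  # drop the "1." prefix from the first item
    return [item.strip() for item in items if item.strip()]
```

For example, a completion of `" Where was it held?\n2. Who won the race?"` parses into two clean question strings.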