mirror of https://github.com/openai/openai-cookbook, synced 2024-11-11 13:11:02 +00:00
polishes wording in a few places
This commit is contained in:
parent 21d1834e5f
commit b59f105ed1
@@ -83,7 +83,7 @@
 "\n",
 "Specifically, this notebook demonstrates the following procedure:\n",
 "\n",
-"1. Prepare search data (once)\n",
+"1. Prepare search data (once per document)\n",
 "    1. Collect: We'll download a few hundred Wikipedia articles about the 2022 Olympics\n",
 "    2. Chunk: Documents are split into short, mostly self-contained sections to be embedded\n",
 "    3. Embed: Each section is embedded with the OpenAI API\n",
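For reference, the three preparation steps this hunk describes (collect, chunk, embed) could be sketched roughly as below. This is a minimal sketch, assuming the pre-1.0 `openai` Python package and the `text-embedding-ada-002` model; the function name and batching are illustrative choices, not part of this commit.

```python
# Minimal sketch of the "Embed" step: one embedding per chunked section.
# Assumes openai<1.0 and text-embedding-ada-002 (assumptions, not from this commit).
import openai

EMBEDDING_MODEL = "text-embedding-ada-002"  # assumed model name

def embed_sections(sections: list[str], batch_size: int = 100) -> list[list[float]]:
    """Embed each chunked section with the OpenAI API, batching requests."""
    embeddings: list[list[float]] = []
    for start in range(0, len(sections), batch_size):
        batch = sections[start:start + batch_size]
        response = openai.Embedding.create(model=EMBEDDING_MODEL, input=batch)
        # the response preserves input order: one embedding per input section
        embeddings.extend(item["embedding"] for item in response["data"])
    return embeddings
```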
@@ -97,7 +97,7 @@
 "\n",
 "### Costs\n",
 "\n",
-"Because GPT is more expensive than embeddings search, a system with a high volume of queries will have its costs dominated by step 3.\n",
+"Because GPT is more expensive than embeddings search, a system with a decent volume of queries will have its costs dominated by step 3.\n",
 "\n",
 "- For `gpt-3.5-turbo` using ~1,000 tokens per query, it costs ~$0.002 per query, or ~500 queries per dollar (as of Apr 2023)\n",
 "- For `gpt-4`, again assuming ~1,000 tokens per query, it costs ~$0.03 per query, or ~30 queries per dollar (as of Apr 2023)\n",
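The per-query figures in this hunk follow from simple arithmetic on the quoted Apr 2023 prices; the snippet below only reproduces that calculation and does not look up current pricing.

```python
# Reproduce the queries-per-dollar figures quoted above (Apr 2023 prices from the text).
cost_per_query_gpt35 = 0.002  # ~$0.002 for a ~1,000-token gpt-3.5-turbo query
cost_per_query_gpt4 = 0.03    # ~$0.03 for a ~1,000-token gpt-4 query

print(f"gpt-3.5-turbo: ~{1 / cost_per_query_gpt35:.0f} queries per dollar")  # ~500
print(f"gpt-4:         ~{1 / cost_per_query_gpt4:.0f} queries per dollar")   # ~33, i.e. roughly 30
```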
@@ -574,7 +574,7 @@
 "source": [
 "Thanks to the Wikipedia article included in the input message, GPT answers correctly.\n",
 "\n",
-"In this particular case, GPT was intelligent enough to realize that the original question was underspecified, as there were three curling gold medals, not just one.\n",
+"In this particular case, GPT was intelligent enough to realize that the original question was underspecified, as there were three curling gold medal events, not just one.\n",
 "\n",
 "Of course, this example partly relied on human intelligence. We knew the question was about curling, so we inserted a Wikipedia article on curling.\n",
 "\n",
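One plausible shape for "inserting a Wikipedia article on curling into the input message" is sketched below. It assumes the pre-1.0 `openai` package; the prompt wording, the placeholder `article` string, and the model choice are illustrative guesses rather than the notebook's exact text.

```python
import openai

# Placeholder: in the notebook, this would be the full text of the relevant Wikipedia article.
article = "Curling at the 2022 Winter Olympics ... (full article text here)"

query = f"""Use the below article on the 2022 Winter Olympics to answer the question. If the answer cannot be found, write "I don't know."

Article:
\"\"\"
{article}
\"\"\"

Question: Which athletes won the gold medal in curling at the 2022 Winter Olympics?"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": query}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```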
@@ -591,7 +591,7 @@
 "\n",
 "To save you the time & expense, we've prepared a pre-embedded dataset of a few hundred Wikipedia articles about the 2022 Winter Olympics.\n",
 "\n",
-"To see how we constructed this dataset, or to modify it, see [Embedding Wikipedia articles for search](Embedding_Wikipedia_articles_for_search.ipynb)."
+"To see how we constructed this dataset, or to modify it yourself, see [Embedding Wikipedia articles for search](Embedding_Wikipedia_articles_for_search.ipynb)."
 ]
 },
 {
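A minimal sketch of loading such a pre-embedded dataset, assuming it was saved as a CSV with `text` and `embedding` columns and that each embedding was stored as a stringified list; the file name below is a placeholder, not the dataset's actual location.

```python
import ast
import pandas as pd

embeddings_path = "winter_olympics_2022.csv"  # placeholder path, not the real dataset URL

df = pd.read_csv(embeddings_path)
# embeddings round-trip through CSV as strings like "[0.01, -0.02, ...]"; convert back to lists of floats
df["embedding"] = df["embedding"].apply(ast.literal_eval)
```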
@@ -1011,7 +1011,7 @@
 "source": [
 "Despite `gpt-3.5-turbo` having no knowledge of the 2022 Winter Olympics, our search system was able to retrieve reference text for the model to read, allowing it to correctly list the gold medal winners in the Men's and Women's tournaments.\n",
 "\n",
-"However, it still wasn't quite perfect - the model failed to list the gold medal winners from the Mixed doubles event."
+"However, it still wasn't quite perfect—the model failed to list the gold medal winners from the Mixed doubles event."
 ]
 },
 {
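The "retrieve reference text for the model to read" step could look roughly like the sketch below: embed the query, rank the pre-embedded sections by cosine similarity, and hand the top matches to the model. The helper names, the `text`/`embedding` column names (matching the loading sketch above), the prompt wording, and the fixed top-n cutoff are assumptions, not the notebook's exact implementation.

```python
import numpy as np
import openai
import pandas as pd

EMBEDDING_MODEL = "text-embedding-ada-002"  # assumed
GPT_MODEL = "gpt-3.5-turbo"

def top_related_sections(query: str, df: pd.DataFrame, top_n: int = 5) -> list[str]:
    """Return the top_n section texts most related to the query by cosine similarity."""
    response = openai.Embedding.create(model=EMBEDDING_MODEL, input=query)
    q = np.array(response["data"][0]["embedding"])

    def cosine_similarity(embedding: list[float]) -> float:
        v = np.array(embedding)
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    scored = sorted(
        ((cosine_similarity(e), t) for t, e in zip(df["text"], df["embedding"])),
        reverse=True,
    )
    return [text for _, text in scored[:top_n]]

def ask(query: str, df: pd.DataFrame) -> str:
    """Answer the query using the most related sections as reference text."""
    reference = "\n\n".join(top_related_sections(query, df))
    message = (
        "Use the reference text below to answer the question. "
        'If the answer cannot be found, write "I could not find an answer."\n\n'
        f"Reference text:\n{reference}\n\n"
        f"Question: {query}"
    )
    response = openai.ChatCompletion.create(
        model=GPT_MODEL,
        messages=[{"role": "user", "content": message}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```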
|