Fix typos in Fine-Tuning RAG Qdrant (#806)

This commit is contained in:
Guspan Tanadi 2023-11-01 01:39:09 +07:00 committed by GitHub
parent 5b37609209
commit cf9b3a4609


@@ -678,7 +678,7 @@
"\n",
"When we know that a correct answer exists in the context, we can measure the model's performance, there are 3 possible outcomes:\n",
"\n",
"1. ✅ **Answered Correctly**: The model responsded the correct answer. It may have also included other answers that were not in the context.\n",
"1. ✅ **Answered Correctly**: The model responded the correct answer. It may have also included other answers that were not in the context.\n",
"2. ❎ **Skipped**: The model responded with \"I don't know\" (IDK) while the answer was present in the context. It's better than giving the wrong answer. It's better for the model say \"I don't know\" than giving the wrong answer. In our design, we know that a true answer exists and hence we're able to measure it -- this is not always the case. *This is a model error*. We exclude this from the overall error rate. \n",
"3. ❌ **Wrong**: The model responded with an incorrect answer. **This is a model ERROR.**\n",
"\n",
@@ -839,7 +839,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that the fine-tuned model skips questions more often -- and makes fewer misakes. This is because the fine-tuned model is more conservative and skips questions when it's not sure."
"Notice that the fine-tuned model skips questions more often -- and makes fewer mistakes. This is because the fine-tuned model is more conservative and skips questions when it's not sure."
]
},
{
@@ -896,7 +896,7 @@
" 6.1 Embed the Fine-Tuning Data\n",
" 6.2 Embedding the Questions\n",
"7. Using Qdrant to Improve RAG Prompt\n",
"8. \n",
"8. Evaluation\n",
"\n",
"\n",
"## 6. Fine-Tuning OpenAI Model with Qdrant\n",
@@ -982,7 +982,7 @@
"\n",
"Next, you'll embed the entire training set questions. You'll use the question to question similarity to find the most similar questions to the question we're looking for. This is a workflow which is used in RAG to leverage the OpenAI model ability of incontext learning with more examples. This is what we call Few Shot Learning here.\n",
"\n",
"**❗️⏰ Important Note: This step can take upto 3 hours to complete. Please be patient. If you see Out of Memory errors or Kernel Crashes, please reduce the batch size to 32, restart the kernel and run the notebook again. This code needs to be run only ONCE.**\n",
"**❗️⏰ Important Note: This step can take up to 3 hours to complete. Please be patient. If you see Out of Memory errors or Kernel Crashes, please reduce the batch size to 32, restart the kernel and run the notebook again. This code needs to be run only ONCE.**\n",
"\n",
"## Function Breakdown for `generate_points_from_dataframe`\n",
"\n",