Merge pull request #226 from liuliuOD/fix/fine-tuned_qa

fix: made some spelling and semantic adjustments
Ted Sanders 2 years ago committed by GitHub
commit 8c2d52eda4

@@ -90,7 +90,7 @@
"source": [
"It turns out that a double newline is a good separator in this case, in order not to break the flow of the text. Also no individual chunk is larger than 1500 tokens. The model we will use is text-davinci-002, which has a limit of 4096 tokens, so we don't need to worry about breaking the chunks down further.\n",
"\n",
"We will group the shorter chunks into chunks of around 1000 tokens, to increase the coherence of the text, and the frequency of breaks within the text."
"We will group the shorter chunks into chunks of around 1000 tokens, to increase the coherence of the text, and decrease the frequency of breaks within the text."
]
},
{

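The grouping step this hunk describes — merging short paragraphs into chunks of roughly 1000 tokens — can be sketched as a greedy merge. This is a minimal illustration, not the notebook's actual code; it uses a whitespace word count as a stand-in for a real tokenizer (the notebook would count tokens with the model's tokenizer, e.g. tiktoken), and the `group_chunks` name and its parameters are hypothetical.

```python
def group_chunks(paragraphs, max_tokens=1000, count_tokens=lambda s: len(s.split())):
    """Greedily merge consecutive paragraphs into chunks of at most max_tokens.

    count_tokens is a placeholder token counter (whitespace words here);
    swap in a real tokenizer's count for actual use.
    """
    chunks, current = [], ""
    for paragraph in paragraphs:
        # Join with a double newline, the separator chosen in the notebook text.
        candidate = (current + "\n\n" + paragraph) if current else paragraph
        if count_tokens(candidate) <= max_tokens:
            current = candidate
        else:
            # Current chunk is full; start a new one with this paragraph.
            if current:
                chunks.append(current)
            current = paragraph
    if current:
        chunks.append(current)
    return chunks
```

Because the merge never splits a paragraph, each resulting chunk stays under the budget while text flow within a chunk is preserved.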
@@ -228,7 +228,7 @@
"\n",
"This process is noisy, as sometimes the question might be answerable given a different context, but on average we hope this won't affect the peformance too much.\n",
"\n",
"We apply the same process of dataset creation for both the discriminator, and the Q&A answering model. We apply the process separately for the training and testing set, to ensure that the examples from the traing set don't feature within the test set."
"We apply the same process of dataset creation for both the discriminator, and the Q&A answering model. We apply the process separately for the training and testing set, to ensure that the examples from the training set don't feature within the test set."
]
},
{

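The leakage guard this hunk describes — running dataset creation separately on the training and testing sets so no training example appears in the test set — amounts to splitting the source contexts *before* generating examples from them. A minimal sketch, assuming a hypothetical `generate_examples` callable that turns contexts into Q&A examples (the name and split ratio are illustrative, not from the notebook):

```python
import random

def split_then_generate(contexts, generate_examples, test_fraction=0.2, seed=0):
    """Split source contexts into train/test before example generation,
    so no context (and hence no derived example) can appear in both sets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = contexts[:]
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    test_contexts, train_contexts = shuffled[:n_test], shuffled[n_test:]
    # Apply the same dataset-creation process independently to each split.
    return generate_examples(train_contexts), generate_examples(test_contexts)
```

Splitting at the context level, rather than shuffling generated examples, is what prevents two examples derived from the same passage from landing on opposite sides of the split.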