From e383e243c2070f1b4705cfc032b2f79535b1d2ff Mon Sep 17 00:00:00 2001
From: Ted Sanders
Date: Mon, 24 Oct 2022 16:31:33 -0700
Subject: [PATCH] fixes broken link to QA notebook

---
 examples/Question_answering_using_embeddings.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/Question_answering_using_embeddings.ipynb b/examples/Question_answering_using_embeddings.ipynb
index 71e30b51..1cd8fb9c 100644
--- a/examples/Question_answering_using_embeddings.ipynb
+++ b/examples/Question_answering_using_embeddings.ipynb
@@ -195,7 +195,7 @@
     "\n",
     "We plan to use document embeddings to fetch the most relevant parts of our document library and insert them into the prompt that we provide to GPT-3. We therefore need to break up the document library into \"sections\" of context, which can be searched and retrieved separately. \n",
     "\n",
-    "Sections should be large enough to contain enough information to answer a question; but small enough to fit one or several into the GPT-3 prompt. We find that approximately a paragraph of text is usually a good length, but you should experiment for your particular use case. In this example, Wikipedia articles are already grouped into semantically related headers, so we will use these to define our sections. This preprocessing has already been done in [this notebook](examples/fine-tuned_qa/olympics-1-collect-data.ipynb), so we will load the results and use them."
+    "Sections should be large enough to contain enough information to answer a question; but small enough to fit one or several into the GPT-3 prompt. We find that approximately a paragraph of text is usually a good length, but you should experiment for your particular use case. In this example, Wikipedia articles are already grouped into semantically related headers, so we will use these to define our sections. This preprocessing has already been done in [this notebook](fine-tuned_qa/olympics-1-collect-data.ipynb), so we will load the results and use them."
    ]
   },
   {
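
The patched cell describes the retrieval step of embeddings-based QA: embed each
paragraph-sized section once, embed the incoming question, and copy the
highest-scoring sections into the GPT-3 prompt. The sketch below illustrates that
step only; it is not the notebook's actual code, and the get_embedding helper,
the model name, and the sections input are placeholders for illustration.

import numpy as np
import openai  # pre-1.0 openai library, matching the era of this patch;
               # assumes openai.api_key is already set

def get_embedding(text, model="text-embedding-ada-002"):
    # Hypothetical helper: returns one embedding vector for a piece of text.
    # The model name is a placeholder; use whichever embedding model you prefer.
    response = openai.Embedding.create(input=text, model=model)
    return np.array(response["data"][0]["embedding"])

def top_sections(question, sections, k=3):
    # Rank paragraph-sized sections (a dict of title -> text) by cosine
    # similarity to the question and return the titles of the k best matches.
    q = get_embedding(question)

    def score(text):
        v = get_embedding(text)
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    ranked = sorted(sections, key=lambda title: score(sections[title]), reverse=True)
    return ranked[:k]

# Usage: the returned sections are what gets inserted into the GPT-3 prompt
# as context, e.g.:
#   best = top_sections("Who won the men's high jump?", sections)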