minor renaming

pull/4/head
Mike Heaton 2 years ago
parent 711f87d6a3
commit 7ffa01aeb2

@@ -191,7 +191,7 @@
"id": "0c9bfea5-a028-4191-b9f1-f210d76ec4e3",
"metadata": {},
"source": [
"# 1) Preprocess the contextual information\n",
"# 1) Preprocess the document library\n",
"\n",
"We plan to use document embeddings to fetch the most relevant part of parts of our document library and insert them into the prompt that we provide to GPT-3. We therefore need to break up the document library into \"sections\" of context, which can be searched and retrieved separately. \n",
"\n",
@@ -439,7 +439,7 @@
"source": [
"So we have split our document library into sections, and encoded them by creating embedding vectors that represent each chunk. Next we will use these embeddings to answer our users' questions.\n",
"\n",
"# 2) Find the most similar context embeddings to the question embedding\n",
"# 2) Find the most similar document embeddings to the question embedding\n",
"\n",
"At the time of question-answering, to answer the user's query we compute the query embedding of the question and use it to find the most similar document sections. Since this is a small example, we store and search the embeddings locally. If you have a larger dataset, consider using a vector search engine like [Pinecone](https://www.pinecone.io/) or [Weaviate](https://github.com/semi-technologies/weaviate) to power the search."
]
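The local similarity search described above can be sketched like this. It is a plain-Python illustration, not the notebook's own code: the function names are assumptions, and cosine similarity is one common choice (OpenAI embeddings are unit-normalised, so a bare dot product gives the same ranking for them).

```python
import math

def vector_similarity(x: list[float], y: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

def order_by_similarity(query_embedding: list[float],
                        embeddings: dict) -> list[tuple[float, str]]:
    # Rank document sections by similarity to the query embedding,
    # most similar first. embeddings maps section key -> vector.
    return sorted(
        ((vector_similarity(query_embedding, emb), key)
         for key, emb in embeddings.items()),
        reverse=True,
    )
```

For a handful of sections this linear scan is fine; a vector search engine only becomes worthwhile at much larger scale, as the text notes.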
@@ -547,7 +547,7 @@
"id": "a0efa0f6-4469-457a-89a4-a2f5736a01e0",
"metadata": {},
"source": [
"# 3) Add the most relevant contexts to the query prompt\n",
"# 3) Add the most relevant document sections to the query prompt\n",
"\n",
"Once we've calculated the most relevant pieces of context, we construct a prompt by simply prepending them to the supplied query. It is helpful to use a query separator to help the model distinguish between separate pieces of text."
]
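Prepending the retrieved sections with a separator, as this hunk describes, can be sketched as below. The separator string, header wording, and function name are illustrative assumptions, not the notebook's exact values.

```python
SEPARATOR = "\n* "  # hypothetical separator; any distinctive token works

def construct_prompt(question: str, relevant_sections: list[str]) -> str:
    # Prepend the most relevant document sections to the user's query,
    # using SEPARATOR so the model can tell the sections apart.
    header = ("Answer the question as truthfully as possible using the "
              "provided context.\n\nContext:\n")
    context = "".join(SEPARATOR + s.replace("\n", " ")
                      for s in relevant_sections)
    return header + context + "\n\nQ: " + question + "\nA:"
```

A real implementation would also cap the total prompt length in tokens, dropping the least relevant sections once the model's context limit is reached.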
