diff --git a/examples/Question_answering_using_embeddings.ipynb b/examples/Question_answering_using_embeddings.ipynb
index 1cd8fb9c..2d8b10ce 100644
--- a/examples/Question_answering_using_embeddings.ipynb
+++ b/examples/Question_answering_using_embeddings.ipynb
@@ -316,11 +316,11 @@
    "id": "a17b88b9-7ea2-491e-9727-12617c74a77d",
    "metadata": {},
    "source": [
-    "We preprocess the document sections by creating an embedding vector for each section. An embedding is a vector of numbers that helps us understand how semantically similar or different the texts are. The closer two embeddings are to each other, the more similar are their contents. See the [documentation on OpenAI embeddings](https://beta.api.openai.org/docs/guides/embeddings/) for more information.\n",
+    "We preprocess the document sections by creating an embedding vector for each section. An embedding is a vector of numbers that helps us understand how semantically similar or different the texts are. The closer two embeddings are to each other, the more similar are their contents. See the [documentation on OpenAI embeddings](https://beta.openai.com/docs/guides/embeddings) for more information.\n",
     "\n",
     "This indexing stage can be executed offline and only runs once to precompute the indexes for the dataset so that each piece of content can be retrieved later. Since this is a small example, we will store and search the embeddings locally. If you have a larger dataset, consider using a vector search engine like [Pinecone](https://www.pinecone.io/) or [Weaviate](https://github.com/semi-technologies/weaviate) to power the search.\n",
     "\n",
-    "For the purposes of this tutorial we chose to use Curie embeddings, which are 4096-dimensional embeddings at a very good price and performance point. Since we will be using these embeddings for retrieval, we’ll use the \"search\" embeddings (see the [documentation](https://beta.api.openai.org/docs/guides/embeddings/))."
+    "For the purposes of this tutorial we chose to use Curie embeddings, which are 4096-dimensional embeddings at a very good price and performance point. Since we will be using these embeddings for retrieval, we’ll use the \"search\" embeddings (see the [documentation](https://beta.openai.com/docs/guides/embeddings))."
    ]
   },
   {
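
For context, the indexing step described in the changed cell can be sketched as follows. This is a minimal sketch, not code from the notebook hunk itself: it assumes the legacy pre-1.0 `openai` Python package and the first-generation Curie "search" engine names (`text-search-curie-doc-001` for documents, `text-search-curie-query-001` for queries), which are assumptions drawn from the era of this notebook rather than from the diff above.

# Sketch of the offline indexing stage: one 4096-dimensional Curie "search"
# embedding per document section, stored locally for later retrieval.
# Assumes the legacy (pre-1.0) `openai` package and first-generation engine names.
import openai

DOC_EMBEDDINGS_ENGINE = "text-search-curie-doc-001"      # document-side search embeddings (assumed name)
QUERY_EMBEDDINGS_ENGINE = "text-search-curie-query-001"  # matching query-side embeddings (assumed name)

def get_embedding(text: str, engine: str) -> list[float]:
    # One API call per text; the vector is returned under data[0].embedding.
    result = openai.Embedding.create(engine=engine, input=text)
    return result["data"][0]["embedding"]

def compute_doc_embeddings(sections: dict[str, str]) -> dict[str, list[float]]:
    # Precompute an embedding for every section so each piece of content
    # can be retrieved later without re-embedding the whole dataset.
    return {key: get_embedding(text, DOC_EMBEDDINGS_ENGINE) for key, text in sections.items()}

For a larger dataset, the same document vectors would simply be written to a vector search engine such as Pinecone or Weaviate instead of being kept in the local dictionary.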