"batches = [documents[i:i + batch_size] for i in range(0, len(documents), batch_size)] \n",
" \n",
"# Upload each batch of documents \n",
"for batch in batches: \n",
" result = search_client.upload_documents(batch) \n",
" \n",
"print(f\"Uploaded {len(documents)} documents in total\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your dataset doesn't already contain pre-computed embeddings, you can create them with the function below, which uses the `openai` Python library. Note that the same function and model are also used to generate query embeddings when performing vector searches."