Cassandra / Astra DB demo notebooks: adapt to OpenAI 1.0, add support for Astra DB HTTP API, and other upgrades (#853)

pull/863/head
Stefano Lottini 6 months ago committed by GitHub
parent 988139d70e
commit d13dc3b557

@ -0,0 +1,885 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "46589cdf-1ab6-4028-b07c-08b75acd98e5",
"metadata": {},
"source": [
"# Philosophy with Vector Embeddings, OpenAI and Astra DB\n",
"\n",
"### AstraPy version"
]
},
{
"cell_type": "markdown",
"id": "b3496d07-f473-4008-9133-1a54b818c8d3",
"metadata": {},
"source": [
"In this quickstart you will learn how to build a \"philosophy quote finder & generator\" using OpenAI's vector embeddings and DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) as the vector store for data persistence.\n",
"\n",
"The basic workflow of this notebook is outlined below. You will evaluate and store the vector embeddings for a number of quotes by famous philosophers, use them to build a powerful search engine and, after that, even a generator of new quotes!\n",
"\n",
"The notebook exemplifies some of the standard usage patterns of vector search -- while showing how easy is it to get started with [Astra DB](https://docs.datastax.com/en/astra/home/astra.html).\n",
"\n",
"For a background on using vector search and text embeddings to build a question-answering system, please check out this excellent hands-on notebook: [Question answering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb).\n",
"\n",
"Table of contents:\n",
"- Setup\n",
"- Create vector collection\n",
"- Connect to OpenAI\n",
"- Load quotes into the Vector Store\n",
"- Use case 1: **quote search engine**\n",
"- Use case 2: **quote generator**\n",
"- Cleanup"
]
},
{
"cell_type": "markdown",
"id": "cddf17cc-eef4-4021-b72a-4d3832a9b4a7",
"metadata": {},
"source": [
"### How it works\n",
"\n",
"**Indexing**\n",
"\n",
"Each quote is made into an embedding vector with OpenAI's `Embedding`. These are saved in the Vector Store for later use in searching. Some metadata, including the author's name and a few other pre-computed tags, are stored alongside, to allow for search customization.\n",
"\n",
"![1_vector_indexing](https://user-images.githubusercontent.com/14221764/282422016-1d540607-eed4-4240-9c3d-22ee3a3bc90f.png)\n",
"\n",
"**Search**\n",
"\n",
"To find a quote similar to the provided search quote, the latter is made into an embedding vector on the fly, and this vector is used to query the store for similar vectors ... i.e. similar quotes that were previously indexed. The search can optionally be constrained by additional metadata (\"find me quotes by Spinoza similar to this one ...\").\n",
"\n",
"![2_vector_search](https://user-images.githubusercontent.com/14221764/282422033-0a1297c4-63bb-4e04-b120-dfd98dc1a689.png)\n",
"\n",
"The key point here is that \"quotes similar in content\" translates, in vector space, to vectors that are metrically close to each other: thus, vector similarity search effectively implements semantic similarity. _This is the key reason vector embeddings are so powerful._\n",
"\n",
"The sketch below tries to convey this idea. Each quote, once it's made into a vector, is a point in space. Well, in this case it's on a sphere, since OpenAI's embedding vectors, as most others, are normalized to _unit length_. Oh, and the sphere is actually not three-dimensional, rather 1536-dimensional!\n",
"\n",
"So, in essence, a similarity search in vector space returns the vectors that are closest to the query vector:\n",
"\n",
"![3_vector_space](https://user-images.githubusercontent.com/14221764/262321363-c8c625c1-8be9-450e-8c68-b1ed518f990d.png)\n",
"\n",
"**Generation**\n",
"\n",
"Given a suggestion (a topic or a tentative quote), the search step is performed, and the first returned results (quotes) are fed into an LLM prompt which asks the generative model to invent a new text along the lines of the passed examples _and_ the initial suggestion.\n",
"\n",
"![4_quote_generation](https://user-images.githubusercontent.com/14221764/282422050-2e209ff5-07d6-41ac-99ac-f442e090b3bb.png)"
]
},
{
"cell_type": "markdown",
"id": "10493f44-565d-4f23-8bfd-1a7335392c2b",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"id": "44a14f95-4683-4d0c-a251-0df7b43ca975",
"metadata": {},
"source": [
"Install and import the necessary dependencies:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "39afdb74-56e4-44ff-9c72-ab2669780113",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install --quiet \"astrapy>=0.6.0\" \"openai>=1.0.0\" datasets"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9ca6f5c6-30b4-4518-a816-5c732a60e339",
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"from collections import Counter\n",
"\n",
"from astrapy.db import AstraDB\n",
"import openai\n",
"from datasets import load_dataset"
]
},
{
"cell_type": "markdown",
"id": "9cb99e33-5cb7-416f-8dca-da18e0cb108d",
"metadata": {},
"source": [
"### Connection parameters"
]
},
{
"cell_type": "markdown",
"id": "65a8edc1-4633-491b-9ed3-11163ec24e46",
"metadata": {},
"source": [
"Please retrieve your database credentials on your Astra dashboard ([info](https://docs.datastax.com/en/astra/astra-db-vector/)): you will supply them momentarily.\n",
"\n",
"Example values:\n",
"\n",
"- API Endpoint: `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"- Token: `AstraCS:6gBhNmsk135...`"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ca5a2f5d-3ff2-43d6-91c0-4a52c0ecd06a",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Please enter your API Endpoint: https://4f835778-ec78-42b0-9ae3-29e3cf45b596-us-east1.apps.astra.datastax.com\n",
"Please enter your Token ········\n"
]
}
],
"source": [
"ASTRA_DB_API_ENDPOINT = input(\"Please enter your API Endpoint:\")\n",
"ASTRA_DB_APPLICATION_TOKEN = getpass(\"Please enter your Token\")"
]
},
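{
"cell_type": "markdown",
"id": "7e0d8d5a-2c1b-4a6f-9c3e-51a0b8f4d2aa",
"metadata": {},
"source": [
"(Alternatively, if you keep these secrets in environment variables, you can read them from there instead of typing them in. This is just an optional sketch, and the variable names below are simply a possible convention:)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2a9c4de-6b7e-4d19-8a3f-90c1b2e5d733",
"metadata": {},
"outputs": [],
"source": [
"# Optional: read the secrets from environment variables, if they are set\n",
"# (the variable names here are just an example convention).\n",
"import os\n",
"\n",
"if \"ASTRA_DB_API_ENDPOINT\" in os.environ and \"ASTRA_DB_APPLICATION_TOKEN\" in os.environ:\n",
"    ASTRA_DB_API_ENDPOINT = os.environ[\"ASTRA_DB_API_ENDPOINT\"]\n",
"    ASTRA_DB_APPLICATION_TOKEN = os.environ[\"ASTRA_DB_APPLICATION_TOKEN\"]"
]
},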
{
"cell_type": "markdown",
"id": "f8c4e5ec-2ab2-4d41-b3ec-c946469fed8b",
"metadata": {},
"source": [
"### Instantiate an Astra DB client"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1b526e55-ad2c-413d-94b1-cf651afefd02",
"metadata": {},
"outputs": [],
"source": [
"astra_db = AstraDB(\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_APPLICATION_TOKEN,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "60829851-bd48-4461-9243-974f76304933",
"metadata": {},
"source": [
"## Create vector collection"
]
},
{
"cell_type": "markdown",
"id": "cbcd19dc-0580-42c2-8d45-1cef52050a59",
"metadata": {},
"source": [
"The only parameter to specify, other than the collection name, is the dimension of the vectors you'll store. Other parameters, notably the similarity metric to use for searches, are optional."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8db837dc-cd49-41e2-8b5d-edb17ccc470e",
"metadata": {},
"outputs": [],
"source": [
"coll_name = \"philosophers_astra_db\"\n",
"collection = astra_db.create_collection(coll_name, dimension=1536)"
]
},
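{
"cell_type": "markdown",
"id": "3b1f7a62-8c4d-4e2a-b5f9-7d20c6a1e984",
"metadata": {},
"source": [
"Optionally, you can also pick the similarity metric explicitly when creating a collection (the `metric` parameter is discussed again in the similarity note further down). The next cell is just an illustrative sketch, not needed for this demo: the collection name is made up, and the cell removes the extra collection right away so nothing is left behind."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d8e2c41-90ab-47f3-8b6c-2f1e9d3c7a50",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: choose the similarity metric at creation time.\n",
"# (The collection name below is made up for this example.)\n",
"demo_collection = astra_db.create_collection(\n",
"    \"philosophers_astra_db_dot\",\n",
"    dimension=1536,\n",
"    metric=\"dot_product\",  # the default is cosine; see the linked docs for allowed values\n",
")\n",
"# ... and delete it again, since the demo uses the collection created above:\n",
"astra_db.delete_collection(\"philosophers_astra_db_dot\")"
]
},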
{
"cell_type": "markdown",
"id": "da86f91a-88a6-4997-b0f8-9da0816f8ece",
"metadata": {},
"source": [
"## Connect to OpenAI"
]
},
{
"cell_type": "markdown",
"id": "a6b664b5-fd84-492e-a7bd-4dda3863b48a",
"metadata": {},
"source": [
"### Set up your secret key"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "37fe7653-dd64-4494-83e1-5702ec41725c",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Please enter your OpenAI API Key: ········\n"
]
}
],
"source": [
"OPENAI_API_KEY = getpass(\"Please enter your OpenAI API Key: \")"
]
},
{
"cell_type": "markdown",
"id": "847f2821-7f3f-4dcd-8e0c-49aa397e36f4",
"metadata": {},
"source": [
"### A test call for embeddings\n",
"\n",
"Quickly check how one can get the embedding vectors for a list of input texts:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6bf89454-9a55-4202-ab6b-ea15b2048f3d",
"metadata": {},
"outputs": [],
"source": [
"client = openai.OpenAI(api_key=OPENAI_API_KEY)\n",
"embedding_model_name = \"text-embedding-ada-002\"\n",
"\n",
"result = client.embeddings.create(\n",
" input=[\n",
" \"This is a sentence\",\n",
" \"A second sentence\"\n",
" ],\n",
" model=embedding_model_name,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e2841934-7b2a-4a00-b112-b0865c9ec593",
"metadata": {},
"source": [
"_Note: the above is the syntax for OpenAI v1.0+. If using previous versions, the code to get the embeddings will look different._"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "50a8e6f0-0aa7-4ffc-94e9-702b68566815",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"len(result.data) = 2\n",
"result.data[1].embedding = [-0.0108176339417696, 0.0013546717818826437, 0.00362232...\n",
"len(result.data[1].embedding) = 1536\n"
]
}
],
"source": [
"print(f\"len(result.data) = {len(result.data)}\")\n",
"print(f\"result.data[1].embedding = {str(result.data[1].embedding)[:55]}...\")\n",
"print(f\"len(result.data[1].embedding) = {len(result.data[1].embedding)}\")"
]
},
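{
"cell_type": "markdown",
"id": "0c6a9e17-4f3b-4b2d-a1c8-6e5d2b9f0a73",
"metadata": {},
"source": [
"As a quick optional check, you can verify what the \"How it works\" section claimed: these embedding vectors are normalized to unit length, which is why their dot product doubles as their cosine similarity. (This small sketch just re-uses the `result` of the test call above.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f4b7d28-1e5a-4c06-b3d9-84a2c7e6f510",
"metadata": {},
"outputs": [],
"source": [
"# The embeddings are (very nearly) unit-length vectors, so the dot product\n",
"# of two of them is also their cosine similarity.\n",
"v0 = result.data[0].embedding\n",
"v1 = result.data[1].embedding\n",
"norm_v1 = sum(x * x for x in v1) ** 0.5\n",
"dot_01 = sum(x0 * x1 for x0, x1 in zip(v0, v1))\n",
"print(f\"Norm of the second embedding: {norm_v1:.6f}\")\n",
"print(f\"Dot product of the two test embeddings: {dot_01:.6f}\")"
]
},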
{
"cell_type": "markdown",
"id": "d7f09c42-fff3-4aa2-922b-043739b4b06a",
"metadata": {},
"source": [
"## Load quotes into the Vector Store"
]
},
{
"cell_type": "markdown",
"id": "cf0f3d58-74c2-458b-903d-3d12e61b7846",
"metadata": {},
"source": [
"Get a dataset with the quotes. _(We adapted and augmented the data from [this Kaggle dataset](https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers), ready to use in this demo.)_"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "aa68f038-3240-4e22-b7c6-a5f214eda381",
"metadata": {},
"outputs": [],
"source": [
"philo_dataset = load_dataset(\"datastax/philosopher-quotes\")[\"train\"]"
]
},
{
"cell_type": "markdown",
"id": "ab6b08b1-e3db-4c7c-9d7c-2ada7c8bc71d",
"metadata": {},
"source": [
"A quick inspection:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "10b629cf-efd7-434a-9dc6-7f38f35f7cc8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"An example entry:\n",
"{'author': 'aristotle', 'quote': 'Love well, be loved and do something of value.', 'tags': 'love;ethics'}\n"
]
}
],
"source": [
"print(\"An example entry:\")\n",
"print(philo_dataset[16])"
]
},
{
"cell_type": "markdown",
"id": "9badaa4d-80ea-462c-bb00-1909c6435eea",
"metadata": {},
"source": [
"Check the dataset size:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1b33ac73-f8f2-4b64-8a27-178ac76886a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total: 450 quotes. By author:\n",
" aristotle : 50 quotes\n",
" schopenhauer : 50 quotes\n",
" spinoza : 50 quotes\n",
" hegel : 50 quotes\n",
" freud : 50 quotes\n",
" nietzsche : 50 quotes\n",
" sartre : 50 quotes\n",
" plato : 50 quotes\n",
" kant : 50 quotes\n"
]
}
],
"source": [
"author_count = Counter(entry[\"author\"] for entry in philo_dataset)\n",
"print(f\"Total: {len(philo_dataset)} quotes. By author:\")\n",
"for author, count in author_count.most_common():\n",
" print(f\" {author:<20}: {count} quotes\")"
]
},
{
"cell_type": "markdown",
"id": "062157d1-d262-4735-b06c-f3112575b4cc",
"metadata": {},
"source": [
"### Write to the vector collection\n",
"\n",
"You will compute the embeddings for the quotes and save them into the Vector Store, along with the text itself and the metadata you'll use later.\n",
"\n",
"To optimize speed and reduce the calls, you'll perform batched calls to the embedding OpenAI service.\n",
"\n",
"To store the quote objects, you will use the `insert_many` method of the collection (one call per batch). When preparing the documents for insertion you will choose suitable field names -- keep in mind, however, that the embedding vector must be the fixed special `$vector` field."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "6ab84ccb-3363-4bdc-9484-0d68c25a58ff",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Starting to store entries: [20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][20][10]\n",
"Finished storing entries.\n"
]
}
],
"source": [
"BATCH_SIZE = 20\n",
"\n",
"num_batches = ((len(philo_dataset) + BATCH_SIZE - 1) // BATCH_SIZE)\n",
"\n",
"quotes_list = philo_dataset[\"quote\"]\n",
"authors_list = philo_dataset[\"author\"]\n",
"tags_list = philo_dataset[\"tags\"]\n",
"\n",
"print(\"Starting to store entries: \", end=\"\")\n",
"for batch_i in range(num_batches):\n",
" b_start = batch_i * BATCH_SIZE\n",
" b_end = (batch_i + 1) * BATCH_SIZE\n",
" # compute the embedding vectors for this batch\n",
" b_emb_results = client.embeddings.create(\n",
" input=quotes_list[b_start : b_end],\n",
" model=embedding_model_name,\n",
" )\n",
" # prepare the documents for insertion\n",
" b_docs = []\n",
" for entry_idx, emb_result in zip(range(b_start, b_end), b_emb_results.data):\n",
" if tags_list[entry_idx]:\n",
" tags = {\n",
" tag: True\n",
" for tag in tags_list[entry_idx].split(\";\")\n",
" }\n",
" else:\n",
" tags = {}\n",
" b_docs.append({\n",
" \"quote\": quotes_list[entry_idx],\n",
" \"$vector\": emb_result.embedding,\n",
" \"author\": authors_list[entry_idx],\n",
" \"tags\": tags,\n",
" })\n",
" # write to the vector collection\n",
" collection.insert_many(b_docs)\n",
" print(f\"[{len(b_docs)}]\", end=\"\")\n",
"\n",
"print(\"\\nFinished storing entries.\")"
]
},
{
"cell_type": "markdown",
"id": "db3ee629-b6b9-4a77-8c58-c3b93403a6a6",
"metadata": {},
"source": [
"## Use case 1: **quote search engine**"
]
},
{
"cell_type": "markdown",
"id": "db3b12b3-2557-4826-af5a-16e6cd9a4531",
"metadata": {},
"source": [
"For the quote-search functionality, you need first to make the input quote into a vector, and then use it to query the store (besides handling the optional metadata into the search call, that is).\n",
"\n",
"Encapsulate the search-engine functionality into a function for ease of re-use. At its core is the `vector_find` method of the collection:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "d6fcf182-3ab7-4d28-9472-dce35cc38182",
"metadata": {},
"outputs": [],
"source": [
"def find_quote_and_author(query_quote, n, author=None, tags=None):\n",
" query_vector = client.embeddings.create(\n",
" input=[query_quote],\n",
" model=embedding_model_name,\n",
" ).data[0].embedding\n",
" filter_clause = {}\n",
" if author:\n",
" filter_clause[\"author\"] = author\n",
" if tags:\n",
" filter_clause[\"tags\"] = {}\n",
" for tag in tags:\n",
" filter_clause[\"tags\"][tag] = True\n",
" #\n",
" results = collection.vector_find(\n",
" query_vector,\n",
" limit=n,\n",
" filter=filter_clause,\n",
" fields=[\"quote\", \"author\"]\n",
" )\n",
" return [\n",
" (result[\"quote\"], result[\"author\"])\n",
" for result in results\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "2539262d-100b-4e8d-864d-e9c612a73e91",
"metadata": {},
"source": [
"### Putting search to test"
]
},
{
"cell_type": "markdown",
"id": "3634165c-0882-4281-bc60-ab96261a500d",
"metadata": {},
"source": [
"Passing just a quote:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6722c2c0-3e54-4738-80ce-4d1149e95414",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('Life to the great majority is only a constant struggle for mere existence, with the certainty of losing it at last.',\n",
" 'schopenhauer'),\n",
" ('We give up leisure in order that we may have leisure, just as we go to war in order that we may have peace.',\n",
" 'aristotle'),\n",
" ('Perhaps the gods are kind to us, by making life more disagreeable as we grow older. In the end death seems less intolerable than the manifold burdens we carry',\n",
" 'freud')]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"find_quote_and_author(\"We struggle all our life for nothing\", 3)"
]
},
{
"cell_type": "markdown",
"id": "50828e4c-9bb5-4489-9fe9-87da5fbe1f18",
"metadata": {},
"source": [
"Search restricted to an author:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "da9c705f-5c12-42b3-a038-202f89a3c6da",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('To live is to suffer, to survive is to find some meaning in the suffering.',\n",
" 'nietzsche'),\n",
" ('What makes us heroic?--Confronting simultaneously our supreme suffering and our supreme hope.',\n",
" 'nietzsche')]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"find_quote_and_author(\"We struggle all our life for nothing\", 2, author=\"nietzsche\")"
]
},
{
"cell_type": "markdown",
"id": "4a3857ea-6dfe-489a-9b86-4e5e0534960f",
"metadata": {},
"source": [
"Search constrained to a tag (out of those saved earlier with the quotes):"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "abcfaec9-8f42-4789-a5ed-1073fa2932c2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('He who seeks equality between unequals seeks an absurdity.', 'spinoza'),\n",
" ('The people are that part of the state that does not know what it wants.',\n",
" 'hegel')]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"find_quote_and_author(\"We struggle all our life for nothing\", 2, tags=[\"politics\"])"
]
},
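{
"cell_type": "markdown",
"id": "2d7c5e90-6a1f-4b38-9e24-c05f8a3d1b67",
"metadata": {},
"source": [
"The two kinds of constraints can also be combined. Here is a sketch restricting the search to an author _and_ a tag at the same time (the author/tag choice below is just for illustration, and the results will depend on the stored data):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4e81f36-0d5a-42b7-9a6e-3f72d8c1b045",
"metadata": {},
"outputs": [],
"source": [
"# Both filters at once: the author and the tag must match.\n",
"find_quote_and_author(\n",
"    \"We struggle all our life for nothing\",\n",
"    2,\n",
"    author=\"aristotle\",\n",
"    tags=[\"ethics\"],\n",
")"
]
},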
{
"cell_type": "markdown",
"id": "746fe38f-139f-44a6-a225-a63e40d3ddf5",
"metadata": {},
"source": [
"### Cutting out irrelevant results\n",
"\n",
"The vector similarity search generally returns the vectors that are closest to the query, even if that means results that might be somewhat irrelevant if there's nothing better.\n",
"\n",
"To keep this issue under control, you can get the actual \"similarity\" between the query and each result, and then implement a cutoff on it, effectively discarding results that are beyond that threshold.\n",
"Tuning this threshold correctly is not an easy problem: here, we'll just show you the way.\n",
"\n",
"To get a feeling on how this works, try the following query and play with the choice of quote and threshold to compare the results. Note that the similarity is returned as the special `$similarity` field in each result document - and it will be returned by default, unless you pass `include_similarity = False` to the search method.\n",
"\n",
"_Note (for the mathematically inclined): this value is **a rescaling between zero and one** of the cosine difference between the vectors, i.e. of the scalar product divided by the product of the norms of the two vectors. In other words, this is 0 for opposite-facing vectors and +1 for parallel vectors. For other measures of similarity (cosine is the default), check the `metric` parameter in `AstraDB.create_collection` and the [documentation on allowed values](https://docs.datastax.com/en/astra-serverless/docs/develop/dev-with-json.html#metric-types)._"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "b9b43721-a3b0-4ac4-b730-7a6aeec52e70",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"3 quotes within the threshold:\n",
" 0. [similarity=0.927] \"The assumption that animals are without rights, and the illusion that ...\"\n",
" 1. [similarity=0.922] \"Animals are in possession of themselves; their soul is in possession o...\"\n",
" 2. [similarity=0.920] \"At his best, man is the noblest of all animals; separated from law and...\"\n"
]
}
],
"source": [
"quote = \"Animals are our equals.\"\n",
"# quote = \"Be good.\"\n",
"# quote = \"This teapot is strange.\"\n",
"\n",
"metric_threshold = 0.92\n",
"\n",
"quote_vector = client.embeddings.create(\n",
" input=[quote],\n",
" model=embedding_model_name,\n",
").data[0].embedding\n",
"\n",
"results_full = collection.vector_find(\n",
" quote_vector,\n",
" limit=8,\n",
" fields=[\"quote\"]\n",
")\n",
"results = [res for res in results_full if res[\"$similarity\"] >= metric_threshold]\n",
"\n",
"print(f\"{len(results)} quotes within the threshold:\")\n",
"for idx, result in enumerate(results):\n",
" print(f\" {idx}. [similarity={result['$similarity']:.3f}] \\\"{result['quote'][:70]}...\\\"\")"
]
},
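{
"cell_type": "markdown",
"id": "8a3d6f12-7b9c-4e50-a2d1-5c4e9b0f6d28",
"metadata": {},
"source": [
"If you prefer to reason in terms of the raw cosine similarity, you can simply undo the rescaling described in the note above (a small sketch re-using the `results` from the previous cell):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e2f9a84-3c5d-47b1-8d0a-b7f4c1e92d63",
"metadata": {},
"outputs": [],
"source": [
"# $similarity is a [0, 1] rescaling of the cosine similarity,\n",
"# so the raw cosine is recovered as 2 * $similarity - 1.\n",
"for result in results:\n",
"    raw_cosine = 2 * result[\"$similarity\"] - 1\n",
"    print(f\"$similarity = {result['$similarity']:.3f}  =>  cosine = {raw_cosine:.3f}\")"
]
},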
{
"cell_type": "markdown",
"id": "71871251-169f-4d3f-a687-65f836a9a8fe",
"metadata": {},
"source": [
"## Use case 2: **quote generator**"
]
},
{
"cell_type": "markdown",
"id": "b0a9cd63-a131-4819-bf41-c8ffa0b1e1ca",
"metadata": {},
"source": [
"For this task you need another component from OpenAI, namely an LLM to generate the quote for us (based on input obtained by querying the Vector Store).\n",
"\n",
"You also need a template for the prompt that will be filled for the generate-quote LLM completion task."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "a6dd366d-665a-45fd-917b-b6b5312b0865",
"metadata": {},
"outputs": [],
"source": [
"completion_model_name = \"gpt-3.5-turbo\"\n",
"\n",
"generation_prompt_template = \"\"\"\"Generate a single short philosophical quote on the given topic,\n",
"similar in spirit and form to the provided actual example quotes.\n",
"Do not exceed 20-30 words in your quote.\n",
"\n",
"REFERENCE TOPIC: \"{topic}\"\n",
"\n",
"ACTUAL EXAMPLES:\n",
"{examples}\n",
"\"\"\""
]
},
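{
"cell_type": "markdown",
"id": "4b9e2d57-1f8a-4c63-b0d2-9e6a5c3f7d81",
"metadata": {},
"source": [
"To get a sense of how the filled-in prompt will look, you can render the template with placeholder values (the topic and the example quotes below are made up just for this preview):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5f0a8c2-4e7b-49d6-a3c1-8b2e6f9d0a47",
"metadata": {},
"outputs": [],
"source": [
"# Preview the prompt layout with made-up placeholder values:\n",
"print(generation_prompt_template.format(\n",
"    topic=\"courage\",\n",
"    examples=\"\\n\".join([\n",
"        \" - An example quote about courage.\",\n",
"        \" - Another example quote about courage.\",\n",
"    ]),\n",
"))"
]
},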
{
"cell_type": "markdown",
"id": "53073a9e-16de-4e49-9e97-ff31b9b250c2",
"metadata": {},
"source": [
"Like for search, this functionality is best wrapped into a handy function (which internally uses search):"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "397e6ebd-b30e-413b-be63-81a62947a7b8",
"metadata": {},
"outputs": [],
"source": [
"def generate_quote(topic, n=2, author=None, tags=None):\n",
" quotes = find_quote_and_author(query_quote=topic, n=n, author=author, tags=tags)\n",
" if quotes:\n",
" prompt = generation_prompt_template.format(\n",
" topic=topic,\n",
" examples=\"\\n\".join(f\" - {quote[0]}\" for quote in quotes),\n",
" )\n",
" # a little logging:\n",
" print(\"** quotes found:\")\n",
" for q, a in quotes:\n",
" print(f\"** - {q} ({a})\")\n",
" print(\"** end of logging\")\n",
" #\n",
" response = client.chat.completions.create(\n",
" model=completion_model_name,\n",
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
" temperature=0.7,\n",
" max_tokens=320,\n",
" )\n",
" return response.choices[0].message.content.replace('\"', '').strip()\n",
" else:\n",
" print(\"** no quotes found.\")\n",
" return None"
]
},
{
"cell_type": "markdown",
"id": "c13f8488-899b-4d4c-a069-73643a778200",
"metadata": {},
"source": [
"_Note: similar to the case of the embedding computation, the code for the Chat Completion API would be slightly different for OpenAI prior to v1.0._"
]
},
{
"cell_type": "markdown",
"id": "63bcc157-e5d4-43ef-8028-d4dcc8a72b9c",
"metadata": {},
"source": [
"#### Putting quote generation to test"
]
},
{
"cell_type": "markdown",
"id": "fe6b3f38-089d-486d-b32c-e665c725faa8",
"metadata": {},
"source": [
"Just passing a text (a \"quote\", but one can actually just suggest a topic since its vector embedding will still end up at the right place in the vector space):"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "806ba758-8988-410e-9eeb-b9c6799e6b25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"** quotes found:\n",
"** - Happiness is the reward of virtue. (aristotle)\n",
"** - Our moral virtues benefit mainly other people; intellectual virtues, on the other hand, benefit primarily ourselves; therefore the former make us universally popular, the latter unpopular. (schopenhauer)\n",
"** end of logging\n",
"\n",
"A new generated quote:\n",
"True politics lies in the virtuous pursuit of justice, for it is through virtue that we build a better world for all.\n"
]
}
],
"source": [
"q_topic = generate_quote(\"politics and virtue\")\n",
"print(\"\\nA new generated quote:\")\n",
"print(q_topic)"
]
},
{
"cell_type": "markdown",
"id": "ca032d30-4538-4d0b-aea1-731fb32d2d4b",
"metadata": {},
"source": [
"Use inspiration from just a single philosopher:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "7c2e2d4e-865f-4b2d-80cd-a695271415d9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"** quotes found:\n",
"** - Because Christian morality leaves animals out of account, they are at once outlawed in philosophical morals; they are mere 'things,' mere means to any ends whatsoever. They can therefore be used for vivisection, hunting, coursing, bullfights, and horse racing, and can be whipped to death as they struggle along with heavy carts of stone. Shame on such a morality that is worthy of pariahs, and that fails to recognize the eternal essence that exists in every living thing, and shines forth with inscrutable significance from all eyes that see the sun! (schopenhauer)\n",
"** - The assumption that animals are without rights, and the illusion that our treatment of them has no moral significance, is a positively outrageous example of Western crudity and barbarity. Universal compassion is the only guarantee of morality. (schopenhauer)\n",
"** end of logging\n",
"\n",
"A new generated quote:\n",
"Excluding animals from ethical consideration reveals a moral blindness that allows for their exploitation and suffering. True morality embraces universal compassion.\n"
]
}
],
"source": [
"q_topic = generate_quote(\"animals\", author=\"schopenhauer\")\n",
"print(\"\\nA new generated quote:\")\n",
"print(q_topic)"
]
},
{
"cell_type": "markdown",
"id": "4bd8368a-9e23-49a5-8694-921728ea9656",
"metadata": {},
"source": [
"## Cleanup\n",
"\n",
"If you want to remove all resources used for this demo, run this cell (_warning: this will irreversibly delete the collection and its data!_):"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1eb0fd16-7e15-4742-8fc5-94d9eeeda620",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'status': {'ok': 1}}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"astra_db.delete_collection(coll_name)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -15,11 +15,11 @@
"id": "b3496d07-f473-4008-9133-1a54b818c8d3",
"metadata": {},
"source": [
"In this quickstart you will learn how to build a \"philosophy quote finder & generator\" using OpenAI's vector embeddings and DataStax Astra DB (_or a vector-capable Apache Cassandra® cluster, if you prefer_) as the vector store for data persistence.\n",
"In this quickstart you will learn how to build a \"philosophy quote finder & generator\" using OpenAI's vector embeddings and [Apache Cassandra®](https://cassandra.apache.org), or equivalently DataStax [Astra DB through CQL](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html), as the vector store for data persistence.\n",
"\n",
"The basic workflow of this notebook is outlined below. You will evaluate and store the vector embeddings for a number of quotes by famous philosophers, use them to build a powerful search engine and, after that, even a generator of new quotes!\n",
"\n",
"The notebook exemplifies some of the standard usage patterns of vector search -- while showing how easy is it to get started with the [Vector capabilities of Astra DB](https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html).\n",
"The notebook exemplifies some of the standard usage patterns of vector search -- while showing how easy is it to get started with the vector capabilities of [Cassandra](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html) / [Astra DB through CQL](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html).\n",
"\n",
"For a background on using vector search and text embeddings to build a question-answering system, please check out this excellent hands-on notebook: [Question answering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb).\n",
"\n",
@ -48,13 +48,13 @@
"\n",
"Each quote is made into an embedding vector with OpenAI's `Embedding`. These are saved in the Vector Store for later use in searching. Some metadata, including the author's name and a few other pre-computed tags, are stored alongside, to allow for search customization.\n",
"\n",
"![1_vector_indexing_cql](https://user-images.githubusercontent.com/14221764/262649444-95b35174-acc9-44f0-853a-3e2904b0aac6.png)\n",
"![1_vector_indexing_cql](https://user-images.githubusercontent.com/14221764/282437237-1e763166-a863-4332-99b8-323ba23d1b87.png)\n",
"\n",
"**Search**\n",
"\n",
"To find a quote similar to the provided search quote, the latter is made into an embedding vector on the fly, and this vector is used to query the store for similar vectors ... i.e. similar quotes that were previously indexed. The search can optionally be constrained by additional metadata (\"find me quotes by Spinoza similar to this one ...\").\n",
"\n",
"![2_vector_search_cql](https://user-images.githubusercontent.com/14221764/262649453-d1fea38d-3fb6-4917-86a8-7144c2df99fa.png)\n",
"![2_vector_search_cql](https://user-images.githubusercontent.com/14221764/282437291-85335612-a845-444e-bed7-e4cf014a9f17.png)\n",
"\n",
"The key point here is that \"quotes similar in content\" translates, in vector space, to vectors that are metrically close to each other: thus, vector similarity search effectively implements semantic similarity. _This is the key reason vector embeddings are so powerful._\n",
"\n",
@ -68,7 +68,7 @@
"\n",
"Given a suggestion (a topic or a tentative quote), the search step is performed, and the first returned results (quotes) are fed into an LLM prompt which asks the generative model to invent a new text along the lines of the passed examples _and_ the initial suggestion.\n",
"\n",
"![4_quote_generation](https://user-images.githubusercontent.com/14221764/262087157-784117ff-7c56-45bc-9c76-577d09aea19a.png)"
"![4_quote_generation](https://user-images.githubusercontent.com/14221764/282437321-881bd273-3443-4987-9a11-350d3288dd8e.png)"
]
},
{
@ -84,7 +84,7 @@
"id": "44a14f95-4683-4d0c-a251-0df7b43ca975",
"metadata": {},
"source": [
"First install some required packages:"
"Install and import the necessary dependencies:"
]
},
{
@ -96,48 +96,43 @@
},
"outputs": [],
"source": [
"!pip install cassandra-driver openai"
"!pip install --quiet \"cassandra-driver>=0.28.0\" \"openai>=1.0.0\" datasets"
]
},
{
"cell_type": "markdown",
"id": "9cb99e33-5cb7-416f-8dca-da18e0cb108d",
"metadata": {},
"source": [
"## Get DB connection"
]
},
{
"cell_type": "markdown",
"id": "65a8edc1-4633-491b-9ed3-11163ec24e46",
"cell_type": "code",
"execution_count": 2,
"id": "6b5d1a8f-9175-417a-aa21-06fe2ad2998b",
"metadata": {},
"outputs": [],
"source": [
"A couple of secrets are required to create a `Session` object (a connection to your Astra DB instance).\n",
"import os\n",
"from uuid import uuid4\n",
"from getpass import getpass\n",
"from collections import Counter\n",
"\n",
"_(Note: some steps will be slightly different on Google Colab and on local Jupyter, that's why the notebook will detect the runtime type.)_"
"from cassandra.cluster import Cluster\n",
"from cassandra.auth import PlainTextAuthProvider\n",
"\n",
"import openai\n",
"from datasets import load_dataset"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a7429ed4-b3fe-44b0-ad00-60883df32070",
"cell_type": "markdown",
"id": "f20a38ab-3cb1-4dd1-9426-a100c33a86c2",
"metadata": {},
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"from cassandra.auth import PlainTextAuthProvider"
"_Don't mind the next cell too much, we need it to detect Colabs and let you upload the SCB file (see below):_"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e4f2eec1-b784-4cea-9006-03cfe7b31e25",
"id": "26263ac3-4e73-42c1-a028-a8ee6cb3425a",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"try:\n",
" from google.colab import files\n",
" IS_COLAB = True\n",
@ -145,6 +140,24 @@
" IS_COLAB = False"
]
},
{
"cell_type": "markdown",
"id": "9cb99e33-5cb7-416f-8dca-da18e0cb108d",
"metadata": {},
"source": [
"## Get DB connection"
]
},
{
"cell_type": "markdown",
"id": "65a8edc1-4633-491b-9ed3-11163ec24e46",
"metadata": {},
"source": [
"A couple of secrets are required to create a `Session` object (a connection to your Astra DB instance).\n",
"\n",
"_(Note: some steps will be slightly different on Google Colab and on local Jupyter, that's why the notebook will detect the runtime type.)_"
]
},
{
"cell_type": "code",
"execution_count": 4,
@ -155,7 +168,7 @@
"name": "stdin",
"output_type": "stream",
"text": [
"Please provide the full path to your Secure Connect Bundle zipfile: /path/to/secure-connect-DATABASE.zip\n",
"Please provide the full path to your Secure Connect Bundle zipfile: /path/to/secure-connect-DatabaseName.zip\n",
"Please provide your Database Token ('AstraCS:...' string): ········\n",
"Please provide the Keyspace name for your Database: my_keyspace\n"
]
@ -195,12 +208,12 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "949ab020-90c8-499b-a139-f69f07af50ed",
"metadata": {},
"outputs": [],
"source": [
"# Don't mind the \"Closing connection\" error after \"downgrading protocol...\" messages,\n",
"# Don't mind the \"Closing connection\" error after \"downgrading protocol...\" messages you may see,\n",
"# it is really just a warning: the connection will work smoothly.\n",
"cluster = Cluster(\n",
" cloud={\n",
@ -232,7 +245,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "8db837dc-cd49-41e2-8b5d-edb17ccc470e",
"metadata": {},
"outputs": [],
@ -251,22 +264,22 @@
"id": "fb1beab1-bbbe-4714-b817-c3ee3db34d91",
"metadata": {},
"source": [
"Pass this statement on your database Session to execute it:"
"Pass this statement to your database Session to execute it:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "e9a83507-1ebc-420e-8845-bef55f2b7c64",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<cassandra.cluster.ResultSet at 0x7f04a77fd1e0>"
"<cassandra.cluster.ResultSet at 0x7feee37b3460>"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@ -290,24 +303,24 @@
"source": [
"In order to run ANN (approximate-nearest-neighbor) searches on the vectors in the table, you need to create a specific index on the `embedding_vector` column.\n",
"\n",
"_When creating the index, you can optionally choose the \"similarity function\" used to compute vector distances: since for unit-length vectors (such as those from OpenAI) the \"cosine difference\" is the same as the \"dot product\", you'll use the latter which is computationally less expensive._\n",
"_When creating the index, you can [optionally choose](https://docs.datastax.com/en/astra-serverless/docs/vector-search/cql.html#_create_the_vector_schema_and_load_the_data_into_the_database) the \"similarity function\" used to compute vector distances: since for unit-length vectors (such as those from OpenAI) the \"cosine difference\" is the same as the \"dot product\", you'll use the latter which is computationally less expensive._\n",
"\n",
"Run this CQL statement:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "9dd61e12-a7a3-4c99-9ba3-f8d8641ff32a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<cassandra.cluster.ResultSet at 0x7f04a77fffd0>"
"<cassandra.cluster.ResultSet at 0x7feeefd3da00>"
]
},
"execution_count": 9,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -341,17 +354,17 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "691f1a07-cab4-42a1-baba-f17b561ddd3f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<cassandra.cluster.ResultSet at 0x7f047bf455a0>"
"<cassandra.cluster.ResultSet at 0x7fef2c64af70>"
]
},
"execution_count": 10,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@ -388,7 +401,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "37fe7653-dd64-4494-83e1-5702ec41725c",
"metadata": {},
"outputs": [
@ -404,18 +417,6 @@
"OPENAI_API_KEY = getpass(\"Please enter your OpenAI API Key: \")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "8065a42a-0ece-4453-b771-1dbef6d8a620",
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"openai.api_key = OPENAI_API_KEY"
]
},
{
"cell_type": "markdown",
"id": "847f2821-7f3f-4dcd-8e0c-49aa397e36f4",
@ -428,25 +429,34 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 11,
"id": "6bf89454-9a55-4202-ab6b-ea15b2048f3d",
"metadata": {},
"outputs": [],
"source": [
"client = openai.OpenAI(api_key=OPENAI_API_KEY)\n",
"embedding_model_name = \"text-embedding-ada-002\"\n",
"\n",
"result = openai.Embedding.create(\n",
"result = client.embeddings.create(\n",
" input=[\n",
" \"This is a sentence\",\n",
" \"A second sentence\"\n",
" ],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d33dc452-3449-4035-aea1-f5a64034b493",
"metadata": {},
"source": [
"_Note: the above is the syntax for OpenAI v1.0+. If using previous versions, the code to get the embeddings will look different._"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 12,
"id": "50a8e6f0-0aa7-4ffc-94e9-702b68566815",
"metadata": {},
"outputs": [
@ -455,7 +465,7 @@
"output_type": "stream",
"text": [
"len(result.data) = 2\n",
"result.data[1].embedding = [-0.01075850147753954, 0.0013505702372640371, 0.0036223...\n",
"result.data[1].embedding = [-0.0108176339417696, 0.0013546717818826437, 0.00362232...\n",
"len(result.data[1].embedding) = 1536\n"
]
}
@ -479,28 +489,17 @@
"id": "cf0f3d58-74c2-458b-903d-3d12e61b7846",
"metadata": {},
"source": [
"Get a JSON file containing our quotes. We already prepared this collection and put it into this repo for quick loading.\n",
"\n",
"_(Note: we adapted the following from a Kaggle dataset -- which we acknowledge -- and also added a few tags to each quote.)_"
"Get a dataset with the quotes. _(We adapted and augmented the data from [this Kaggle dataset](https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers), ready to use in this demo.)_"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 13,
"id": "94ff33fb-4b52-4c15-ab74-4af4fe973cbf",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import requests\n",
"\n",
"if IS_COLAB:\n",
" # load from Web request to (github) repo\n",
" json_url = \"https://raw.githubusercontent.com/openai/openai-cookbook/main/examples/vector_databases/cassandra_astradb/sources/philo_quotes.json\"\n",
" quote_dict = json.loads(requests.get(json_url).text) \n",
"else:\n",
" # load from local repo\n",
" quote_dict = json.load(open(\"./sources/philo_quotes.json\"))"
"philo_dataset = load_dataset(\"datastax/philosopher-quotes\")[\"train\"]"
]
},
{
@ -508,55 +507,65 @@
"id": "ab6b08b1-e3db-4c7c-9d7c-2ada7c8bc71d",
"metadata": {},
"source": [
"A quick inspection of the input data structure:"
"A quick inspection:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6ab84ccb-3363-4bdc-9484-0d68c25a58ff",
"execution_count": 14,
"id": "4fe11475-9fdd-4775-93b2-4267e73f372f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Adapted from this Kaggle dataset: https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers (License: CC BY-NC-SA 4.0)\n",
"\n",
"Quotes loaded: 450.\n",
"By author:\n",
" aristotle (50)\n",
" freud (50)\n",
" hegel (50)\n",
" kant (50)\n",
" nietzsche (50)\n",
" plato (50)\n",
" sartre (50)\n",
" schopenhauer (50)\n",
" spinoza (50)\n",
"\n",
"Some examples:\n",
" aristotle:\n",
" True happiness comes from gaining insight and grow ... (tags: knowledge)\n",
" The roots of education are bitter, but the fruit i ... (tags: education, knowledge)\n",
" freud:\n",
" We are what we are because we have been what we ha ... (tags: history)\n",
" From error to error one discovers the entire truth ... (tags: )\n"
"An example entry:\n",
"{'author': 'aristotle', 'quote': 'Love well, be loved and do something of value.', 'tags': 'love;ethics'}\n"
]
}
],
"source": [
"print(quote_dict[\"source\"])\n",
"\n",
"total_quotes = sum(len(quotes) for quotes in quote_dict[\"quotes\"].values())\n",
"print(f\"\\nQuotes loaded: {total_quotes}.\\nBy author:\")\n",
"print(\"\\n\".join(f\" {author} ({len(quotes)})\" for author, quotes in quote_dict[\"quotes\"].items()))\n",
"\n",
"print(\"\\nSome examples:\")\n",
"for author, quotes in list(quote_dict[\"quotes\"].items())[:2]:\n",
" print(f\" {author}:\")\n",
" for quote in quotes[:2]:\n",
" print(f\" {quote['body'][:50]} ... (tags: {', '.join(quote['tags'])})\")"
"print(\"An example entry:\")\n",
"print(philo_dataset[16])"
]
},
{
"cell_type": "markdown",
"id": "c1f4db57-cb5d-418d-b4c9-d27be28bae79",
"metadata": {},
"source": [
"Check the dataset size:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "1b522e05-87b4-461c-a61f-6dfb7ad2ab53",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total: 450 quotes. By author:\n",
" aristotle : 50 quotes\n",
" schopenhauer : 50 quotes\n",
" spinoza : 50 quotes\n",
" hegel : 50 quotes\n",
" freud : 50 quotes\n",
" nietzsche : 50 quotes\n",
" sartre : 50 quotes\n",
" plato : 50 quotes\n",
" kant : 50 quotes\n"
]
}
],
"source": [
"author_count = Counter(entry[\"author\"] for entry in philo_dataset)\n",
"print(f\"Total: {len(philo_dataset)} quotes. By author:\")\n",
"for author, count in author_count.most_common():\n",
" print(f\" {author:<20}: {count} quotes\")"
]
},
{
@ -568,16 +577,16 @@
"\n",
"You will compute the embeddings for the quotes and save them into the Vector Store, along with the text itself and the metadata planned for later use.\n",
"\n",
"To optimize speed and reduce the calls, you'll perform batched calls to the embedding OpenAI service, with one batch per author.\n",
"To optimize speed and reduce the calls, you'll perform batched calls to the embedding OpenAI service.\n",
"\n",
"The DB write is accomplished with a CQL statement. But since you'll run this particular insertion several times (albeit with different values), it's best to _prepare_ the statement and then just run it over and over.\n",
"\n",
"_(Note: for faster execution, the Cassandra drivers would let you do concurrent inserts, which we don't do here for a more straightforward demo code.)_"
"_(Note: for faster insertion, the Cassandra drivers would let you do concurrent inserts, which we don't do here for a more straightforward demo code.)_"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 16,
"id": "68e80e81-886b-45a4-be61-c33b8028bcfb",
"metadata": {},
"outputs": [
@ -585,41 +594,78 @@
"name": "stdout",
"output_type": "stream",
"text": [
"aristotle: ************************************************** Done (50 quotes inserted).\n",
"freud: ************************************************** Done (50 quotes inserted).\n",
"hegel: ************************************************** Done (50 quotes inserted).\n",
"kant: ************************************************** Done (50 quotes inserted).\n",
"nietzsche: ************************************************** Done (50 quotes inserted).\n",
"plato: ************************************************** Done (50 quotes inserted).\n",
"sartre: ************************************************** Done (50 quotes inserted).\n",
"schopenhauer: ************************************************** Done (50 quotes inserted).\n",
"spinoza: ************************************************** Done (50 quotes inserted).\n",
"Finished inserting.\n"
"Starting to store entries:\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ******************** done (20)\n",
"B ********** done (10)\n",
"\n",
"Finished storing entries.\n"
]
}
],
"source": [
"from uuid import uuid4\n",
"\n",
"prepared_insertion = session.prepare(\n",
" f\"INSERT INTO {keyspace}.philosophers_cql (quote_id, author, body, embedding_vector, tags) VALUES (?, ?, ?, ?, ?);\"\n",
")\n",
"\n",
"for philosopher, quotes in quote_dict[\"quotes\"].items():\n",
" print(f\"{philosopher}: \", end=\"\")\n",
" result = openai.Embedding.create(\n",
" input=[quote[\"body\"] for quote in quotes],\n",
" engine=embedding_model_name,\n",
"BATCH_SIZE = 20\n",
"\n",
"num_batches = ((len(philo_dataset) + BATCH_SIZE - 1) // BATCH_SIZE)\n",
"\n",
"quotes_list = philo_dataset[\"quote\"]\n",
"authors_list = philo_dataset[\"author\"]\n",
"tags_list = philo_dataset[\"tags\"]\n",
"\n",
"print(\"Starting to store entries:\")\n",
"for batch_i in range(num_batches):\n",
" b_start = batch_i * BATCH_SIZE\n",
" b_end = (batch_i + 1) * BATCH_SIZE\n",
" # compute the embedding vectors for this batch\n",
" b_emb_results = client.embeddings.create(\n",
" input=quotes_list[b_start : b_end],\n",
" model=embedding_model_name,\n",
" )\n",
" for quote_idx, (quote, q_data) in enumerate(zip(quotes, result.data)):\n",
" # prepare the rows for insertion\n",
" print(\"B \", end=\"\")\n",
" for entry_idx, emb_result in zip(range(b_start, b_end), b_emb_results.data):\n",
" if tags_list[entry_idx]:\n",
" tags = {\n",
" tag\n",
" for tag in tags_list[entry_idx].split(\";\")\n",
" }\n",
" else:\n",
" tags = set()\n",
" author = authors_list[entry_idx]\n",
" quote = quotes_list[entry_idx]\n",
" quote_id = uuid4() # a new random ID for each quote. In a production app you'll want to have better control...\n",
" session.execute(\n",
" prepared_insertion,\n",
" (quote_id, philosopher, quote[\"body\"], q_data.embedding, set(quote[\"tags\"])),\n",
" (quote_id, author, quote, emb_result.embedding, tags),\n",
" )\n",
" print(\"*\", end='')\n",
" print(f\" Done ({len(quotes)} quotes inserted).\")\n",
"print(\"Finished inserting.\")"
" print(\"*\", end=\"\")\n",
" print(f\" done ({len(b_emb_results.data)})\")\n",
"\n",
"print(\"\\nFinished storing entries.\")"
]
},
{
@ -642,15 +688,15 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 17,
"id": "d6fcf182-3ab7-4d28-9472-dce35cc38182",
"metadata": {},
"outputs": [],
"source": [
"def find_quote_and_author(query_quote, n, author=None, tags=None):\n",
" query_vector = openai.Embedding.create(\n",
" query_vector = client.embeddings.create(\n",
" input=[query_quote],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
" ).data[0].embedding\n",
" # depending on what conditions are passed, the WHERE clause in the statement may vary.\n",
" where_clauses = []\n",
@ -705,7 +751,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 18,
"id": "6722c2c0-3e54-4738-80ce-4d1149e95414",
"metadata": {},
"outputs": [
@ -720,7 +766,7 @@
" 'freud')]"
]
},
"execution_count": 19,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@ -739,7 +785,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 19,
"id": "da9c705f-5c12-42b3-a038-202f89a3c6da",
"metadata": {},
"outputs": [
@ -752,7 +798,7 @@
" 'nietzsche')]"
]
},
"execution_count": 20,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@ -771,7 +817,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 20,
"id": "abcfaec9-8f42-4789-a5ed-1073fa2932c2",
"metadata": {},
"outputs": [
@ -784,7 +830,7 @@
" 'nietzsche')]"
]
},
"execution_count": 21,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@ -807,12 +853,12 @@
"\n",
"To get a feeling on how this works, try the following query and play with the choice of quote and threshold to compare the results:\n",
"\n",
"_Note (for the mathematically inclined): this \"distance\" is not exactly the cosine difference between the vectors (i.e. the scalar product divided by the product of the norms of the two vectors), rather it is rescaled to fit the [0, 1] interval. Elsewhere (e.g. in the \"CassIO\" version of this example) you will see the actual bare cosine difference. As a result, if you compare the two notebooks, the numerical values and adequate thresholds will be slightly different._"
"_Note (for the mathematically inclined): this value is **a rescaling between zero and one** of the cosine difference between the vectors, i.e. of the scalar product divided by the product of the norms of the two vectors. In other words, this is 0 for opposite-facing vecors and +1 for parallel vectors. For other measures of similarity, check the [documentation](https://docs.datastax.com/en/astra-serverless/docs/vector-search/cql.html#_create_the_vector_schema_and_load_the_data_into_the_database) -- and keep in mind that the metric in the `SELECT` query should match the one used when creating the index earlier for meaningful, ordered results._"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 21,
"id": "b9b43721-a3b0-4ac4-b730-7a6aeec52e70",
"metadata": {},
"outputs": [
@ -820,15 +866,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"8 quotes within the threshold:\n",
"3 quotes within the threshold:\n",
" 0. [similarity=0.927] \"The assumption that animals are without rights, and the illusion that ...\"\n",
" 1. [similarity=0.922] \"Animals are in possession of themselves; their soul is in possession o...\"\n",
" 2. [similarity=0.920] \"At his best, man is the noblest of all animals; separated from law and...\"\n",
" 3. [similarity=0.916] \"Man is the only animal that must be encouraged to live....\"\n",
" 4. [similarity=0.916] \".... we are a part of nature as a whole, whose order we follow....\"\n",
" 5. [similarity=0.912] \"Every human endeavor, however singular it seems, involves the whole hu...\"\n",
" 6. [similarity=0.910] \"Because Christian morality leaves animals out of account, they are at ...\"\n",
" 7. [similarity=0.910] \"A dog has the soul of a philosopher....\"\n"
" 2. [similarity=0.920] \"At his best, man is the noblest of all animals; separated from law and...\"\n"
]
}
],
@ -837,11 +878,11 @@
"# quote = \"Be good.\"\n",
"# quote = \"This teapot is strange.\"\n",
"\n",
"similarity_threshold = 0.9\n",
"similarity_threshold = 0.92\n",
"\n",
"quote_vector = openai.Embedding.create(\n",
"quote_vector = client.embeddings.create(\n",
" input=[quote],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
").data[0].embedding\n",
"\n",
"# Once more: remember to prepare your statements in production for greater performance...\n",
@ -885,7 +926,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 22,
"id": "a6dd366d-665a-45fd-917b-b6b5312b0865",
"metadata": {},
"outputs": [],
@ -913,7 +954,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 23,
"id": "397e6ebd-b30e-413b-be63-81a62947a7b8",
"metadata": {},
"outputs": [],
@ -931,7 +972,7 @@
" print(f\"** - {q} ({a})\")\n",
" print(\"** end of logging\")\n",
" #\n",
" response = openai.ChatCompletion.create(\n",
" response = client.chat.completions.create(\n",
" model=completion_model_name,\n",
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
" temperature=0.7,\n",
@ -943,6 +984,14 @@
" return None"
]
},
{
"cell_type": "markdown",
"id": "94b0e20f-3e98-47d4-9963-2fbb3d2594de",
"metadata": {},
"source": [
"_Note: similar to the case of the embedding computation, the code for the Chat Completion API would be slightly different for OpenAI prior to v1.0._"
]
},
{
"cell_type": "markdown",
"id": "63bcc157-e5d4-43ef-8028-d4dcc8a72b9c",
@ -961,7 +1010,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 24,
"id": "806ba758-8988-410e-9eeb-b9c6799e6b25",
"metadata": {},
"outputs": [
@ -971,11 +1020,11 @@
"text": [
"** quotes found:\n",
"** - Happiness is the reward of virtue. (aristotle)\n",
"** - It is better for a city to be governed by a good man than by good laws. (aristotle)\n",
"** - Our moral virtues benefit mainly other people; intellectual virtues, on the other hand, benefit primarily ourselves; therefore the former make us universally popular, the latter unpopular. (schopenhauer)\n",
"** end of logging\n",
"\n",
"A new generated quote:\n",
"Politics is the battleground where virtue fights for the soul of society.\n"
"True politics is not the pursuit of power, but the cultivation of virtue for the betterment of all.\n"
]
}
],
@ -995,7 +1044,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 25,
"id": "7c2e2d4e-865f-4b2d-80cd-a695271415d9",
"metadata": {},
"outputs": [
@ -1009,7 +1058,7 @@
"** end of logging\n",
"\n",
"A new generated quote:\n",
"By disregarding animals in our moral framework, we deny their inherent value and perpetuate a barbaric disregard for life.\n"
"Do not judge the worth of a soul by its outward form, for within every animal lies an eternal essence that deserves our compassion and respect.\n"
]
}
],
@ -1051,17 +1100,17 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 26,
"id": "d849003c-fce8-4bd9-96ee-d826bb4301eb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<cassandra.cluster.ResultSet at 0x7f046240c880>"
"<cassandra.cluster.ResultSet at 0x7fef149d7940>"
]
},
"execution_count": 29,
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
@ -1102,51 +1151,76 @@
"\n",
"You could use the very same insertion code as you did earlier, because the differences are hidden \"behind the scenes\": the database will store the inserted rows differently according to the partioning scheme of this new table.\n",
"\n",
"However, by way of demonstration, you will take advantage of a handy facility offered by the Cassandra drivers to easily run several queries (in this case, `INSERT`s) concurrently. This is something that Astra DB / Cassandra supports very well and can lead to a significant speedup, with very little changes in the client code.\n",
"However, by way of demonstration, you will take advantage of a handy facility offered by the Cassandra drivers to easily run several queries (in this case, `INSERT`s) concurrently. This is something that Cassandra / Astra DB through CQL supports very well and can lead to a significant speedup, with very little changes in the client code.\n",
"\n",
"_(Note: one could additionally have cached the embeddings computed previously to save a few API tokens -- here, however, we wanted to keep the code easier to inspect.)_"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "424513a6-0a9d-4164-bf30-22d5b7e3bb25",
"execution_count": 27,
"id": "1576d4a9-4369-43dd-b17b-76940b92ea80",
"metadata": {},
"outputs": [],
"source": [
"from cassandra.concurrent import execute_concurrent_with_args"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "c63b18c0-a866-4ac1-b2a1-37863b6db663",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"aristotle: Done (50 quotes inserted).\n",
"freud: Done (50 quotes inserted).\n",
"hegel: Done (50 quotes inserted).\n",
"kant: Done (50 quotes inserted).\n",
"nietzsche: Done (50 quotes inserted).\n",
"plato: Done (50 quotes inserted).\n",
"sartre: Done (50 quotes inserted).\n",
"schopenhauer: Done (50 quotes inserted).\n",
"spinoza: Done (50 quotes inserted).\n",
"Finished inserting.\n"
"Starting to store entries:\n",
"[...50] [...50] [...50] [...50] [...50] [...50] [...50] [...50] [...50] \n",
"Finished storing entries.\n"
]
}
],
"source": [
"from cassandra.concurrent import execute_concurrent_with_args\n",
"\n",
"prepared_insertion = session.prepare(\n",
" f\"INSERT INTO {keyspace}.philosophers_cql_partitioned (quote_id, author, body, embedding_vector, tags) VALUES (?, ?, ?, ?, ?);\"\n",
")\n",
"\n",
"for philosopher, quotes in quote_dict[\"quotes\"].items():\n",
" print(f\"{philosopher}: \", end=\"\")\n",
" result = openai.Embedding.create(\n",
" input=[quote[\"body\"] for quote in quotes],\n",
" engine=embedding_model_name,\n",
"BATCH_SIZE = 50\n",
"\n",
"num_batches = ((len(philo_dataset) + BATCH_SIZE - 1) // BATCH_SIZE)\n",
"\n",
"quotes_list = philo_dataset[\"quote\"]\n",
"authors_list = philo_dataset[\"author\"]\n",
"tags_list = philo_dataset[\"tags\"]\n",
"\n",
"print(\"Starting to store entries:\")\n",
"for batch_i in range(num_batches):\n",
" print(\"[...\", end=\"\")\n",
" b_start = batch_i * BATCH_SIZE\n",
" b_end = (batch_i + 1) * BATCH_SIZE\n",
" # compute the embedding vectors for this batch\n",
" b_emb_results = client.embeddings.create(\n",
" input=quotes_list[b_start : b_end],\n",
" model=embedding_model_name,\n",
" )\n",
" # prepare this batch's entries for insertion\n",
" tuples_to_insert = []\n",
" for quote_idx, (quote, q_data) in enumerate(zip(quotes, result.data)):\n",
" quote_id = uuid4()\n",
" tuples_to_insert.append( (quote_id, philosopher, quote[\"body\"], q_data.embedding, set(quote[\"tags\"])) )\n",
" for entry_idx, emb_result in zip(range(b_start, b_end), b_emb_results.data):\n",
" if tags_list[entry_idx]:\n",
" tags = {\n",
" tag\n",
" for tag in tags_list[entry_idx].split(\";\")\n",
" }\n",
" else:\n",
" tags = set()\n",
" author = authors_list[entry_idx]\n",
" quote = quotes_list[entry_idx]\n",
" quote_id = uuid4() # a new random ID for each quote. In a production app you'll want to have better control...\n",
" # append a *tuple* to the list, and in the tuple the values are ordered to match \"?\" in the prepared statement:\n",
" tuples_to_insert.append((quote_id, author, quote, emb_result.embedding, tags))\n",
" # insert the batch at once through the driver's concurrent primitive\n",
" conc_results = execute_concurrent_with_args(\n",
" session,\n",
" prepared_insertion,\n",
@ -1156,9 +1230,9 @@
" if any([not success for success, _ in conc_results]):\n",
" print(\"Something failed during the insertions!\")\n",
" else:\n",
" print(f\"Done ({len(quotes)} quotes inserted).\")\n",
" print(f\"{len(b_emb_results.data)}] \", end=\"\")\n",
"\n",
"print(\"Finished inserting.\")"
"print(\"\\nFinished storing entries.\")"
]
},
{
@ -1171,17 +1245,18 @@
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 29,
"id": "a3217a90-c682-4c72-b834-7717ed13a3af",
"metadata": {},
"outputs": [],
"source": [
"def find_quote_and_author_p(query_quote, n, author=None, tags=None):\n",
" query_vector = openai.Embedding.create(\n",
" query_vector = client.embeddings.create(\n",
" input=[query_quote],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
" ).data[0].embedding\n",
" # depending on what conditions are passed, the WHERE clause in the statement may vary.\n",
" # Depending on what conditions are passed, the WHERE clause in the statement may vary.\n",
" # Construct it accordingly:\n",
" where_clauses = []\n",
" where_values = []\n",
" if author:\n",
@ -1220,7 +1295,7 @@
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 30,
"id": "d7343a7a-5a06-47c5-ad96-8b60b6948352",
"metadata": {},
"outputs": [
@ -1235,7 +1310,7 @@
" 'freud')]"
]
},
"execution_count": 33,
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
@ -1254,7 +1329,7 @@
},
{
"cell_type": "code",
"execution_count": 34,
"execution_count": 31,
"id": "d1abb677-5a8b-48c2-82c5-dbca94ef56f1",
"metadata": {},
"outputs": [
@ -1267,7 +1342,7 @@
" 'nietzsche')]"
]
},
"execution_count": 34,
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
@ -1310,17 +1385,17 @@
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 32,
"id": "17f2dd55-82df-4c3f-8aef-4ae0a03e7b79",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<cassandra.cluster.ResultSet at 0x7f0462264eb0>"
"<cassandra.cluster.ResultSet at 0x7fef149096a0>"
]
},
"execution_count": 36,
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
@ -1347,7 +1422,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.18"
}
},
"nbformat": 4,

@ -5,7 +5,7 @@
"id": "46589cdf-1ab6-4028-b07c-08b75acd98e5",
"metadata": {},
"source": [
"# Philosophy with Vector Embeddings, OpenAI and Cassandra / Astra DB\n",
"# Philosophy with Vector Embeddings, OpenAI and Cassandra / Astra DB through CQL\n",
"\n",
"### CassIO version"
]
@ -15,11 +15,11 @@
"id": "b3496d07-f473-4008-9133-1a54b818c8d3",
"metadata": {},
"source": [
"In this quickstart you will learn how to build a \"philosophy quote finder & generator\" using OpenAI's vector embeddings and DataStax Astra DB (_or a vector-capable Apache Cassandra® cluster, if you prefer_) as the vector store for data persistence.\n",
"In this quickstart you will learn how to build a \"philosophy quote finder & generator\" using OpenAI's vector embeddings and [Apache Cassandra®](https://cassandra.apache.org), or equivalently DataStax [Astra DB through CQL](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html), as the vector store for data persistence.\n",
"\n",
"The basic workflow of this notebook is outlined below. You will evaluate and store the vector embeddings for a number of quotes by famous philosophers, use them to build a powerful search engine and, after that, even a generator of new quotes!\n",
"\n",
"The notebook exemplifies some of the standard usage patterns of vector search -- while showing how easy is it to get started with the [Vector capabilities of Astra DB](https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html).\n",
"The notebook exemplifies some of the standard usage patterns of vector search -- while showing how easy is it to get started with the vector capabilities of [Cassandra](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html) / [Astra DB through CQL](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html).\n",
"\n",
"For a background on using vector search and text embeddings to build a question-answering system, please check out this excellent hands-on notebook: [Question answering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb).\n",
"\n",
@ -48,13 +48,13 @@
"\n",
"Each quote is made into an embedding vector with OpenAI's `Embedding`. These are saved in the Vector Store for later use in searching. Some metadata, including the author's name and a few other pre-computed tags, are stored alongside, to allow for search customization.\n",
"\n",
"![1_vector_indexing](https://user-images.githubusercontent.com/14221764/262085997-215c3854-a004-45f0-8afc-51b924b059a0.png)\n",
"![1_vector_indexing](https://user-images.githubusercontent.com/14221764/282440878-dc3ed680-7d0e-4b30-9a74-d2d66a7394f7.png)\n",
"\n",
"**Search**\n",
"\n",
"To find a quote similar to the provided search quote, the latter is made into an embedding vector on the fly, and this vector is used to query the store for similar vectors ... i.e. similar quotes that were previously indexed. The search can optionally be constrained by additional metadata (\"find me quotes by Spinoza similar to this one ...\").\n",
"\n",
"![2_vector_search](https://user-images.githubusercontent.com/14221764/262086005-5824d690-b8a4-4cbe-a6fd-a43fa785f8dc.png)\n",
"![2_vector_search](https://user-images.githubusercontent.com/14221764/282440908-683e3ee1-0bf1-46b3-8621-86c31fc7f9c9.png)\n",
"\n",
"The key point here is that \"quotes similar in content\" translates, in vector space, to vectors that are metrically close to each other: thus, vector similarity search effectively implements semantic similarity. _This is the key reason vector embeddings are so powerful._\n",
"\n",
@ -68,7 +68,7 @@
"\n",
"Given a suggestion (a topic or a tentative quote), the search step is performed, and the first returned results (quotes) are fed into an LLM prompt which asks the generative model to invent a new text along the lines of the passed examples _and_ the initial suggestion.\n",
"\n",
"![4_quote_generation](https://user-images.githubusercontent.com/14221764/262087157-784117ff-7c56-45bc-9c76-577d09aea19a.png)"
"![4_quote_generation](https://user-images.githubusercontent.com/14221764/282440927-d56f36eb-d611-4342-8026-7736edc6f5c9.png)"
]
},
{
@ -96,7 +96,24 @@
},
"outputs": [],
"source": [
"!pip install \"cassio>=0.1.3\" openai"
"!pip install --quiet \"cassio>=0.1.3\" \"openai>=1.0.0\" datasets"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f0ceccaf-a55a-4442-89c1-0904aa7cc42c",
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"from collections import Counter\n",
"\n",
"import cassio\n",
"from cassio.table import MetadataVectorCassandraTable\n",
"\n",
"import openai\n",
"from datasets import load_dataset"
]
},
{
@ -112,18 +129,18 @@
"id": "65a8edc1-4633-491b-9ed3-11163ec24e46",
"metadata": {},
"source": [
"In order to connect to you Astra DB, you need two things:\n",
"- An Astra Token, with role \"Database Administrator\" (it looks like `AstraCS:...`)\n",
"In order to connect to your Astra DB through CQL, you need two things:\n",
"- A Token, with role \"Database Administrator\" (it looks like `AstraCS:...`)\n",
"- the database ID (it looks like `3df2a5b6-...`)\n",
"\n",
" Make sure you have both strings, Both are obtained in the [Astra UI](https://astra.datastax.com) once you sign in. For more information, see here: [database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) and [Token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure).\n",
" Make sure you have both strings -- which are obtained in the [Astra UI](https://astra.datastax.com) once you sign in. For more information, see here: [database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) and [Token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure).\n",
"\n",
"If you want to _connect to a Cassandra cluster_ (which however must [support](https://cassio.org/more_info/#use-a-local-vector-capable-cassandra) Vectors), replace with `cassio.init(session=..., keyspace=...)` with suitable Session and keyspace name for your cluster."
"If you want to _connect to a Cassandra cluster_ (which however must [support](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html) Vector Search), replace with `cassio.init(session=..., keyspace=...)` with suitable Session and keyspace name for your cluster."
]
},
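As a hedged sketch of that alternative path, the snippet below hands CassIO a ready-made driver Session instead of the token / database ID pair. The contact point, credentials and keyspace name are placeholders for your own vector-capable cluster.

```python
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import cassio

# Placeholders: point these at your own vector-capable Cassandra cluster.
auth = PlainTextAuthProvider(username="cassandra", password="cassandra")
cluster = Cluster(["127.0.0.1"], auth_provider=auth)
session = cluster.connect()

# Pass the Session (and a keyspace of your choice) to CassIO instead of token/database_id.
cassio.init(session=session, keyspace="my_vector_keyspace")
```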
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "ca5a2f5d-3ff2-43d6-91c0-4a52c0ecd06a",
"metadata": {},
"outputs": [
@ -132,26 +149,22 @@
"output_type": "stream",
"text": [
"Please enter your Astra token ('AstraCS:...') ········\n",
"Please enter your database id ('3df2a5b6-...') 00000000-0000-0000-0000-000000000000\n"
"Please enter your database id ('3df2a5b6-...') 01234567-89ab-dcef-0123-456789abcdef\n"
]
}
],
"source": [
"from getpass import getpass\n",
"\n",
"astra_token = getpass(\"Please enter your Astra token ('AstraCS:...')\")\n",
"database_id = input(\"Please enter your database id ('3df2a5b6-...')\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "0fe028b0-3a40-4f12-b07c-8fd8bbee29b0",
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(token=astra_token, database_id=database_id)"
]
},
@ -162,7 +175,7 @@
"source": [
"### Creation of the DB connection\n",
"\n",
"This is how you create a connection to Astra DB:\n",
"This is how you create a connection to Astra DB through CQL:\n",
"\n",
"_(Incidentally, you could also use any Cassandra cluster (as long as it provides Vector capabilities), just by [changing the parameters](https://docs.datastax.com/en/developer/python-driver/latest/getting_started/#connecting-to-cassandra) to the following `Cluster` instantiation.)_"
]
@ -177,17 +190,6 @@
"You need a table which support vectors and is equipped with metadata. Call it \"philosophers_cassio\":"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8db837dc-cd49-41e2-8b5d-edb17ccc470e",
"metadata": {},
"outputs": [],
"source": [
"# create a vector store with cassIO\n",
"from cassio.table import MetadataVectorCassandraTable"
]
},
{
"cell_type": "code",
"execution_count": 5,
@ -232,18 +234,6 @@
"OPENAI_API_KEY = getpass(\"Please enter your OpenAI API Key: \")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "8065a42a-0ece-4453-b771-1dbef6d8a620",
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"\n",
"openai.api_key = OPENAI_API_KEY"
]
},
{
"cell_type": "markdown",
"id": "847f2821-7f3f-4dcd-8e0c-49aa397e36f4",
@ -256,25 +246,34 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "6bf89454-9a55-4202-ab6b-ea15b2048f3d",
"metadata": {},
"outputs": [],
"source": [
"client = openai.OpenAI(api_key=OPENAI_API_KEY)\n",
"embedding_model_name = \"text-embedding-ada-002\"\n",
"\n",
"result = openai.Embedding.create(\n",
"result = client.embeddings.create(\n",
" input=[\n",
" \"This is a sentence\",\n",
" \"A second sentence\"\n",
" ],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d054be36-0300-4c47-ba81-a1fe14c0165f",
"metadata": {},
"source": [
"_Note: the above is the syntax for OpenAI v1.0+. If using previous versions, the code to get the embeddings will look different._"
]
},
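By way of comparison, here is a hedged sketch of what the same embedding request looked like before and after the 1.0 client rewrite; the API key is a placeholder and the legacy form is shown only as comments, since it no longer works with `openai>=1.0.0`.

```python
import openai

MODEL = "text-embedding-ada-002"

# OpenAI Python library v1.0+ (what this notebook uses): an explicit client object.
client = openai.OpenAI(api_key="sk-...")  # placeholder key
new_style = client.embeddings.create(input=["This is a sentence"], model=MODEL)
vector = new_style.data[0].embedding

# Pre-1.0 (legacy) syntax, for comparison only:
# openai.api_key = "sk-..."
# old_style = openai.Embedding.create(input=["This is a sentence"], engine=MODEL)
# vector = old_style["data"][0]["embedding"]
```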
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "50a8e6f0-0aa7-4ffc-94e9-702b68566815",
"metadata": {},
"outputs": [
@ -283,7 +282,7 @@
"output_type": "stream",
"text": [
"len(result.data) = 2\n",
"result.data[1].embedding = [-0.011011358350515366, 0.0033741754014045, 0.004608382...\n",
"result.data[1].embedding = [-0.010821706615388393, 0.001387271680869162, 0.0035479...\n",
"len(result.data[1].embedding) = 1536\n"
]
}
@ -307,99 +306,83 @@
"id": "cf0f3d58-74c2-458b-903d-3d12e61b7846",
"metadata": {},
"source": [
"Get a JSON file containing our quotes. We already prepared this collection and put it into this repo for quick loading.\n",
"\n",
"_(Note: we adapted the following from a Kaggle dataset -- which we acknowledge -- and also added a few tags to each quote.)_"
"_Note: the above is the syntax for OpenAI v1.0+. If using previous versions, the code to get the embeddings will look different._"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "aa68f038-3240-4e22-b7c6-a5f214eda381",
"execution_count": 9,
"id": "1a486ae9-f8f5-40c5-8fe7-f1328fe026b8",
"metadata": {},
"outputs": [],
"source": [
"# Don't mind this cell, just autodetecting if we're on a Colab or not\n",
"try:\n",
" from google.colab import files\n",
" IS_COLAB = True\n",
"except ModuleNotFoundError:\n",
" IS_COLAB = False"
"philo_dataset = load_dataset(\"datastax/philosopher-quotes\")[\"train\"]"
]
},
{
"cell_type": "markdown",
"id": "ab6b08b1-e3db-4c7c-9d7c-2ada7c8bc71d",
"metadata": {},
"source": [
"A quick inspection:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "94ff33fb-4b52-4c15-ab74-4af4fe973cbf",
"execution_count": 10,
"id": "6ab84ccb-3363-4bdc-9484-0d68c25a58ff",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"An example entry:\n",
"{'author': 'aristotle', 'quote': 'Love well, be loved and do something of value.', 'tags': 'love;ethics'}\n"
]
}
],
"source": [
"import json\n",
"import requests\n",
"\n",
"if IS_COLAB:\n",
" # load from Web request to (github) repo\n",
" json_url = \"https://raw.githubusercontent.com/openai/openai-cookbook/main/examples/vector_databases/cassandra_astradb/sources/philo_quotes.json\"\n",
" quote_dict = json.loads(requests.get(json_url).text) \n",
"else:\n",
" # load from local repo\n",
" quote_dict = json.load(open(\"./sources/philo_quotes.json\"))"
"print(\"An example entry:\")\n",
"print(philo_dataset[16])"
]
},
{
"cell_type": "markdown",
"id": "ab6b08b1-e3db-4c7c-9d7c-2ada7c8bc71d",
"id": "c88e3bca-bded-4523-9550-14dce3a308d1",
"metadata": {},
"source": [
"A quick inspection of the input data structure:"
"Check the dataset size:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "6ab84ccb-3363-4bdc-9484-0d68c25a58ff",
"execution_count": 11,
"id": "c08a9b27-df9a-4bba-8da1-8a87aac3cde8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Adapted from this Kaggle dataset: https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers (License: CC BY-NC-SA 4.0)\n",
"\n",
"Quotes loaded: 450.\n",
"By author:\n",
" aristotle (50)\n",
" freud (50)\n",
" hegel (50)\n",
" kant (50)\n",
" nietzsche (50)\n",
" plato (50)\n",
" sartre (50)\n",
" schopenhauer (50)\n",
" spinoza (50)\n",
"\n",
"Some examples:\n",
" aristotle:\n",
" True happiness comes from gaining insight and grow ... (tags: knowledge)\n",
" The roots of education are bitter, but the fruit i ... (tags: education, knowledge)\n",
" freud:\n",
" We are what we are because we have been what we ha ... (tags: history)\n",
" From error to error one discovers the entire truth ... (tags: )\n"
"Total: 450 quotes. By author:\n",
" aristotle : 50 quotes\n",
" schopenhauer : 50 quotes\n",
" spinoza : 50 quotes\n",
" hegel : 50 quotes\n",
" freud : 50 quotes\n",
" nietzsche : 50 quotes\n",
" sartre : 50 quotes\n",
" plato : 50 quotes\n",
" kant : 50 quotes\n"
]
}
],
"source": [
"print(quote_dict[\"source\"])\n",
"\n",
"total_quotes = sum(len(quotes) for quotes in quote_dict[\"quotes\"].values())\n",
"print(f\"\\nQuotes loaded: {total_quotes}.\\nBy author:\")\n",
"print(\"\\n\".join(f\" {author} ({len(quotes)})\" for author, quotes in quote_dict[\"quotes\"].items()))\n",
"\n",
"print(\"\\nSome examples:\")\n",
"for author, quotes in list(quote_dict[\"quotes\"].items())[:2]:\n",
" print(f\" {author}:\")\n",
" for quote in quotes[:2]:\n",
" print(f\" {quote['body'][:50]} ... (tags: {', '.join(quote['tags'])})\")"
"author_count = Counter(entry[\"author\"] for entry in philo_dataset)\n",
"print(f\"Total: {len(philo_dataset)} quotes. By author:\")\n",
"for author, count in author_count.most_common():\n",
" print(f\" {author:<20}: {count} quotes\")"
]
},
{
@ -411,51 +394,76 @@
"\n",
"You will compute the embeddings for the quotes and save them into the Vector Store, along with the text itself and the metadata planned for later use. Note that the author is added as a metadata field along with the \"tags\" already found with the quote itself.\n",
"\n",
"To optimize speed and reduce the calls, you'll perform batched calls to the embedding OpenAI service, with one batch per author.\n",
"To optimize speed and reduce the calls, you'll perform batched calls to the embedding OpenAI service.\n",
"\n",
"_(Note: for faster execution, Cassandra and CassIO would let you do concurrent inserts, which we don't do here for a more straightforward demo code.)_"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "68e80e81-886b-45a4-be61-c33b8028bcfb",
"execution_count": 12,
"id": "4392f39c-5588-469b-99e0-940f6482a80b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"aristotle: ************************************************** Done (50 quotes inserted).\n",
"freud: ************************************************** Done (50 quotes inserted).\n",
"hegel: ************************************************** Done (50 quotes inserted).\n",
"kant: ************************************************** Done (50 quotes inserted).\n",
"nietzsche: ************************************************** Done (50 quotes inserted).\n",
"plato: ************************************************** Done (50 quotes inserted).\n",
"sartre: ************************************************** Done (50 quotes inserted).\n",
"schopenhauer: ************************************************** Done (50 quotes inserted).\n",
"spinoza: ************************************************** Done (50 quotes inserted).\n",
"Finished inserting.\n"
"Starting to store entries:\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"B ************************************************** done (50)\n",
"\n",
"Finished storing entries.\n"
]
}
],
"source": [
"for philosopher, quotes in quote_dict[\"quotes\"].items():\n",
" print(f\"{philosopher}: \", end=\"\")\n",
" result = openai.Embedding.create(\n",
" input=[quote[\"body\"] for quote in quotes],\n",
" engine=embedding_model_name,\n",
"BATCH_SIZE = 50\n",
"\n",
"num_batches = ((len(philo_dataset) + BATCH_SIZE - 1) // BATCH_SIZE)\n",
"\n",
"quotes_list = philo_dataset[\"quote\"]\n",
"authors_list = philo_dataset[\"author\"]\n",
"tags_list = philo_dataset[\"tags\"]\n",
"\n",
"print(\"Starting to store entries:\")\n",
"for batch_i in range(num_batches):\n",
" b_start = batch_i * BATCH_SIZE\n",
" b_end = (batch_i + 1) * BATCH_SIZE\n",
" # compute the embedding vectors for this batch\n",
" b_emb_results = client.embeddings.create(\n",
" input=quotes_list[b_start : b_end],\n",
" model=embedding_model_name,\n",
" )\n",
" for quote_idx, (quote, q_data) in enumerate(zip(quotes, result.data)):\n",
" # prepare the rows for insertion\n",
" print(\"B \", end=\"\")\n",
" for entry_idx, emb_result in zip(range(b_start, b_end), b_emb_results.data):\n",
" if tags_list[entry_idx]:\n",
" tags = {\n",
" tag\n",
" for tag in tags_list[entry_idx].split(\";\")\n",
" }\n",
" else:\n",
" tags = set()\n",
" author = authors_list[entry_idx]\n",
" quote = quotes_list[entry_idx]\n",
" v_table.put(\n",
" row_id=f\"q_{philosopher}_{quote_idx}\",\n",
" body_blob=quote[\"body\"],\n",
" vector=q_data.embedding,\n",
" metadata={**{tag: True for tag in quote[\"tags\"]}, **{\"author\": philosopher}},\n",
" row_id=f\"q_{author}_{entry_idx}\",\n",
" body_blob=quote,\n",
" vector=emb_result.embedding,\n",
" metadata={**{tag: True for tag in tags}, **{\"author\": author}},\n",
" )\n",
" print(\"*\", end='')\n",
" print(f\" Done ({len(quotes)} quotes inserted).\")\n",
"print(\"Finished inserting.\")"
" print(\"*\", end=\"\")\n",
" print(f\" done ({len(b_emb_results.data)})\")\n",
"\n",
"print(\"\\nFinished storing entries.\")"
]
},
{
@ -478,15 +486,15 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 13,
"id": "d6fcf182-3ab7-4d28-9472-dce35cc38182",
"metadata": {},
"outputs": [],
"source": [
"def find_quote_and_author(query_quote, n, author=None, tags=None):\n",
" query_vector = openai.Embedding.create(\n",
" query_vector = client.embeddings.create(\n",
" input=[query_quote],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
" ).data[0].embedding\n",
" metadata = {}\n",
" if author:\n",
@ -524,7 +532,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 14,
"id": "6722c2c0-3e54-4738-80ce-4d1149e95414",
"metadata": {},
"outputs": [
@ -533,13 +541,13 @@
"text/plain": [
"[('Life to the great majority is only a constant struggle for mere existence, with the certainty of losing it at last.',\n",
" 'schopenhauer'),\n",
" ('The meager satisfaction that man can extract from reality leaves him starving.',\n",
" 'freud'),\n",
" ('To live is to suffer, to survive is to find some meaning in the suffering.',\n",
" 'nietzsche')]"
" ('We give up leisure in order that we may have leisure, just as we go to war in order that we may have peace.',\n",
" 'aristotle'),\n",
" ('Perhaps the gods are kind to us, by making life more disagreeable as we grow older. In the end death seems less intolerable than the manifold burdens we carry',\n",
" 'freud')]"
]
},
"execution_count": 15,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@ -558,7 +566,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 15,
"id": "da9c705f-5c12-42b3-a038-202f89a3c6da",
"metadata": {},
"outputs": [
@ -571,7 +579,7 @@
" 'nietzsche')]"
]
},
"execution_count": 16,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@ -590,7 +598,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 16,
"id": "abcfaec9-8f42-4789-a5ed-1073fa2932c2",
"metadata": {},
"outputs": [
@ -603,7 +611,7 @@
" 'nietzsche')]"
]
},
"execution_count": 17,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@ -626,12 +634,12 @@
"\n",
"To get a feeling on how this works, try the following query and play with the choice of quote and threshold to compare the results:\n",
"\n",
"_Note (for the mathematically inclined): this \"distance\" is exactly the cosine difference between the vectors, i.e. the scalar product divided by the product of the norms of the two vectors. As such, it is a number ranging from -1 to +1. Elsewhere (e.g. in the \"CQL\" version of this example) you will see this quantity rescaled to fit the [0, 1] interval, which means the numerical values and adequate thresholds will be slightly different._"
"_Note (for the mathematically inclined): this \"distance\" is exactly the cosine similarity between the vectors, i.e. the scalar product divided by the product of the norms of the two vectors. As such, it is a number ranging from -1 to +1, where -1 is for exactly opposite-facing vectors and +1 for identically-oriented vectors. Elsewhere (e.g. in the \"CQL\" counterpart of this demo) you would get a rescaling of this quantity to fit the [0, 1] interval, which means the resulting numerical values and adequate thresholds there are transformed accordingly._"
]
},
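For the record, a small sketch of the two conventions mentioned above: plain cosine similarity in [-1, +1], and a linear rescaling to [0, 1] via (s + 1) / 2, which is assumed here to be the mapping used by the rescaled variant.

```python
import numpy as np

def cosine_similarity(u, v):
    # scalar product divided by the product of the norms: ranges from -1 to +1
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rescaled_similarity(u, v):
    # linear rescaling of the above to the [0, 1] interval
    return (cosine_similarity(u, v) + 1.0) / 2.0

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.5, 0.5, 0.0])
print(cosine_similarity(a, b))    # ~0.707
print(rescaled_similarity(a, b))  # ~0.854
```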
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 17,
"id": "b9b43721-a3b0-4ac4-b730-7a6aeec52e70",
"metadata": {},
"outputs": [
@ -639,15 +647,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"8 quotes within the threshold:\n",
" 0. [distance=0.858] \"The assumption that animals are without rights, and the illusion that ...\"\n",
" 1. [distance=0.849] \"Animals are in possession of themselves; their soul is in possession o...\"\n",
" 2. [distance=0.846] \"At his best, man is the noblest of all animals; separated from law and...\"\n",
" 3. [distance=0.840] \"Man is the only animal that must be encouraged to live....\"\n",
" 4. [distance=0.838] \".... we are a part of nature as a whole, whose order we follow....\"\n",
" 5. [distance=0.828] \"Because Christian morality leaves animals out of account, they are at ...\"\n",
" 6. [distance=0.827] \"Every human endeavor, however singular it seems, involves the whole hu...\"\n",
" 7. [distance=0.826] \"A dog has the soul of a philosopher....\"\n"
"3 quotes within the threshold:\n",
" 0. [distance=0.855] \"The assumption that animals are without rights, and the illusion that ...\"\n",
" 1. [distance=0.843] \"Animals are in possession of themselves; their soul is in possession o...\"\n",
" 2. [distance=0.841] \"At his best, man is the noblest of all animals; separated from law and...\"\n"
]
}
],
@ -656,11 +659,11 @@
"# quote = \"Be good.\"\n",
"# quote = \"This teapot is strange.\"\n",
"\n",
"metric_threshold = 0.8\n",
"metric_threshold = 0.84\n",
"\n",
"quote_vector = openai.Embedding.create(\n",
"quote_vector = client.embeddings.create(\n",
" input=[quote],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
").data[0].embedding\n",
"\n",
"results = list(v_table.metric_ann_search(\n",
@ -695,7 +698,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 18,
"id": "a6dd366d-665a-45fd-917b-b6b5312b0865",
"metadata": {},
"outputs": [],
@ -723,7 +726,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 19,
"id": "397e6ebd-b30e-413b-be63-81a62947a7b8",
"metadata": {},
"outputs": [],
@ -741,7 +744,7 @@
" print(f\"** - {q} ({a})\")\n",
" print(\"** end of logging\")\n",
" #\n",
" response = openai.ChatCompletion.create(\n",
" response = client.chat.completions.create(\n",
" model=completion_model_name,\n",
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
" temperature=0.7,\n",
@ -753,6 +756,14 @@
" return None"
]
},
{
"cell_type": "markdown",
"id": "55189828-31ad-4ca5-b375-dc80639678ab",
"metadata": {},
"source": [
"_Note: similar to the case of the embedding computation, the code for the Chat Completion API would be slightly different for OpenAI prior to v1.0._"
]
},
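As a quick reference, this is a hedged sketch of the two Chat Completion syntaxes; the API key and model name are placeholders, and the legacy form appears only as comments since it is not compatible with `openai>=1.0.0`.

```python
import openai

client = openai.OpenAI(api_key="sk-...")  # placeholder key

# v1.0+ syntax, as used in this notebook:
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Write a one-line philosophical quote."}],
    temperature=0.7,
    max_tokens=60,
)
print(response.choices[0].message.content)

# Legacy (pre-1.0) equivalent, for comparison only:
# openai.api_key = "sk-..."
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Write a one-line philosophical quote."}],
# )
# print(response["choices"][0]["message"]["content"])
```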
{
"cell_type": "markdown",
"id": "63bcc157-e5d4-43ef-8028-d4dcc8a72b9c",
@ -771,7 +782,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 20,
"id": "806ba758-8988-410e-9eeb-b9c6799e6b25",
"metadata": {},
"outputs": [
@ -781,11 +792,11 @@
"text": [
"** quotes found:\n",
"** - Happiness is the reward of virtue. (aristotle)\n",
"** - Enthusiasm is always connected with the senses, whatever be the object that excites it. The true strength of virtue is serenity of mind, combined with a deliberate and steadfast determination to execute her laws. That is the healthful condition of the moral life; on the other hand, enthusiasm, even when excited by representations of goodness, is a brilliant but feverish glow which leaves only exhaustion and languor behind. (kant)\n",
"** - Our moral virtues benefit mainly other people; intellectual virtues, on the other hand, benefit primarily ourselves; therefore the former make us universally popular, the latter unpopular. (schopenhauer)\n",
"** end of logging\n",
"\n",
"A new generated quote:\n",
"Politics without virtue is like a ship without a compass - destined to drift aimlessly, guided only by self-interest and corruption.\n"
"Virtuous politics purifies society, while corrupt politics breeds chaos and decay.\n"
]
}
],
@ -805,7 +816,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 21,
"id": "7c2e2d4e-865f-4b2d-80cd-a695271415d9",
"metadata": {},
"outputs": [
@ -819,7 +830,7 @@
"** end of logging\n",
"\n",
"A new generated quote:\n",
"By disregarding the worth of animals, we reveal our own moral ignorance. True morality lies in extending compassion to all living beings.\n"
"The true measure of humanity lies not in our dominion over animals, but in our ability to show compassion and respect for all living beings.\n"
]
}
],
@ -861,7 +872,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 22,
"id": "49cabc31-47e3-4326-8ef5-d95690317321",
"metadata": {},
"outputs": [],
@ -871,7 +882,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 23,
"id": "a614c333-4143-4ad6-abdf-7b3853fbf423",
"metadata": {},
"outputs": [],
@ -895,7 +906,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 24,
"id": "424513a6-0a9d-4164-bf30-22d5b7e3bb25",
"metadata": {},
"outputs": [
@ -903,39 +914,66 @@
"name": "stdout",
"output_type": "stream",
"text": [
"aristotle: Done (50 quotes inserted).\n",
"freud: Done (50 quotes inserted).\n",
"hegel: Done (50 quotes inserted).\n",
"kant: Done (50 quotes inserted).\n",
"nietzsche: Done (50 quotes inserted).\n",
"plato: Done (50 quotes inserted).\n",
"sartre: Done (50 quotes inserted).\n",
"schopenhauer: Done (50 quotes inserted).\n",
"spinoza: Done (50 quotes inserted).\n",
"Finished inserting.\n"
"Starting to store entries:\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"B done (50)\n",
"\n",
"Finished storing entries.\n"
]
}
],
"source": [
"for philosopher, quotes in quote_dict[\"quotes\"].items():\n",
" print(f\"{philosopher}: \", end=\"\")\n",
" result = openai.Embedding.create(\n",
" input=[quote[\"body\"] for quote in quotes],\n",
" engine=embedding_model_name,\n",
"BATCH_SIZE = 50\n",
"\n",
"num_batches = ((len(philo_dataset) + BATCH_SIZE - 1) // BATCH_SIZE)\n",
"\n",
"quotes_list = philo_dataset[\"quote\"]\n",
"authors_list = philo_dataset[\"author\"]\n",
"tags_list = philo_dataset[\"tags\"]\n",
"\n",
"print(\"Starting to store entries:\")\n",
"for batch_i in range(num_batches):\n",
" b_start = batch_i * BATCH_SIZE\n",
" b_end = (batch_i + 1) * BATCH_SIZE\n",
" # compute the embedding vectors for this batch\n",
" b_emb_results = client.embeddings.create(\n",
" input=quotes_list[b_start : b_end],\n",
" model=embedding_model_name,\n",
" )\n",
" # prepare the rows for insertion\n",
" futures = []\n",
" for quote_idx, (quote, q_data) in enumerate(zip(quotes, result.data)):\n",
" print(\"B \", end=\"\")\n",
" for entry_idx, emb_result in zip(range(b_start, b_end), b_emb_results.data):\n",
" if tags_list[entry_idx]:\n",
" tags = {\n",
" tag\n",
" for tag in tags_list[entry_idx].split(\";\")\n",
" }\n",
" else:\n",
" tags = set()\n",
" author = authors_list[entry_idx]\n",
" quote = quotes_list[entry_idx]\n",
" futures.append(v_table_partitioned.put_async(\n",
" partition_id=philosopher,\n",
" row_id=f\"q_{philosopher}_{quote_idx}\",\n",
" body_blob=quote[\"body\"],\n",
" vector=q_data.embedding,\n",
" metadata={tag: True for tag in quote[\"tags\"]},\n",
" partition_id=author,\n",
" row_id=f\"q_{author}_{entry_idx}\",\n",
" body_blob=quote,\n",
" vector=emb_result.embedding,\n",
" metadata={tag: True for tag in tags},\n",
" ))\n",
" #\n",
" for future in futures:\n",
" future.result()\n",
" print(f\"Done ({len(quotes)} quotes inserted).\")\n",
"print(\"Finished inserting.\")"
" #\n",
" print(f\" done ({len(b_emb_results.data)})\")\n",
"\n",
"print(\"\\nFinished storing entries.\")"
]
},
{
@ -948,15 +986,15 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 25,
"id": "a3217a90-c682-4c72-b834-7717ed13a3af",
"metadata": {},
"outputs": [],
"source": [
"def find_quote_and_author_p(query_quote, n, author=None, tags=None):\n",
" query_vector = openai.Embedding.create(\n",
" query_vector = client.embeddings.create(\n",
" input=[query_quote],\n",
" engine=embedding_model_name,\n",
" model=embedding_model_name,\n",
" ).data[0].embedding\n",
" metadata = {}\n",
" partition_id = None\n",
@ -988,7 +1026,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 26,
"id": "d7343a7a-5a06-47c5-ad96-8b60b6948352",
"metadata": {},
"outputs": [
@ -997,13 +1035,13 @@
"text/plain": [
"[('Life to the great majority is only a constant struggle for mere existence, with the certainty of losing it at last.',\n",
" 'schopenhauer'),\n",
" ('The meager satisfaction that man can extract from reality leaves him starving.',\n",
" 'freud'),\n",
" ('To live is to suffer, to survive is to find some meaning in the suffering.',\n",
" 'nietzsche')]"
" ('We give up leisure in order that we may have leisure, just as we go to war in order that we may have peace.',\n",
" 'aristotle'),\n",
" ('Perhaps the gods are kind to us, by making life more disagreeable as we grow older. In the end death seems less intolerable than the manifold burdens we carry',\n",
" 'freud')]"
]
},
"execution_count": 27,
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
@ -1022,7 +1060,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 27,
"id": "d1abb677-5a8b-48c2-82c5-dbca94ef56f1",
"metadata": {},
"outputs": [
@ -1035,7 +1073,7 @@
" 'nietzsche')]"
]
},
"execution_count": 28,
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
@ -1059,11 +1097,11 @@
"source": [
"## Conclusion\n",
"\n",
"Congratulations! You have learned how to use OpenAI for vector embeddings and Astra DB / Cassandra for storage in order to build a sophisticated philosophical search engine and quote generator.\n",
"Congratulations! You have learned how to use OpenAI for vector embeddings and Cassandra / Astra DB through CQL for storage in order to build a sophisticated philosophical search engine and quote generator.\n",
"\n",
"This example used [CassIO](https://cassio.org) to interface with the Vector Store - but this is not the only choice. Check the [README](https://github.com/openai/openai-cookbook/tree/main/examples/vector_databases/cassandra_astradb) for other options and integration with popular frameworks.\n",
"\n",
"To find out more on how Astra DB's Vector Search capabilities can be a key ingredient in your ML/GenAI applications, visit [Astra DB](https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html)'s web page on the topic."
"To find out more on how Astra DB's Vector Search capabilities can be a key ingredient in your ML/GenAI applications, visit [Astra DB](https://docs.datastax.com/en/astra/home/astra.html)'s web page on the topic."
]
},
{
@ -1078,17 +1116,17 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 28,
"id": "1eb0fd16-7e15-4742-8fc5-94d9eeeda620",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<cassandra.cluster.ResultSet at 0x7fc7c4287940>"
"<cassandra.cluster.ResultSet at 0x7fdcc42e8f10>"
]
},
"execution_count": 29,
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
@ -1119,7 +1157,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.18"
}
},
"nbformat": 4,

@ -1,17 +1,24 @@
# Cassandra / Astra DB
# RAG with Astra DB and Cassandra
The example notebooks in this directory show how to use the Vector
The demos in this directory show how to use the Vector
Search capabilities available today in **DataStax Astra DB**, a serverless
Database-as-a-Service built on Apache Cassandra®.
Moreover, support for vector-oriented workloads is making its way to the
next major release of Cassandra, so that the code examples in this folder
are designed to work equally well on it as soon as the vector capabilities
get released.
These example notebooks demonstrate how to implement the same standard
GenAI RAG workload with different libraries and APIs.
To use [Astra DB](https://docs.datastax.com/en/astra/home/astra.html)
with its HTTP API interface, head to the "AstraPy" notebook (`astrapy`
is the Python client to interact with the database).
If you prefer CQL access to the database (either with
[Astra DB](https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html)
or a Cassandra cluster
[supporting vector search](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html)),
check the "CQL" or "CassIO" notebooks -- they differ in the level of abstraction you get to work at.
If you want to know more about Astra DB and its Vector Search capabilities,
head over to [astra.datastax.com](https://docs.datastax.com/en/astra-serverless/docs/vector-search/overview.html) or try out one
of these hands-on notebooks straight away:
head over to [datastax.com](https://docs.datastax.com/en/astra/home/astra.html).
### Example notebooks
@ -19,10 +26,11 @@ The following examples show how easily OpenAI and DataStax Astra DB can
work together to power vector-based AI applications. You can run them either
with your local Jupyter engine or as Colab notebooks:
| Use case | Framework | Notebook | Google Colab |
| -------- | --------- | -------- | ------------ |
| Search/generate quotes | CassIO | [Notebook](./Philosophical_Quotes_cassIO.ipynb) | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_cassIO.ipynb) |
| Search/generate quotes | Plain Cassandra language | [Notebook](./Philosophical_Quotes_CQL.ipynb) | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_CQL.ipynb) |
| Use case | Target database | Framework | Notebook | Google Colab |
| -------- | --------------- | --------- | -------- | ------------ |
| Search/generate quotes | Astra DB | AstraPy | [Notebook](./Philosophical_Quotes_AstraPy.ipynb) | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_AstraPy.ipynb) |
| Search/generate quotes | Cassandra / Astra DB through CQL | CassIO | [Notebook](./Philosophical_Quotes_cassIO.ipynb) | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_cassIO.ipynb) |
| Search/generate quotes | Cassandra / Astra DB through CQL | Plain Cassandra language | [Notebook](./Philosophical_Quotes_CQL.ipynb) | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/openai-cookbook/blob/main/examples/vector_databases/cassandra_astradb/Philosophical_Quotes_CQL.ipynb) |
### Vector similarity, visual representation
