"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration\n",
"\n",
"_Note: in addition to access to the database, an OpenAI API Key is required to run the full example._"
]
},
@@ -51,7 +49,7 @@
"metadata": {},
"outputs": [],
"source": [
-"pip install --upgrade langchain-astradb"
+"pip install -qU langchain-astradb"
]
},
{
@@ -59,7 +57,7 @@
"id": "2453d83a-bc8f-41e1-a692-befe4dd90156",
"metadata": {},
"source": [
-"_**Note.** the following are all packages required to run the full demo on this page. Depending on your LangChain setup, some of them may need to be installed:_"
+"_Make sure you have installed the packages required to run all of this demo:_"
"There are two ways to create an Astra DB vector store, which differ in how the embeddings are computed.\n",
"\n",
"*Explicit embeddings*. You can separately instantiate a `langchain_core.embeddings.Embeddings` class and pass it to the `AstraDBVectorStore` constructor, just like with most other LangChain vector stores.\n",
"\n",
"*Integrated embedding computation*. Alternatively, you can use the [Vectorize](https://www.datastax.com/blog/simplifying-vector-embedding-generation-with-astra-vectorize) feature of Astra DB and simply specify the name of a supported embedding model when creating the store. The embedding computations are entirely handled within the database. (To proceed with this method, you must have enabled the desired embedding integration for your database, as described [in the docs](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html).)\n",
"\n",
"**Please choose one method and run the corresponding cells only.**"
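The *explicit embeddings* option above relies on the `Embeddings` interface from `langchain_core` (`embed_documents` for batches, `embed_query` for single texts). The toy class below is a minimal, self-contained sketch of that interface's shape, so the store-construction code has something concrete to pass in; the hashing scheme, the `dim` parameter, and the class name are illustrative assumptions, not a real embedding model.

```python
import hashlib


class ToyEmbeddings:
    """Deterministic stand-in mimicking the shape of a
    langchain_core.embeddings.Embeddings implementation.
    Illustration only -- not a real embedding model."""

    def __init__(self, dim: int = 8):
        self.dim = dim

    def embed_query(self, text: str) -> list[float]:
        # Hash the text and map the first `dim` bytes to floats in [0, 1).
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        return [b / 256 for b in digest[: self.dim]]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        # Batch variant: one vector per input text.
        return [self.embed_query(t) for t in texts]


emb = ToyEmbeddings()
vectors = emb.embed_documents(["hello", "world"])
```

With the *integrated embedding computation* option, no such object is needed at all: the database computes vectors server-side, so the store is created from the name of a Vectorize-enabled model instead of an `Embeddings` instance.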