docs: update Google Cloud database integration docs (#18711)

**Description:** update Google Cloud database integration docs
 **Issue:** NA
**Dependencies:** NA
Averi Kitsch 4 months ago committed by GitHub
parent 010a234f1e
commit 8accee57a9

@ -10,7 +10,11 @@
"\n",
"> [AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. AlloyDB is 100% compatible with PostgreSQL. Extend your database application to build AI-powered experiences leveraging AlloyDB's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `AlloyDB for PostgreSQL` to load Documents with the `AlloyDBLoader` class."
"This notebook goes over how to use `AlloyDB for PostgreSQL` to load Documents with the `AlloyDBLoader` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-alloydb-pg-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/document_loader.ipynb)"
]
},
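{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, hedged preview of the flow covered below, a minimal load with `AlloyDBLoader` might look like the following sketch. The method names and the `cluster` argument are assumptions that mirror the Cloud SQL for PostgreSQL integration; check the package documentation before relying on them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch (assumed API, hypothetical values) -- see the setup sections below.\n",
"from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBLoader\n",
"\n",
"engine = await AlloyDBEngine.afrom_instance(\n",
" project_id=\"my-project-id\",\n",
" region=\"us-central1\",\n",
" cluster=\"my-cluster\",\n",
" instance=\"my-instance\",\n",
" database=\"my-database\",\n",
")\n",
"\n",
"loader = await AlloyDBLoader.create(engine, table_name=\"my_table\")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},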
{
@ -24,7 +28,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the AlloyDB Admin API.](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Enable the AlloyDB API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Create a AlloyDB cluster and instance.](https://cloud.google.com/alloydb/docs/cluster-create)\n",
" * [Create a AlloyDB database.](https://cloud.google.com/alloydb/docs/quickstart/create-and-connect)\n",
" * [Add a User to the database.](https://cloud.google.com/alloydb/docs/database-users/about)"
@ -139,30 +143,6 @@
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "rEWWNoNnKOgq",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-alloydb-pg` package requires that you [enable the AlloyDB Admin API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable AlloyDB Admin API\n",
"!gcloud services enable alloydb.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",

@ -10,6 +10,8 @@
"\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/document_loader.ipynb)"
]
},
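{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before the detailed walkthrough, here is a hedged sketch of the save/load round trip. The constructor arguments (`instance_id`, `table_id`) and the values are assumptions for illustration; see the sections below for the supported options."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch of a save/load round trip (assumed constructor arguments, hypothetical values).\n",
"from langchain_core.documents import Document\n",
"from langchain_google_bigtable import BigtableLoader, BigtableSaver\n",
"\n",
"INSTANCE_ID = \"my-instance\"\n",
"TABLE_ID = \"my-table\"\n",
"\n",
"saver = BigtableSaver(instance_id=INSTANCE_ID, table_id=TABLE_ID)\n",
"saver.add_documents([Document(page_content=\"Hello, World!\")])\n",
"\n",
"loader = BigtableLoader(instance_id=INSTANCE_ID, table_id=TABLE_ID)\n",
"docs = loader.load()\n",
"print(docs)"
]
},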
@ -22,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Bigtable API](https://console.cloud.google.com/flows/enableapi?apiid=bigtable.googleapis.com)\n",
"* [Create a Bigtable instance](https://cloud.google.com/bigtable/docs/creating-instance)\n",
"* [Create a Bigtable table](https://cloud.google.com/bigtable/docs/managing-tables)\n",
"* [Create Bigtable access credentials](https://developers.google.com/workspace/guides/create-credentials)\n",
@ -461,7 +464,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.5"
}
},
"nbformat": 4,

@ -4,11 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Cloud SQL for SQL Server\n",
"# Google Cloud SQL for SQL server\n",
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Cloud SQL for SQL Server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mssql-python/blob/main/docs/document_loader.ipynb)"
]
@ -22,9 +24,10 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Create a Cloud SQL for SQL Server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)\n",
"* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mssql/create-manage-databases)\n",
"* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/sqlserver/add-manage-iam-users#creating-a-database-user) (Optional)\n",
"* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
"* [Create a Cloud SQL for SQL server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)\n",
"* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/sqlserver/create-manage-databases)\n",
"* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/sqlserver/create-manage-users) (Optional)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
@ -170,7 +173,7 @@
"\n",
"Before saving or loading documents from MSSQL table, we need first configures a connection pool to Cloud SQL database. The `MSSQLEngine` configures a [SQLAlchemy connection pool](https://docs.sqlalchemy.org/en/20/core/pooling.html#module-sqlalchemy.pool) to your Cloud SQL database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide only 6 things:\n",
"To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
@ -205,6 +208,7 @@
"### Initialize a table\n",
"\n",
"Initialize a table of default schema via `MSSQLEngine.init_document_table(<table_name>)`. Table Columns:\n",
"\n",
"- page_content (type: text)\n",
"- langchain_metadata (type: JSON)\n",
"\n",
@ -227,6 +231,7 @@
"### Save documents\n",
"\n",
"Save langchain documents with `MSSQLDocumentSaver.add_documents(<documents>)`. To initialize `MSSQLDocumentSaver` class you need to provide 2 things:\n",
"\n",
"1. `engine` - An instance of a `MSSQLEngine` engine.\n",
"2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents."
]
@ -270,6 +275,7 @@
"metadata": {},
"source": [
"Load langchain documents with `MSSQLLoader.load()` or `MSSQLLoader.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `MSSQLDocumentSaver` class you need to provide:\n",
"\n",
"1. `engine` - An instance of a `MSSQLEngine` engine.\n",
"2. `table_name` - The name of the table within the Cloud SQL database to store langchain documents."
]
@ -341,6 +347,7 @@
"For table with default schema (page_content, langchain_metadata), the deletion criteria is:\n",
"\n",
"A `row` should be deleted if there exists a `document` in the list, such that\n",
"\n",
"- `document.page_content` equals `row[page_content]`\n",
"- `document.metadata` equals `row[langchain_metadata]`"
]
@ -448,6 +455,7 @@
"metadata": {},
"source": [
"We can specify the content and metadata we want to load by setting the `content_columns` and `metadata_columns` when initializing the `MSSQLLoader`.\n",
"\n",
"1. `content_columns`: The columns to write into the `page_content` of the document.\n",
"2. `metadata_columns`: The columns to write into the `metadata` of the document.\n",
"\n",
@ -486,12 +494,14 @@
"metadata": {},
"source": [
"In order to save langchain document into table with customized metadata fields. We need first create such a table via `MSSQLEngine.init_document_table()`, and specify the list of `metadata_columns` we want it to have. In this example, the created table will have table columns:\n",
"\n",
"- description (type: text): for storing fruit description.\n",
"- fruit_name (type text): for storing fruit name.\n",
"- organic (type tinyint(1)): to tell if the fruit is organic.\n",
"- other_metadata (type: JSON): for storing other metadata information of the fruit.\n",
"\n",
"We can use the following parameters with `MSSQLEngine.init_document_table()` to create the table:\n",
"\n",
"1. `table_name`: The name of the table within the Cloud SQL database to store langchain documents.\n",
"2. `metadata_columns`: A list of `sqlalchemy.Column` indicating the list of metadata columns we need.\n",
"3. `content_column`: The name of column to store `page_content` of langchain document. Default: `page_content`.\n",
@ -531,6 +541,7 @@
"metadata": {},
"source": [
"Save documents with `MSSQLDocumentSaver.add_documents(<documents>)`. As you can see in this example, \n",
"\n",
"- `document.page_content` will be saved into `description` column.\n",
"- `document.metadata.fruit_name` will be saved into `fruit_name` column.\n",
"- `document.metadata.organic` will be saved into `organic` column.\n",
@ -584,6 +595,7 @@
"We can also delete documents from table with customized metadata columns via `MSSQLDocumentSaver.delete(<documents>)`. The deletion criteria is:\n",
"\n",
"A `row` should be deleted if there exists a `document` in the list, such that\n",
"\n",
"- `document.page_content` equals `row[page_content]`\n",
"- For every metadata field `k` in `document.metadata`\n",
" - `document.metadata[k]` equals `row[k]` or `document.metadata[k]` equals `row[langchain_metadata][k]`\n",

@ -6,10 +6,12 @@
"source": [
"# Google Cloud SQL for MySQL\n",
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgresql), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mysql-python/blob/main/docs/document_loader.ipynb)"
]
},
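{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged preview of what the sections below walk through, a minimal save/load round trip might look like this sketch. The values are hypothetical and the `init_document_table` call is assumed to mirror the SQL Server integration; consult the package documentation before relying on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch (hypothetical values) -- see the setup sections below for details.\n",
"from langchain_core.documents import Document\n",
"from langchain_google_cloud_sql_mysql import MySQLDocumentSaver, MySQLEngine, MySQLLoader\n",
"\n",
"engine = MySQLEngine.from_instance(\n",
" project_id=\"my-project-id\",\n",
" region=\"us-central1\",\n",
" instance=\"my-instance\",\n",
" database=\"my-database\",\n",
")\n",
"engine.init_document_table(\"my_table\")  # assumed helper, mirroring MSSQLEngine\n",
"\n",
"saver = MySQLDocumentSaver(engine=engine, table_name=\"my_table\")\n",
"saver.add_documents([Document(page_content=\"Hello, World!\")])\n",
"\n",
"loader = MySQLLoader(engine=engine, table_name=\"my_table\")\n",
"docs = loader.load()\n",
"print(docs)"
]
},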
@ -22,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
"* [Create a Cloud SQL for MySQL instance](https://cloud.google.com/sql/docs/mysql/create-instance)\n",
"* [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
"* [Add an IAM database user to the database](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users#creating-a-database-user) (Optional)\n",
@ -137,24 +140,6 @@
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-cloud-sql-mysql` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Cloud SQL Admin API\n",
"!gcloud services enable sqladmin.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},

@ -1,382 +1,362 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "E_RJy7C1bpCT"
},
"source": [
"# Google Cloud SQL for PostgreSQL\n",
"\n",
"> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud Platform. Extend your database application to build AI-powered experiences leveraging Cloud SQL for PostgreSQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Cloud SQL for PostgreSQL` to load Documents with the `PostgreSQLLoader` class."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xjcxaw6--Xyy"
},
"source": [
"## Before you begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
" * [Create a Cloud SQL for PostgreSQL instance.](https://cloud.google.com/sql/docs/postgres/create-instance)\n",
" * [Create a Cloud SQL for PostgreSQL database.](https://cloud.google.com/sql/docs/postgres/create-manage-databases)\n",
" * [Add a User to the database.](https://cloud.google.com/sql/docs/postgres/create-manage-users)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IR54BmgvdHT_"
},
"source": [
"### 🦜🔗 Library Installation\n",
"Install the integration library, `langchain-google-cloud-sql-pg`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "E_RJy7C1bpCT"
},
"source": [
"# Google Cloud SQL for PostgreSQL\n",
"\n",
"> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud Platform. Extend your database application to build AI-powered experiences leveraging Cloud SQL for PostgreSQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Cloud SQL for PostgreSQL` to load Documents with the `PostgresLoader` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-pg-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-pg-python/blob/main/docs/document_loader.ipynb)"
]
},
"id": "0ZITIDE160OD",
"outputId": "90e0636e-ff34-4e1e-ad37-d2a6db4a317e"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-cloud-sql-pg"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "v40bB_GMcr9f"
},
"source": [
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6o0iGVIdDD6K"
},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cTXTbj4UltKf"
},
"source": [
"### 🔐 Authentication\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Uj02bMRAc9_c"
},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
{
"cell_type": "markdown",
"metadata": {
"id": "xjcxaw6--Xyy"
},
"source": [
"## Before you begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
" * [Create a Cloud SQL for PostgreSQL instance.](https://cloud.google.com/sql/docs/postgres/create-instance)\n",
" * [Create a Cloud SQL for PostgreSQL database.](https://cloud.google.com/sql/docs/postgres/create-manage-databases)\n",
" * [Add a User to the database.](https://cloud.google.com/sql/docs/postgres/create-manage-users)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IR54BmgvdHT_"
},
"source": [
"### 🦜🔗 Library Installation\n",
"Install the integration library, `langchain_google_cloud_sql_pg`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "0ZITIDE160OD",
"outputId": "90e0636e-ff34-4e1e-ad37-d2a6db4a317e"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain_google_cloud_sql_pg"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "v40bB_GMcr9f"
},
"source": [
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6o0iGVIdDD6K"
},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cTXTbj4UltKf"
},
"source": [
"### 🔐 Authentication\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Uj02bMRAc9_c"
},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "wnp1R1PYc9_c",
"outputId": "6502c721-a2fd-451f-b946-9f7b850d5966"
},
"outputs": [],
"source": [
"# @title Project { display-mode: \"form\" }\n",
"PROJECT_ID = \"gcp_project_id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
"metadata": {
"id": "f8f2830ee9ca1e01"
},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"id": "OMvzMWRrR6n7",
"metadata": {
"id": "OMvzMWRrR6n7"
},
"source": [
"### Set Cloud SQL database values\n",
"Find your database variables, in the [Cloud SQL Instances page](https://console.cloud.google.com/sql/instances)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "irl7eMFnSPZr",
"metadata": {
"id": "irl7eMFnSPZr"
},
"outputs": [],
"source": [
"# @title Set Your Values Here { display-mode: \"form\" }\n",
"REGION = \"us-central1\" # @param {type: \"string\"}\n",
"INSTANCE = \"my-primary\" # @param {type: \"string\"}\n",
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
"TABLE_NAME = \"vector_store\" # @param {type: \"string\"}"
]
},
{
"cell_type": "markdown",
"id": "QuQigs4UoFQ2",
"metadata": {
"id": "QuQigs4UoFQ2"
},
"source": [
"### Cloud SQL Engine\n",
"\n",
"One of the requirements and arguments to establish PostgreSQL as a document loader is a `PostgresEngine` object. The `PostgresEngine` configures a connection pool to your Cloud SQL for PostgreSQL database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `PostgresEngine` using `PostgresEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
"\n",
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/postgres/iam-authentication) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.\n",
"\n",
"Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/postgres/users) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `PostgresEngine.from_instance()`:\n",
"\n",
"* `user` : Database user to use for built-in database authentication and login\n",
"* `password` : Database password to use for built-in database authentication and login.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: This tutorial demonstrates the async interface. All async methods have corresponding sync methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresEngine\n",
"\n",
"engine = await PostgresEngine.afrom_instance(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" instance=INSTANCE,\n",
" database=DATABASE,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e1tl0aNx7SWy"
},
"source": [
"### Create PostgresLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "z-AZyzAQ7bsf"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresLoader\n",
"\n",
"# Creating a basic PostgreSQL object\n",
"loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PeOMpftjc9_e"
},
"source": [
"### Load Documents via default table\n",
"The loader returns a list of Documents from the table using the first column as page_content and all other columns as metadata. The default table will have the first column as\n",
"page_content and the second column as metadata (JSON). Each row becomes a document. Please note that if you want your documents to have ids you will need to add them in."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cwvi_O5Wc9_e"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresLoader\n",
"\n",
"# Creating a basic PostgresLoader object\n",
"loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)\n",
"\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kSkL9l1Hc9_e"
},
"source": [
"### Load documents via custom table/metadata or custom page content columns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = await PostgresLoader.create(\n",
" engine,\n",
" table_name=TABLE_NAME,\n",
" content_columns=[\"product_name\"], # Optional\n",
" metadata_columns=[\"id\"], # Optional\n",
")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5R6h0_Cvc9_f"
},
"source": [
"### Set page content format\n",
"The loader returns a list of Documents, with one document per row, with page content in specified string format, i.e. text (space separated concatenation), JSON, YAML, CSV, etc. JSON and YAML formats include headers, while text and CSV do not include field headers.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NGNdS7cqc9_f"
},
"outputs": [],
"source": [
"loader = await PostgresLoader.create(\n",
" engine,\n",
" table_name=\"products\",\n",
" content_columns=[\"product_name\", \"description\"],\n",
" format=\"YAML\",\n",
")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
}
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
"provenance": [],
"toc_visible": true
},
"id": "wnp1R1PYc9_c",
"outputId": "6502c721-a2fd-451f-b946-9f7b850d5966"
},
"outputs": [],
"source": [
"# @title Project { display-mode: \"form\" }\n",
"PROJECT_ID = \"gcp_project_id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"! gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "rEWWNoNnKOgq",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain_google_cloud_sql_pg` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable Cloud SQL Admin API\n",
"!gcloud services enable sqladmin.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
"metadata": {
"id": "f8f2830ee9ca1e01"
},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"id": "OMvzMWRrR6n7",
"metadata": {
"id": "OMvzMWRrR6n7"
},
"source": [
"### Set Cloud SQL database values\n",
"Find your database variables, in the [Cloud SQL Instances page](https://console.cloud.google.com/sql/instances)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "irl7eMFnSPZr",
"metadata": {
"id": "irl7eMFnSPZr"
},
"outputs": [],
"source": [
"# @title Set Your Values Here { display-mode: \"form\" }\n",
"REGION = \"us-central1\" # @param {type: \"string\"}\n",
"INSTANCE = \"my-primary\" # @param {type: \"string\"}\n",
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
"TABLE_NAME = \"vector_store\" # @param {type: \"string\"}"
]
},
{
"cell_type": "markdown",
"id": "QuQigs4UoFQ2",
"metadata": {
"id": "QuQigs4UoFQ2"
},
"source": [
"### Cloud SQL Engine\n",
"\n",
"One of the requirements and arguments to establish PostgreSQL as a document loader is a `PostgresEngine` object. The `PostgresEngine` configures a connection pool to your Cloud SQL for PostgreSQL database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `PostgresEngine` using `PostgresEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
"\n",
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/postgres/iam-authentication) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the environment.\n",
"\n",
"Optionally, [built-in database authentication](https://cloud.google.com/sql/docs/postgres/users) using a username and password to access the Cloud SQL database can also be used. Just provide the optional `user` and `password` arguments to `PostgresEngine.from_instance()`:\n",
"\n",
"* `user` : Database user to use for built-in database authentication and login\n",
"* `password` : Database password to use for built-in database authentication and login.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: This tutorial demonstrates the async interface. All async methods have corresponding sync methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresEngine\n",
"\n",
"engine = await PostgresEngine.afrom_instance(\n",
" project_id=PROJECT_ID,\n",
" region=REGION,\n",
" instance=INSTANCE,\n",
" database=DATABASE,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e1tl0aNx7SWy"
},
"source": [
"### Create PostgresLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "z-AZyzAQ7bsf"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresLoader\n",
"\n",
"# Creating a basic PostgreSQL object\n",
"loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PeOMpftjc9_e"
},
"source": [
"### Load Documents via default table\n",
"The loader returns a list of Documents from the table using the first column as page_content and all other columns as metadata. The default table will have the first column as\n",
"page_content and the second column as metadata (JSON). Each row becomes a document. Please note that if you want your documents to have ids you will need to add them in."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cwvi_O5Wc9_e"
},
"outputs": [],
"source": [
"from langchain_google_cloud_sql_pg import PostgresLoader\n",
"\n",
"# Creating a basic PostgresLoader object\n",
"loader = await PostgresLoader.create(engine, table_name=TABLE_NAME)\n",
"\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kSkL9l1Hc9_e"
},
"source": [
"### Load documents via custom table/metadata or custom page content columns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = await PostgresLoader.create(\n",
" engine,\n",
" table_name=TABLE_NAME,\n",
" content_columns=[\"product_name\"], # Optional\n",
" metadata_columns=[\"id\"], # Optional\n",
")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5R6h0_Cvc9_f"
},
"source": [
"### Set page content format\n",
"The loader returns a list of Documents, with one document per row, with page content in specified string format, i.e. text (space separated concatenation), JSON, YAML, CSV, etc. JSON and YAML formats include headers, while text and CSV do not include field headers.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NGNdS7cqc9_f"
},
"outputs": [],
"source": [
"loader = await PostgresLoader.create(\n",
" engine,\n",
" table_name=\"products\",\n",
" content_columns=[\"product_name\", \"description\"],\n",
" format=\"YAML\",\n",
")\n",
"docs = await loader.aload()\n",
"print(docs)"
]
}
],
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
"nbformat": 4,
"nbformat_minor": 4
}

@ -1,411 +1,336 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore in Datastore mode\n",
"\n",
"> [Firestore in Datastore mode](https://cloud.google.com/datastore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/document_loader.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Create a Datastore database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please specify a source for demo purpose.\n",
"SOURCE = \"test\" # @param {type:\"Query\"|\"CollectionGroup\"|\"DocumentReference\"|\"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-datastore` package requires that you [enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Datastore API\n",
"!gcloud services enable datastore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save documents\n",
"\n",
"`DatastoreSaver` can store Documents into Datastore. By default it will try to extract the Document reference from the metadata\n",
"\n",
"Save langchain documents with `DatastoreSaver.upsert_documents(<documents>)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"from langchain_google_datastore import DatastoreSaver\n",
"\n",
"data = [Document(page_content=\"Hello, World!\")]\n",
"saver = DatastoreSaver()\n",
"saver.upsert_documents(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Save documents without reference\n",
"\n",
"If a collection is specified the documents will be stored with an auto generated id."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"saver = DatastoreSaver(\"Collection\")\n",
"\n",
"saver.upsert_documents(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Save documents with other references"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"doc_ids = [\"AnotherCollection/doc_id\", \"foo/bar\"]\n",
"saver = DatastoreSaver()\n",
"\n",
"saver.upsert_documents(documents=data, document_ids=doc_ids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load from Collection or SubCollection"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Load langchain documents with `DatastoreLoader.load()` or `Datastore.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `DatastoreLoader` class you need to provide:\n",
"\n",
"1. `source` - An instance of a Query, CollectionGroup, DocumentReference or the single `\\`-delimited path to a Datastore collection`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_datastore import DatastoreLoader\n",
"\n",
"loader_collection = DatastoreLoader(\"Collection\")\n",
"loader_subcollection = DatastoreLoader(\"Collection/doc/SubCollection\")\n",
"\n",
"\n",
"data_collection = loader_collection.load()\n",
"data_subcollection = loader_subcollection.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load a single Document"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import datastore\n",
"\n",
"client = datastore.Client()\n",
"doc_ref = client.collection(\"foo\").document(\"bar\")\n",
"\n",
"loader_document = DatastoreLoader(doc_ref)\n",
"\n",
"data = loader_document.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load from CollectionGroup or Query"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud.datastore import CollectionGroup, FieldFilter, Query\n",
"\n",
"col_ref = client.collection(\"col_group\")\n",
"collection_group = CollectionGroup(col_ref)\n",
"\n",
"loader_group = DatastoreLoader(collection_group)\n",
"\n",
"col_ref = client.collection(\"collection\")\n",
"query = col_ref.where(filter=FieldFilter(\"region\", \"==\", \"west_coast\"))\n",
"\n",
"loader_query = DatastoreLoader(query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete documents\n",
"\n",
"Delete a list of langchain documents from Datastore collection with `DatastoreSaver.delete_documents(<documents>)`.\n",
"\n",
"If document ids is provided, the Documents will be ignored."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"saver = DatastoreSaver()\n",
"\n",
"saver.delete_documents(data)\n",
"\n",
"# The Documents will be ignored and only the document ids will be used.\n",
"saver.delete_documents(data, doc_ids)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load documents with customize document page content & metadata\n",
"\n",
"The arguments of `page_content_fields` and `metadata_fields` will specify the Datastore Document fields to be written into LangChain Document `page_content` and `metadata`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = DatastoreLoader(\n",
" source=\"foo/bar/subcol\",\n",
" page_content_fields=[\"data_field\"],\n",
" metadata_fields=[\"metadata_field\"],\n",
")\n",
"\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Customize Page Content Format\n",
"\n",
"When the `page_content` contains only one field the information will be the field value only. Otherwise the `page_content` will be in JSON format."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Customize Connection & Authentication"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.auth import compute_engine\n",
"from google.cloud.datastore import Client\n",
"\n",
"client = Client(database=\"non-default-db\", creds=compute_engine.Credentials())\n",
"loader = DatastoreLoader(\n",
" source=\"foo\",\n",
" client=client,\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore in Datastore Mode\n",
"\n",
"> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database built for automatic scaling, high performance and ease of application development. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/document_loader.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com)\n",
"* [Create a Firestore in Datastore Mode database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save documents\n",
"\n",
"Save langchain documents with `DatastoreSaver.upsert_documents(<documents>)`. By default it will try to extract the entity key from the `key` in the Document metadata."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"from langchain_google_datastore import DatastoreSaver\n",
"\n",
"saver = DatastoreSaver()\n",
"\n",
"data = [Document(page_content=\"Hello, World!\")]\n",
"saver.upsert_documents(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Save documents without key\n",
"\n",
"If a `kind` is specified the documents will be stored with an auto generated id."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"saver = DatastoreSaver(\"MyKind\")\n",
"\n",
"saver.upsert_documents(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load documents via Kind\n",
"\n",
"Load langchain documents with `DatastoreLoader.load()` or `DatastoreLoader.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `DatastoreLoader` class you need to provide:\n",
"1. `source` - The source to load the documents. It can be an instance of Query or the name of the Datastore kind to read from."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_datastore import DatastoreLoader\n",
"\n",
"loader = DatastoreLoader(\"MyKind\")\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load documents via query\n",
"\n",
"Other than loading documents from kind, we can also choose to load documents from query. For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import datastore\n",
"\n",
"client = datastore.Client(database=\"non-default-db\", namespace=\"custom_namespace\")\n",
"query_load = client.query(kind=\"MyKind\")\n",
"query_load.add_filter(\"region\", \"=\", \"west_coast\")\n",
"\n",
"loader_document = DatastoreLoader(query_load)\n",
"\n",
"data = loader_document.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete documents\n",
"\n",
"Delete a list of langchain documents from Datastore with `DatastoreSaver.delete_documents(<documents>)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"saver = DatastoreSaver()\n",
"\n",
"saver.delete_documents(data)\n",
"\n",
"keys_to_delete = [\n",
" [\"Kind1\", \"identifier\"],\n",
" [\"Kind2\", 123],\n",
" [\"Kind3\", \"identifier\", \"NestedKind\", 456],\n",
"]\n",
"# The Documents will be ignored and only the document ids will be used.\n",
"saver.delete_documents(data, keys_to_delete)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Advanced Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load documents with customized document page content & metadata\n",
"\n",
"The arguments of `page_content_properties` and `metadata_properties` will specify the Entity properties to be written into LangChain Document `page_content` and `metadata`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"loader = DatastoreLoader(\n",
" source=\"MyKind\",\n",
" page_content_fields=[\"data_field\"],\n",
" metadata_fields=[\"metadata_field\"],\n",
")\n",
"\n",
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Customize Page Content Format\n",
"\n",
"When the `page_content` contains only one field the information will be the field value only. Otherwise the `page_content` will be in JSON format."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Customize Connection & Authentication"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.auth import compute_engine\n",
"from google.cloud.firestore import Client\n",
"\n",
"client = Client(database=\"non-default-db\", creds=compute_engine.Credentials())\n",
"loader = DatastoreLoader(\n",
" source=\"foo\",\n",
" client=client,\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -10,6 +10,8 @@
"\n",
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-firestore-python/blob/main/docs/document_loader.ipynb)"
]
},
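{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a brief, hedged preview of the flow shown in the cells below, a minimal save/load round trip might look like this sketch; the collection name is hypothetical."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Brief sketch of the save/load flow (hypothetical collection name).\n",
"from langchain_core.documents import Document\n",
"from langchain_google_firestore import FirestoreLoader, FirestoreSaver\n",
"\n",
"saver = FirestoreSaver(\"MyCollection\")\n",
"saver.upsert_documents([Document(page_content=\"Hello, World!\")])\n",
"\n",
"loader = FirestoreLoader(\"MyCollection\")\n",
"docs = loader.load()\n",
"print(docs)"
]
},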
@ -22,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)\n",
"* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
@ -128,24 +131,6 @@
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-firestore` package requires that you [enable the Firestore Admin API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Firestore Admin API\n",
"!gcloud services enable firestore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -170,7 +155,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.documents.base import Document\n",
"from langchain_core.documents import Document\n",
"from langchain_google_firestore import FirestoreSaver\n",
"\n",
"saver = FirestoreSaver()\n",
@ -232,7 +217,7 @@
"source": [
"Load langchain documents with `FirestoreLoader.load()` or `Firestore.lazy_load()`. `lazy_load` returns a generator that only queries database during the iteration. To initialize `FirestoreLoader` class you need to provide:\n",
"\n",
"1. `source` - An instance of a Query, CollectionGroup, DocumentReference or the single `\\`-delimited path to a Firestore collection`."
"1. `source` - An instance of a Query, CollectionGroup, DocumentReference or the single `\\`-delimited path to a Firestore collection."
]
},
{

@ -12,6 +12,8 @@
"\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/document_loader.ipynb)"
]
},
@ -24,6 +26,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 5.0.\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
@ -159,7 +162,7 @@
"outputs": [],
"source": [
"import redis\n",
"from langchain_core.documents.base import Document\n",
"from langchain_core.documents import Document\n",
"from langchain_google_memorystore_redis import MemorystoreDocumentSaver\n",
"\n",
"test_docs = [\n",

@ -6,10 +6,12 @@
"source": [
"# Google Spanner\n",
"\n",
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution. Extend your database application to build AI-powered experiences leveraging Spanner's Langchain integrations.\n",
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"\n",
"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/docs/document_loader.ipynb)"
]
},
@ -22,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)\n",
"* [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)\n",
"* [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)\n",
"* [Create a Spanner table](https://cloud.google.com/spanner/docs/create-query-database-console#create-schema)\n",
@ -58,7 +61,7 @@
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-spanner"
"%pip install -upgrade --quiet langchain-google-spanner langchain"
]
},
{
@ -256,6 +259,34 @@
"## Advanced Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom client\n",
"\n",
"The client created by default is the default client. To pass in `credentials` and `project` explicitly, a custom client can be passed to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import spanner\n",
"from google.oauth2 import service_account\n",
"\n",
"creds = service_account.Credentials.from_service_account_file(\"/path/to/key.json\")\n",
"custom_client = spanner.Client(project=\"my-project\", credentials=creds)\n",
"loader = SpannerLoader(\n",
" INSTANCE_ID,\n",
" DATABASE_ID,\n",
" query,\n",
" client=custom_client,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -412,9 +443,7 @@
"outputs": [],
"source": [
"from google.cloud import spanner\n",
"from google.oauth2 import service_account\n",
"\n",
"creds = service_account.Credentials.from_service_account_file(\"/path/to/key.json\")\n",
"custom_client = spanner.Client(project=\"my-project\", credentials=creds)\n",
"saver = SpannerDocumentSaver(\n",
" INSTANCE_ID,\n",

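To complement the custom-client cell added above, a minimal sketch of the default-client path for `SpannerLoader`; the instance, database, and query values are placeholders.

```python
from langchain_google_spanner import SpannerLoader

INSTANCE_ID = "my-instance"  # placeholder
DATABASE_ID = "my-database"  # placeholder
query = "SELECT * FROM documents"  # placeholder table name

# Without an explicit `client`, a default Spanner client is built from
# Application Default Credentials.
loader = SpannerLoader(INSTANCE_ID, DATABASE_ID, query)
documents = loader.load()
```
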
@ -8,7 +8,11 @@
"\n",
"> [AlloyDB](https://cloud.google.com/alloydb) is a fully managed PostgreSQL compatible database service for your most demanding enterprise workloads. AlloyDB combines the best of Google with PostgreSQL, for superior performance, scale, and availability. Extend your database application to build AI-powered experiences leveraging AlloyDB Langchain integrations.\n",
"\n",
"This notebook goes over how to use `AlloyDB for PostgreSQL` to store chat message history with the `AlloyDBChatMessageHistory` class."
"This notebook goes over how to use `AlloyDB for PostgreSQL` to store chat message history with the `AlloyDBChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-alloydb-pg-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
@ -20,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the AlloyDB API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Create a AlloyDB instance](https://cloud.google.com/alloydb/docs/instance-primary-create)\n",
" * [Create a AlloyDB database](https://cloud.google.com/alloydb/docs/database-create)\n",
" * [Add an IAM database user to the database](https://cloud.google.com/alloydb/docs/manage-iam-authn) (Optional)"
@ -112,24 +117,6 @@
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-alloydb-pg` package requires that you [enable the AlloyDB Admin API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable AlloyDB API\n",
"!gcloud services enable alloydb.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -169,7 +156,7 @@
"\n",
"To create a `AlloyDBEngine` using `AlloyDBEngine.from_instance()` you need to provide only 5 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the AlloyDB instance is located.\n",
"1. `project_id` : Project ID of the Google Cloud Project where the AlloyDB instance is located.\n",
"1. `region` : Region where the AlloyDB instance is located.\n",
"1. `cluster`: The name of the AlloyDB cluster.\n",
"1. `instance` : The name of the AlloyDB instance.\n",

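The hunk lists the five `AlloyDBEngine.from_instance()` arguments without the accompanying code cell; a sketch of the call with placeholder values:

```python
from langchain_google_alloydb_pg import AlloyDBEngine

engine = AlloyDBEngine.from_instance(
    project_id="my-project-id",  # project that owns the AlloyDB instance
    region="us-central1",        # region of the instance
    cluster="my-cluster",        # AlloyDB cluster name
    instance="my-instance",      # AlloyDB instance name
    database="my-database",      # database to connect to
)
```
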
@ -10,6 +10,8 @@
"\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to store chat message history with the `BigtableChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-bigtable-python/blob/main/docs/chat_message_history.ipynb)\n"
]
},
@ -22,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Bigtable API](https://console.cloud.google.com/flows/enableapi?apiid=bigtable.googleapis.com)\n",
"* [Create a Bigtable instance](https://cloud.google.com/bigtable/docs/creating-instance)\n",
"* [Create a Bigtable table](https://cloud.google.com/bigtable/docs/managing-tables)\n",
"* [Create Bigtable access credentials](https://developers.google.com/workspace/guides/create-credentials)"
@ -280,7 +283,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.5"
}
},
"nbformat": 4,

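The hunks for this notebook only touch the intro links and the Python version, so for orientation here is a hedged sketch of how `BigtableChatMessageHistory` is typically constructed; the parameter names (`instance_id`, `table_id`, `session_id`) are assumptions to verify against the notebook itself.

```python
from langchain_google_bigtable import BigtableChatMessageHistory

# Assumed constructor shape; all values are placeholders.
history = BigtableChatMessageHistory(
    instance_id="my-instance",
    table_id="my-table",
    session_id="user-session-id",
)

history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)
```
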
@ -11,7 +11,11 @@
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers MySQL, PostgreSQL, and SQL Server database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `Cloud SQL for SQL Server` to store chat message history with the `MSSQLChatMessageHistory` class."
"This notebook goes over how to use `Cloud SQL for SQL Server` to store chat message history with the `MSSQLChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mssql-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
@ -26,6 +30,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
" * [Create a Cloud SQL for SQL Server instance](https://cloud.google.com/sql/docs/sqlserver/create-instance)\n",
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/sqlserver/create-manage-databases)\n",
" * [Create a database user](https://cloud.google.com/sql/docs/sqlserver/create-manage-users) (Optional if you choose to use the `sqlserver` user)"
@ -143,30 +148,6 @@
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "rEWWNoNnKOgq",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-cloud-sql-mssql` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable Cloud SQL Admin API\n",
"!gcloud services enable sqladmin.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
@ -219,7 +200,7 @@
"\n",
"To create a `MSSQLEngine` using `MSSQLEngine.from_instance()` you need to provide only 6 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",

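The hunk truncates the `MSSQLEngine.from_instance()` argument list after `database`; a sketch of the call under the assumption that the two remaining arguments are a built-in database user and password (placeholder values throughout).

```python
from langchain_google_cloud_sql_mssql import MSSQLEngine

engine = MSSQLEngine.from_instance(
    project_id="my-project-id",  # project that owns the Cloud SQL instance
    region="us-central1",        # region of the instance
    instance="my-instance",      # Cloud SQL instance name
    database="my-database",      # database to connect to
    # The remaining two arguments are truncated in the hunk above; a built-in
    # database user and password are assumed here.
    user="sqlserver",
    password="my-password",
)
```
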
@ -13,6 +13,8 @@
"\n",
"This notebook goes over how to use `Cloud SQL for MySQL` to store chat message history with the `MySQLChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-cloud-sql-mysql-python/blob/main/docs/chat_message_history.ipynb)"
]
},
@ -28,6 +30,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud SQL Admin API.](https://console.cloud.google.com/marketplace/product/google/sqladmin.googleapis.com)\n",
" * [Create a Cloud SQL for MySQL instance](https://cloud.google.com/sql/docs/mysql/create-instance)\n",
" * [Create a Cloud SQL database](https://cloud.google.com/sql/docs/mysql/create-manage-databases)\n",
" * [Add an IAM database user to the database](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users#creating-a-database-user) (Optional)"
@ -145,30 +148,6 @@
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"id": "rEWWNoNnKOgq",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-cloud-sql-mysql` package requires that you [enable the Cloud SQL Admin API](https://console.cloud.google.com/flows/enableapi?apiid=sqladmin.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable Cloud SQL Admin API\n",
"!gcloud services enable sqladmin.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
@ -219,7 +198,7 @@
"\n",
"To create a `MySQLEngine` using `MySQLEngine.from_instance()` you need to provide only 4 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `project_id` : Project ID of the Google Cloud Project where the Cloud SQL instance is located.\n",
"1. `region` : Region where the Cloud SQL instance is located.\n",
"1. `instance` : The name of the Cloud SQL instance.\n",
"1. `database` : The name of the database to connect to on the Cloud SQL instance.\n",
@ -227,7 +206,6 @@
"By default, [IAM database authentication](https://cloud.google.com/sql/docs/mysql/iam-authentication#iam-db-auth) will be used as the method of database authentication. This library uses the IAM principal belonging to the [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) sourced from the envionment.\n",
"\n",
"For more informatin on IAM database authentication please see:\n",
"\n",
"* [Configure an instance for IAM database authentication](https://cloud.google.com/sql/docs/mysql/create-edit-iam-instances)\n",
"* [Manage users with IAM database authentication](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users)\n",
"\n",
@ -548,7 +526,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.5"
}
},
"nbformat": 4,

File diff suppressed because it is too large

@ -1,259 +1,245 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore in Datastore\n",
"\n",
"> [Firestore in Datastore](https://cloud.google.com/datastore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore](https://cloud.google.com/datastore) to to store chat message history with the `DatastoreChatMessageHistory` class.\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Create a Datastore database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-datastore` package requires that you [enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Datastore API\n",
"!gcloud services enable datastore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DatastoreChatMessageHistory\n",
"\n",
"To initialize the `DatastoreChatMessageHistory` class you need to provide only 3 things:\n",
"\n",
"1. `session_id` - A unique identifier string that specifies an id for the session.\n",
"1. `collection` : The single `/`-delimited path to a Datastore collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_datastore import DatastoreChatMessageHistory\n",
"\n",
"chat_history = DatastoreChatMessageHistory(\n",
" session_id=\"user-session-id\", collection=\"HistoryMessages\"\n",
")\n",
"\n",
"chat_history.add_user_message(\"Hi!\")\n",
"chat_history.add_ai_message(\"How can I help you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete and can be deleted from the database and memory, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in Datastore and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Client\n",
"\n",
"The client is created by default using the available environment variables. A [custom client](https://cloud.google.com/python/docs/reference/datastore/latest/client) can be passed to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.auth import compute_engine\n",
"from google.cloud import datastore\n",
"\n",
"client = datastore.Client(\n",
" project=\"project-custom\",\n",
" database=\"non-default-database\",\n",
" credentials=compute_engine.Credentials(),\n",
")\n",
"\n",
"history = DatastoreChatMessageHistory(\n",
" session_id=\"session-id\", collection=\"History\", client=client\n",
")\n",
"\n",
"history.add_user_message(\"New message\")\n",
"\n",
"history.messages\n",
"\n",
"history.clear()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Firestore in Datastore Mode\n",
"\n",
"> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database build for automatic scaling, high performance and ease of application development. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to save chat messages into `Firestore` in Datastore Mode.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-datastore-python/blob/main/docs/chat_message_history.ipynb)"
]
},
"nbformat": 4,
"nbformat_minor": 4
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Datastore API](https://console.cloud.google.com/flows/enableapi?apiid=datastore.googleapis.com)\n",
"* [Create a Firestore in Datastore Mode database](https://cloud.google.com/datastore/docs/manage-databases)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"\n",
"The integration lives in its own `langchain-google-datastore` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only**: Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"- If you are using Colab to run this notebook, use the cell below and continue.\n",
"- If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### DatastoreChatMessageHistory\n",
"\n",
"To initialize the `DatastoreChatMessageHistory` class you need to provide only 2 things:\n",
"\n",
"1. `session_id` - A unique identifier string that specifies an id for the session.\n",
"1. `kind` : The name of the Datastore kind to write into. This is an optional value and by default it will use `ChatHistory` as the kind."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_datastore import DatastoreChatMessageHistory\n",
"\n",
"chat_history = DatastoreChatMessageHistory(\n",
" session_id=\"user-session-id\", kind=\"HistoryMessages\"\n",
")\n",
"\n",
"chat_history.add_user_message(\"Hi!\")\n",
"chat_history.add_ai_message(\"How can I help you?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleanup\n",
"\n",
"When the history of a specific session is obsolete and can be deleted from the database and memory, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in Datastore and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat_history.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Client\n",
"\n",
"The client is created by default using the available environment variables. A [custom client](https://cloud.google.com/python/docs/reference/datastore/latest/client) can be passed to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.auth import compute_engine\n",
"from google.cloud import datastore\n",
"\n",
"client = datastore.Client(\n",
" project=\"project-custom\",\n",
" database=\"non-default-database\",\n",
" credentials=compute_engine.Credentials(),\n",
")\n",
"\n",
"history = DatastoreChatMessageHistory(\n",
" session_id=\"session-id\", kind=\"History\", client=client\n",
")\n",
"\n",
"history.add_user_message(\"New message\")\n",
"\n",
"history.messages\n",
"\n",
"history.clear()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -1,393 +1,400 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google El Carro Oracle Operator\n",
"\n",
"> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) offers a way to run Oracle databases in Kubernetes as a portable, open source, community driven, no vendor lock-in container orchestration system. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring. Extend your Oracle database's capabilities to build AI-powered experiences by leveraging the El Carro Langchain integration.\n",
"\n",
"This guide goes over how to use the El Carro Langchain integration to store chat message history with the `ElCarroChatMessageHistory` class. This integration works for any Oracle database, regardless of where it is running.\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-el-carro-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
" * Complete the [Getting Started](https://github.com/googleapis/langchain-google-el-carro-python/tree/main/README.md#getting-started) section if you would like to run your Oracle database with El Carro."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"The integration lives in its own `langchain-google-el-carro` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-el-carro langchain-google-vertexai langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import auth\n",
"\n",
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set Up Oracle Database Connection\n",
"Fill out the following variable with your Oracle database connections details."
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# @title Set Your Values Here { display-mode: \"form\" }\n",
"HOST = \"127.0.0.1\" # @param {type: \"string\"}\n",
"PORT = 3307 # @param {type: \"integer\"}\n",
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
"TABLE_NAME = \"message_store\" # @param {type: \"string\"}\n",
"USER = \"my-user\" # @param {type: \"string\"}\n",
"PASSWORD = input(\"Please provide a password to be used for the database user: \")"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"\n",
"If you are using `El Carro`, you can find the hostname and port values in the\n",
"status of the `El Carro` Kubernetes instance.\n",
"Use the user password you created for your PDB.\n",
"Example"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"kubectl get -w instances.oracle.db.anthosapis.com -n db\n",
"NAME DB ENGINE VERSION EDITION ENDPOINT URL DB NAMES BACKUP ID READYSTATUS READYREASON DBREADYSTATUS DBREADYREASON\n",
"mydb Oracle 18c Express mydb-svc.db 34.71.69.25:6021 False CreateInProgress"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ElCarroEngine Connection Pool\n",
"\n",
"`ElCarroEngine` configures a connection pool to your Oracle database, enabling successful connections from your application and following industry best practices."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_el_carro import ElCarroEngine\n",
"\n",
"elcarro_engine = ElCarroEngine.from_instance(\n",
" db_host=HOST,\n",
" db_port=PORT,\n",
" db_name=DATABASE,\n",
" db_user=USER,\n",
" db_password=PASSWORD,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize a table\n",
"The `ElCarroChatMessageHistory` class requires a database table with a specific\n",
"schema in order to store the chat message history.\n",
"\n",
"The `ElCarroEngine` class has a\n",
"method `init_chat_history_table()` that can be used to create a table with the\n",
"proper schema for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"elcarro_engine.init_chat_history_table(table_name=TABLE_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ElCarroChatMessageHistory\n",
"\n",
"To initialize the `ElCarroChatMessageHistory` class you need to provide only 3\n",
"things:\n",
"\n",
"1. `elcarro_engine` - An instance of an `ElCarroEngine` engine.\n",
"1. `session_id` - A unique identifier string that specifies an id for the\n",
" session.\n",
"1. `table_name` : The name of the table within the Oracle database to store the\n",
" chat message history."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_el_carro import ElCarroChatMessageHistory\n",
"\n",
"history = ElCarroChatMessageHistory(\n",
" elcarro_engine=elcarro_engine, session_id=\"test_session\", table_name=TABLE_NAME\n",
")\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete and can be deleted, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in your database and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"history.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🔗 Chaining\n",
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Vertex AI API\n",
"!gcloud services enable aiplatform.googleapis.com"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_google_vertexai import ChatVertexAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | ChatVertexAI(project=PROJECT_ID)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: ElCarroChatMessageHistory(\n",
" elcarro_engine,\n",
" session_id=session_id,\n",
" table_name=TABLE_NAME,\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This is where we configure the session id\n",
"config = {\"configurable\": {\"session_id\": \"test_session\"}}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google El Carro for Oracle Workloads\n",
"\n",
"> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator) offers a way to run Oracle databases in Kubernetes as a portable, open source, community driven, no vendor lock-in container orchestration system. El Carro provides a powerful declarative API for comprehensive and consistent configuration and deployment as well as for real-time operations and monitoring. Extend your Oracle database's capabilities to build AI-powered experiences by leveraging the El Carro Langchain integration.\n",
"\n",
"This guide goes over how to use the El Carro Langchain integration to store chat message history with the `ElCarroChatMessageHistory` class. This integration works for any Oracle database, regardless of where it is running.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-el-carro-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-el-carro-python/blob/main/docs/chat_message_history.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before You Begin\n",
"\n",
"To run this notebook, you will need to do the following:\n",
"\n",
" * Complete the [Getting Started](https://github.com/googleapis/langchain-google-el-carro-python/tree/main/README.md#getting-started) section"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🦜🔗 Library Installation\n",
"The integration lives in its own `langchain-google-el-carro` package, so we need to install it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-el-carro langchain-google-vertexai langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Colab only:** Uncomment the following cell to restart the kernel or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔐 Authentication\n",
"Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.\n",
"\n",
"* If you are using Colab to run this notebook, use the cell below and continue.\n",
"* If you are using Vertex AI Workbench, check out the setup instructions [here](https://github.com/GoogleCloudPlatform/generative-ai/tree/main/setup-env)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from google.colab import auth\n",
"\n",
"# auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ☁ Set Your Google Cloud Project\n",
"Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.\n",
"\n",
"If you don't know your project ID, try the following:\n",
"\n",
"* Run `gcloud config list`.\n",
"* Run `gcloud projects list`.\n",
"* See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.\n",
"\n",
"PROJECT_ID = \"my-project-id\" # @param {type:\"string\"}\n",
"\n",
"# Set the project id\n",
"!gcloud config set project {PROJECT_ID}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "azV0k45WWSVI"
},
"source": [
"## Basic Usage\n",
"\n",
"### Set Up Oracle Database Connection\n",
"Fill out the following variable with your Oracle database connections details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# @title Set Your Values Here { display-mode: \"form\" }\n",
"HOST = \"127.0.0.1\" # @param {type: \"string\"}\n",
"PORT = 3307 # @param {type: \"integer\"}\n",
"DATABASE = \"my-database\" # @param {type: \"string\"}\n",
"TABLE_NAME = \"message_store\" # @param {type: \"string\"}\n",
"USER = \"my-user\" # @param {type: \"string\"}\n",
"PASSWORD = input(\"Please provide a password to be used for the database user: \")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
"If you are using El Carro, you can find the hostname and port values in the\n",
"status of the El Carro Kubernetes instance.\n",
"Use the user password you created for your PDB.\n",
"Example Output"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"```\n",
"kubectl get -w instances.oracle.db.anthosapis.com -n db\n",
"NAME DB ENGINE VERSION EDITION ENDPOINT URL DB NAMES BACKUP ID READYSTATUS READYREASON DBREADYSTATUS DBREADYREASON\n",
"\n",
"mydb Oracle 18c Express mydb-svc.db 34.71.69.25:6021 ['pdbname'] TRUE CreateComplete True CreateComplete\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### ElCarroEngine Connection Pool\n",
"\n",
"`ElCarroEngine` configures a connection pool to your Oracle database, enabling successful connections from your application and following industry best practices."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xG1mYFkEWbkp"
},
"outputs": [],
"source": [
"from langchain_google_el_carro import ElCarroEngine\n",
"\n",
"elcarro_engine = ElCarroEngine.from_instance(\n",
" db_host=HOST,\n",
" db_port=PORT,\n",
" db_name=DATABASE,\n",
" db_user=USER,\n",
" db_password=PASSWORD,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize a table\n",
"The `ElCarroChatMessageHistory` class requires a database table with a specific\n",
"schema in order to store the chat message history.\n",
"\n",
"The `ElCarroEngine` class has a\n",
"method `init_chat_history_table()` that can be used to create a table with the\n",
"proper schema for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"elcarro_engine.init_chat_history_table(table_name=TABLE_NAME)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ElCarroChatMessageHistory\n",
"\n",
"To initialize the `ElCarroChatMessageHistory` class you need to provide only 3\n",
"things:\n",
"\n",
"1. `elcarro_engine` - An instance of an `ElCarroEngine` engine.\n",
"1. `session_id` - A unique identifier string that specifies an id for the\n",
" session.\n",
"1. `table_name` : The name of the table within the Oracle database to store the\n",
" chat message history."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_el_carro import ElCarroChatMessageHistory\n",
"\n",
"history = ElCarroChatMessageHistory(\n",
" elcarro_engine=elcarro_engine, session_id=\"test_session\", table_name=TABLE_NAME\n",
")\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"history.messages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Cleaning up\n",
"When the history of a specific session is obsolete and can be deleted, it can be done the following way.\n",
"\n",
"**Note:** Once deleted, the data is no longer stored in the database and is gone forever."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"history.clear()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🔗 Chaining\n",
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Vertex AI API\n",
"!gcloud services enable aiplatform.googleapis.com"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_google_vertexai import ChatVertexAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | ChatVertexAI(project=PROJECT_ID)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: ElCarroChatMessageHistory(\n",
" elcarro_engine,\n",
" session_id=session_id,\n",
" table_name=TABLE_NAME,\n",
" ),\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This is where we configure the session id\n",
"config = {\"configurable\": {\"session_id\": \"test_session\"}}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -10,6 +10,8 @@
"\n",
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to to store chat message history with the `FirestoreChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-firestore-python/blob/main/docs/chat_message_history.ipynb)"
]
},
@ -22,6 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Firestore API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com)\n",
"* [Create a Firestore database](https://cloud.google.com/firestore/docs/manage-databases)\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."
@ -118,24 +121,6 @@
"auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Enablement\n",
"The `langchain-google-firestore` package requires that you [enable the Firestore Admin API](https://console.cloud.google.com/flows/enableapi?apiid=firestore.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# enable Firestore Admin API\n",
"!gcloud services enable firestore.googleapis.com"
]
},
{
"cell_type": "markdown",
"metadata": {},

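The hunks for this notebook stop before the usage cells, so as orientation, a hedged sketch of `FirestoreChatMessageHistory`; the constructor shape mirrors the Datastore variant shown earlier in this diff and should be verified against the notebook.

```python
from langchain_google_firestore import FirestoreChatMessageHistory

# Assumed constructor shape (session id plus a collection name); values are
# placeholders.
chat_history = FirestoreChatMessageHistory(
    session_id="user-session-id", collection="HistoryMessages"
)

chat_history.add_user_message("Hi!")
chat_history.add_ai_message("How can I help you?")
print(chat_history.messages)
```
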
@ -12,6 +12,8 @@
"\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to store chat message history with the `MemorystoreChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/chat_message_history.ipynb)"
]
},
@ -24,6 +26,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 5.0.\n",
"\n",
"After confirmed access to database in the runtime environment of this notebook, filling the following values and run the cell before running example scripts."

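Again the hunks stop before the usage cells; a hedged sketch of `MemorystoreChatMessageHistory`, assuming it wraps a standard redis-py client plus a session id. The endpoint address is a placeholder.

```python
import redis

from langchain_google_memorystore_redis import MemorystoreChatMessageHistory

# Assumed shape: a redis-py client pointed at the Memorystore endpoint.
redis_client = redis.from_url("redis://127.0.0.1:6379")
message_history = MemorystoreChatMessageHistory(redis_client, session_id="session1")

message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
print(message_history.messages)
```
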
@ -7,9 +7,13 @@
},
"source": [
"# Google Spanner\n",
"> [Cloud Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"\n",
"This notebook goes over how to use `Spanner` to store chat message history with the `SpannerChatMessageHistory` class."
"This notebook goes over how to use `Spanner` to store chat message history with the `SpannerChatMessageHistory` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/samples/chat_message_history.ipynb)"
]
},
{
@ -23,6 +27,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)\n",
" * [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)\n",
" * [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)"
]
@ -169,7 +174,7 @@
},
"source": [
"### Set Spanner database values\n",
"Find your database values, in the [Spanner Instances page](https://console.cloud.google.com/spanner/instances)."
"Find your database values, in the [Spanner Instances page](https://console.cloud.google.com/spanner)."
]
},
{

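A hedged sketch of `SpannerChatMessageHistory` built from the database values the hunk refers to; the keyword names (`instance_id`, `database_id`, `session_id`, `table_name`) are assumptions to verify against the notebook, and the backing table must already exist.

```python
from langchain_google_spanner import SpannerChatMessageHistory

# Assumed constructor shape; all values are placeholders.
message_history = SpannerChatMessageHistory(
    instance_id="my-instance",
    database_id="my-database",
    session_id="user-session-id",
    table_name="message_store",  # table created ahead of time per the notebook
)

message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
print(message_history.messages)
```
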
@ -6,9 +6,13 @@
"source": [
"# Google AlloyDB for PostgreSQL\n",
"\n",
"> [Google Cloud AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. AlloyDB is 100% compatible with PostgreSQL. Extend your database application to build AI-powered experiences leveraging AlloyDB's Langchain integrations.\n",
"> [AlloyDB](https://cloud.google.com/alloydb) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. AlloyDB is 100% compatible with PostgreSQL. Extend your database application to build AI-powered experiences leveraging AlloyDB's Langchain integrations.\n",
"\n",
"This notebook goes over how to use `AlloyDB for PostgreSQL` to store vector embeddings with the `AlloyDBVectorStore` class."
"This notebook goes over how to use `AlloyDB for PostgreSQL` to store vector embeddings with the `AlloyDBVectorStore` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-alloydb-pg-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-alloydb-pg-python/blob/main/docs/vector_store.ipynb)"
]
},
{
@ -20,7 +24,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the AlloyDB Admin API.](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Enable the AlloyDB API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com)\n",
" * [Create a AlloyDB cluster and instance.](https://cloud.google.com/alloydb/docs/cluster-create)\n",
" * [Create a AlloyDB database.](https://cloud.google.com/alloydb/docs/quickstart/create-and-connect)\n",
" * [Add a User to the database.](https://cloud.google.com/alloydb/docs/database-users/about)"
@ -64,6 +68,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "v6jBDnYnNM08",
"metadata": {
"id": "v6jBDnYnNM08"
},
@ -78,6 +83,7 @@
},
{
"cell_type": "markdown",
"id": "yygMe6rPWxHS",
"metadata": {
"id": "yygMe6rPWxHS"
},
@ -92,6 +98,7 @@
{
"cell_type": "code",
"execution_count": 1,
"id": "PTXN1_DSXj2b",
"metadata": {
"id": "PTXN1_DSXj2b"
},
@ -104,6 +111,7 @@
},
{
"cell_type": "markdown",
"id": "NEvB9BoLEulY",
"metadata": {
"id": "NEvB9BoLEulY"
},
@ -121,6 +129,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "gfkS3yVRE4_W",
"metadata": {
"cellView": "form",
"id": "gfkS3yVRE4_W"
@ -137,28 +146,7 @@
},
{
"cell_type": "markdown",
"metadata": {
"id": "rEWWNoNnKOgq"
},
"source": [
"### 💡 API Enablement\n",
"The `langchain-google-alloydb-pg` package requires that you [enable the AlloyDB Admin API](https://console.cloud.google.com/flows/enableapi?apiid=alloydb.googleapis.com) in your Google Cloud Project."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "5utKIdq7KYi5"
},
"outputs": [],
"source": [
"# enable AlloyDB Admin API\n",
"!gcloud services enable alloydb.googleapis.com"
]
},
{
"cell_type": "markdown",
"id": "f8f2830ee9ca1e01",
"metadata": {
"id": "f8f2830ee9ca1e01"
},
@ -168,6 +156,7 @@
},
{
"cell_type": "markdown",
"id": "OMvzMWRrR6n7",
"metadata": {
"id": "OMvzMWRrR6n7"
},
@ -179,6 +168,7 @@
{
"cell_type": "code",
"execution_count": 4,
"id": "irl7eMFnSPZr",
"metadata": {
"id": "irl7eMFnSPZr"
},
@ -194,6 +184,7 @@
},
{
"cell_type": "markdown",
"id": "QuQigs4UoFQ2",
"metadata": {
"id": "QuQigs4UoFQ2"
},
@ -202,7 +193,7 @@
"\n",
"One of the requirements and arguments to establish AlloyDB as a vector store is a `AlloyDBEngine` object. The `AlloyDBEngine` configures a connection pool to your AlloyDB database, enabling successful connections from your application and following industry best practices.\n",
"\n",
"To create a `AlloyDBEngine` using `AlloyDBEngine.from_instance()` you need to provide only 4 things:\n",
"To create a `AlloyDBEngine` using `AlloyDBEngine.from_instance()` you need to provide only 5 things:\n",
"\n",
"1. `project_id` : Project ID of the Google Cloud Project where the AlloyDB instance is located.\n",
"1. `region` : Region where the AlloyDB instance is located.\n",
@ -279,6 +270,7 @@
{
"cell_type": "code",
"execution_count": 3,
"id": "5utKIdq7KYi5",
"metadata": {
"id": "5utKIdq7KYi5"
},
@ -534,8 +526,7 @@
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
@ -548,9 +539,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 0
}

@ -11,6 +11,8 @@
"\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to store vector embeddings with the `MemorystoreVectorStore` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-memorystore-redis-python/blob/main/docs/vector_store.ipynb)"
]
},
@ -31,6 +33,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
"* [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
"* [Enable the Memorystore for Redis API](https://console.cloud.google.com/flows/enableapi?apiid=redis.googleapis.com)\n",
"* [Create a Memorystore for Redis instance](https://cloud.google.com/memorystore/docs/redis/create-instance-console). Ensure that the version is greater than or equal to 7.2."
]
},
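A rough notebook-cell sketch of those prerequisite steps with gcloud; the instance name, region, size, and the `redis_7_2` version value are assumptions — confirm the flags with `gcloud redis instances create --help`:

```python
# Enable the Memorystore for Redis API.
!gcloud services enable redis.googleapis.com

# Create a Redis 7.2 instance (name, region, and size are placeholders).
!gcloud redis instances create my-redis-instance --size=1 --region=us-central1 --redis-version=redis_7_2
```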
@ -52,7 +55,7 @@
},
"outputs": [],
"source": [
"%pip install -upgrade --quiet langchain-google-memorystore-redis"
"%pip install -upgrade --quiet langchain-google-memorystore-redis langchain"
]
},
{
@ -184,8 +187,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"loader = TextLoader(\"./state_of_the_union.txt\")\n",
"documents = loader.load()\n",

@ -5,9 +5,13 @@
"metadata": {},
"source": [
"# Google Spanner\n",
"> [Cloud Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"\n",
"This notebook goes over how to use `Spanner` for Vector Search with `SpannerVectorStore` class."
"This notebook goes over how to use `Spanner` for Vector Search with `SpannerVectorStore` class.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/langchain-google-spanner-python/blob/main/docs/vector_store.ipynb)"
]
},
{
@ -19,6 +23,7 @@
"To run this notebook, you will need to do the following:\n",
"\n",
" * [Create a Google Cloud Project](https://developers.google.com/workspace/guides/create-project)\n",
" * [Enable the Cloud Spanner API](https://console.cloud.google.com/flows/enableapi?apiid=spanner.googleapis.com)\n",
" * [Create a Spanner instance](https://cloud.google.com/spanner/docs/create-manage-instances)\n",
" * [Create a Spanner database](https://cloud.google.com/spanner/docs/create-manage-databases)"
]
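A rough notebook-cell sketch of those steps with gcloud; the instance and database names, the regional config, and the node count are placeholders:

```python
# Enable the Cloud Spanner API.
!gcloud services enable spanner.googleapis.com

# Create a single-node Spanner instance and a database (names and config are placeholders).
!gcloud spanner instances create my-instance --config=regional-us-central1 --description="LangChain demo" --nodes=1
!gcloud spanner databases create my-database --instance=my-instance
```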
@ -151,8 +156,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set Cloud Spanner database values\n",
"Find your database values, in the [Cloud Spanner Instances page](https://console.cloud.google.com/spanner?_ga=2.223735448.2062268965.1707700487-2088871159.1707257687)."
"### Set Spanner database values\n",
"Find your database values, in the [Spanner Instances page](https://console.cloud.google.com/spanner?_ga=2.223735448.2062268965.1707700487-2088871159.1707257687)."
]
},
{
