docs: `graphs` update (#19675)

Issue: The `graph` code was moved into the `community` package a long time
ago, but the related documentation is still in the
[use_cases](https://python.langchain.com/docs/use_cases/graph/integrations/diffbot_graphtransformer)
section and not in `integrations`.
Changes:
- moved the `use_cases/graph/integrations` notebooks into
`integrations/graphs`
- renamed files and changed titles to follow a consistent format
- redirected old page URLs to new URLs in `vercel.json` and in several
other pages
- added descriptions and links where necessary
- formatted pages into a consistent format
Leonid Ganeline 1 month ago committed by GitHub
parent be3dd62de4
commit 82f0198be2

@ -4,8 +4,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Neptune Open Cypher QA Chain\n",
"This QA chain queries Neptune graph database using openCypher and returns human readable response\n"
"# Amazon Neptune with Cypher\n",
"\n",
">[Amazon Neptune](https://aws.amazon.com/neptune/) is a high-performance graph analytics and serverless database for superior scalability and availability.\n",
">\n",
">This example shows the QA chain that queries the `Neptune` graph database using `openCypher` and returns a human-readable response.\n",
">\n",
">[Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph.\n",
">\n",
">[openCypher](https://opencypher.org/) is an open-source implementation of Cypher."
]
},
{
@ -53,7 +60,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -67,10 +74,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
},
"orig_nbformat": 4
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
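For reference, the renamed page documents a chain that turns a natural-language question into an openCypher statement and runs it against Neptune. A stdlib-only sketch of the kind of statement involved (the labels and property names are hypothetical, not Neptune's sample schema):

```python
def build_route_query(src_code: str, dst_code: str) -> str:
    """Render a hypothetical openCypher query for one-stop routes
    between two airports; labels and properties are illustrative only."""
    return (
        f"MATCH (a:airport {{code: '{src_code}'}})-[:route]->(:airport)"
        f"-[:route]->(b:airport {{code: '{dst_code}'}}) "
        "RETURN a.code, b.code LIMIT 10"
    )

print(build_route_query("SEA", "JFK"))
```

In the actual chain, the LLM produces a statement of this shape from the question; the sketch only shows the target query language.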

@ -4,12 +4,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Neptune SPARQL QA Chain\n",
"# Amazon Neptune with SPARQL\n",
"\n",
"This QA chain queries Resource Description Framework (RDF) data in an Amazon Neptune graph database using the SPARQL query language and returns a human readable response.\n",
">[Amazon Neptune](https://aws.amazon.com/neptune/) is a high-performance graph analytics and serverless database for superior scalability and availability.\n",
">\n",
">This example shows the QA chain that queries [Resource Description Framework (RDF)](https://en.wikipedia.org/wiki/Resource_Description_Framework) data \n",
">in an `Amazon Neptune` graph database using the `SPARQL` query language and returns a human-readable response.\n",
">\n",
">[SPARQL](https://en.wikipedia.org/wiki/SPARQL) is a standard query language for `RDF` graphs.\n",
"\n",
"\n",
"This code uses a `NeptuneRdfGraph` class that connects with the Neptune database and loads its schema. The `NeptuneSparqlQAChain` is used to connect the graph and LLM to ask natural language questions.\n",
"This example uses a `NeptuneRdfGraph` class that connects with the Neptune database and loads its schema. \n",
"The `NeptuneSparqlQAChain` is used to connect the graph and LLM to ask natural language questions.\n",
"\n",
"This notebook demonstrates an example using organizational data.\n",
"\n",
@ -29,17 +35,20 @@
"}\n",
"```\n",
"\n",
"- S3 bucket for staging sample data, bucket should be in same account/region as Neptune."
"- S3 bucket for staging sample data. The bucket should be in the same account/region as Neptune."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Seed W3C organizational data\n",
"W3C org ontology plus some instances. \n",
"## Setting up\n",
"\n",
"You will need an S3 bucket in the same region and account. Set STAGE_BUCKET to name of that bucket."
"### Seed the W3C organizational data\n",
"\n",
"Seed the W3C organizational data: the W3C org ontology plus some instances. \n",
" \n",
"You will need an S3 bucket in the same region and account. Set `STAGE_BUCKET` as the name of that bucket."
]
},
{
@ -100,7 +109,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Chain"
"### Set up the chain"
]
},
{
@ -137,6 +146,13 @@
"**Restart kernel**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare an example"
]
},
{
"cell_type": "code",
"execution_count": null,
@ -352,7 +368,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -366,9 +382,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
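For reference, the chain on this page generates SPARQL over the W3C org ontology seeded above. A stdlib-only sketch of such a query (the namespace is the real W3C org one; the organization IRI is hypothetical):

```python
ORG_NS = "http://www.w3.org/ns/org#"  # W3C organization ontology namespace

def org_members_query(org_iri: str) -> str:
    """Build a hypothetical SPARQL query listing members of one organization."""
    return (
        f"PREFIX org: <{ORG_NS}>\n"
        "SELECT ?member WHERE {\n"
        f"  ?member org:memberOf <{org_iri}> .\n"
        "}"
    )

print(org_members_query("http://example.org/acme"))
```

`org:memberOf` is a property defined by the org ontology; the chain generates queries like this automatically from the user's question.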

@ -7,10 +7,14 @@
"id": "c94240f5"
},
"source": [
"# ArangoDB QA chain\n",
"# ArangoDB\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/arangodb/interactive_tutorials/blob/master/notebooks/Langchain.ipynb)\n",
"\n",
">[ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to drive value from\n",
">connected data, faster. It offers native graphs, an integrated search engine, and JSON support via\n",
">a single query language. `ArangoDB` runs on-prem or in the cloud.\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to an [ArangoDB](https://github.com/arangodb/arangodb#readme) database."
]
},
@ -21,7 +25,9 @@
"id": "dbc0ee68"
},
"source": [
"You can get a local ArangoDB instance running via the [ArangoDB Docker image](https://hub.docker.com/_/arangodb): \n",
"## Setting up\n",
"\n",
"You can get a local `ArangoDB` instance running via the [ArangoDB Docker image](https://hub.docker.com/_/arangodb): \n",
"\n",
"```\n",
"docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodb\n",
@ -113,9 +119,9 @@
"id": "995ea9b9"
},
"source": [
"## Populating the Database\n",
"## Populating database\n",
"\n",
"We will rely on the Python Driver to import our [GameOfThrones](https://github.com/arangodb/example-datasets/tree/master/GameOfThrones) data into our database."
"We will rely on the Python driver to import our [GameOfThrones](https://github.com/arangodb/example-datasets/tree/master/GameOfThrones) data into our database."
]
},
{
@ -215,9 +221,9 @@
"id": "58c1a8ea"
},
"source": [
"## Getting & Setting the ArangoDB Schema\n",
"## Getting and setting the ArangoDB schema\n",
"\n",
"An initial ArangoDB Schema is generated upon instantiating the `ArangoDBGraph` object. Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:"
"An initial `ArangoDB Schema` is generated upon instantiating the `ArangoDBGraph` object. Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:"
]
},
{
@ -399,9 +405,9 @@
"id": "68a3c677"
},
"source": [
"## Querying the ArangoDB Database\n",
"## Querying the ArangoDB database\n",
"\n",
"We can now use the ArangoDB Graph QA Chain to inquire about our data"
"We can now use the ArangoDB Graph QA Chain to inquire about our data."
]
},
{
@ -640,7 +646,7 @@
"id": "Ob_3aGauGd7d"
},
"source": [
"## Chain Modifiers"
"## Chain modifiers"
]
},
{
@ -812,7 +818,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.10.12"
}
},
"nbformat": 4,
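For reference, `ArangoGraphQAChain` answers questions by generating AQL, ArangoDB's query language, against the loaded data. A stdlib-only sketch of such a query over the GameOfThrones dataset (the collection and attribute names are assumptions, not guaranteed to match the dataset):

```python
def characters_by_surname(surname: str) -> str:
    """Build a hypothetical AQL query; 'Characters' and 'surname' are
    assumed names for illustration only."""
    return (
        "FOR c IN Characters\n"
        f"  FILTER c.surname == '{surname}'\n"
        "  RETURN c.name"
    )

print(characters_by_surname("Stark"))
```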

@ -5,9 +5,13 @@
"id": "c94240f5",
"metadata": {},
"source": [
"# Gremlin (with CosmosDB) QA chain\n",
"# Azure Cosmos DB for Apache Gremlin\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Gremlin query language."
">[Azure Cosmos DB for Apache Gremlin](https://learn.microsoft.com/en-us/azure/cosmos-db/gremlin/introduction) is a graph database service that can be used to store massive graphs with billions of vertices and edges. You can query the graphs with millisecond latency and evolve the graph structure easily.\n",
">\n",
">[Gremlin](https://en.wikipedia.org/wiki/Gremlin_(query_language)) is a graph traversal language and virtual machine developed by `Apache TinkerPop` of the `Apache Software Foundation`.\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the `Gremlin` query language."
]
},
{
@ -15,9 +19,42 @@
"id": "dbc0ee68",
"metadata": {},
"source": [
"You will need to have a Azure CosmosDB Graph database instance. One option is to create a [free CosmosDB Graph database instance in Azure](https://learn.microsoft.com/en-us/azure/cosmos-db/free-tier). \n",
"## Setting up\n",
"\n",
"Install a library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "350da0d4-680e-47fa-964e-1ea7d52a54a3",
"metadata": {},
"outputs": [],
"source": [
"!pip3 install gremlinpython"
]
},
{
"cell_type": "markdown",
"id": "8f1f1a73-c43e-40ba-83af-740d5437a2bf",
"metadata": {},
"source": [
"You will need an Azure CosmosDB Graph database instance. One option is to create a [free CosmosDB Graph database instance in Azure](https://learn.microsoft.com/en-us/azure/cosmos-db/free-tier). \n",
"\n",
"When you create your Cosmos DB account and Graph, use /type as partition key."
"When you create your Cosmos DB account and Graph, use `/type` as a partition key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a32682f0-be52-4c72-b166-21aebb3c5866",
"metadata": {},
"outputs": [],
"source": [
"cosmosdb_name = \"mycosmosdb\"\n",
"cosmosdb_db_id = \"graphtesting\"\n",
"cosmosdb_db_graph_id = \"mygraph\"\n",
"cosmosdb_access_Key = \"longstring==\""
]
},
{
@ -42,11 +79,6 @@
"metadata": {},
"outputs": [],
"source": [
"cosmosdb_name = \"mycosmosdb\"\n",
"cosmosdb_db_id = \"graphtesting\"\n",
"cosmosdb_db_graph_id = \"mygraph\"\n",
"cosmosdb_access_Key = \"longstring==\"\n",
"\n",
"graph = GremlinGraph(\n",
" url=f\"wss://{cosmosdb_name}.gremlin.cosmos.azure.com:443/\",\n",
" username=f\"/dbs/{cosmosdb_db_id}/colls/{cosmosdb_db_graph_id}\",\n",
@ -231,7 +263,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
"version": "3.10.12"
}
},
"nbformat": 4,
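The connection variables moved earlier in this chunk combine into the Gremlin endpoint URL and the Cosmos DB username; a stdlib-only sketch using the notebook's placeholder values:

```python
cosmosdb_name = "mycosmosdb"
cosmosdb_db_id = "graphtesting"
cosmosdb_db_graph_id = "mygraph"

# Gremlin-over-WebSocket endpoint for a Cosmos DB account
url = f"wss://{cosmosdb_name}.gremlin.cosmos.azure.com:443/"
# Cosmos DB expects the /dbs/<db>/colls/<graph> path as the username
username = f"/dbs/{cosmosdb_db_id}/colls/{cosmosdb_db_graph_id}"

print(url)
print(username)
```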

@ -5,17 +5,22 @@
"id": "7f0b0c06-ee70-468c-8bf5-b023f9e5e0a2",
"metadata": {},
"source": [
"# Diffbot Graph Transformer\n",
"# Diffbot\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/graph/diffbot_graphtransformer.ipynb)\n",
"\n",
">[Diffbot](https://docs.diffbot.com/docs/getting-started-with-diffbot) is a suite of products that make it easy to integrate and research data on the web.\n",
">\n",
">[The Diffbot Knowledge Graph](https://docs.diffbot.com/docs/getting-started-with-diffbot-knowledge-graph) is a self-updating graph database of the public web.\n",
"\n",
"\n",
"## Use case\n",
"\n",
"Text data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications.\n",
"Text data often contain rich relationships and insights used for various analytics, recommendation engines, or knowledge management applications.\n",
"\n",
"Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.\n",
"`Diffbot's NLP API` allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.\n",
"\n",
"By coupling Diffbot's NLP API with Neo4j, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. These graph structures are fully queryable and can be integrated into various applications.\n",
"By coupling `Diffbot's NLP API` with `Neo4j`, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. These graph structures are fully queryable and can be integrated into various applications.\n",
"\n",
"This combination allows for use cases such as:\n",
"\n",
@ -32,7 +37,7 @@
"2. `Query a graph database` using chains for query creation and execution\n",
"3. `Interact with a graph database` using agents for robust and flexible querying \n",
"\n",
"## Quickstart\n",
"## Setting up\n",
"\n",
"First, get required packages and set environment variables:"
]
@ -52,9 +57,9 @@
"id": "77718977-629e-46c2-b091-f9191b9ec569",
"metadata": {},
"source": [
"## Diffbot NLP Service\n",
"### Diffbot NLP Service\n",
"\n",
"Diffbot's NLP service is a tool for extracting entities, relationships, and semantic context from unstructured text data.\n",
"`Diffbot's NLP` service is a tool for extracting entities, relationships, and semantic context from unstructured text data.\n",
"This extracted information can be used to construct a knowledge graph.\n",
"To use their service, you'll need to obtain an API key from [Diffbot](https://www.diffbot.com/products/natural-language/)."
]
@ -294,7 +299,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.12"
}
},
"nbformat": 4,
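For context, a graph transformer turns free text into lists of nodes and relationships. A stdlib-only sketch of that shape (the class and field names mirror the idea but are not the library's actual classes, and the extraction result is hand-built):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    type: str

@dataclass(frozen=True)
class Relationship:
    source: Node
    target: Node
    type: str

@dataclass
class GraphDocument:
    nodes: list = field(default_factory=list)
    relationships: list = field(default_factory=list)

# Hand-built example of what extraction from one sentence might yield
marie = Node(id="Marie Curie", type="Person")
paris = Node(id="University of Paris", type="Organization")
doc = GraphDocument(
    nodes=[marie, paris],
    relationships=[Relationship(source=marie, target=paris, type="WORKED_AT")],
)
```

Structures of this shape are what get written into Neo4j as nodes and edges.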

@ -4,22 +4,28 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# FalkorDBQAChain"
"# FalkorDB"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook shows how to use LLMs to provide a natural language interface to FalkorDB database.\n",
">[FalkorDB](https://www.falkordb.com/) is a low-latency Graph Database that delivers knowledge to GenAI.\n",
"\n",
"FalkorDB is a low latency property graph database management system. You can simply run its docker locally:\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to the `FalkorDB` database.\n",
"\n",
"\n",
"## Setting up\n",
"\n",
"You can run the `falkordb` Docker container locally:\n",
"\n",
"```bash\n",
"docker run -p 6379:6379 -it --rm falkordb/falkordb:edge\n",
"```\n",
"\n",
"Once launched, you can simply start creating a database on the local machine and connect to it."
"Once launched, you can create a database on the local machine and connect to it."
]
},
{
@ -37,7 +43,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a graph connection and insert some demo data."
"## Create a graph connection and insert the demo data"
]
},
{
@ -97,7 +103,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating FalkorDBQAChain"
"## Creating FalkorDBQAChain"
]
},
{
@ -138,7 +144,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Querying the graph"
"## Querying the graph"
]
},
{
@ -256,7 +262,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -270,10 +276,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -5,11 +5,24 @@
"id": "d2777010",
"metadata": {},
"source": [
"# HugeGraph QA Chain\n",
"# HugeGraph\n",
"\n",
">[HugeGraph](https://hugegraph.apache.org/) is a convenient, efficient, and adaptable graph database compatible with\n",
">the `Apache TinkerPop3` framework and the `Gremlin` query language.\n",
">\n",
">[Gremlin](https://en.wikipedia.org/wiki/Gremlin_(query_language)) is a graph traversal language and virtual machine developed by `Apache TinkerPop` of the `Apache Software Foundation`.\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to [HugeGraph](https://hugegraph.apache.org/cn/) database."
]
},
{
"cell_type": "markdown",
"id": "0b219ec2-75d7-4db3-b844-0bf310e5b187",
"metadata": {},
"source": [
"## Setting up"
]
},
{
"cell_type": "markdown",
"id": "f26dcbe4",
@ -286,9 +299,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -300,7 +313,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -5,9 +5,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# KuzuQAChain\n",
"# Kuzu\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to [Kùzu](https://kuzudb.com) database."
">[Kùzu](https://kuzudb.com) is an in-process property graph database management system.\n",
">\n",
">[Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph.\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to the [Kùzu](https://kuzudb.com) database with the `Cypher` graph query language."
]
},
{
@ -15,13 +19,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[Kùzu](https://kuzudb.com) is an in-process property graph database management system. You can simply install it with `pip`:\n",
"## Setting up\n",
"\n",
"Install the python package:\n",
"\n",
"```bash\n",
"pip install kuzu\n",
"```\n",
"\n",
"Once installed, you can simply import it and start creating a database on the local machine and connect to it:\n"
"Create a database on the local machine and connect to it:"
]
},
{
@ -351,7 +357,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -365,10 +371,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
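For reference, Kùzu is a structured property graph system: node and relationship tables are declared before data is loaded. A stdlib-only sketch of such DDL kept as plain strings (the table and property names are hypothetical):

```python
# Hypothetical Kùzu DDL statements, as plain strings for illustration
ddl_statements = [
    "CREATE NODE TABLE Person(name STRING, PRIMARY KEY(name))",
    "CREATE NODE TABLE Movie(name STRING, PRIMARY KEY(name))",
    "CREATE REL TABLE ActedIn(FROM Person TO Movie)",
]
for stmt in ddl_statements:
    print(stmt)
```

The full notebook runs statements like these through a Kùzu connection before querying.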

@ -5,10 +5,21 @@
"id": "311b3061",
"metadata": {},
"source": [
"# Memgraph QA chain\n",
"This notebook shows how to use LLMs to provide a natural language interface to a [Memgraph](https://github.com/memgraph/memgraph) database. To complete this tutorial, you will need [Docker](https://www.docker.com/get-started/) and [Python 3.x](https://www.python.org/) installed.\n",
"# Memgraph\n",
"\n",
"To follow along with this tutorial, ensure you have a running Memgraph instance. You can download and run it in a local Docker container by executing the following script:\n",
">[Memgraph](https://github.com/memgraph/memgraph) is an open-source graph database, compatible with `Neo4j`.\n",
">The database uses the `Cypher` graph query language.\n",
">\n",
">[Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph.\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to a [Memgraph](https://github.com/memgraph/memgraph) database.\n",
"\n",
"\n",
"## Setting up\n",
"\n",
"To complete this tutorial, you will need [Docker](https://www.docker.com/get-started/) and [Python 3.x](https://www.python.org/) installed.\n",
"\n",
"Ensure you have a running `Memgraph` instance. You can download and run it in a local Docker container by executing the following script:\n",
"```\n",
"docker run \\\n",
" -it \\\n",
@ -19,7 +30,7 @@
" -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform\n",
"```\n",
"\n",
"You will need to wait a few seconds for the database to start. If the process completes successfully, you should see something like this:\n",
"You will need to wait a few seconds for the database to start. If the process completes successfully, you should see something like this:\n",
"```\n",
"mgconsole X.X\n",
"Connected to 'memgraph://127.0.0.1:7687'\n",
@ -28,7 +39,7 @@
"memgraph>\n",
"```\n",
"\n",
"Now you can start playing with Memgraph!"
"Now you can start playing with `Memgraph`!"
]
},
{
@ -686,7 +697,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -6,9 +6,14 @@
"id": "c94240f5",
"metadata": {},
"source": [
"# NebulaGraphQAChain\n",
"# NebulaGraph\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database."
">[NebulaGraph](https://www.nebula-graph.io/) is an open-source, distributed, scalable, lightning-fast\n",
"> graph database built for super large-scale graphs with milliseconds of latency. It uses the `nGQL` graph query language.\n",
">\n",
">[nGQL](https://docs.nebula-graph.io/3.0.0/3.ngql-guide/1.nGQL-overview/1.overview/) is a declarative graph query language for `NebulaGraph`. It allows expressive and efficient graph patterns. `nGQL` is designed for both developers and operations professionals. `nGQL` is an SQL-like query language.\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to `NebulaGraph` database."
]
},
{
@ -17,7 +22,9 @@
"id": "dbc0ee68",
"metadata": {},
"source": [
"You will need to have a running NebulaGraph cluster, for which you can run a containerized cluster by running the following script:\n",
"## Setting up\n",
"\n",
"You can start the `NebulaGraph` cluster as a Docker container by running the following script:\n",
"\n",
"```bash\n",
"curl -fsSL nebula-up.siwei.io/install.sh | bash\n",
@ -28,7 +35,7 @@
"- NebulaGraph Cloud Service. See [here](https://www.nebula-graph.io/cloud)\n",
"- Deploy from package, source code, or via Kubernetes. See [here](https://docs.nebula-graph.io/)\n",
"\n",
"Once the cluster is running, we could create the SPACE and SCHEMA for the database."
"Once the cluster is running, we can create the `SPACE` and `SCHEMA` for the database."
]
},
{
@ -93,18 +100,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "d8eea530",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"UsageError: Cell magic `%%ngql` not found.\n"
]
}
],
"outputs": [],
"source": [
"%%ngql\n",
"INSERT VERTEX person(name, birthdate) VALUES \"Al Pacino\":(\"Al Pacino\", \"1940-04-25\");\n",
@ -262,7 +261,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -5,9 +5,15 @@
"id": "c94240f5",
"metadata": {},
"source": [
"# Neo4j DB QA chain\n",
"# Neo4j\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language."
">[Neo4j](https://neo4j.com/docs/getting-started/) is a graph database management system developed by `Neo4j, Inc`.\n",
"\n",
">The data elements `Neo4j` stores are nodes, edges connecting them, and attributes of nodes and edges. Described by its developers as an ACID-compliant transactional database with native graph storage and processing, `Neo4j` is available in a non-open-source \"community edition\" licensed with a modification of the GNU General Public License, with online backup and high availability extensions licensed under a closed-source commercial license. Neo also licenses `Neo4j` with these extensions under closed-source commercial terms.\n",
"\n",
">This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the `Cypher` query language.\n",
"\n",
">[Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph.\n"
]
},
{
@ -15,7 +21,9 @@
"id": "dbc0ee68",
"metadata": {},
"source": [
"You will need to have a running Neo4j instance. One option is to create a [free Neo4j database instance in their Aura cloud service](https://neo4j.com/cloud/platform/aura-graph-database/). You can also run the database locally using the [Neo4j Desktop application](https://neo4j.com/download/), or running a docker container.\n",
"## Setting up\n",
"\n",
"You will need to have a running `Neo4j` instance. One option is to create a [free Neo4j database instance in their Aura cloud service](https://neo4j.com/cloud/platform/aura-graph-database/). You can also run the database locally using the [Neo4j Desktop application](https://neo4j.com/download/), or running a docker container.\n",
"You can run a local docker container by executing the following script:\n",
"\n",
"```\n",
@ -515,7 +523,8 @@
"id": "eefea16b-508f-4552-8942-9d5063ed7d37",
"metadata": {},
"source": [
"# Ignore specified node and relationship types\n",
"## Ignore specified node and relationship types\n",
"\n",
"You can use `include_types` or `exclude_types` to ignore parts of the graph schema when generating Cypher statements."
]
},
@ -564,7 +573,7 @@
"id": "f0202e88-d700-40ed-aef9-0c969c7bf951",
"metadata": {},
"source": [
"# Validate generated Cypher statements\n",
"## Validate generated Cypher statements\n",
"You can use the `validate_cypher` parameter to validate and correct relationship directions in generated Cypher statements"
]
},
@ -645,7 +654,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.12"
}
},
"nbformat": 4,
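The `include_types`/`exclude_types` options above can be pictured as filtering the graph schema before it is handed to the LLM. A stdlib-only sketch of that idea (the schema layout here is illustrative, not the class's internal representation):

```python
def filter_schema(schema: dict, exclude_types: set) -> dict:
    """Drop excluded node labels and any relationship touching them."""
    nodes = {
        label: props
        for label, props in schema["nodes"].items()
        if label not in exclude_types
    }
    rels = [
        (src, rel, dst)
        for (src, rel, dst) in schema["relationships"]
        if src in nodes and dst in nodes
    ]
    return {"nodes": nodes, "relationships": rels}

schema = {
    "nodes": {"Movie": ["title"], "Actor": ["name"], "Genre": ["name"]},
    "relationships": [
        ("Actor", "ACTED_IN", "Movie"),
        ("Movie", "IN_GENRE", "Genre"),
    ],
}
print(filter_schema(schema, {"Genre"}))
```

With `Genre` excluded, the LLM never sees that label, so it cannot generate Cypher referencing it.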

@ -385,7 +385,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.10.6 64-bit",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -399,7 +399,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

@ -2,18 +2,34 @@
"cells": [
{
"cell_type": "markdown",
"id": "922a7a98-7d73-4a1a-8860-76a33451d1be",
"id": "1271ba5c-1700-4872-b193-f7c162944521",
"metadata": {
"id": "922a7a98-7d73-4a1a-8860-76a33451d1be"
"execution": {
"iopub.execute_input": "2024-03-27T18:44:53.493675Z",
"iopub.status.busy": "2024-03-27T18:44:53.493473Z",
"iopub.status.idle": "2024-03-27T18:44:53.499541Z",
"shell.execute_reply": "2024-03-27T18:44:53.498940Z",
"shell.execute_reply.started": "2024-03-27T18:44:53.493660Z"
}
},
"source": [
"# Ontotext GraphDB QA Chain\n",
"# Ontotext GraphDB\n",
"\n",
"This notebook shows how to use LLMs to provide natural language querying (NLQ to SPARQL, also called text2sparql) for [Ontotext GraphDB](https://graphdb.ontotext.com/). Ontotext GraphDB is a graph database and knowledge discovery tool compliant with [RDF](https://www.w3.org/RDF/) and [SPARQL](https://www.w3.org/TR/sparql11-query/).\n",
">[Ontotext GraphDB](https://graphdb.ontotext.com/) is a graph database and knowledge discovery tool compliant with [RDF](https://www.w3.org/RDF/) and [SPARQL](https://www.w3.org/TR/sparql11-query/).\n",
"\n",
">This notebook shows how to use LLMs to provide natural language querying (NLQ to SPARQL, also called `text2sparql`) for `Ontotext GraphDB`. "
]
},
{
"cell_type": "markdown",
"id": "922a7a98-7d73-4a1a-8860-76a33451d1be",
"metadata": {
"id": "922a7a98-7d73-4a1a-8860-76a33451d1be"
},
"source": [
"## GraphDB LLM Functionalities\n",
"\n",
"GraphDB supports some LLM integration functionalities as described in [https://github.com/w3c/sparql-dev/issues/193](https://github.com/w3c/sparql-dev/issues/193):\n",
"`GraphDB` supports some LLM integration functionalities as described [here](https://github.com/w3c/sparql-dev/issues/193):\n",
"\n",
"[gpt-queries](https://graphdb.ontotext.com/documentation/10.5/gpt-queries.html)\n",
"\n",
@ -43,25 +59,33 @@
"\n",
"* A simple chatbot using a defined KG entity index\n",
"\n",
"## Querying the GraphDB Database\n",
"\n",
"For this tutorial, we won't use the GraphDB LLM integration, but SPARQL generation from NLQ. We'll use the Star Wars API (SWAPI) ontology and dataset that you can examine [here](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo/blob/main/starwars-data.trig).\n",
"\n",
"You will need to have a running GraphDB instance. This tutorial shows how to run the database locally using the [GraphDB Docker image](https://hub.docker.com/r/ontotext/graphdb). It provides a docker compose set-up, which populates GraphDB with the Star Wars dataset. All nessessary files including this notebook can be downloaded from [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo).\n",
"For this tutorial, we won't use the GraphDB LLM integration, but `SPARQL` generation from NLQ. We'll use the `Star Wars API` (`SWAPI`) ontology and dataset that you can examine [here](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo/blob/main/starwars-data.trig).\n"
]
},
{
"cell_type": "markdown",
"id": "45b464ff-8556-403f-a3d6-14ffcd703313",
"metadata": {},
"source": [
"## Setting up\n",
"\n",
"### Set-up\n",
"You need a running GraphDB instance. This tutorial shows how to run the database locally using the [GraphDB Docker image](https://hub.docker.com/r/ontotext/graphdb). It provides a docker compose set-up, which populates GraphDB with the Star Wars dataset. All necessary files including this notebook can be downloaded from [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo).\n",
"\n",
"* Install [Docker](https://docs.docker.com/get-docker/). This tutorial is created using Docker version `24.0.7` which bundles [Docker Compose](https://docs.docker.com/compose/). For earlier Docker versions you may need to install Docker Compose separately.\n",
"* Clone [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo) in a local folder on your machine.\n",
"* Start GraphDB with the following script executed from the same folder\n",
" ```\n",
" docker build --tag graphdb .\n",
" docker compose up -d graphdb\n",
" ```\n",
" \n",
"```\n",
"docker build --tag graphdb .\n",
"docker compose up -d graphdb\n",
"```\n",
"\n",
" You need to wait a couple of seconds for the database to start on `http://localhost:7200/`. The Star Wars dataset `starwars-data.trig` is automatically loaded into the `langchain` repository. The local SPARQL endpoint `http://localhost:7200/repositories/langchain` can be used to run queries against. You can also open the GraphDB Workbench in your favourite web browser at `http://localhost:7200/sparql`, where you can make queries interactively.\n",
"* Working environment\n",
"* Set up working environment\n",
"\n",
"If you use `conda`, create and activate a new conda env (e.g. `conda create -n graph_ontotext_graphdb_qa python=3.9.18`).\n",
"\n",
"Install the following libraries:\n",
"\n",
"```\n",
@ -85,7 +109,7 @@
"id": "e51b397c-2fdc-4b99-9fed-1ab2b6ef7547"
},
"source": [
"### Specifying the Ontology\n",
"## Specifying the ontology\n",
"\n",
"In order for the LLM to be able to generate SPARQL, it needs to know the knowledge graph schema (the ontology). It can be provided using one of two parameters on the `OntotextGraphDBGraph` class:\n",
"\n",
@ -196,7 +220,7 @@
"id": "446d8a00-c98f-43b8-9e84-77b244f7bb24"
},
"source": [
"### Question Answering against the StarWars Dataset\n",
"## Question Answering against the StarWars dataset\n",
"\n",
"We can now use the `OntotextGraphDBQAChain` to ask some questions."
]
@ -400,12 +424,12 @@
"id": "11511345-8436-4634-92c6-36f2c0dd44db"
},
"source": [
"### Chain Modifiers\n",
"## Chain modifiers\n",
"\n",
"The Ontotext GraphDB QA chain allows prompt refinement to further improve your QA chain and enhance the overall user experience of your app.\n",
"\n",
"\n",
"#### \"SPARQL Generation\" Prompt\n",
"### \"SPARQL Generation\" prompt\n",
"\n",
"The prompt is used for the SPARQL query generation based on the user question and the KG schema.\n",
"\n",
@ -436,7 +460,7 @@
" )\n",
" ````\n",
"\n",
"#### \"SPARQL Fix\" Prompt\n",
"### \"SPARQL Fix\" prompt\n",
"\n",
"Sometimes, the LLM may generate a SPARQL query with syntactic errors or missing prefixes, etc. The chain will try to amend this by prompting the LLM to correct it a certain number of times.\n",
"\n",
@ -475,7 +499,7 @@
" \n",
" Default value: `5`\n",
"\n",
"#### \"Answering\" Prompt\n",
"### \"Answering\" prompt\n",
"\n",
"The prompt is used for answering the question based on the results returned from the database and the initial user question. By default, the LLM is instructed to only use the information from the returned result(s). If the result set is empty, the LLM should inform that it can't answer the question.\n",
"\n",
@ -535,7 +559,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.1"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -5,16 +5,46 @@
"id": "c94240f5",
"metadata": {},
"source": [
"# GraphSparqlQAChain\n",
"# RDFLib\n",
"\n",
"Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. [Semantic Web](https://www.w3.org/standards/semanticweb/). [SPARQL](https://www.w3.org/TR/sparql11-query/) serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\\\n",
"Disclaimer: To date, SPARQL query generation via LLMs is still a bit unstable. Be especially careful with UPDATE queries, which alter the graph."
">[RDFLib](https://rdflib.readthedocs.io/) is a pure Python package for working with [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework). `RDFLib` contains most things you need to work with `RDF`, including:\n",
">- parsers and serializers for RDF/XML, N3, NTriples, N-Quads, Turtle, TriX, Trig and JSON-LD\n",
">- a Graph interface which can be backed by any one of a number of Store implementations\n",
">- store implementations for in-memory, persistent on disk (Berkeley DB) and remote SPARQL endpoints\n",
">- a SPARQL 1.1 implementation - supporting SPARQL 1.1 Queries and Update statements\n",
">- SPARQL function extension mechanisms\n",
"\n",
"Graph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends `Semantic Web Technologies`, cp. [Semantic Web](https://www.w3.org/standards/semanticweb/). \n",
"\n",
"[SPARQL](https://www.w3.org/TR/sparql11-query/) serves as a query language analogously to `SQL` or `Cypher` for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating `SPARQL`.\n",
"\n",
"**Disclaimer:** To date, `SPARQL` query generation via LLMs is still a bit unstable. Be especially careful with `UPDATE` queries, which alter the graph."
]
},
{
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {},
"source": [
"## Setting up\n",
"\n",
"We have to install a python library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f66923ed-ba21-4584-88ac-1ad0de310889",
"metadata": {},
"outputs": [],
"source": [
"!pip install rdflib"
]
},
{
"cell_type": "markdown",
"id": "a2d2ac86-39b3-421f-a7ce-1104d2bff707",
"metadata": {},
"source": [
"There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page), and [triple stores](https://www.w3.org/wiki/LargeTripleStores)."
]
@ -57,7 +87,10 @@
"cell_type": "markdown",
"id": "7af596b5",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"Note that providing a `local_file` is necessary for storing changes locally if the source is read-only."
@ -381,7 +414,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -15,7 +15,7 @@ pip install python-arango
Connect your `ArangoDB` database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/graph/integrations/graph_arangodb_qa).
See the notebook example [here](/docs/integrations/graphs/arangodb).
```python
from arango import ArangoClient

@ -35,7 +35,7 @@ from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
```
See a [usage example](/docs/use_cases/graph/integrations/graph_cypher_qa)
See a [usage example](/docs/integrations/graphs/neo4j_cypher)
## Constructing a knowledge graph from text
@ -49,7 +49,7 @@ from langchain_community.graphs import Neo4jGraph
from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformer
```
See a [usage example](/docs/use_cases/graph/integrations/diffbot_graphtransformer)
See a [usage example](/docs/integrations/graphs/diffbot)
## Memory

@ -13,7 +13,7 @@ pip install rdflib==7.0.0
Connect your GraphDB database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/graph/integrations/graph_ontotext_graphdb_qa).
See the notebook example [here](/docs/integrations/graphs/ontotext).
```python
from langchain_community.graphs import OntotextGraphDBGraph

@ -1,2 +0,0 @@
label: 'Integrations'
position: 3

@ -195,7 +195,7 @@
"![graph_chain.webp](../../../static/img/graph_chain.webp)\n",
"\n",
"\n",
"LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](/docs/use_cases/graph/integrations/graph_cypher_qa)"
"LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](/docs/integrations/graphs/neo4j_cypher)"
]
},
{
@ -329,7 +329,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -291,6 +291,7 @@
{ type: "category", label: "Tools", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/tools" }], link: {type: "generated-index", slug: "integrations/tools" }},
{ type: "category", label: "Toolkits", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/toolkits" }], link: {type: "generated-index", slug: "integrations/toolkits" }},
{ type: "category", label: "Memory", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/memory" }], link: {type: "generated-index", slug: "integrations/memory" }},
{ type: "category", label: "Graphs", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/graphs" }], link: {type: "generated-index", slug: "integrations/graphs" }},
{ type: "category", label: "Callbacks", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/callbacks" }], link: {type: "generated-index", slug: "integrations/callbacks" }},
{ type: "category", label: "Chat loaders", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/chat_loaders" }], link: {type: "generated-index", slug: "integrations/chat_loaders" }},
{ type: "category", label: "Adapters", collapsed: true, items: [{type: "autogenerated", dirName: "integrations/adapters" }], link: {type: "generated-index", slug: "integrations/adapters" }},

@ -5,6 +5,62 @@
"source": "/docs/integrations/providers/optimum_intel",
"destination": "/docs/integrations/providers/intel"
},
{
"source": "/docs/use_cases/graph/integrations/diffbot_graphtransformer",
"destination": "/docs/integrations/graphs/diffbot"
},
{
"source": "/docs/use_cases/graph/integrations/graph_arangodb_qa",
"destination": "/docs/integrations/graphs/arangodb"
},
{
"source": "/docs/use_cases/graph/integrations/graph_cypher_qa",
"destination": "/docs/integrations/graphs/neo4j_cypher"
},
{
"source": "/docs/use_cases/graph/integrations/graph_falkordb_qa",
"destination": "/docs/integrations/graphs/falkordb"
},
{
"source": "/docs/use_cases/graph/integrations/graph_gremlin_cosmosdb_qa",
"destination": "/docs/integrations/graphs/azure_cosmosdb_gremlin"
},
{
"source": "/docs/use_cases/graph/integrations/graph_hugegraph_qa",
"destination": "/docs/integrations/graphs/hugegraph"
},
{
"source": "/docs/use_cases/graph/integrations/graph_kuzu_qa",
"destination": "/docs/integrations/graphs/kuzu_db"
},
{
"source": "/docs/use_cases/graph/integrations/graph_memgraph_qa",
"destination": "/docs/integrations/graphs/memgraph"
},
{
"source": "/docs/use_cases/graph/integrations/graph_nebula_qa",
"destination": "/docs/integrations/graphs/nebula_graph"
},
{
"source": "/docs/use_cases/graph/integrations/graph_networkx_qa",
"destination": "/docs/integrations/graphs/networkx"
},
{
"source": "/docs/use_cases/graph/integrations/graph_ontotext_graphdb_qa",
"destination": "/docs/integrations/graphs/ontotext"
},
{
"source": "/docs/use_cases/graph/integrations/graph_sparql_qa",
"destination": "/docs/integrations/graphs/rdflib_sparql"
},
{
"source": "/docs/use_cases/graph/integrations/neptune_cypher_qa",
"destination": "/docs/integrations/graphs/amazon_neptune_open_cypher"
},
{
"source": "/docs/use_cases/graph/integrations/neptune_sparql_qa",
"destination": "/docs/integrations/graphs/amazon_neptune_sparql"
},
{
"source": "/docs/integrations/providers/facebook_chat",
"destination": "/docs/integrations/providers/facebook"
