{
"cells": [
{
"cell_type": "markdown",
"id": "1271ba5c-1700-4872-b193-f7c162944521",
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-27T18:44:53.493675Z",
"iopub.status.busy": "2024-03-27T18:44:53.493473Z",
"iopub.status.idle": "2024-03-27T18:44:53.499541Z",
"shell.execute_reply": "2024-03-27T18:44:53.498940Z",
"shell.execute_reply.started": "2024-03-27T18:44:53.493660Z"
}
},
"source": [
"# Ontotext GraphDB\n",
"\n",
">[Ontotext GraphDB](https://graphdb.ontotext.com/) is a graph database and knowledge discovery tool compliant with [RDF](https://www.w3.org/RDF/) and [SPARQL](https://www.w3.org/TR/sparql11-query/).\n",
"\n",
">This notebook shows how to use LLMs to provide natural language querying (NLQ to SPARQL, also called `text2sparql`) for `Ontotext GraphDB`. "
]
},
{
"cell_type": "markdown",
"id": "922a7a98-7d73-4a1a-8860-76a33451d1be",
"metadata": {
"id": "922a7a98-7d73-4a1a-8860-76a33451d1be"
},
"source": [
"## GraphDB LLM Functionalities\n",
"\n",
"`GraphDB` supports some LLM integration functionalities as described [here](https://github.com/w3c/sparql-dev/issues/193):\n",
"\n",
"[gpt-queries](https://graphdb.ontotext.com/documentation/10.5/gpt-queries.html)\n",
"\n",
"* magic predicates to ask an LLM for text, list or table using data from your knowledge graph (KG)\n",
"* query explanation\n",
"* result explanation, summarization, rephrasing, translation\n",
"\n",
"[retrieval-graphdb-connector](https://graphdb.ontotext.com/documentation/10.5/retrieval-graphdb-connector.html)\n",
"\n",
"* Indexing of KG entities in a vector database\n",
"* Supports any text embedding algorithm and vector database\n",
"* Uses the same powerful connector (indexing) language that GraphDB uses for Elastic, Solr, Lucene\n",
"* Automatic synchronization of changes in RDF data to the KG entity index\n",
"* Supports nested objects (no UI support in GraphDB version 10.5)\n",
"* Serializes KG entities to text like this (e.g. for a Wines dataset):\n",
"\n",
"```\n",
"Franvino:\n",
"- is a RedWine.\n",
"- made from grape Merlo.\n",
"- made from grape Cabernet Franc.\n",
"- has sugar dry.\n",
"- has year 2012.\n",
"```\n",
"\n",
"[talk-to-graph](https://graphdb.ontotext.com/documentation/10.5/talk-to-graph.html)\n",
"\n",
"* A simple chatbot using a defined KG entity index\n",
"\n",
"\n",
"For this tutorial, we won't use the GraphDB LLM integration, but `SPARQL` generation from NLQ. We'll use the `Star Wars API` (`SWAPI`) ontology and dataset that you can examine [here](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo/blob/main/starwars-data.trig).\n"
]
},
{
"cell_type": "markdown",
"id": "45b464ff-8556-403f-a3d6-14ffcd703313",
"metadata": {},
"source": [
"## Setting up\n",
"\n",
"You need a running GraphDB instance. This tutorial shows how to run the database locally using the [GraphDB Docker image](https://hub.docker.com/r/ontotext/graphdb). It provides a docker compose set-up, which populates GraphDB with the Star Wars dataset. All necessary files including this notebook can be downloaded from [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo).\n",
"\n",
"* Install [Docker](https://docs.docker.com/get-docker/). This tutorial is created using Docker version `24.0.7` which bundles [Docker Compose](https://docs.docker.com/compose/). For earlier Docker versions you may need to install Docker Compose separately.\n",
"* Clone [the GitHub repository langchain-graphdb-qa-chain-demo](https://github.com/Ontotext-AD/langchain-graphdb-qa-chain-demo) in a local folder on your machine.\n",
"* Start GraphDB with the following script executed from the same folder\n",
" \n",
"```\n",
"docker build --tag graphdb .\n",
"docker compose up -d graphdb\n",
"```\n",
"\n",
" You need to wait a couple of seconds for the database to start on `http://localhost:7200/`. The Star Wars dataset `starwars-data.trig` is automatically loaded into the `langchain` repository. The local SPARQL endpoint `http://localhost:7200/repositories/langchain` can be used to run queries against. You can also open the GraphDB Workbench from your favourite web browser `http://localhost:7200/sparql` where you can make queries interactively.\n",
"* Set up working environment\n",
"\n",
"If you use `conda`, create and activate a new conda env (e.g. `conda create -n graph_ontotext_graphdb_qa python=3.9.18`).\n",
"\n",
"Install the following libraries:\n",
"\n",
"```\n",
"pip install jupyter==1.0.0\n",
"pip install openai==1.6.1\n",
"pip install rdflib==7.0.0\n",
"pip install langchain-openai==0.0.2\n",
"pip install langchain>=0.1.5\n",
"```\n",
"\n",
"Run Jupyter with\n",
"```\n",
"jupyter notebook\n",
"```"
]
},
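{
"cell_type": "markdown",
"id": "b3f4b9b6-check-graphdb-connection",
"metadata": {},
"source": [
"Before moving on, you can verify that GraphDB is up and the Star Wars data is loaded. The following is a minimal sketch that sends an `ASK` query over the standard SPARQL protocol using only the Python standard library (the endpoint URL assumes the default setup described above):\n",
"\n",
"```python\n",
"import json\n",
"from urllib.parse import urlencode\n",
"from urllib.request import Request, urlopen\n",
"\n",
"# An ASK query returns true if the repository contains at least one statement\n",
"endpoint = \"http://localhost:7200/repositories/langchain\"\n",
"query = \"ASK { ?s ?p ?o }\"\n",
"\n",
"request = Request(\n",
"    endpoint + \"?\" + urlencode({\"query\": query}),\n",
"    headers={\"Accept\": \"application/sparql-results+json\"},\n",
")\n",
"with urlopen(request) as response:\n",
"    result = json.loads(response.read())\n",
"print(result[\"boolean\"])  # True if the dataset was loaded\n",
"```"
]
},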
{
"cell_type": "markdown",
"id": "e51b397c-2fdc-4b99-9fed-1ab2b6ef7547",
"metadata": {
"id": "e51b397c-2fdc-4b99-9fed-1ab2b6ef7547"
},
"source": [
"## Specifying the ontology\n",
"\n",
"In order for the LLM to be able to generate SPARQL, it needs to know the knowledge graph schema (the ontology). It can be provided using one of two parameters on the `OntotextGraphDBGraph` class:\n",
"\n",
"* `query_ontology`: a `CONSTRUCT` query that is executed on the SPARQL endpoint and returns the KG schema statements. We recommend that you store the ontology in its own named graph, which will make it easier to get only the relevant statements (as the example below). `DESCRIBE` queries are not supported, because `DESCRIBE` returns the Symmetric Concise Bounded Description (SCBD), i.e. also the incoming class links. In case of large graphs with a million of instances, this is not efficient. Check https://github.com/eclipse-rdf4j/rdf4j/issues/4857\n",
"* `local_file`: a local RDF ontology file. Supported RDF formats are `Turtle`, `RDF/XML`, `JSON-LD`, `N-Triples`, `Notation-3`, `Trig`, `Trix`, `N-Quads`.\n",
"\n",
"In either case, the ontology dump should:\n",
"\n",
"* Include enough information about classes, properties, property attachment to classes (using rdfs:domain, schema:domainIncludes or OWL restrictions), and taxonomies (important individuals).\n",
"* Not include overly verbose and irrelevant definitions and examples that do not help SPARQL construction."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "dc8792e0-acfb-4310-b5fa-8f649e448870",
"metadata": {
"id": "dc8792e0-acfb-4310-b5fa-8f649e448870"
},
"outputs": [],
"source": [
"from langchain_community.graphs import OntotextGraphDBGraph\n",
"\n",
"# feeding the schema using a user construct query\n",
"\n",
"graph = OntotextGraphDBGraph(\n",
" query_endpoint=\"http://localhost:7200/repositories/langchain\",\n",
" query_ontology=\"CONSTRUCT {?s ?p ?o} FROM <https://swapi.co/ontology/> WHERE {?s ?p ?o}\",\n",
")"
]
},
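{
"cell_type": "markdown",
"id": "c7d1e2f3-filtered-construct-query",
"metadata": {},
"source": [
"If the descriptions in the ontology graph are too verbose, you can drop them from the schema dump by filtering on the predicate in the `CONSTRUCT` query. This is only a sketch, not part of the tutorial setup; dropping `rdfs:comment` is just an illustration:\n",
"\n",
"```python\n",
"from langchain_community.graphs import OntotextGraphDBGraph\n",
"\n",
"# Same ontology dump as above, but skipping rdfs:comment statements\n",
"# to keep the schema fed to the LLM compact\n",
"graph = OntotextGraphDBGraph(\n",
"    query_endpoint=\"http://localhost:7200/repositories/langchain\",\n",
"    query_ontology=(\n",
"        \"CONSTRUCT {?s ?p ?o} FROM <https://swapi.co/ontology/> \"\n",
"        \"WHERE {?s ?p ?o . \"\n",
"        \"FILTER (?p != <http://www.w3.org/2000/01/rdf-schema#comment>)}\"\n",
"    ),\n",
")\n",
"```"
]
},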
{
"cell_type": "code",
"execution_count": null,
"id": "a08b8d8c-af01-4401-8069-5f2cd022a6df",
"metadata": {
"id": "a08b8d8c-af01-4401-8069-5f2cd022a6df"
},
"outputs": [],
"source": [
"# feeding the schema using a local RDF file\n",
"\n",
"graph = OntotextGraphDBGraph(\n",
" query_endpoint=\"http://localhost:7200/repositories/langchain\",\n",
" local_file=\"/path/to/langchain_graphdb_tutorial/starwars-ontology.nt\", # change the path here\n",
")"
]
},
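{
"cell_type": "markdown",
"id": "d8e9f0a1-export-ontology-file",
"metadata": {},
"source": [
"If you prefer the `local_file` option but don't have an ontology file yet, one way to produce it is to run the same `CONSTRUCT` query against the endpoint and save the response. A minimal sketch using the standard SPARQL protocol and the Python standard library (the output file name is just an example):\n",
"\n",
"```python\n",
"from urllib.parse import urlencode\n",
"from urllib.request import Request, urlopen\n",
"\n",
"# CONSTRUCT query returning all statements from the ontology named graph\n",
"query = \"CONSTRUCT {?s ?p ?o} FROM <https://swapi.co/ontology/> WHERE {?s ?p ?o}\"\n",
"\n",
"request = Request(\n",
"    \"http://localhost:7200/repositories/langchain?\" + urlencode({\"query\": query}),\n",
"    headers={\"Accept\": \"text/turtle\"},  # ask for a Turtle serialization\n",
")\n",
"# Write the Turtle to a local file usable with the `local_file` parameter\n",
"with urlopen(request) as response, open(\"starwars-ontology.ttl\", \"wb\") as f:\n",
"    f.write(response.read())\n",
"```"
]
},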
{
"cell_type": "markdown",
"id": "583b26ce-fb0d-4e9c-b5cd-9ec0e3be8922",
"metadata": {
"id": "583b26ce-fb0d-4e9c-b5cd-9ec0e3be8922"
},
"source": [
"Either way, the ontology (schema) is fed to the LLM as `Turtle` since `Turtle` with appropriate prefixes is most compact and easiest for the LLM to remember.\n",
"\n",
"The Star Wars ontology is a bit unusual in that it includes a lot of specific triples about classes, e.g. that the species `:Aleena` live on `<planet/38>`, they are a subclass of `:Reptile`, have certain typical characteristics (average height, average lifespan, skinColor), and specific individuals (characters) are representatives of that class:\n",
"\n",
"\n",
"```\n",
"@prefix : <https://swapi.co/vocabulary/> .\n",
"@prefix owl: <http://www.w3.org/2002/07/owl#> .\n",
"@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n",
"@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n",
"\n",
":Aleena a owl:Class, :Species ;\n",
" rdfs:label \"Aleena\" ;\n",
" rdfs:isDefinedBy <https://swapi.co/ontology/> ;\n",
" rdfs:subClassOf :Reptile, :Sentient ;\n",
" :averageHeight 80.0 ;\n",
" :averageLifespan \"79\" ;\n",
" :character <https://swapi.co/resource/aleena/47> ;\n",
" :film <https://swapi.co/resource/film/4> ;\n",
" :language \"Aleena\" ;\n",
" :planet <https://swapi.co/resource/planet/38> ;\n",
" :skinColor \"blue\", \"gray\" .\n",
"\n",
" ...\n",
"\n",
" ```\n"
]
},
{
"cell_type": "markdown",
"id": "6277d911-b0f6-4aeb-9aa5-96416b668468",
"metadata": {
"id": "6277d911-b0f6-4aeb-9aa5-96416b668468"
},
"source": [
"In order to keep this tutorial simple, we use un-secured GraphDB. If GraphDB is secured, you should set the environment variables 'GRAPHDB_USERNAME' and 'GRAPHDB_PASSWORD' before the initialization of `OntotextGraphDBGraph`.\n",
"\n",
"```python\n",
"os.environ[\"GRAPHDB_USERNAME\"] = \"graphdb-user\"\n",
"os.environ[\"GRAPHDB_PASSWORD\"] = \"graphdb-password\"\n",
"\n",
"graph = OntotextGraphDBGraph(\n",
" query_endpoint=...,\n",
" query_ontology=...\n",
")\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "446d8a00-c98f-43b8-9e84-77b244f7bb24",
"metadata": {
"id": "446d8a00-c98f-43b8-9e84-77b244f7bb24"
},
"source": [
"## Question Answering against the StarWars dataset\n",
"\n",
"We can now use the `OntotextGraphDBQAChain` to ask some questions."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fab63d88-511d-4049-9bf0-ca8748f1fbff",
"metadata": {
"id": "fab63d88-511d-4049-9bf0-ca8748f1fbff"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain.chains import OntotextGraphDBQAChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# We'll be using an OpenAI model which requires an OpenAI API Key.\n",
"# However, other models are available as well:\n",
"# https://python.langchain.com/docs/integrations/chat/\n",
"\n",
"# Set the environment variable `OPENAI_API_KEY` to your OpenAI API key\n",
"os.environ[\"OPENAI_API_KEY\"] = \"sk-***\"\n",
"\n",
"# Any available OpenAI model can be used here.\n",
"# We use 'gpt-4-1106-preview' because of the bigger context window.\n",
"# The 'gpt-4-1106-preview' model_name will deprecate in the future and will change to 'gpt-4-turbo' or similar,\n",
"# so be sure to consult with the OpenAI API https://platform.openai.com/docs/models for the correct naming.\n",
"\n",
"chain = OntotextGraphDBQAChain.from_llm(\n",
" ChatOpenAI(temperature=0, model_name=\"gpt-4-1106-preview\"),\n",
" graph=graph,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "64de8463-35b1-4c65-91e4-387daf4dd7d4",
"metadata": {},
"source": [
"Let's ask a simple one."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f1dc4bea-b0f1-48f7-99a6-351a31acac7b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OntotextGraphDBQAChain chain...\u001b[0m\n",
"Generated SPARQL:\n",
"\u001b[32;1m\u001b[1;3mPREFIX : <https://swapi.co/vocabulary/>\n",
"PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n",
"\n",
"SELECT ?climate\n",
"WHERE {\n",
" ?planet rdfs:label \"Tatooine\" ;\n",
" :climate ?climate .\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The climate on Tatooine is arid.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({chain.input_key: \"What is the climate on Tatooine?\"})[chain.output_key]"
]
},
{
"cell_type": "markdown",
"id": "6d3a37f4-5c56-4b3e-b6ae-3eb030ffcc8f",
"metadata": {},
"source": [
"And a bit more complicated one."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4dde8b18-4329-4a86-abfb-26d3e77034b7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OntotextGraphDBQAChain chain...\u001b[0m\n",
"Generated SPARQL:\n",
"\u001b[32;1m\u001b[1;3mPREFIX : <https://swapi.co/vocabulary/>\n",
"PREFIX owl: <http://www.w3.org/2002/07/owl#>\n",
"PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n",
"\n",
"SELECT ?climate\n",
"WHERE {\n",
" ?character rdfs:label \"Luke Skywalker\" .\n",
" ?character :homeworld ?planet .\n",
" ?planet :climate ?climate .\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"The climate on Luke Skywalker's home planet is arid.\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({chain.input_key: \"What is the climate on Luke Skywalker's home planet?\"})[\n",
" chain.output_key\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "51d3ce3e-9528-4a65-8f3e-2281de08cbf1",
"metadata": {},
"source": [
"We can also ask more complicated questions like"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ab6f55f1-a3e0-4615-abd2-3cb26619c8d9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OntotextGraphDBQAChain chain...\u001b[0m\n",
"Generated SPARQL:\n",
"\u001b[32;1m\u001b[1;3mPREFIX : <https://swapi.co/vocabulary/>\n",
"PREFIX owl: <http://www.w3.org/2002/07/owl#>\n",
"PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n",
"PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n",
"\n",
"SELECT (AVG(?boxOffice) AS ?averageBoxOffice)\n",
"WHERE {\n",
" ?film a :Film .\n",
" ?film :boxOffice ?boxOfficeValue .\n",
" BIND(xsd:decimal(?boxOfficeValue) AS ?boxOffice)\n",
"}\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The average box office revenue for all the Star Wars movies is approximately 754.1 million dollars.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\n",
" {\n",
" chain.input_key: \"What is the average box office revenue for all the Star Wars movies?\"\n",
" }\n",
")[chain.output_key]"
]
},
{
"cell_type": "markdown",
"id": "11511345-8436-4634-92c6-36f2c0dd44db",
"metadata": {
"id": "11511345-8436-4634-92c6-36f2c0dd44db"
},
"source": [
"## Chain modifiers\n",
"\n",
"The Ontotext GraphDB QA chain allows prompt refinement for further improvement of your QA chain and enhancing the overall user experience of your app.\n",
"\n",
"\n",
"### \"SPARQL Generation\" prompt\n",
"\n",
"The prompt is used for the SPARQL query generation based on the user question and the KG schema.\n",
"\n",
"- `sparql_generation_prompt`\n",
"\n",
" Default value:\n",
" ````python\n",
" GRAPHDB_SPARQL_GENERATION_TEMPLATE = \"\"\"\n",
" Write a SPARQL SELECT query for querying a graph database.\n",
" The ontology schema delimited by triple backticks in Turtle format is:\n",
" ```\n",
" {schema}\n",
" ```\n",
" Use only the classes and properties provided in the schema to construct the SPARQL query.\n",
" Do not use any classes or properties that are not explicitly provided in the SPARQL query.\n",
" Include all necessary prefixes.\n",
" Do not include any explanations or apologies in your responses.\n",
" Do not wrap the query in backticks.\n",
" Do not include any text except the SPARQL query generated.\n",
" The question delimited by triple backticks is:\n",
" ```\n",
" {prompt}\n",
" ```\n",
" \"\"\"\n",
" GRAPHDB_SPARQL_GENERATION_PROMPT = PromptTemplate(\n",
" input_variables=[\"schema\", \"prompt\"],\n",
" template=GRAPHDB_SPARQL_GENERATION_TEMPLATE,\n",
" )\n",
" ````\n",
"\n",
"### \"SPARQL Fix\" prompt\n",
"\n",
"Sometimes, the LLM may generate a SPARQL query with syntactic errors or missing prefixes, etc. The chain will try to amend this by prompting the LLM to correct it a certain number of times.\n",
"\n",
"- `sparql_fix_prompt`\n",
"\n",
" Default value:\n",
" ````python\n",
" GRAPHDB_SPARQL_FIX_TEMPLATE = \"\"\"\n",
" This following SPARQL query delimited by triple backticks\n",
" ```\n",
" {generated_sparql}\n",
" ```\n",
" is not valid.\n",
" The error delimited by triple backticks is\n",
" ```\n",
" {error_message}\n",
" ```\n",
" Give me a correct version of the SPARQL query.\n",
" Do not change the logic of the query.\n",
" Do not include any explanations or apologies in your responses.\n",
" Do not wrap the query in backticks.\n",
" Do not include any text except the SPARQL query generated.\n",
" The ontology schema delimited by triple backticks in Turtle format is:\n",
" ```\n",
" {schema}\n",
" ```\n",
" \"\"\"\n",
" \n",
" GRAPHDB_SPARQL_FIX_PROMPT = PromptTemplate(\n",
" input_variables=[\"error_message\", \"generated_sparql\", \"schema\"],\n",
" template=GRAPHDB_SPARQL_FIX_TEMPLATE,\n",
" )\n",
" ````\n",
"\n",
"- `max_fix_retries`\n",
" \n",
" Default value: `5`\n",
"\n",
"### \"Answering\" prompt\n",
"\n",
"The prompt is used for answering the question based on the results returned from the database and the initial user question. By default, the LLM is instructed to only use the information from the returned result(s). If the result set is empty, the LLM should inform that it can't answer the question.\n",
"\n",
"- `qa_prompt`\n",
" \n",
" Default value:\n",
" ````python\n",
" GRAPHDB_QA_TEMPLATE = \"\"\"Task: Generate a natural language response from the results of a SPARQL query.\n",
" You are an assistant that creates well-written and human understandable answers.\n",
" The information part contains the information provided, which you can use to construct an answer.\n",
" The information provided is authoritative, you must never doubt it or try to use your internal knowledge to correct it.\n",
" Make your response sound like the information is coming from an AI assistant, but don't add any information.\n",
" Don't use internal knowledge to answer the question, just say you don't know if no information is available.\n",
" Information:\n",
" {context}\n",
" \n",
" Question: {prompt}\n",
" Helpful Answer:\"\"\"\n",
" GRAPHDB_QA_PROMPT = PromptTemplate(\n",
" input_variables=[\"context\", \"prompt\"], template=GRAPHDB_QA_TEMPLATE\n",
" )\n",
" ````"
]
},
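{
"cell_type": "markdown",
"id": "e5f6a7b8-custom-chain-modifiers",
"metadata": {},
"source": [
"All of these modifiers can be passed to `OntotextGraphDBQAChain.from_llm`. Below is a sketch that swaps in a custom answering prompt and lowers the number of fix retries; the prompt wording is only an illustration:\n",
"\n",
"```python\n",
"from langchain.chains import OntotextGraphDBQAChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# A custom answering prompt; it must keep the 'context' and 'prompt' input variables\n",
"custom_qa_prompt = PromptTemplate(\n",
"    input_variables=[\"context\", \"prompt\"],\n",
"    template=\"\"\"Answer the question using only the information below.\n",
"If the information is empty, say that you don't know.\n",
"Information:\n",
"{context}\n",
"\n",
"Question: {prompt}\n",
"Answer in one short sentence:\"\"\",\n",
")\n",
"\n",
"chain = OntotextGraphDBQAChain.from_llm(\n",
"    ChatOpenAI(temperature=0, model_name=\"gpt-4-1106-preview\"),\n",
"    graph=graph,\n",
"    qa_prompt=custom_qa_prompt,\n",
"    max_fix_retries=2,  # give the LLM at most two attempts to repair an invalid query\n",
"    verbose=True,\n",
")\n",
"```"
]
},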
{
"cell_type": "markdown",
"id": "2ef8c073-003d-44ab-8a7b-cf45c50f6370",
"metadata": {
"id": "2ef8c073-003d-44ab-8a7b-cf45c50f6370"
},
"source": [
"Once you're finished playing with QA with GraphDB, you can shut down the Docker environment by running\n",
"``\n",
"docker compose down -v --remove-orphans\n",
"``\n",
"from the directory with the Docker compose file."
]
}
],
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}