{
"cells": [
{
"cell_type": "markdown",
"id": "a6850189",
"metadata": {},
"source": [
"# Graph QA\n",
"\n",
"This notebook goes over how to do question answering over a graph data structure."
]
},
{
"cell_type": "markdown",
"id": "9e516e3e",
"metadata": {},
"source": [
"## Create the graph\n",
"\n",
"In this section, we construct an example graph. At the moment, this works best for small pieces of text."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "3849873d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.indexes import GraphIndexCreator\n",
"from langchain.llms import OpenAI\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "05d65c87",
"metadata": {},
"outputs": [],
"source": [
"index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0a45a5b9",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../../state_of_the_union.txt\") as f:\n",
"    all_text = f.read()"
]
},
{
"cell_type": "markdown",
"id": "3fca3e1b",
"metadata": {},
"source": [
"We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "80522bd6",
"metadata": {},
"outputs": [],
"source": [
"text = \"\\n\".join(all_text.split(\"\\n\\n\")[105:108])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "da5aad5a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \\nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \\nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"text"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8dad7b59",
"metadata": {},
"outputs": [],
"source": [
"graph = index_creator.from_text(text)"
]
},
{
"cell_type": "markdown",
"id": "2118f363",
"metadata": {},
"source": [
"We can inspect the created graph."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "32878c13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('Intel', '$20 billion semiconductor \"mega site\"', 'is going to build'),\n",
" ('Intel', 'state-of-the-art factories', 'is building'),\n",
" ('Intel', '10,000 new good-paying jobs', 'is creating'),\n",
" ('Intel', 'Silicon Valley', 'is helping build'),\n",
" ('Field of dreams',\n",
" \"America's future will be built\",\n",
" 'is the ground on which')]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.get_triples()"
]
},
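{
"cell_type": "markdown",
"id": "1f3a8c2e",
"metadata": {},
"source": [
"The triples above were extracted automatically by the LLM. If a fact is missing or wrong, you can also edit the graph by hand. The cell below is only a hedged sketch: it assumes your installed LangChain version exposes a `KnowledgeTriple` tuple and an `add_triple` method in `langchain.graphs.networkx_graph`, with fields in (subject, predicate, object) order; check your version before relying on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4b9d0e7a",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: manually add a triple to the graph.\n",
"# Assumes KnowledgeTriple and NetworkxEntityGraph.add_triple exist in your\n",
"# installed LangChain version, with fields assumed to be (subject, predicate, object).\n",
"from langchain.graphs.networkx_graph import KnowledgeTriple\n",
"\n",
"graph.add_triple(KnowledgeTriple(\"Intel\", \"produces\", \"semiconductors\"))"
]
},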
{
"cell_type": "markdown",
"id": "e9737be1",
"metadata": {},
"source": [
"## Query the graph\n",
"\n",
"We can now use the graph QA chain to ask questions of the graph."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "76edc854",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import GraphQAChain"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8e7719b4",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f6511169",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphQAChain chain...\u001b[0m\n",
"Entities Extracted:\n",
"\u001b[32;1m\u001b[1;3m Intel\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3mIntel is going to build $20 billion semiconductor \"mega site\"\n",
"Intel is building state-of-the-art factories\n",
"Intel is creating 10,000 new good-paying jobs\n",
"Intel is helping build Silicon Valley\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Intel is going to build a $20 billion semiconductor \"mega site\" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"what is Intel going to build?\")"
]
},
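{
"cell_type": "markdown",
"id": "b2d4f6a8",
"metadata": {},
"source": [
"As the verbose output shows, the chain first extracts entities from the question, gathers the triples that mention them as context, and then answers from that context. Any entity in the graph can be queried; for example, a follow-up question about the jobs mentioned in the text could look like the cell below (output not shown here)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3e5a7b9",
"metadata": {},
"outputs": [],
"source": [
"# Another example question against the same graph.\n",
"chain.run(\"how many new jobs is Intel creating?\")"
]
},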
{
"cell_type": "markdown",
"id": "410aafa0",
"metadata": {},
"source": [
"## Save and load the graph\n",
"\n",
"We can also save the graph to disk and load it back later."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bc72cca0",
"metadata": {},
"outputs": [],
"source": [
"graph.write_to_gml(\"graph.gml\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "652760ad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.indexes.graph import NetworkxEntityGraph"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "eae591fe",
"metadata": {},
"outputs": [],
"source": [
"loaded_graph = NetworkxEntityGraph.from_gml(\"graph.gml\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9439d419",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('Intel', '$20 billion semiconductor \"mega site\"', 'is going to build'),\n",
" ('Intel', 'state-of-the-art factories', 'is building'),\n",
" ('Intel', '10,000 new good-paying jobs', 'is creating'),\n",
" ('Intel', 'Silicon Valley', 'is helping build'),\n",
" ('Field of dreams',\n",
" \"America's future will be built\",\n",
" 'is the ground on which')]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loaded_graph.get_triples()"
]
},
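{
"cell_type": "markdown",
"id": "d5f7b9c1",
"metadata": {},
"source": [
"The loaded graph behaves just like the original one, so it can back a fresh QA chain. For example, using only the classes already imported above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6a8c0d2",
"metadata": {},
"outputs": [],
"source": [
"# Build a QA chain over the reloaded graph, mirroring the earlier setup.\n",
"loaded_chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=loaded_graph, verbose=True)"
]
},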
{
"cell_type": "code",
"execution_count": null,
"id": "045796cf",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}