{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Extraction and Transformation in ELT Workflows using GPT-4o as an OCR Alternative\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A lot of enterprise data is unstructured and locked up in difficult-to-use formats, e.g. PDFs, PPT, PNG, that are not optimized for use with LLMs or databases. As a result this type of data tends to be underutilized for analysis and product development, despite it being so valuable. The traditional way of extracting information from unstructured or non-ideal formats has been to use OCR, but OCR struggles with complex layouts and can have limited multilingual support. Moreover, manually applying transforms to data can be cumbersome and timeconsuming. \n",
"\n",
"The multi-modal capabilities of GPT-4o enable new ways to extract and transform data because of GPT-4o's ability to adapt to different types of documents and to use reasoning for interpreting the content of documents. Here are some reasons why you would choose GPT-4o for your extraction and transformation workflows over traditional methods. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"| **Extraction** | **Transformation** |\n",
"|---------------------------------------------------------------|------------------------------------------------------------------|\n",
"| **Adaptable**: Handles complex document layouts better, reducing errors | **Schema Adaptability**: Easily transforms data to fit specific schemas for database ingestion |\n",
"| **Multilingual Support**: Seamlessly processes documents in multiple languages | **Dynamic Data Mapping**: Adapts to different data structures and formats, providing flexible transformation rules |\n",
"| **Contextual Understanding**: Extracts meaningful relationships and context, not just text | **Enhanced Insight Generation**: Applies reasoning to create more insightful transformations, enriching the dataset with derived metrics, metadata and relationships |\n",
"| **Multimodality**: Processes various document elements, including images and tables | |\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This cookbook has three parts:\n",
"1. How to extract data from multilingual PDFs \n",
"2. How to transform data according to a schema for loading into a database\n",
"3. How to load transformed data into a database for downstream analysis\n",
"\n",
"We're going to mimic a simple ELT workflow where data is first extracted from PDFs into JSON using GPT-4o, stored in an unstructured format somewhere like a data lake, transformed to fit a schema using GPT-4o, and then finally ingested into a relational database for querying. It's worth noting that you can do all of this with the BatchAPI if you're interested in lowering the cost of this workflow. \n",
"\n",
"![](../images/elt_workflow.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data we'll be using is a set of publicly available 2019 hotel invoices from Germany available on [Jens Walter's GitHub](https://github.com/JensWalter/my-receipts/tree/master/2019/de/hotel), (thank you Jens!). Though hotel invoices generally contain similar information (reservation details, charges, taxes etc.), you'll notice that the invoices present itemized information in different ways and are multilingual containing both German and English. Fortunately GPT-4o can adapt to a variety of different document styles without us having to specify formats and it can seamlessly handle a variety of languages, even in the same document. \n",
"Here is what one of the invoices looks like: \n",
"\n",
"![](../images/sample_hotel_invoice.png)\n",
"\n",
"## Part 1: Extracting data from PDFs using GPT-4o's vision capabilities\n",
"GPT-4o doesn't natively handle PDFs so before we extract any data we'll first need to convert each page into an image and then encode the images as base64. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from openai import OpenAI\n",
"import fitz # PyMuPDF\n",
"import io\n",
"import os\n",
"from PIL import Image\n",
"import base64\n",
"import json\n",
"\n",
"api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"client = OpenAI(api_key=api_key)\n",
"\n",
"\n",
"@staticmethod\n",
"def encode_image(image_path):\n",
" with open(image_path, \"rb\") as image_file:\n",
" return base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
"\n",
"\n",
"def pdf_to_base64_images(pdf_path):\n",
" #Handles PDFs with multiple pages\n",
" pdf_document = fitz.open(pdf_path)\n",
" base64_images = []\n",
" temp_image_paths = []\n",
"\n",
" total_pages = len(pdf_document)\n",
"\n",
" for page_num in range(total_pages):\n",
" page = pdf_document.load_page(page_num)\n",
" pix = page.get_pixmap()\n",
" img = Image.open(io.BytesIO(pix.tobytes()))\n",
" temp_image_path = f\"temp_page_{page_num}.png\"\n",
" img.save(temp_image_path, format=\"PNG\")\n",
" temp_image_paths.append(temp_image_path)\n",
" base64_image = encode_image(temp_image_path)\n",
" base64_images.append(base64_image)\n",
"\n",
" for temp_image_path in temp_image_paths:\n",
" os.remove(temp_image_path)\n",
"\n",
" return base64_images"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then pass each base64 encoded image in a GPT-4o LLM call, specifying a high level of detail and JSON as the response format. We're not concerned about enforcing a schema at this step, we just want all of the data to be extracted regardless of type."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"def extract_invoice_data(base64_image):\n",
" system_prompt = f\"\"\"\n",
" You are an OCR-like data extraction tool that extracts hotel invoice data from PDFs.\n",
" \n",
" 1. Please extract the data in this hotel invoice, grouping data according to theme/sub groups, and then output into JSON.\n",
"\n",
" 2. Please keep the keys and values of the JSON in the original language. \n",
"\n",
" 3. The type of data you might encounter in the invoice includes but is not limited to: hotel information, guest information, invoice information,\n",
" room charges, taxes, and total charges etc. \n",
"\n",
" 4. If the page contains no charge data, please output an empty JSON object and don't make up any data.\n",
"\n",
" 5. If there are blank data fields in the invoice, please include them as \"null\" values in the JSON object.\n",
" \n",
" 6. If there are tables in the invoice, capture all of the rows and columns in the JSON object. \n",
" Even if a column is blank, include it as a key in the JSON object with a null value.\n",
" \n",
" 7. If a row is blank denote missing fields with \"null\" values. \n",
" \n",
" 8. Don't interpolate or make up data.\n",
"\n",
" 9. Please maintain the table structure of the charges, i.e. capture all of the rows and columns in the JSON object.\n",
"\n",
" \"\"\"\n",
" \n",
" response = client.chat.completions.create(\n",
" model=\"gpt-4o\",\n",
" response_format={ \"type\": \"json_object\" },\n",
" messages=[\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": system_prompt\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"extract the data in this hotel invoice and output into JSON \"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/png;base64,{base64_image}\", \"detail\": \"high\"}}\n",
" ]\n",
" }\n",
" ],\n",
" temperature=0.0,\n",
" )\n",
" return response.choices[0].message.content\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because invoice data can span multiple pages in a PDF, we're going to produce JSON objects for each page in the invoice and then append them together. The final invoice extraction will be a single JSON file."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def extract_from_multiple_pages(base64_images, original_filename, output_directory):\n",
" entire_invoice = []\n",
"\n",
" for base64_image in base64_images:\n",
" invoice_json = extract_invoice_data(base64_image)\n",
" invoice_data = json.loads(invoice_json)\n",
" entire_invoice.append(invoice_data)\n",
"\n",
" # Ensure the output directory exists\n",
" os.makedirs(output_directory, exist_ok=True)\n",
"\n",
" # Construct the output file path\n",
" output_filename = os.path.join(output_directory, original_filename.replace('.pdf', '_extracted.json'))\n",
" \n",
" # Save the entire_invoice list as a JSON file\n",
" with open(output_filename, 'w', encoding='utf-8') as f:\n",
" json.dump(entire_invoice, f, ensure_ascii=False, indent=4)\n",
" return output_filename\n",
"\n",
"\n",
"def main_extract(read_path, write_path):\n",
" for filename in os.listdir(read_path):\n",
" file_path = os.path.join(read_path, filename)\n",
" if os.path.isfile(file_path):\n",
" base64_images = pdf_to_base64_images(file_path)\n",
" extract_from_multiple_pages(base64_images, filename, write_path)\n",
"\n",
"\n",
"read_path= \"./data/hotel_invoices/receipts_2019_de_hotel\"\n",
"write_path= \"./data/hotel_invoices/extracted_invoice_json\"\n",
"\n",
"main_extract(read_path, write_path)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each invoice JSON will have different keys depending on what data the original invoice contained, so at this point you can store the unschematized JSON files in a data lake that can handle unstructured data. For simplicity though, we're going to store the files in a folder. Here is what one of the extracted JSON files looks like, you'll notice that even though we didn't specify a schema, GPT-4o was able to understand German and group similar information together. Moreover, if there was a blank field in the invoice GPT-4o transcribed that as \"null\". "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"vscode": {
"languageId": "javascript"
}
},
"outputs": [],
"source": [
"[\n",
" {\n",
" \"Hotel Information\": {\n",
" \"Name\": \"Hamburg City (Zentrum)\",\n",
" \"Address\": \"Willy-Brandt-Straße 21, 20457 Hamburg, Deutschland\",\n",
" \"Phone\": \"+49 (0) 40 3039 379 0\"\n",
" },\n",
" \"Guest Information\": {\n",
" \"Name\": \"APIMEISTER CONSULTING GmbH\",\n",
" \"Guest\": \"Herr Jens Walter\",\n",
" \"Address\": \"Friedrichstr. 123, 10117 Berlin\"\n",
" },\n",
" \"Invoice Information\": {\n",
" \"Rechnungsnummer\": \"GABC19014325\",\n",
" \"Rechnungsdatum\": \"23.09.19\",\n",
" \"Referenznummer\": \"GABC015452127\",\n",
" \"Buchungsnummer\": \"GABR15867\",\n",
" \"Ankunft\": \"23.09.19\",\n",
" \"Abreise\": \"27.09.19\",\n",
" \"Nächte\": 4,\n",
" \"Zimmer\": 626,\n",
" \"Kundereferenz\": 2\n",
" },\n",
" \"Charges\": [\n",
" {\n",
" \"Datum\": \"23.09.19\",\n",
" \"Uhrzeit\": \"16:36\",\n",
" \"Beschreibung\": \"Übernachtung\",\n",
" \"MwSt.%\": 7.0,\n",
" \"Betrag\": 77.0,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"24.09.19\",\n",
" \"Uhrzeit\": null,\n",
" \"Beschreibung\": \"Übernachtung\",\n",
" \"MwSt.%\": 7.0,\n",
" \"Betrag\": 135.0,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"25.09.19\",\n",
" \"Uhrzeit\": null,\n",
" \"Beschreibung\": \"Übernachtung\",\n",
" \"MwSt.%\": 7.0,\n",
" \"Betrag\": 82.0,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"26.09.19\",\n",
" \"Uhrzeit\": null,\n",
" \"Beschreibung\": \"Übernachtung\",\n",
" \"MwSt.%\": 7.0,\n",
" \"Betrag\": 217.0,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"24.09.19\",\n",
" \"Uhrzeit\": \"9:50\",\n",
" \"Beschreibung\": \"Premier Inn Frühstücksbuffet\",\n",
" \"MwSt.%\": 19.0,\n",
" \"Betrag\": 9.9,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"25.09.19\",\n",
" \"Uhrzeit\": \"9:50\",\n",
" \"Beschreibung\": \"Premier Inn Frühstücksbuffet\",\n",
" \"MwSt.%\": 19.0,\n",
" \"Betrag\": 9.9,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"26.09.19\",\n",
" \"Uhrzeit\": \"9:50\",\n",
" \"Beschreibung\": \"Premier Inn Frühstücksbuffet\",\n",
" \"MwSt.%\": 19.0,\n",
" \"Betrag\": 9.9,\n",
" \"Zahlung\": null\n",
" },\n",
" {\n",
" \"Datum\": \"27.09.19\",\n",
" \"Uhrzeit\": \"9:50\",\n",
" \"Beschreibung\": \"Premier Inn Frühstücksbuffet\",\n",
" \"MwSt.%\": 19.0,\n",
" \"Betrag\": 9.9,\n",
" \"Zahlung\": null\n",
" }\n",
" ],\n",
" \"Payment Information\": {\n",
" \"Zahlung\": \"550,60\",\n",
" \"Gesamt (Rechnungsbetrag)\": \"550,60\",\n",
" \"Offener Betrag\": \"0,00\",\n",
" \"Bezahlart\": \"Mastercard-Kreditkarte\"\n",
" },\n",
" \"Tax Information\": {\n",
" \"MwSt.%\": [\n",
" {\n",
" \"Rate\": 19.0,\n",
" \"Netto\": 33.28,\n",
" \"MwSt.\": 6.32,\n",
" \"Brutto\": 39.6\n",
" },\n",
" {\n",
" \"Rate\": 7.0,\n",
" \"Netto\": 477.57,\n",
" \"MwSt.\": 33.43,\n",
" \"Brutto\": 511.0\n",
" }\n",
" ]\n",
" }\n",
" }\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: Transforming data according to a schema \n",
"\n",
"You've extracted data from PDFs and have likely loaded the unstructured extractions as JSON objects in a data lake. The next step in our ELT workflow is to use GPT-4o to transform the extractions according to our desired schema. This will enable us to ingest any resulting tables into a database. We've decided upon the following schema that broadly covers most of the information we would have seen across the different invoices. This schema will be used to process each raw JSON extraction into our desired schematized JSON and can specify particular formats such as \"date\": \"YYYY-MM-DD\". We're also going to translate the data into English at this step. \n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"vscode": {
"languageId": "javascript"
}
},
"outputs": [],
"source": [
"[\n",
" {\n",
" \"hotel_information\": {\n",
" \"name\": \"string\",\n",
" \"address\": {\n",
" \"street\": \"string\",\n",
" \"city\": \"string\",\n",
" \"country\": \"string\",\n",
" \"postal_code\": \"string\"\n",
" },\n",
" \"contact\": {\n",
" \"phone\": \"string\",\n",
" \"fax\": \"string\",\n",
" \"email\": \"string\",\n",
" \"website\": \"string\"\n",
" }\n",
" },\n",
" \"guest_information\": {\n",
" \"company\": \"string\",\n",
" \"address\": \"string\",\n",
" \"guest_name\": \"string\"\n",
" },\n",
" \"invoice_information\": {\n",
" \"invoice_number\": \"string\",\n",
" \"reservation_number\": \"string\",\n",
" \"date\": \"YYYY-MM-DD\", \n",
" \"room_number\": \"string\",\n",
" \"check_in_date\": \"YYYY-MM-DD\", \n",
" \"check_out_date\": \"YYYY-MM-DD\" \n",
" },\n",
" \"charges\": [\n",
" {\n",
" \"date\": \"YYYY-MM-DD\", \n",
" \"description\": \"string\",\n",
" \"charge\": \"number\",\n",
" \"credit\": \"number\"\n",
" }\n",
" ],\n",
" \"totals_summary\": {\n",
" \"currency\": \"string\",\n",
" \"total_net\": \"number\",\n",
" \"total_tax\": \"number\",\n",
" \"total_gross\": \"number\",\n",
" \"total_charge\": \"number\",\n",
" \"total_credit\": \"number\",\n",
" \"balance_due\": \"number\"\n",
" },\n",
" \"taxes\": [\n",
" {\n",
" \"tax_type\": \"string\",\n",
" \"tax_rate\": \"string\",\n",
" \"net_amount\": \"number\",\n",
" \"tax_amount\": \"number\",\n",
" \"gross_amount\": \"number\"\n",
" }\n",
" ]\n",
" }\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def transform_invoice_data(json_raw, json_schema):\n",
" system_prompt = f\"\"\"\n",
" You are a data transformation tool that takes in JSON data and a reference JSON schema, and outputs JSON data according to the schema.\n",
" Not all of the data in the input JSON will fit the schema, so you may need to omit some data or add null values to the output JSON.\n",
" Translate all data into English if not already in English.\n",
" Ensure values are formatted as specified in the schema (e.g. dates as YYYY-MM-DD).\n",
" Here is the schema:\n",
" {json_schema}\n",
"\n",
" \"\"\"\n",
" \n",
" response = client.chat.completions.create(\n",
" model=\"gpt-4o\",\n",
" response_format={ \"type\": \"json_object\" },\n",
" messages=[\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": system_prompt\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": f\"Transform the following raw JSON data according to the provided schema. Ensure all data is in English and formatted as specified by values in the schema. Here is the raw JSON: {json_raw}\"}\n",
" ]\n",
" }\n",
" ],\n",
" temperature=0.0,\n",
" )\n",
" return json.loads(response.choices[0].message.content)\n",
"\n",
"\n",
"\n",
"def main_transform(extracted_invoice_json_path, json_schema_path, save_path):\n",
" # Load the JSON schema\n",
" with open(json_schema_path, 'r', encoding='utf-8') as f:\n",
" json_schema = json.load(f)\n",
"\n",
" # Ensure the save directory exists\n",
" os.makedirs(save_path, exist_ok=True)\n",
"\n",
" # Process each JSON file in the extracted invoices directory\n",
" for filename in os.listdir(extracted_invoice_json_path):\n",
" if filename.endswith(\".json\"):\n",
" file_path = os.path.join(extracted_invoice_json_path, filename)\n",
"\n",
" # Load the extracted JSON\n",
" with open(file_path, 'r', encoding='utf-8') as f:\n",
" json_raw = json.load(f)\n",
"\n",
" # Transform the JSON data\n",
" transformed_json = transform_invoice_data(json_raw, json_schema)\n",
"\n",
" # Save the transformed JSON to the save directory\n",
" transformed_filename = f\"transformed_{filename}\"\n",
" transformed_file_path = os.path.join(save_path, transformed_filename)\n",
" with open(transformed_file_path, 'w', encoding='utf-8') as f:\n",
" json.dump(transformed_json, f, ensure_ascii=False, indent=2)\n",
"\n",
" \n",
" extracted_invoice_json_path = \"./data/hotel_invoices/extracted_invoice_json\"\n",
" json_schema_path = \"./data/hotel_invoices/invoice_schema.json\"\n",
" save_path = \"./data/hotel_invoices/transformed_invoice_json\"\n",
"\n",
" main_transform(extracted_invoice_json_path, json_schema_path, save_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3: Loading transformed data into a database \n",
"\n",
"Now that we've schematized all of our data, we can segment it into tables for ingesting into a relational database. In particular, we're going to create four tables: Hotels, Invoices, Charges and Taxes. All of the invoices pertained to one guest, so we won't create a guest table. "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import json\n",
"import sqlite3\n",
"\n",
"def ingest_transformed_jsons(json_folder_path, db_path):\n",
" conn = sqlite3.connect(db_path)\n",
" cursor = conn.cursor()\n",
"\n",
" # Create necessary tables\n",
" cursor.execute('''\n",
" CREATE TABLE IF NOT EXISTS Hotels (\n",
" hotel_id INTEGER PRIMARY KEY AUTOINCREMENT,\n",
" name TEXT,\n",
" street TEXT,\n",
" city TEXT,\n",
" country TEXT,\n",
" postal_code TEXT,\n",
" phone TEXT,\n",
" fax TEXT,\n",
" email TEXT,\n",
" website TEXT\n",
" )\n",
" ''')\n",
"\n",
" cursor.execute('''\n",
" CREATE TABLE IF NOT EXISTS Invoices (\n",
" invoice_id INTEGER PRIMARY KEY AUTOINCREMENT,\n",
" hotel_id INTEGER,\n",
" invoice_number TEXT,\n",
" reservation_number TEXT,\n",
" date TEXT,\n",
" room_number TEXT,\n",
" check_in_date TEXT,\n",
" check_out_date TEXT,\n",
" currency TEXT,\n",
" total_net REAL,\n",
" total_tax REAL,\n",
" total_gross REAL,\n",
" total_charge REAL,\n",
" total_credit REAL,\n",
" balance_due REAL,\n",
" guest_company TEXT,\n",
" guest_address TEXT,\n",
" guest_name TEXT,\n",
" FOREIGN KEY(hotel_id) REFERENCES Hotels(hotel_id)\n",
" )\n",
" ''')\n",
"\n",
" cursor.execute('''\n",
" CREATE TABLE IF NOT EXISTS Charges (\n",
" charge_id INTEGER PRIMARY KEY AUTOINCREMENT,\n",
" invoice_id INTEGER,\n",
" date TEXT,\n",
" description TEXT,\n",
" charge REAL,\n",
" credit REAL,\n",
" FOREIGN KEY(invoice_id) REFERENCES Invoices(invoice_id)\n",
" )\n",
" ''')\n",
"\n",
" cursor.execute('''\n",
" CREATE TABLE IF NOT EXISTS Taxes (\n",
" tax_id INTEGER PRIMARY KEY AUTOINCREMENT,\n",
" invoice_id INTEGER,\n",
" tax_type TEXT,\n",
" tax_rate TEXT,\n",
" net_amount REAL,\n",
" tax_amount REAL,\n",
" gross_amount REAL,\n",
" FOREIGN KEY(invoice_id) REFERENCES Invoices(invoice_id)\n",
" )\n",
" ''')\n",
"\n",
" # Loop over all JSON files in the specified folder\n",
" for filename in os.listdir(json_folder_path):\n",
" if filename.endswith(\".json\"):\n",
" file_path = os.path.join(json_folder_path, filename)\n",
"\n",
" # Load the JSON data\n",
" with open(file_path, 'r', encoding='utf-8') as f:\n",
" data = json.load(f)\n",
"\n",
" # Insert Hotel Information\n",
" cursor.execute('''\n",
" INSERT INTO Hotels (name, street, city, country, postal_code, phone, fax, email, website) \n",
" VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
" ''', (\n",
" data[\"hotel_information\"][\"name\"],\n",
" data[\"hotel_information\"][\"address\"][\"street\"],\n",
" data[\"hotel_information\"][\"address\"][\"city\"],\n",
" data[\"hotel_information\"][\"address\"][\"country\"],\n",
" data[\"hotel_information\"][\"address\"][\"postal_code\"],\n",
" data[\"hotel_information\"][\"contact\"][\"phone\"],\n",
" data[\"hotel_information\"][\"contact\"][\"fax\"],\n",
" data[\"hotel_information\"][\"contact\"][\"email\"],\n",
" data[\"hotel_information\"][\"contact\"][\"website\"]\n",
" ))\n",
" hotel_id = cursor.lastrowid\n",
"\n",
" # Insert Invoice Information\n",
" cursor.execute('''\n",
" INSERT INTO Invoices (hotel_id, invoice_number, reservation_number, date, room_number, check_in_date, check_out_date, currency, total_net, total_tax, total_gross, total_charge, total_credit, balance_due, guest_company, guest_address, guest_name)\n",
" VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
" ''', (\n",
" hotel_id,\n",
" data[\"invoice_information\"][\"invoice_number\"],\n",
" data[\"invoice_information\"][\"reservation_number\"],\n",
" data[\"invoice_information\"][\"date\"],\n",
" data[\"invoice_information\"][\"room_number\"],\n",
" data[\"invoice_information\"][\"check_in_date\"],\n",
" data[\"invoice_information\"][\"check_out_date\"],\n",
" data[\"totals_summary\"][\"currency\"],\n",
" data[\"totals_summary\"][\"total_net\"],\n",
" data[\"totals_summary\"][\"total_tax\"],\n",
" data[\"totals_summary\"][\"total_gross\"],\n",
" data[\"totals_summary\"][\"total_charge\"],\n",
" data[\"totals_summary\"][\"total_credit\"],\n",
" data[\"totals_summary\"][\"balance_due\"],\n",
" data[\"guest_information\"][\"company\"],\n",
" data[\"guest_information\"][\"address\"],\n",
" data[\"guest_information\"][\"guest_name\"]\n",
" ))\n",
" invoice_id = cursor.lastrowid\n",
"\n",
" # Insert Charges\n",
" for charge in data[\"charges\"]:\n",
" cursor.execute('''\n",
" INSERT INTO Charges (invoice_id, date, description, charge, credit) \n",
" VALUES (?, ?, ?, ?, ?)\n",
" ''', (\n",
" invoice_id,\n",
" charge[\"date\"],\n",
" charge[\"description\"],\n",
" charge[\"charge\"],\n",
" charge[\"credit\"]\n",
" ))\n",
"\n",
" # Insert Taxes\n",
" for tax in data[\"taxes\"]:\n",
" cursor.execute('''\n",
" INSERT INTO Taxes (invoice_id, tax_type, tax_rate, net_amount, tax_amount, gross_amount) \n",
" VALUES (?, ?, ?, ?, ?, ?)\n",
" ''', (\n",
" invoice_id,\n",
" tax[\"tax_type\"],\n",
" tax[\"tax_rate\"],\n",
" tax[\"net_amount\"],\n",
" tax[\"tax_amount\"],\n",
" tax[\"gross_amount\"]\n",
" ))\n",
"\n",
" conn.commit()\n",
" conn.close()\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's check that we've correctly ingested the data by running a sample SQL query to determine the most expensive hotel stay and the same of the hotel! \n",
"You can even automate the generation of SQL queries at this step by using function calling, check out our [cookbook on function calling with model generated arguments](https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#how-to-call-functions-with-model-generated-arguments) to learn how to do that. "
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"('Citadines Michel Hamburg', 903.63)\n"
]
}
],
"source": [
"\n",
"def execute_query(db_path, query, params=()):\n",
" \"\"\"\n",
" Execute a SQL query and return the results.\n",
"\n",
" Parameters:\n",
" db_path (str): Path to the SQLite database file.\n",
" query (str): SQL query to be executed.\n",
" params (tuple): Parameters to be passed to the query (default is an empty tuple).\n",
"\n",
" Returns:\n",
" list: List of rows returned by the query.\n",
" \"\"\"\n",
" try:\n",
" # Connect to the SQLite database\n",
" conn = sqlite3.connect(db_path)\n",
" cursor = conn.cursor()\n",
"\n",
" # Execute the query with parameters\n",
" cursor.execute(query, params)\n",
" results = cursor.fetchall()\n",
"\n",
" # Commit if it's an INSERT/UPDATE/DELETE query\n",
" if query.strip().upper().startswith(('INSERT', 'UPDATE', 'DELETE')):\n",
" conn.commit()\n",
"\n",
" return results\n",
" except sqlite3.Error as e:\n",
" print(f\"An error occurred: {e}\")\n",
" return []\n",
" finally:\n",
" # Close the connection\n",
" if conn:\n",
" conn.close()\n",
"\n",
"\n",
"# Example usage\n",
"transformed_invoices_path = \"./data/hotel_invoices/transformed_invoice_json\"\n",
"db_path = \"./data/hotel_invoices/hotel_DB.db\"\n",
"ingest_transformed_jsons(transformed_invoices_path, db_path)\n",
"\n",
"query = '''\n",
" SELECT \n",
" h.name AS hotel_name,\n",
" i.total_gross AS max_spent\n",
" FROM \n",
" Invoices i\n",
" JOIN \n",
" Hotels h ON i.hotel_id = h.hotel_id\n",
" ORDER BY \n",
" i.total_gross DESC\n",
" LIMIT 1;\n",
" '''\n",
"\n",
"results = execute_query(db_path, query)\n",
"for row in results:\n",
" print(row)"
]
},
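{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, you can let GPT-4o write the SQL for you with function calling. Below is a minimal sketch of that idea rather than a full implementation: we describe a single illustrative tool called `query_hotel_db` whose only parameter is the SQL string, give the model a condensed description of the tables created above, and then run whatever query it produces with `execute_query`. The tool name, its `query` parameter, the schema hint, and the sample question are assumptions made for this sketch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal function-calling sketch: GPT-4o generates the SQL, we execute it locally.\n",
"# The tool name `query_hotel_db` and its `query` parameter are illustrative only.\n",
"tools = [\n",
"    {\n",
"        \"type\": \"function\",\n",
"        \"function\": {\n",
"            \"name\": \"query_hotel_db\",\n",
"            \"description\": \"Run a read-only SQL query against the SQLite hotel invoice database.\",\n",
"            \"parameters\": {\n",
"                \"type\": \"object\",\n",
"                \"properties\": {\n",
"                    \"query\": {\n",
"                        \"type\": \"string\",\n",
"                        \"description\": \"A valid SQLite SELECT statement.\"\n",
"                    }\n",
"                },\n",
"                \"required\": [\"query\"]\n",
"            }\n",
"        }\n",
"    }\n",
"]\n",
"\n",
"# A condensed description of the tables created in Part 3, passed as context.\n",
"schema_hint = (\n",
"    \"Tables: Hotels(hotel_id, name, city, country), \"\n",
"    \"Invoices(invoice_id, hotel_id, check_in_date, check_out_date, total_gross), \"\n",
"    \"Charges(charge_id, invoice_id, date, description, charge), \"\n",
"    \"Taxes(tax_id, invoice_id, tax_rate, tax_amount)\"\n",
")\n",
"\n",
"response = client.chat.completions.create(\n",
"    model=\"gpt-4o\",\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": f\"You write SQLite queries. {schema_hint}\"},\n",
"        {\"role\": \"user\", \"content\": \"What was the total gross amount spent across all invoices?\"}\n",
"    ],\n",
"    tools=tools,\n",
"    tool_choice={\"type\": \"function\", \"function\": {\"name\": \"query_hotel_db\"}},\n",
"    temperature=0.0,\n",
")\n",
"\n",
"# Pull the generated SQL out of the tool call arguments and run it with execute_query.\n",
"tool_call = response.choices[0].message.tool_calls[0]\n",
"generated_query = json.loads(tool_call.function.arguments)[\"query\"]\n",
"print(generated_query)\n",
"print(execute_query(db_path, generated_query))"
]
},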
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To recap in this cookbook we showed you how to use GPT-4o for extracting and transforming data that would otherwise be inaccessible for data analysis. If you don't need these workflows to happen in real-time, you can take advantage of OpenAI's BatchAPI to run jobs asynchronously at a much lower cost! "
]
}
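,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below assumes `base64_images` holds the encoded pages of a single invoice (e.g. the return value of `pdf_to_base64_images` for one PDF). It writes one `/v1/chat/completions` request per page to a JSONL file, uploads the file with purpose `batch`, and creates a batch job with a 24-hour completion window. The file name `batch_extraction_input.jsonl` and the shortened system prompt are choices made for this sketch, and you would still need to poll the job and download its output file to retrieve the extractions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: run the Part 1 extraction calls asynchronously via the Batch API.\n",
"# Assumes `base64_images` holds the encoded pages returned by pdf_to_base64_images().\n",
"batch_requests = []\n",
"for i, base64_image in enumerate(base64_images):\n",
"    batch_requests.append({\n",
"        \"custom_id\": f\"invoice-page-{i}\",\n",
"        \"method\": \"POST\",\n",
"        \"url\": \"/v1/chat/completions\",\n",
"        \"body\": {\n",
"            \"model\": \"gpt-4o\",\n",
"            \"response_format\": {\"type\": \"json_object\"},\n",
"            \"temperature\": 0.0,\n",
"            \"messages\": [\n",
"                {\"role\": \"system\", \"content\": \"You are an OCR-like data extraction tool that extracts hotel invoice data from PDFs.\"},\n",
"                {\"role\": \"user\", \"content\": [\n",
"                    {\"type\": \"text\", \"text\": \"extract the data in this hotel invoice and output into JSON\"},\n",
"                    {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/png;base64,{base64_image}\", \"detail\": \"high\"}}\n",
"                ]}\n",
"            ]\n",
"        }\n",
"    })\n",
"\n",
"# Write the requests as JSONL, upload the file, and create the batch job.\n",
"with open(\"batch_extraction_input.jsonl\", \"w\", encoding=\"utf-8\") as f:\n",
"    for request in batch_requests:\n",
"        f.write(json.dumps(request) + \"\\n\")\n",
"\n",
"batch_input_file = client.files.create(\n",
"    file=open(\"batch_extraction_input.jsonl\", \"rb\"), purpose=\"batch\"\n",
")\n",
"batch_job = client.batches.create(\n",
"    input_file_id=batch_input_file.id,\n",
"    endpoint=\"/v1/chat/completions\",\n",
"    completion_window=\"24h\",\n",
")\n",
"print(batch_job.id, batch_job.status)"
]
}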
],
"metadata": {
"kernelspec": {
"display_name": "openai",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}