{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "I_IHQTO8xXBn"
   },
   "source": [
    "# Synthetic Data Generation (Part 1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "VBoxtnxVdTWZ"
   },
   "source": [
    "\n",
    "Synthetic data generation using large language models (LLMs) offers a powerful solution to a commonly faced problem: the availability of high-quality, diverse, and privacy-compliant data. It can be used in a number of scenarios, such as training a classical machine learning model (SVMs, decision trees, KNNs), finetuning a different GPT model on the data, solving the cold-start problem, building compelling demos/apps with realistic data, and scenario testing.\n",
    "\n",
    "There are a number of key drivers that may lead you to use synthetic data:\n",
    "1. Human data may have privacy restrictions and/or contain identifiable data which we do not want to be used.\n",
    "2. Synthetic data can be much more structured, and therefore easier to manipulate, than real data.\n",
    "3. In domains where data is sparse, or data of certain categories is sparse, we may want to augment the data.\n",
    "4. When dealing with imbalanced datasets or datasets which lack diversity, we may want to create data to improve their richness.\n",
    "\n",
    "Unlike traditional data augmentation or manual data creation methods, using LLMs allows for the generation of rich, nuanced, and contextually relevant datasets that are significantly more useful to enterprises and developers.\n",
    "\n",
    "We split this tutorial into 2 parts. In this cookbook, we will have the following agenda:\n",
    "1. CSV with a structured prompt\n",
    "2. CSV with a Python program\n",
    "3. Multitable CSV with a Python program\n",
    "4. Simply creating textual data\n",
    "5. Dealing with imbalanced or non-diverse textual data\n",
    "\n",
    "In part 2, we will look at prompting strategies for getting better textual data.\n",
    "\n",
    "The last two items are particularly useful for creating synthetic data to finetune another GPT model, for example using higher-quality data produced by gpt-4 to finetune the cheaper and quicker gpt-3.5 for improved performance while reducing costs.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "NE9Rr29zlRsA"
   },
   "source": [
    "### Getting set up"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YGncxYrgQ8eb"
   },
   "outputs": [],
   "source": [
    "%pip install openai\n",
    "%pip install pandas\n",
    "%pip install scikit-learn\n",
    "%pip install matplotlib"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "8pzwvE-YQPtU"
   },
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "import re\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn.cluster import KMeans\n",
    "import matplotlib.pyplot as plt\n",
    "import json\n",
    "import matplotlib\n",
    "\n",
    "# Instantiate the client (reads the OPENAI_API_KEY environment variable)\n",
    "client = OpenAI()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "B8eAx4-JxaZB"
   },
   "source": [
    "### 1. CSV with a structured prompt\n",
    "Here we create data in the simplest way. You can quickly generate data by addressing 3 key points: telling the model the format of the data (CSV), the schema, and useful information regarding how columns relate (the LLM will be able to deduce this from the column names, but a helping hand will improve performance)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "dqbvepd0n4vS",
    "outputId": "8735cacc-baa5-463e-938c-783e6b508b00"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "```csv\n",
      "id,house size,house price,location,number of bedrooms\n",
      "1,100,220000,Suburbs,3\n",
      "2,80,180000,Suburbs,2\n",
      "3,120,320000,Suburbs,4\n",
      "4,65,160000,Countryside,2\n",
      "5,150,500000,City Center,4\n",
      "6,90,200000,Countryside,3\n",
      "7,200,700000,City Center,5\n",
      "8,180,600000,Suburbs,5\n",
      "9,70,140000,Countryside,2\n",
      "10,130,400000,City Center,3\n",
      "```\n"
     ]
    }
   ],
   "source": [
    "datagen_model = \"gpt-4-0125-preview\"\n",
    "question = \"\"\"\n",
    "Create a CSV file with 10 rows of housing data.\n",
    "Each row should include the following fields:\n",
    " - id (incrementing integer starting at 1)\n",
    " - house size (m^2)\n",
    " - house price\n",
    " - location\n",
    " - number of bedrooms\n",
    "\n",
    "Make sure that the numbers make sense (i.e. more rooms usually means bigger size, more expensive locations increase price, more size usually means higher price, etc.). Also only respond with the CSV.\n",
    "\"\"\"\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "  model=datagen_model,\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "    {\"role\": \"user\", \"content\": question}\n",
    "  ]\n",
    ")\n",
    "res = response.choices[0].message.content\n",
    "print(res)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6ym0NiIyxiVj"
   },
   "source": [
    "### 2. CSV with a Python program\n",
    "The issue with generating data directly is that the amount of data we can generate is limited by the context window. Instead, we can ask the LLM to generate a Python program that generates the synthetic data. This allows us to scale to much more data while also giving us a view into how the data was generated, by inspecting the Python program.\n",
    "\n",
    "This lets us edit the Python program as we desire, while giving us a good basis to start from.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "2yDuwB5ZxWS3",
    "outputId": "dcbe1093-90f0-4f60-d9c6-34bf679bb092"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "To generate synthetic housing data and output it as a Pandas DataFrame, we can use Python with the `pandas` and `numpy` libraries. Below is a script that creates 100 rows of housing data considering the prescribed logic for house size, price, and number of bedrooms. It also takes into account the impact of location on house price.\n",
      "\n",
      "First, ensure you have pandas and numpy installed. You can install them via pip if you haven't already:\n",
      "\n",
      "```\n",
      "pip install pandas numpy\n",
      "```\n",
      "\n",
      "The script:\n",
      "\n",
      "```python\n",
      "import pandas as pd\n",
      "import numpy as np\n",
      "\n",
      "# Seed for reproducibility\n",
      "np.random.seed(42)\n",
      "\n",
      "# Initialize the lists\n",
      "ids = list(range(1, 101))\n",
      "sizes = np.random.normal(150, 50, 100).astype(int) # House sizes with a mean of 150 m^2 and a std of 50\n",
      "bedrooms = np.random.choice([1, 2, 3, 4, 5], 100) # Number of bedrooms\n",
      "locations = np.random.choice(['Downtown', 'Suburb', 'Countryside'], 100, p=[0.4, 0.4, 0.2]) # Location of houses with a preferential distribution\n",
      "\n",
      "# Prices will be influenced by location, size, and bedrooms. This part is simplistic and can be made more complex.\n",
      "base_price = 100000 # Base price\n",
      "price_per_m2 = 1000 # Base price per m^2\n",
      "extra_per_bedroom = 5000 # Extra cost per additional bedroom\n",
      "\n",
      "prices = []\n",
      "\n",
      "for i in range(100):\n",
      "    base_location_multiplier = 1.5 if locations[i] == 'Downtown' else 1.2 if locations[i] == 'Suburb' else 1\n",
      "    location_multiplier = base_location_multiplier * (1 + (sizes[i] / 1000)) # More expensive if bigger, especially downtown\n",
      "    price = base_price + (sizes[i] * price_per_m2) + (bedrooms[i] * extra_per_bedroom)\n",
      "    prices.append(int(price * location_multiplier))\n",
      "\n",
      "# Create DataFrame\n",
      "data = {\n",
      "    'id': ids,\n",
      "    'house size (m^2)': sizes,\n",
      "    'number of bedrooms': bedrooms,\n",
      "    'location': locations,\n",
      "    'house price': prices\n",
      "}\n",
      "\n",
      "df = pd.DataFrame(data)\n",
      "\n",
      "print(df)\n",
      "```\n",
      "\n",
      "This program initializes with a seed for reproducibility while creating randomized but plausible data for housing. The sizes are normally distributed around a mean value, and bedrooms are chosen from a set number. The pricing logic uses base values plus increases according to size, bedroom count, and a location multiplier, with downtown locations inflating prices more than suburbs or countryside locations. Adjustments are simplistic for the purpose of example and can be refined for more nuanced simulations.\n"
     ]
    }
   ],
   "source": [
    "question = \"\"\"\n",
    "Create a Python program to generate 100 rows of housing data.\n",
    "At the end of it, I want you to output a pandas dataframe with 100 rows of data.\n",
    "Each row should include the following fields:\n",
    " - id (incrementing integer starting at 1)\n",
    " - house size (m^2)\n",
    " - house price\n",
    " - location\n",
    " - number of bedrooms\n",
    "\n",
    "Make sure that the numbers make sense (i.e. more rooms usually means bigger size, more expensive locations increase price, more size usually means higher price, etc.).\n",
    "\"\"\"\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "  model=datagen_model,\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "    {\"role\": \"user\", \"content\": question}\n",
    "  ]\n",
    ")\n",
    "res = response.choices[0].message.content\n",
    "print(res)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We need to make sure to parse this output appropriately, as there is often surrounding text around the Python code. We can also explicitly ask the model to state all assumptions it made about the data it is generating; in this case, however, it told us automatically."
   ]
  },
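  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of that parsing step (assuming the model wraps its program in a ```python fence, as it did above; the use of `exec` here is an illustrative choice, not part of the original flow), we could extract and run the generated code like this:\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "# Extract the first ```python ... ``` block from the model's response\n",
    "code_match = re.search(r\"```python\\n(.*?)```\", res, re.DOTALL)\n",
    "if code_match:\n",
    "    generated_code = code_match.group(1)\n",
    "    print(generated_code)  # always review model-generated code first\n",
    "    # exec(generated_code)  # uncomment once you have inspected it\n",
    "```"
   ]
  },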
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HZaJs7q8xm3L"
   },
   "source": [
    "### 3. Multitable CSV with a Python program\n",
    "For more complex relationships, however, we need to specify a few more characteristics.\n",
    "\n",
    "To create multiple datasets which relate to each other (for example housing, location, house type), as before we need to specify the format, schema, and useful information. However, more useful information is now required for good performance. It is case-specific, but good things to describe are how the datasets relate to each other, the sizes of the datasets in relation to one another, making sure foreign and primary keys are created appropriately, and ideally using previously generated datasets to populate new ones so the actual data values match where necessary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "3TWAhIYIxnbS",
    "outputId": "8f766838-b2f0-419a-a4fb-543d029afce5"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "To create a Python program generating three Pandas DataFrames as described, I'll lay out a step-by-step process considering the relationships between the different types of data:\n",
      "\n",
      "1. Install pandas if you haven't yet: `pip install pandas`\n",
      "2. Import pandas and generate each DataFrame. I'll make some assumptions for the synthetic data to keep it relatively simple.\n",
      "\n",
      "Let's start coding:\n",
      "\n",
      "```python\n",
      "import pandas as pd\n",
      "import numpy as np\n",
      "\n",
      "# Generating Location DataFrame\n",
      "np.random.seed(42) # For reproducibility\n",
      "location_data = {\n",
      "    'id': range(1, 11), # Assuming 10 unique locations\n",
      "    'country': ['CountryA'] * 5 + ['CountryB'] * 5,\n",
      "    'city': ['City' + str(i) for i in range(1, 11)],\n",
      "    'population': np.random.randint(100000, 1000000, size=10),\n",
      "    'area': np.random.randint(500, 20000, size=10),\n",
      "}\n",
      "locations_df = pd.DataFrame(location_data)\n",
      "\n",
      "# Generating House Types DataFrame\n",
      "house_types_data = {\n",
      "    'id': range(1, 5), # Assuming 4 unique house types\n",
      "    'house type': ['Villa', 'Apartment', 'Townhouse', 'Bungalow'],\n",
      "    'average house type price': [300000, 200000, 250000, 220000], # Just arbitrary prices\n",
      "    'number of houses': [25, 50, 15, 10], # Total = 100 houses, matching the housing data requirement\n",
      "}\n",
      "house_types_df = pd.DataFrame(house_types_data)\n",
      "\n",
      "# Generating Housing Data\n",
      "housing_data = {\n",
      "    'id': range(1, 101),\n",
      "    'house size': np.random.randint(50, 500, size=100),\n",
      "    'house price': [], # To be calculated based on size, location, etc.\n",
      "    'location_id': np.random.choice(locations_df['id'], size=100),\n",
      "    'number of bedrooms': np.random.randint(1, 6, size=100),\n",
      "    'house_type_id': np.random.choice(house_types_df['id'], size=100),\n",
      "}\n",
      "# Simple model to calculate house price based on size, type, and a base price from the location's median\n",
      "base_prices = locations_df['population'] / 100000 # Simplified assumption: more populous => more expensive\n",
      "housing_data['house price'] = [\n",
      "    (1200 * size) + (house_types_df.loc[type_id - 1, 'average house type price']) + (base_prices[loc_id - 1] * 1000)\n",
      "    for size, type_id, loc_id\n",
      "    in zip(housing_data['house size'], housing_data['house_type_id'], housing_data['location_id'])\n",
      "]\n",
      "\n",
      "housing_df = pd.DataFrame(housing_data)\n",
      "\n",
      "# Display the first few rows of each DataFrame\n",
      "print(locations_df.head())\n",
      "print(house_types_df.head())\n",
      "print(housing_df.head())\n",
      "```\n",
      "\n",
      "Notes:\n",
      "- This script assumes 10 unique locations and 4 house types for simplicity.\n",
      "- House prices are arbitrarily calculated using the house size, type, and a base price influenced by the location's population. Reality would require a more complex model.\n",
      "- `numpy.random.randint` is used to generate integer values. Similarly, `numpy.random.choice` is used to randomly assign locations and house types to each house, demonstrating a form of foreign key relationship.\n",
      "- For simplicity, foreign keys are represented by corresponding ID fields (e.g., `location_id` in the housing data references the `id` in the location data).\n",
      "\n",
      "This simple synthetic data generation strategy illustrates creating related data sets with Python and pandas. The synthetic data should make general sense within the constraints provided, but keep in mind that for more complex or realistic data modeling, you'd need to incorporate more detailed rules and possibly real-world data.\n"
     ]
    }
   ],
   "source": [
    "question = \"\"\"\n",
    "Create a Python program to generate 3 different pandas dataframes.\n",
    "\n",
    "1. Housing data\n",
    "I want 100 rows. Each row should include the following fields:\n",
    " - id (incrementing integer starting at 1)\n",
    " - house size (m^2)\n",
    " - house price\n",
    " - location\n",
    " - number of bedrooms\n",
    " - house type\n",
    " + any relevant foreign keys\n",
    "\n",
    "2. Location\n",
    "Each row should include the following fields:\n",
    " - id (incrementing integer starting at 1)\n",
    " - country\n",
    " - city\n",
    " - population\n",
    " - area (m^2)\n",
    " + any relevant foreign keys\n",
    "\n",
    "3. House types\n",
    " - id (incrementing integer starting at 1)\n",
    " - house type\n",
    " - average house type price\n",
    " - number of houses\n",
    " + any relevant foreign keys\n",
    "\n",
    "Make sure that the numbers make sense (i.e. more rooms usually means bigger size, more expensive locations increase price, more size usually means higher price, etc.).\n",
    "Make sure that the dataframes generally follow common sense checks, e.g. the sizes of the dataframes make sense in comparison with one another.\n",
    "Make sure the foreign keys match up, and you can use previously generated dataframes when creating each consecutive dataframe.\n",
    "\"\"\"\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "  model=datagen_model,\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "    {\"role\": \"user\", \"content\": question}\n",
    "  ]\n",
    ")\n",
    "res = response.choices[0].message.content\n",
    "print(res)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Yv9XlRtauZYZ"
   },
   "source": [
    "### 4. Simply creating textual data\n",
    "Here we take a first look at creating textual data. This can be used, for example, to finetune another GPT model. In this case we imagine ourselves as a retailer trying to streamline the process of creating descriptions for the items they are selling. We again need to specify the format of the data; in particular, in this case we want one which is easy to parse as an output."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The example we consider below is one in which we want to create input-output training pairs for a GPT model to finetune on. We will have the product's name and the category it belongs to as input, and the output will be a description.\n",
    "\n",
    "Specifying the structure of the output explicitly, and instructing the model not to deviate from it, helps enforce the output structure. You can run this in a loop and append the data to generate more synthetic data. Again, as before, we will need to parse the data well so that our code further downstream does not break."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "id": "2KJVwjV0upby"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.\n",
      "Input: Northface Waterproof Jacket, Clothing\n",
      "Output: Stay dry and stylish with the Northface Waterproof Jacket. Perfect for outdoor adventurers and city dwellers alike, this jacket combines cutting-edge waterproof technology with a sleek, modern design. Ideal for unpredictable weather, it ensures you're prepared for anything Mother Nature throws your way.\n",
      "\n",
      "2.\n",
      "Input: Apple iPhone 12, Electronics\n",
      "Output: Experience the next level of innovation with the Apple iPhone 12. Featuring a stunning Super Retina XDR display, a powerful A14 Bionic chip, and advanced dual-camera system, this phone is designed to push the boundaries of what's possible. With 5G capability for super-fast downloads and high-quality streaming, it's the perfect device for tech enthusiasts.\n",
      "\n",
      "3.\n",
      "Input: Adidas Ultraboost Sneakers, Footwear\n",
      "Output: Revolutionize your running experience with Adidas Ultraboost Sneakers. Engineered for long-lasting comfort and superior performance, these sneakers feature the innovative Boost \n"
     ]
    }
   ],
   "source": [
    "output_string = \"\"\n",
    "for i in range(3):\n",
    "    question = f\"\"\"\n",
    "    I am creating input-output training pairs to fine-tune my GPT model. The use case is a retailer generating a description for a product from a product catalogue. I want the input to be the product name and the category to which the product belongs, and the output to be a description.\n",
    "    The format should be of the form:\n",
    "    1.\n",
    "    Input: product_name, category\n",
    "    Output: description\n",
    "    2.\n",
    "    Input: product_name, category\n",
    "    Output: description\n",
    "\n",
    "    Do not add any extra characters around that formatting as it will make the output parsing break.\n",
    "    Create as many training pairs as possible.\n",
    "    \"\"\"\n",
    "\n",
    "    response = client.chat.completions.create(\n",
    "      model=datagen_model,\n",
    "      messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "        {\"role\": \"user\", \"content\": question}\n",
    "      ]\n",
    "    )\n",
    "    res = response.choices[0].message.content\n",
    "    output_string += res + \"\\n\" + \"\\n\"\n",
    "print(output_string[:1000])  # displaying truncated response\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "K5EmKTEa7GlC"
   },
   "source": [
    "Note: the above output is truncated. We can now parse it as below to get a list of products, categories, and their descriptions. For example, let's take a look at the products it generated."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "owvoyJBh0o2n",
    "outputId": "ee48bcc9-fd29-42bf-9beb-ef3800cdbcb2"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Northface Waterproof Jacket',\n",
       " 'Apple iPhone 12',\n",
       " 'Adidas Ultraboost Sneakers',\n",
       " 'LEGO Star Wars Millennium Falcon',\n",
       " 'Vitamix Professional Series 750 Blender',\n",
       " 'Panasonic Lumix GH5 Camera',\n",
       " 'Moleskine Classic Notebook',\n",
       " 'Bodum French Press Coffee Maker',\n",
       " 'Classic White Sneakers',\n",
       " 'Multi-Purpose Blender',\n",
       " 'Eco-Friendly Yoga Mat',\n",
       " 'Organic Green Tea',\n",
       " 'Smart LED Light Bulb',\n",
       " 'Waterproof Hiking Boots',\n",
       " 'Bamboo Toothbrush',\n",
       " 'Modern Minimalist Floor Lamp',\n",
       " 'Classic Leather Office Chair',\n",
       " 'Stainless Steel French Press',\n",
       " 'Eco-Friendly Bamboo Cutting Board',\n",
       " 'Ultimate Gaming Laptop',\n",
       " 'Waterproof Hiking Boots',\n",
       " 'Compact Travel Umbrella',\n",
       " \"Professional Chef's Knife\"]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# regex to parse data\n",
    "pattern = re.compile(r'Input:\\s*(.+?),\\s*(.+?)\\nOutput:\\s*(.+?)(?=\\n\\n|\\Z)', re.DOTALL)\n",
    "matches = pattern.findall(output_string)\n",
    "products = []\n",
    "categories = []\n",
    "descriptions = []\n",
    "\n",
    "for match in matches:\n",
    "    product, category, description = match\n",
    "    products.append(product.strip())\n",
    "    categories.append(category.strip())\n",
    "    descriptions.append(description.strip())\n",
    "products"
   ]
  },
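  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the goal is finetuning, the parsed lists can be written out as chat-format JSONL for the fine-tuning API. A minimal sketch, where the system message and file name are illustrative assumptions:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# One JSON object per line, each a complete chat-format training example\n",
    "with open(\"product_descriptions.jsonl\", \"w\") as f:\n",
    "    for product, category, description in zip(products, categories, descriptions):\n",
    "        record = {\n",
    "            \"messages\": [\n",
    "                {\"role\": \"system\", \"content\": \"You write product descriptions.\"},\n",
    "                {\"role\": \"user\", \"content\": f\"{product}, {category}\"},\n",
    "                {\"role\": \"assistant\", \"content\": description},\n",
    "            ]\n",
    "        }\n",
    "        f.write(json.dumps(record) + \"\\n\")\n",
    "```"
   ]
  },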
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "bO3PgRwpyocn"
   },
   "source": [
    "\n",
    "### 5. Dealing with imbalanced or non-diverse textual data\n",
    "Some of the most important aspects of generating high-quality synthetic data are accuracy (does the data make sense), consistency (are two separate data points for the same input roughly the same), and diversity (making sure our data distribution matches as much of the distribution that exists in production as possible).\n",
    "\n",
    "To increase the diversity of our data, we start by clustering it. This will tell us which clusters are underrepresented (an imbalanced dataset) and which data is not addressed at all (widening the data distribution). Then we will either suggest new clusters (using a self-reflection-style call to GPT) or ask the next iteration of our synthetic generation calls to explicitly target the underrepresented clusters.\n",
    "\n",
    "We can then run this generate-and-analyze clustering loop recursively to automate generating diverse synthetic data."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ubdPEFYR-myU"
   },
   "source": [
    "For demonstration purposes, we explicitly prompt the LLM to generate information about 4 different topical areas: vehicle, clothing, toiletries, and food. We will then cluster the data and see if it managed to find these 4 topic areas."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "m-yncn8s1hWZ",
    "outputId": "35ae248d-4859-4d3f-ba29-94478aed7305"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2) toiletries\n",
      "Input: \"Toothbrush X5+, Electronic toothbrush\"\n",
      "Output: \"Experience a superior cleanse with the Toothbrush X5+. It comes equipped with an advanced sonic technology that guarantees a gentle yet effective clean every time.\"\n",
      "\n",
      "3) vehicle\n",
      "Input: \"Pegasus Pro 300, Motorcycle\"\n",
      "Output: \"Dominate the road with the stylish Pegasus Pro 300. This motorcycle guarantees a powerful, efficient, and thrilling performance on every ride.\"\n",
      "\n",
      "4) food\n",
      "Input: \"Tasty Delight Instant Noodles, Instant food\"\n",
      "Output: \"Tasty Delight Instant Noodles offer a quick, delicious meal ready in minutes. The perfect solution for those stepping up their cooking game.\"\n",
      "\n",
      "5) clothing\n",
      "Input: \"UltraSport Men's Running Jacket, Sportswear\"\n",
      "Output: \"UltraSport Men's Running Jacket combines functionality and style. The breathable material allows for comfortable workouts, even in colder weather.\"\n",
      "\n",
      "6) toiletries\n",
      "Input: \"FreshBliss Shower Gel, Bath and body\"\n",
      "Output: \"Indulge in luxury every morning with the FreshBliss Showe\n"
     ]
    }
   ],
   "source": [
    "output_string = \"\"\n",
    "for i in range(3):\n",
    "    question = f\"\"\"\n",
    "    I am creating input-output training pairs to fine-tune my GPT model. I want the input to be product name and category, and the output to be a description. The category should be things like: mobile phones, shoes, headphones, laptop, electronic toothbrush, etc., and more importantly the categories should come under 4 main topics: vehicle, clothing, toiletries, food.\n",
    "    After the number of each example also state the topic area. The format should be of the form:\n",
    "    1. topic_area\n",
    "    Input: product_name, category\n",
    "    Output: description\n",
    "\n",
    "    Do not add any extra characters around that formatting as it will make the output parsing break.\n",
    "\n",
    "    Here are some helpful examples so you get the style of output correct.\n",
    "\n",
    "    1) clothing\n",
    "    Input: \"Shoe Name, Shoes\"\n",
    "    Output: \"Experience unparalleled comfort. These shoes feature a blend of modern style and the traditional superior cushioning, perfect for those always on the move.\"\n",
    "    \"\"\"\n",
    "\n",
    "    response = client.chat.completions.create(\n",
    "      model=\"gpt-4\",\n",
    "      messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "        {\"role\": \"user\", \"content\": question}\n",
    "      ]\n",
    "    )\n",
    "    res = response.choices[0].message.content\n",
    "    output_string += res + \"\\n\" + \"\\n\"\n",
    "print(output_string[:1000])  # displaying truncated response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: the above output is truncated. In the example above, we explicitly include the topic area as part of the response for each example, as it helps condition the subsequent output and tends to give better performance. We also give the model an actual example of what the output should look like, both so it gets the style of output right and to help enforce the structure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "id": "k7GAokEC1hUY"
   },
   "outputs": [],
   "source": [
    "pattern = re.compile(r'(\\d+)\\) (\\w+(?: \\w+)?)\\s*Input: \"(.+?), (.+?)\"\\s*Output: \"(.+?)\"', re.DOTALL)\n",
    "matches = pattern.findall(output_string)\n",
    "\n",
    "topics = []\n",
    "products = []\n",
    "categories = []\n",
    "descriptions = []\n",
    "\n",
    "for match in matches:\n",
    "    number, topic, product, category, description = match\n",
    "    topics.append(topic)\n",
    "    products.append(product)\n",
    "    categories.append(category)\n",
    "    descriptions.append(description)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Z49B3LrJ1hSG",
    "outputId": "d76a9038-1879-44d9-f1db-dc933e066c54"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Toothbrush X5+',\n",
       " 'Pegasus Pro 300',\n",
       " 'Tasty Delight Instant Noodles',\n",
       " \"UltraSport Men's Running Jacket\",\n",
       " 'FreshBliss Shower Gel',\n",
       " 'OceanBlue Yacht 700',\n",
       " 'FarmFresh Organic Apples',\n",
       " \"Elegance Women's Velvet Dress\",\n",
       " \"GentleCare Men's Face Wash\",\n",
       " 'AquaBreathe',\n",
       " 'Lunar Ride',\n",
       " 'Sunrise Juice',\n",
       " 'TitanFlex',\n",
       " 'GlowRadiant',\n",
       " 'SolarSpeed',\n",
       " 'HealthyBite',\n",
       " 'Brushify',\n",
       " 'Choco Crunchy',\n",
       " 'Super X100',\n",
       " 'Le Bliz',\n",
       " 'Purely Lavender',\n",
       " 'Cheesy Delight',\n",
       " 'EcoSprint',\n",
       " 'Denim Duo',\n",
       " 'Fresh Dawn']"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "products"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "J1qKxLkAzKq0"
   },
   "source": [
    "We will now cluster the data to analyze it. We will use K-means clustering to segregate the data. An important parameter of K-means to set is K, the number of clusters.\n",
    "\n",
    "We know that there should be 4 clusters (4 topics), since we specified this in the prompt: vehicle, clothing, toiletries, food. In general, however, we do not know the number of clusters that exist in our data, therefore we will use the elbow method to find the optimal number of clusters.\n",
    "\n",
    "In the elbow method, we iterate through a range of different K's, each time storing the inertia. The inertia measures the sum of the squared distances between each point in a cluster and the centroid of that cluster, telling us how well-separated and dense each cluster is. If we plot K against the inertia, we can see how the inertia drops, and where the drop becomes least rapid (often making an elbow shape) we can set our optimal number of clusters. You can read about the elbow method in more depth [here](https://en.wikipedia.org/wiki/Elbow_method_(clustering))."
   ]
  },
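  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the inertia that K-means reports is the within-cluster sum of squared distances,\n",
    "\n",
    "$$\\text{inertia} = \\sum_{i=1}^{n} \\min_{\\mu_j} \\lVert x_i - \\mu_j \\rVert^2,$$\n",
    "\n",
    "where each point $x_i$ is measured against its nearest centroid $\\mu_j$. Note that inertia always decreases as K grows (more centroids can only get closer to the points), which is why we look for an elbow rather than a minimum."
   ]
  },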
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1BxwPTkpGzu8"
   },
   "source": [
    "First, let's store our data in a pandas dataframe for ease of analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "id": "XcPBzORtKWv6"
   },
   "outputs": [],
   "source": [
    "data = {\n",
    "    'Product': products,\n",
    "    'Category': categories,\n",
    "    'Description': descriptions\n",
    "}\n",
    "\n",
    "df = pd.DataFrame(data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HQbg6r37KjG0"
   },
   "source": [
    "Next, let us embed our data; the embeddings are what we will cluster, since similar data points should be close to each other in vector space."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "id": "l8M7MX-1Jctr"
   },
   "outputs": [],
   "source": [
    "def get_embedding(text, model=\"text-embedding-3-small\"):\n",
    "    text = text.replace(\"\\n\", \" \")\n",
    "    response = client.embeddings.create(input=[text], model=model)\n",
    "    return response.data[0].embedding\n",
    "\n",
    "embedding_model = \"text-embedding-3-small\"\n",
    "df[\"embedding\"] = df.Category.apply(lambda x: get_embedding(x, model=embedding_model))\n",
    "\n",
    "matrix = np.vstack(df.embedding.values)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZcdZscBNKy0F"
   },
   "source": [
    "Now we perform the elbow method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 981
    },
    "id": "1Azw_xgVl_aY",
    "outputId": "5b7076aa-a03c-4a40-f52c-08b09248cf18"
   },
   "outputs": [],
   "source": [
    "# Determine the optimal number of clusters using the elbow method\n",
    "inertias = []\n",
    "range_of_clusters = range(1, 13)  # Adjust the range as necessary\n",
    "\n",
    "for n_clusters in range_of_clusters:\n",
    "    kmeans = KMeans(n_clusters=n_clusters, init=\"k-means++\", random_state=42, n_init=10)\n",
    "    kmeans.fit(matrix)\n",
    "    inertias.append(kmeans.inertia_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This outputs a chart on which we have to visually judge where the optimal cluster point is. Below we see a gradual decrease of inertia rather than a sharp elbow, but the steepest decrease appears to occur around 3, 4, or 5 clusters, which lines up with our expectations given our prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plotting the elbow plot\n",
    "plt.figure(figsize=(10, 6))\n",
    "plt.plot(range_of_clusters, inertias, '-o')\n",
    "plt.title('Elbow Method to Determine Optimal Number of Clusters')\n",
    "plt.xlabel('Number of Clusters')\n",
    "plt.ylabel('Inertia')\n",
    "plt.xticks(range_of_clusters)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![elbow_chart](../images/elbow_chart.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "NN7NbTmiLe_-"
   },
   "source": [
    "For demonstration purposes we will pick 5 as the optimal number of clusters, to show that it doesn't matter exactly where we pick as long as we are approximately right. There are numerous correct ways to categorize data. We also store which cluster each data point belongs to."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "KvrDe3WYKWgZ",
    "outputId": "d0d95227-b9d2-4c52-a5f7-59159ba848d2"
   },
   "outputs": [],
   "source": [
    "n_clusters = 5\n",
    "\n",
    "kmeans = KMeans(n_clusters=n_clusters, init=\"k-means++\", random_state=42)\n",
    "kmeans.fit(matrix)\n",
    "labels = kmeans.labels_\n",
    "df[\"Cluster\"] = labels"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tLvO4AISDM0J"
   },
   "source": [
    "We will now analyze the cluster data. There are two separate things we will look to address: (1) imbalanced data, and (2) expanding the data distribution."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "zaQ_mdhpOJqs"
   },
   "source": [
    "First, for imbalanced data, we count the number of examples in each cluster. Then we select a few examples from each cluster at random and ask the LLM what topics these map to."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "crUT7OR7QcFD",
    "outputId": "04f49ac2-1259-4622-bcf2-0686af93068c"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Cluster\n",
      "0    4\n",
      "1    7\n",
      "2    6\n",
      "3    4\n",
      "4    4\n",
      "Name: count, dtype: int64\n"
     ]
    }
   ],
   "source": [
    "cluster_counts = df[\"Cluster\"].value_counts().sort_index()\n",
    "print(cluster_counts)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8i1FGtM-Xx3k"
   },
   "source": [
    "We can see that the topics found here:\n",
    "Eco-friendly Transportation, Luxury and Leisure Items, Personal Care Products, Electronic Toothbrushes, and Clothing and Apparel\n",
    "match well enough, but not exactly, to our initial prompt of:\n",
    "vehicle, clothing, toiletries, food.\n",
    "\n",
    "As we chose 5 clusters, it split toiletries up into Electronic Toothbrushes and Personal Care Products, which doesn't affect us too much further downstream."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "RRwIet9DUdKe",
    "outputId": "8e7835c9-884a-4504-bbed-f556382dd9f5"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\n",
      "  {\n",
      "    \"cluster\": 0,\n",
      "    \"topic\": \"Electronic Toothbrushes\"\n",
      "  },\n",
      "  {\n",
      "    \"cluster\": 1,\n",
      "    \"topic\": \"Clothing and Apparel\"\n",
      "  },\n",
      "  {\n",
      "    \"cluster\": 2,\n",
      "    \"topic\": \"Personal Care Products\"\n",
      "  },\n",
      "  {\n",
      "    \"cluster\": 3,\n",
      "    \"topic\": \"Eco-friendly Transportation\"\n",
      "  },\n",
      "  {\n",
      "    \"cluster\": 4,\n",
      "    \"topic\": \"Luxury and Leisure Items\"\n",
      "  }\n",
      "]\n"
     ]
    }
   ],
   "source": [
    "selected_examples = df.groupby('Cluster').apply(lambda x: x.sample(3)).reset_index(drop=True)\n",
    "\n",
    "# Format the selected examples\n",
    "formatted_examples = \"\\n\".join(\n",
    "    f'Input: \"{row[\"Product\"]}, {row[\"Category\"]}\"\\nOutput: \"{row[\"Description\"]}\"\\nCluster: \"{row[\"Cluster\"]}\"'\n",
    "    for _, row in selected_examples.iterrows()\n",
    ")\n",
    "\n",
    "topic_prompt = f\"\"\"\n",
    "    I previously generated some examples of input-output training pairs and then clustered them based on category. From each cluster I picked 3 example data points, which you can find below.\n",
    "    I want you to identify the broad topic areas these clusters belong to.\n",
    "    Previous examples:\n",
    "    {formatted_examples}\n",
    "\n",
    "\n",
    "    Your output should be strictly of the format:\n",
    "    Cluster: number, topic: topic\n",
    "    Cluster: number, topic: topic\n",
    "    Cluster: number, topic: topic\n",
    "\n",
    "    Do not add any extra characters around that formatting as it will make the output parsing break.\n",
    "    \"\"\"\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "  model=datagen_model,\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to analyze clustered data\"},\n",
    "    {\"role\": \"user\", \"content\": topic_prompt}\n",
    "  ]\n",
    ")\n",
    "res = response.choices[0].message.content\n",
    "\n",
    "pattern = r\"Cluster: (\\d+), topic: ([^\\n]+)\"\n",
    "matches = re.findall(pattern, res)\n",
    "clusters = [{\"cluster\": int(cluster), \"topic\": topic} for cluster, topic in matches]\n",
    "json_output = json.dumps(clusters, indent=2)\n",
    "print(json_output)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "x5hszl-SZVdi"
   },
   "source": [
    "We now have the clusters and their counts, so we could prompt the LLM to generate more examples within the topics we want. However, for this example we won't take that further, as the clusters are well split; you would simply follow the procedure above, prompting the model to generate data while passing in the underrepresented topics (see the sketch below)."
   ]
  },
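  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of that procedure (the mean-count threshold here is an illustrative assumption), we could pick out the topics of the smallest clusters and substitute them back into the generation prompt:\n",
    "\n",
    "```python\n",
    "# Map each cluster id to the topic the LLM assigned it\n",
    "cluster_to_topic = {entry[\"cluster\"]: entry[\"topic\"] for entry in clusters}\n",
    "\n",
    "# Topics whose clusters have fewer examples than average\n",
    "underrepresented = [\n",
    "    cluster_to_topic[c]\n",
    "    for c, count in cluster_counts.items()\n",
    "    if count < cluster_counts.mean()\n",
    "]\n",
    "print(underrepresented)\n",
    "# Substitute these topic names into the generation prompt from\n",
    "# earlier in this section to ask for more examples in these areas.\n",
    "```"
   ]
  },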
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yVD_TPsHYvDb"
   },
   "source": [
    "Next, we will try to increase the diversity of our data distribution.\n",
    "\n",
    "We start similarly, by finding a few examples from each cluster at random and asking the LLM what topics these map to. In the same LLM call, we also ask it to generate more topics to increase the diversity of our data. We do this in one call to save time/cost."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 53
    },
    "id": "mZjBbfFaZ3mn",
    "outputId": "8864421a-e9d4-4ea6-f747-a76a3291a593"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1. Cluster topic mapping\n",
      "Cluster: 0, topic: Electronic/Health Products\n",
      "Cluster: 1, topic: Fashion and Food\n",
      "Cluster: 2, topic: Personal Care/Wellness\n",
      "Cluster: 3, topic: Eco-friendly Transportation\n",
      "Cluster: 4, topic: Chocolate/Motorcycles\n",
      "\n",
      "2. New topics\n",
      "1. Home Automation Gadgets\n",
      "2. Educational Tools and Apps\n",
      "3. Renewable Energy Solutions\n",
      "4. Virtual Reality Experiences\n"
     ]
    }
   ],
   "source": [
    "selected_examples = df.groupby('Cluster').apply(lambda x: x.sample(3)).reset_index(drop=True)\n",
    "\n",
    "# Format the selected examples\n",
    "formatted_examples = \"\\n\".join(\n",
    "    f'Input: \"{row[\"Product\"]}, {row[\"Category\"]}\"\\nOutput: \"{row[\"Description\"]}\"\\nCluster: \"{row[\"Cluster\"]}\"'\n",
    "    for _, row in selected_examples.iterrows()\n",
    ")\n",
    "\n",
    "topic_prompt = f\"\"\"\n",
    "    I previously generated some examples of input-output training pairs and then clustered them based on category. From each cluster I picked 3 example data points, which you can find below.\n",
    "    I want to promote diversity in my examples across categories, so follow the procedure below:\n",
    "    1. You must identify the broad topic areas these clusters belong to.\n",
    "    2. You should generate further topic areas which don't exist, so I can generate data within these topics to improve diversity.\n",
    "\n",
    "\n",
    "    Previous examples:\n",
    "    {formatted_examples}\n",
    "\n",
    "\n",
    "    Your output should be strictly of the format:\n",
    "\n",
    "    1. Cluster topic mapping\n",
    "    Cluster: number, topic: topic\n",
    "    Cluster: number, topic: topic\n",
    "    Cluster: number, topic: topic\n",
    "\n",
    "    2. New topics\n",
    "    1. topic\n",
    "    2. topic\n",
    "    3. topic\n",
    "    4. topic\n",
    "\n",
    "    Do not add any extra characters around that formatting as it will make the output parsing break. It is very important you stick to that output format.\n",
    "    \"\"\"\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "  model=datagen_model,\n",
    "  messages=[\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to analyze clustered data\"},\n",
    "    {\"role\": \"user\", \"content\": topic_prompt}\n",
    "  ]\n",
    ")\n",
    "res = response.choices[0].message.content\n",
    "print(res)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note again that we explicitly prompt the model with the output structure it should follow. We also tell it the purpose of generating topics (to promote diversity), so the model has full context."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "s254oJ-Ecka0"
   },
   "source": [
    "We then parse the output into a list of cluster-mapping JSON objects and a list of new topics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "HTS4ybspcivw",
    "outputId": "52ea363c-cbf5-420e-b81e-0d2710c50203"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "([{'cluster': 0, 'topic': 'Electronic/Health Products'},\n",
       "  {'cluster': 1, 'topic': 'Fashion and Food'},\n",
       "  {'cluster': 2, 'topic': 'Personal Care/Wellness'},\n",
       "  {'cluster': 3, 'topic': 'Eco-friendly Transportation'},\n",
       "  {'cluster': 4, 'topic': 'Chocolate/Motorcycles'}],\n",
       " ['Home Automation Gadgets',\n",
       "  'Educational Tools and Apps',\n",
       "  'Renewable Energy Solutions',\n",
       "  'Virtual Reality Experiences'])"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "parts = res.split(\"\\n\\n\")\n",
    "cluster_mapping_part = parts[0]\n",
    "new_topics_part = parts[1]\n",
    "\n",
    "# Parse cluster topic mapping\n",
    "cluster_topic_mapping_lines = cluster_mapping_part.split(\"\\n\")[1:]  # Skip the \"1. Cluster topic mapping\" header line\n",
    "cluster_topic_mapping = [{\"cluster\": int(line.split(\",\")[0].split(\":\")[1].strip()), \"topic\": line.split(\":\")[2].strip()} for line in cluster_topic_mapping_lines]\n",
    "\n",
    "# Parse new topics\n",
    "new_topics_lines = new_topics_part.split(\"\\n\")[1:]  # Skip the \"2. New topics\" header line\n",
    "new_topics = [line.split(\". \")[1] for line in new_topics_lines]\n",
    "\n",
    "cluster_topic_mapping, new_topics"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "CX26-PGdcui0"
   },
   "source": [
    "Finally, we can use this information to further prompt the model to keep generating synthetic data. We do this by passing all the topics from the list of JSON objects into the prompt below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zHf4LnVk0aHw"
   },
   "outputs": [],
   "source": [
    "output_string = \"\"\n",
    "for i in range(3):\n",
    "    question = f\"\"\"\n",
    "    I am creating input-output training pairs to fine-tune my GPT model. I want the input to be product name and category, and the output to be a description. The category should be things like: mobile phones, shoes, headphones, laptop, electronic toothbrush, etc., and more importantly the categories should come under these main topics: {[entry['topic'] for entry in cluster_topic_mapping]}\n",
    "    After the number of each example also state the topic area. The format should be of the form:\n",
    "    1. topic_area\n",
    "    Input: product_name, category\n",
    "    Output: description\n",
    "\n",
    "    Do not add any extra characters around that formatting as it will make the output parsing break.\n",
    "\n",
    "    Here are some helpful examples so you get the style of output correct.\n",
    "\n",
    "    1) clothing\n",
    "    Input: \"Shoe Name, Shoes\"\n",
    "    Output: \"Experience unparalleled comfort. These shoes feature a blend of modern style and the traditional superior cushioning, perfect for those always on the move.\"\n",
    "    \"\"\"\n",
    "\n",
    "    response = client.chat.completions.create(\n",
    "      model=\"gpt-4\",\n",
    "      messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "        {\"role\": \"user\", \"content\": question}\n",
    "      ]\n",
    "    )\n",
    "    res = response.choices[0].message.content\n",
    "    output_string += res + \"\\n\" + \"\\n\"\n",
    "print(output_string)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "RQMHQxnZdRug"
   },
   "source": [
    "You can run this in a loop, appending to your previous data; in this way you can keep generating more textual synthetic data to train another GPT model, while correcting for imbalanced datasets and generating diverse data. A sketch of such a loop follows."
   ]
  },
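  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A compact sketch of what that loop might look like, reusing the pieces built in this section. The `generate_examples` helper and its prompt wording are illustrative assumptions rather than code from earlier cells:\n",
    "\n",
    "```python\n",
    "def generate_examples(topic_list):\n",
    "    # Compact version of the section 5 generation prompt (illustrative wording)\n",
    "    prompt = f\"\"\"\n",
    "    Create input-output training pairs. Input is product name and category,\n",
    "    output is a description. The categories should come under these topics: {topic_list}.\n",
    "    The format should be of the form:\n",
    "    1) topic_area\n",
    "    Input: \"product_name, category\"\n",
    "    Output: \"description\"\n",
    "    Do not add any extra characters around that formatting.\n",
    "    \"\"\"\n",
    "    response = client.chat.completions.create(\n",
    "        model=datagen_model,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": \"You are a helpful assistant designed to generate synthetic data.\"},\n",
    "            {\"role\": \"user\", \"content\": prompt},\n",
    "        ],\n",
    "    )\n",
    "    return response.choices[0].message.content\n",
    "\n",
    "# Same regex as used to parse the topic-annotated pairs earlier in section 5\n",
    "pair_pattern = re.compile(r'(\\d+)\\) (\\w+(?: \\w+)?)\\s*Input: \"(.+?), (.+?)\"\\s*Output: \"(.+?)\"', re.DOTALL)\n",
    "\n",
    "topics = [\"vehicle\", \"clothing\", \"toiletries\", \"food\"]\n",
    "rows = []\n",
    "for iteration in range(2):  # a couple of rounds for illustration\n",
    "    raw = generate_examples(topics)\n",
    "    rows.extend(pair_pattern.findall(raw))\n",
    "    loop_df = pd.DataFrame(rows, columns=[\"n\", \"Topic\", \"Product\", \"Category\", \"Description\"])\n",
    "    loop_df[\"embedding\"] = loop_df.Category.apply(lambda x: get_embedding(x, model=embedding_model))\n",
    "    kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(np.vstack(loop_df.embedding.values))\n",
    "    loop_df[\"Cluster\"] = kmeans.labels_\n",
    "    # Run the cluster-naming / new-topic prompt from above here, then\n",
    "    # extend `topics` with any new or underrepresented topic areas.\n",
    "```"
   ]
  },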
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Hiim8Xg5djGH"
   },
   "source": [
    "You have now completed part 1 of the synthetic data generation tutorial, where we have gone through:\n",
    "* CSV with a structured prompt\n",
    "* CSV with a Python program\n",
    "* Multitable CSV with a Python program\n",
    "* Simply creating textual data\n",
    "* Dealing with imbalanced or non-diverse textual data\n",
    "\n",
    "In part 2 you will find out techniques for better prompting an LLM to enhance textual synthetic data generation."
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.14"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}