{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine tuning on synthetic function-calling data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook covers how to fine-tune to increase function calling accuracy and reliability.\\\n",
    "You can find more information on function calling [here](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb),\n",
    "and on fine tuning [here](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_finetune_chat_models.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For context, from the function calling notebook above:\n",
    "> `functions` is an optional parameter in the Chat Completion API which can be used to provide function specifications. The purpose of this is to enable models to generate function arguments which adhere to the provided specifications. Note that the API will not actually execute any function calls. It is up to developers to execute function calls using model outputs."
   ]
  },
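  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustrative sketch of what this looks like (assuming the pre-1.0 `openai` Python SDK used throughout this notebook and an `OPENAI_API_KEY` environment variable), a request that includes a `functions` list can come back with a `function_call` on the assistant message: a function name plus a JSON string of arguments. The `get_weather` function below is a hypothetical example, not part of this notebook's drone function list."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch only: shows the shape of a function-calling response.\n",
    "# The get_weather spec is hypothetical and not used elsewhere in this notebook.\n",
    "import openai\n",
    "\n",
    "response = openai.ChatCompletion.create(\n",
    "    model=\"gpt-3.5-turbo\",\n",
    "    messages=[{\"role\": \"user\", \"content\": \"What is the weather in Paris?\"}],\n",
    "    functions=[\n",
    "        {\n",
    "            \"name\": \"get_weather\",\n",
    "            \"description\": \"Get the current weather for a city.\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\"city\": {\"type\": \"string\"}},\n",
    "                \"required\": [\"city\"],\n",
    "            },\n",
    "        }\n",
    "    ],\n",
    ")\n",
    "message = response.choices[0].message\n",
    "if message.get(\"function_call\"):\n",
    "    print(message.function_call.name)       # e.g. \"get_weather\"\n",
    "    print(message.function_call.arguments)  # a JSON string, e.g. '{\"city\": \"Paris\"}'\n",
    "else:\n",
    "    print(message.content)\n"
   ]
  },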
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Function calling is a very powerful tool when it functions as intended. However, we have seen that as the number of functions and the complexity of the task at hand increase, function calling becomes less accurate (e.g., more hallucinated invocations and incorrect invocations).\\\n",
    "Before fine tuning for function calling, it's best to begin with:\n",
    "- Improvements to the function definitions. Make them clearer and more distinct from one another.\n",
    "- Experiments with prompt engineering: often a more detailed prompt can help the model call the correct function.\n",
    "\n",
    "*If* the steps above fail to improve function calling to a satisfactory level, then you can try fine tuning for function calling."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Overview"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook contains three sections:\n",
    "- **Assessing baseline function calling performance:** Evaluating an out-of-the-box `gpt-3.5-turbo` model on our given functions (let's assume that for latency + cost reasons we cannot use `gpt-4` for a drone copilot)\n",
    "- **Generating synthetic data:** Using `gpt-4` to create a 'golden' set of prompts and function invocations to use as training data\n",
    "- **Fine-tuning:** Running the fine tuning job, and evaluating the fine-tuned model\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: *This notebook provides an example of how to create synthetic training data for fine tuning for function calling, given just a list of functions. While real-world production data is preferable, this method produces strong results and can be used in conjunction with real-world training data.*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Getting baseline function calling performance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install tenacity\n",
    "# !pip install openai\n",
    "# !pip install typing\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ast\n",
    "import itertools\n",
    "import json\n",
    "import os\n",
    "from typing import Any, Dict, List, Generator\n",
    "\n",
    "import numpy as np\n",
    "import openai\n",
    "from tenacity import retry, wait_random_exponential, stop_after_attempt\n",
    "\n",
    "openai.api_key = os.getenv('OPENAI_API_KEY')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Utilities"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's define a utility function for making calls to the Chat Completions API. It returns the assistant message, which will contain either a text completion or a function call."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(5))\n",
    "def get_chat_completion(\n",
    "    messages: list[dict[str, str]],\n",
    "    model: str = \"gpt-4\",\n",
    "    max_tokens=500,\n",
    "    temperature=1.0,\n",
    "    stop=None,\n",
    "    functions=None,\n",
    ") -> str:\n",
    "    params = {\n",
    "        'model': model,\n",
    "        'messages': messages,\n",
    "        'max_tokens': max_tokens,\n",
    "        'temperature': temperature,\n",
    "        'stop': stop,\n",
    "    }\n",
    "    if functions:\n",
    "        params['functions'] = functions\n",
    "\n",
    "    completion = openai.ChatCompletion.create(**params)\n",
    "    return completion.choices[0].message\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Baseline testing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's build an intelligent drone copilot. We want to be able to give the copilot commands, and have it either call the function\n",
    "for that command, or reject the request if the command is infeasible.\n",
    "We can first define a system prompt for the copilot."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DRONE_SYSTEM_PROMPT = \"\"\"You are an intelligent AI that controls a drone. Given a command or request from the user,\n",
    "call one of your functions to complete the request. If the request cannot be completed by your available functions, call the reject_request function.\n",
    "If the request is ambiguous or unclear, reject the request.\"\"\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's define functions for all of the actions the copilot can take."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "function_list = [\n",
    "    {\n",
    "        \"name\": \"takeoff_drone\",\n",
    "        \"description\": \"Initiate the drone's takeoff sequence.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"altitude\": {\n",
    "                    \"type\": \"integer\",\n",
    "                    \"description\": \"Specifies the altitude in meters to which the drone should ascend.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"altitude\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"land_drone\",\n",
    "        \"description\": \"Land the drone at its current location or a specified landing point.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"location\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"current\", \"home_base\", \"custom\"],\n",
    "                    \"description\": \"Specifies the landing location for the drone.\"\n",
    "                },\n",
    "                \"coordinates\": {\n",
    "                    \"type\": \"object\",\n",
    "                    \"description\": \"GPS coordinates for custom landing location. Required if location is 'custom'.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"location\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"control_drone_movement\",\n",
    "        \"description\": \"Direct the drone's movement in a specific direction.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"direction\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"forward\", \"backward\", \"left\", \"right\", \"up\", \"down\"],\n",
    "                    \"description\": \"Direction in which the drone should move.\"\n",
    "                },\n",
    "                \"distance\": {\n",
    "                    \"type\": \"integer\",\n",
    "                    \"description\": \"Distance in meters the drone should travel in the specified direction.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"direction\", \"distance\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_drone_speed\",\n",
    "        \"description\": \"Adjust the speed of the drone.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"speed\": {\n",
    "                    \"type\": \"integer\",\n",
    "                    \"description\": \"Specifies the speed in km/h.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"speed\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"control_camera\",\n",
    "        \"description\": \"Control the drone's camera to capture images or videos.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"mode\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"photo\", \"video\", \"panorama\"],\n",
    "                    \"description\": \"Camera mode to capture content.\"\n",
    "                },\n",
    "                \"duration\": {\n",
    "                    \"type\": \"integer\",\n",
    "                    \"description\": \"Duration in seconds for video capture. Required if mode is 'video'.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"mode\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"control_gimbal\",\n",
    "        \"description\": \"Adjust the drone's gimbal for camera stabilization and direction.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"tilt\": {\n",
    "                    \"type\": \"integer\",\n",
    "                    \"description\": \"Tilt angle for the gimbal in degrees.\"\n",
    "                },\n",
    "                \"pan\": {\n",
    "                    \"type\": \"integer\",\n",
    "                    \"description\": \"Pan angle for the gimbal in degrees.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"tilt\", \"pan\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_drone_lighting\",\n",
    "        \"description\": \"Control the drone's lighting for visibility and signaling.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"mode\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"on\", \"off\", \"blink\", \"sos\"],\n",
    "                    \"description\": \"Lighting mode for the drone.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"mode\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"return_to_home\",\n",
    "        \"description\": \"Command the drone to return to its home or launch location.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {}\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_battery_saver_mode\",\n",
    "        \"description\": \"Toggle battery saver mode.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"status\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"on\", \"off\"],\n",
    "                    \"description\": \"Toggle battery saver mode.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"status\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_obstacle_avoidance\",\n",
    "        \"description\": \"Configure obstacle avoidance settings.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"mode\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"on\", \"off\"],\n",
    "                    \"description\": \"Toggle obstacle avoidance.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"mode\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_follow_me_mode\",\n",
    "        \"description\": \"Enable or disable 'follow me' mode.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"status\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"on\", \"off\"],\n",
    "                    \"description\": \"Toggle 'follow me' mode.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"status\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"calibrate_sensors\",\n",
    "        \"description\": \"Initiate calibration sequence for drone's sensors.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {}\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_autopilot\",\n",
    "        \"description\": \"Enable or disable autopilot mode.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"status\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"on\", \"off\"],\n",
    "                    \"description\": \"Toggle autopilot mode.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"status\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"configure_led_display\",\n",
    "        \"description\": \"Configure the drone's LED display pattern and colors.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"pattern\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"solid\", \"blink\", \"pulse\", \"rainbow\"],\n",
    "                    \"description\": \"Pattern for the LED display.\"\n",
    "                },\n",
    "                \"color\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"enum\": [\"red\", \"blue\", \"green\", \"yellow\", \"white\"],\n",
    "                    \"description\": \"Color for the LED display. Not required if pattern is 'rainbow'.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"pattern\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"set_home_location\",\n",
    "        \"description\": \"Set or change the home location for the drone.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"coordinates\": {\n",
    "                    \"type\": \"object\",\n",
    "                    \"description\": \"GPS coordinates for the home location.\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"coordinates\"]\n",
    "        }\n",
    "    },\n",
    "    {\n",
    "        \"name\": \"reject_request\",\n",
    "        \"description\": \"Use this function if the request is not possible.\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {}\n",
    "        }\n",
    "    },\n",
    "]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For starters, let's see how function calling performs with some straightforward, feasible prompts, and then one obviously impossible request, which should trigger the `reject_request` function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "straightforward_prompts = ['Land the drone at the home base',\n",
    "                           'Take off the drone to 50 meters',\n",
    "                           'change speed to 15 kilometers per hour',\n",
    "                           'turn into an elephant!']\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for prompt in straightforward_prompts:\n",
    "    messages = []\n",
    "    messages.append({\"role\": \"system\", \"content\": DRONE_SYSTEM_PROMPT})\n",
    "    messages.append({\"role\": \"user\", \"content\": prompt})\n",
    "    completion = get_chat_completion(model=\"gpt-3.5-turbo\", messages=messages, functions=function_list)\n",
    "    print(prompt)\n",
    "    print(completion.function_call, '\\n')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Nice! The model performs quite well with these requests. Now let's try some more difficult requests: requests that are *almost* feasible and are drone-related, but that the drone cannot actually do, and which the copilot should reject."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "challenging_prompts = ['Play pre-recorded audio message',\n",
    "                       'Initiate live-streaming on social media',\n",
    "                       'Scan environment for heat signatures',\n",
    "                       'Enable stealth mode',\n",
    "                       \"Change drone's paint job color\"]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for prompt in challenging_prompts:\n",
    "    messages = []\n",
    "    messages.append({\"role\": \"system\", \"content\": DRONE_SYSTEM_PROMPT})\n",
    "    messages.append({\"role\": \"user\", \"content\": prompt})\n",
    "    completion = get_chat_completion(model=\"gpt-3.5-turbo\", messages=messages, functions=function_list)\n",
    "    print(prompt)\n",
    "    try:\n",
    "        print(completion.function_call)\n",
    "        print('\\n')\n",
    "    except:\n",
    "        print(completion.content)\n",
    "        print('\\n')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we run into some problems.\n",
    "The model here should reject all of these requests, as they are impossible given the functions; instead, it calls functions that are somewhat related to the request, but incorrect. The model sets the camera to video when asked to 'Initiate live-streaming on social media', and changes the LEDs to blue when asked to change the drone's paint job color...\\\n",
    "<br>\n",
    "In this simple case, more prompt engineering may resolve some of these issues, but for the purpose of this example we will demonstrate how fine tuning can be used to improve performance. Additionally, while this case is relatively straightforward, as the number and complexity of the functions increase, fine tuning becomes more and more impactful."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generating synthetic data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Helper functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We want to generate every invocation of every function, so that we have\n",
    "full coverage of all potential invocations to create synthetic data for. Then, we will use `gpt-4` to come up with prompts that would call each invocation, and we will use each prompt + function invocation pair as training data."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Generating every invocation for a function with fixed enums is simpler, but for a function such as\n",
    "`control_gimbal` we need to set the `tilt` and `pan` integer values, so to generate those synthetic invocations we will first set a placeholder, and then later use `gpt-4` to come up with reasonable values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "placeholder_int = 'fill_in_int'\n",
    "placeholder_string = 'fill_in_string'\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The functions below take in all the functions from the function list, and enumerate\n",
    "all the potential invocations of those functions given each function's parameters.\n",
    "The functions also account for `required` parameters, so that all the invocations\n",
    "are actually feasible."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_permutations(params: Dict[str, Dict[str, Any]]) -> Generator[Dict[str, Any], None, None]:\n",
    "    \"\"\"\n",
    "    Generates all possible permutations for given parameters.\n",
    "\n",
    "    :param params: Parameter dictionary containing required and optional fields.\n",
    "    :return: A generator yielding each permutation.\n",
    "    \"\"\"\n",
    "\n",
    "    # Extract the required fields from the parameters\n",
    "    required_fields = params.get('required', [])\n",
    "\n",
    "    # Generate permutations for required fields\n",
    "    required_permutations = generate_required_permutations(params, required_fields)\n",
    "\n",
    "    # Generate optional permutations based on each required permutation\n",
    "    for required_perm in required_permutations:\n",
    "        yield from generate_optional_permutations(params, required_perm)\n",
    "\n",
    "\n",
    "def generate_required_permutations(params: Dict[str, Dict[str, Any]], required_fields: List[str]) -> List[Dict[str, Any]]:\n",
    "    \"\"\"\n",
    "    Generates permutations for the required fields.\n",
    "\n",
    "    :param params: Parameter dictionary.\n",
    "    :param required_fields: List of required fields.\n",
    "    :return: A list of permutations for required fields.\n",
    "    \"\"\"\n",
    "\n",
    "    # Get all possible values for each required field\n",
    "    required_values = [get_possible_values(params, field) for field in required_fields]\n",
    "\n",
    "    # Generate permutations from possible values\n",
    "    return [dict(zip(required_fields, values)) for values in itertools.product(*required_values)]\n",
    "\n",
    "\n",
    "def generate_optional_permutations(params: Dict[str, Dict[str, Any]], base_perm: Dict[str, Any]) -> Generator[Dict[str, Any], None, None]:\n",
    "    \"\"\"\n",
    "    Generates permutations for optional fields based on a base permutation.\n",
    "\n",
    "    :param params: Parameter dictionary.\n",
    "    :param base_perm: Base permutation dictionary.\n",
    "    :return: A generator yielding each permutation for optional fields.\n",
    "    \"\"\"\n",
    "\n",
    "    # Determine the fields that are optional by subtracting the base permutation's fields from all properties\n",
    "    optional_fields = set(params['properties']) - set(base_perm)\n",
    "\n",
    "    # Iterate through all combinations of optional fields\n",
    "    for field_subset in itertools.chain.from_iterable(itertools.combinations(optional_fields, r) for r in range(len(optional_fields) + 1)):\n",
    "\n",
    "        # Generate product of possible values for the current subset of fields\n",
    "        for values in itertools.product(*(get_possible_values(params, field) for field in field_subset)):\n",
    "\n",
    "            # Create a new permutation by combining base permutation and current field values\n",
    "            new_perm = {**base_perm, **dict(zip(field_subset, values))}\n",
    "\n",
    "            yield new_perm\n",
    "\n",
    "\n",
    "def get_possible_values(params: Dict[str, Dict[str, Any]], field: str) -> List[Any]:\n",
    "    \"\"\"\n",
    "    Retrieves possible values for a given field.\n",
    "\n",
    "    :param params: Parameter dictionary.\n",
    "    :param field: The field for which to get possible values.\n",
    "    :return: A list of possible values.\n",
    "    \"\"\"\n",
    "\n",
    "    # Extract field information from the parameters\n",
    "    field_info = params['properties'][field]\n",
    "\n",
    "    # Based on the field's type or presence of 'enum', determine and return the possible values\n",
    "    if 'enum' in field_info:\n",
    "        return field_info['enum']\n",
    "    elif field_info['type'] == 'integer':\n",
    "        return [placeholder_int]\n",
    "    elif field_info['type'] == 'string':\n",
    "        return [placeholder_string]\n",
    "    elif field_info['type'] == 'boolean':\n",
    "        return [True, False]\n",
    "    elif field_info['type'] == 'array' and 'enum' in field_info['items']:\n",
    "        enum_values = field_info['items']['enum']\n",
    "        all_combinations = [list(combo) for i in range(1, len(enum_values) + 1) for combo in itertools.combinations(enum_values, i)]\n",
    "        return all_combinations\n",
    "    return []\n"
   ]
  },
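  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, optional sanity check (a sketch to illustrate the helpers above), the `control_camera` function should yield six argument sets: each of the three `mode` values, with and without the optional `duration` placeholder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the permutations generated for a single function (control_camera):\n",
    "# 3 required `mode` values, each with and without the optional `duration` placeholder\n",
    "control_camera = next(f for f in function_list if f[\"name\"] == \"control_camera\")\n",
    "for arguments in generate_permutations(control_camera[\"parameters\"]):\n",
    "    print(arguments)\n"
   ]
  },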
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Let's generate every invocation for every function first"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Prompts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "INVOCATION_FILLER_PROMPT = \"\"\"\n",
    "1) Input reasonable values for 'fill_in_string' and 'fill_in_int' in the invocation here: {invocation}. Reasonable values are determined by the function definition. Use\n",
    "the entire function provided here: {function} to get context over what proper fill_in_string and fill_in_int values would be.\n",
    "Example:\n",
    "\n",
    "Input: invocation: {{\n",
    "    \"name\": \"control_camera\",\n",
    "    \"arguments\": {{\n",
    "        \"mode\": \"video\",\n",
    "        \"duration\": \"fill_in_int\"\n",
    "    }}\n",
    "}},\n",
    "function: {function}\n",
    "\n",
    "Output: invocation: {{\n",
    "    \"name\": \"control_camera\",\n",
    "    \"arguments\": {{\n",
    "        \"mode\": \"video\",\n",
    "        \"duration\": 30\n",
    "    }}\n",
    "}}\n",
    "\n",
    "\n",
    "MAKE SURE output is just a dictionary with keys 'name' and 'arguments', no other text or response.\n",
    "\n",
    "Input: {invocation}\n",
    "Output:\n",
    "\"\"\"\n",
    "\n",
    "\n",
    "COMMAND_GENERATION_PROMPT = \"\"\"\n",
    "You are to output 2 commands, questions or statements that would generate the inputted function and parameters.\n",
    "Please make the commands or questions natural, as a person would ask, and the commands or questions should be varied and not repetitive.\n",
    "They should not always mirror the exact technical terminology used in the function and parameters, but rather reflect a conversational and intuitive request.\n",
    "For instance, the prompt should not be 'turn on the dome light', as that is too technical, but rather 'turn on the inside lights'.\n",
    "As another example, the prompt should not be 'turn on the HVAC', but rather 'turn on the air conditioning'. Use language a normal user would use, even if\n",
    "it is technically incorrect but colloquially used.\n",
    "\n",
    "RULES: ALWAYS put a backwards slash before an apostrophe or single quote '. For example, do not say don't but say don\\'t.\n",
    "Prompts MUST be in double quotes as well.\n",
    "\n",
    "Example\n",
    "\n",
    "Input: {{'name': 'calibrate_sensors', 'arguments': {{}}}}\n",
    "Prompt: [\"The sensors are out of whack, can you reset them\", \"The calibration of the drone is off, fix it please!\"]\n",
    "\n",
    "Input: {{'name': 'set_autopilot', 'arguments': {{'status': 'off'}}}}\n",
    "Prompt: [\"OK, I want to take back pilot control now\", \"Turn off the automatic pilot, I'm ready to control it\"]\n",
    "\n",
    "Input: {invocation}\n",
    "Prompt:\n",
    "\"\"\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the below snippet, we generate the invocation of each function except for the `reject_request` function.\\\n",
    "To perform effective fine-tuning we need correctly labeled data. We could manually come up with examples and label the data,\\\n",
    "or we can generate synthetic data with the help of `gpt-4`.<br>\n",
    "Empirically, `gpt-4` needs a bit more help to get good, realistic examples of prompts that should trigger the `reject_request` function, so we'll handle those separately afterwards..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_objects = []\n",
    "all_but_reject = [f for f in function_list if f.get('name') != 'reject_request']\n",
    "\n",
    "for function in all_but_reject:\n",
    "    func_name = function[\"name\"]\n",
    "    params = function[\"parameters\"]\n",
    "    for arguments in generate_permutations(params):\n",
    "        if any(val in arguments.values() for val in ['fill_in_int', 'fill_in_string']):\n",
    "            input_object = {\n",
    "                \"name\": func_name,\n",
    "                \"arguments\": arguments\n",
    "            }\n",
    "            messages = [{\"role\": \"user\", \"content\": INVOCATION_FILLER_PROMPT.format(invocation=input_object, function=function)}]\n",
    "            input_object = get_chat_completion(model='gpt-4', messages=messages, max_tokens=200, temperature=.1).content\n",
    "        else:\n",
    "            input_object = {\n",
    "                \"name\": func_name,\n",
    "                \"arguments\": arguments\n",
    "            }\n",
    "\n",
    "        input_objects.append(input_object)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have all the invocations, let's use `gpt-4` to generate prompts that would result in those invocations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_commands(invocation_list):\n",
    "    example_list = []\n",
    "    for i, invocation in enumerate(invocation_list):\n",
    "        print(f'\\033[34m{np.round(100*i/len(invocation_list),1)}% complete\\033[0m')\n",
    "        print(invocation)\n",
    "\n",
    "        # Format the prompt with the invocation string\n",
    "        request_prompt = COMMAND_GENERATION_PROMPT.format(invocation=invocation)\n",
    "\n",
    "        messages = [{\"role\": \"user\", \"content\": f\"{request_prompt}\"}]\n",
    "        completion = get_chat_completion(messages, temperature=0.8).content\n",
    "        command_dict = {\n",
    "            \"Input\": invocation,\n",
    "            \"Prompt\": completion\n",
    "        }\n",
    "        example_list.append(command_dict)\n",
    "    return example_list\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_examples_unformatted = create_commands(input_objects)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's format the training examples properly. For more documentation on the proper training data formatting for fine tuning for function calling, see here: https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_examples = []\n",
    "for prompt in training_examples_unformatted:\n",
    "    # Adjust formatting for training data specs\n",
    "    try:\n",
    "        # Invocations that went through gpt-4 placeholder-filling come back as strings\n",
    "        if isinstance(prompt[\"Input\"], str):\n",
    "            prompt[\"Input\"] = ast.literal_eval(prompt[\"Input\"])\n",
    "        # The generated prompts come back as the string representation of a list\n",
    "        prompt[\"Prompt\"] = ast.literal_eval(prompt[\"Prompt\"])\n",
    "    except (ValueError, SyntaxError):\n",
    "        # Skip anything gpt-4 did not return in the expected format\n",
    "        continue\n",
    "    prompt['Input']['arguments'] = json.dumps(prompt['Input']['arguments'])\n",
    "    for p in prompt['Prompt']:\n",
    "        training_examples.append(\n",
    "            {\n",
    "                \"messages\": [\n",
    "                    {\"role\": \"system\", \"content\": DRONE_SYSTEM_PROMPT},\n",
    "                    {\"role\": \"user\", \"content\": p},\n",
    "                    {\"role\": \"assistant\", \"function_call\": prompt['Input']},\n",
    "                ],\n",
    "                \"functions\": function_list,\n",
    "            }\n",
    "        )\n"
   ]
  },
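  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, it can help to spot-check one formatted example against the training data format linked above (this inspection cell is optional)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Spot-check a single formatted example: `arguments` should be a JSON *string*,\n",
    "# and each example carries the full function list alongside its messages.\n",
    "print(len(training_examples), 'function-calling training examples generated')\n",
    "print(json.dumps(training_examples[0][\"messages\"], indent=2))\n"
   ]
  },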
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, back to the rejection function. Let's generate some prompts that are *nearly* possible, but should result in the `reject_request` function being called. To do so, we queried `gpt-4` asking for requests that are related to, but not quite possible with, the given list of functions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "reject_list = ['Translate broadcast message to another language',\n",
    "               'Automatically capture photos when face is detected',\n",
    "               'Detect nearby drones',\n",
    "               'Measure wind resistance',\n",
    "               'Capture slow motion video',\n",
    "               \"Adjust drone's altitude to ground level changes\",\n",
    "               'Display custom message on LED display',\n",
    "               \"Sync drone's time with smartphone\",\n",
    "               'Alert when drone travels out of designated area',\n",
    "               'Detect moisture levels',\n",
    "               'Automatically follow GPS tagged object',\n",
    "               'Toggle night vision mode',\n",
    "               'Maintain current altitude when battery is low',\n",
    "               'Decide best landing spot using AI',\n",
    "               \"Program drone's route based on wind direction\"]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "reject_training_list = []\n",
    "for prompt in reject_list:\n",
    "\n",
    "    # Adjust formatting\n",
    "    reject_training_list.append(\n",
    "        {\n",
    "            \"messages\": [\n",
    "                {\"role\": \"system\", \"content\": DRONE_SYSTEM_PROMPT},\n",
    "                {\"role\": \"user\", \"content\": prompt},\n",
    "                {\"role\": \"assistant\", \"function_call\": {\"name\": \"reject_request\", \"arguments\": \"{}\"}},\n",
    "            ],\n",
    "            \"functions\": function_list,\n",
    "        }\n",
    "    )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now combine all the training examples together."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_list_total = training_examples + reject_training_list\n"
   ]
  },
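  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick count of what went into the combined training set (optional):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# How many examples of each kind made it into the final training set\n",
    "print(f\"{len(training_examples)} function-calling examples\")\n",
    "print(f\"{len(reject_training_list)} rejection examples\")\n",
    "print(f\"{len(training_list_total)} total training examples\")\n"
   ]
  },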
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_file = 'data/drone_training.jsonl'\n",
    "with open(training_file, 'w') as f:\n",
    "    for item in training_list_total:\n",
    "        json_str = json.dumps(item)\n",
    "        f.write(f'{json_str}\\n')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine tuning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we can kick off the fine-tuning job."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if __name__ == \"__main__\":\n",
    "    file = openai.File.create(\n",
    "        file=open(training_file, \"rb\"),\n",
    "        purpose=\"fine-tune\",\n",
    "    )\n",
    "    file_id = file.id\n",
    "    print(file_id)\n",
    "    ft = openai.FineTuningJob.create(\n",
    "        # model=\"gpt-4-0613\",\n",
    "        model=\"gpt-3.5-turbo\",\n",
    "        training_file=file_id,\n",
    "    )\n"
   ]
  },
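  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Fine-tuning jobs take a while to complete. Below is a sketch of how you might poll the job with the same pre-1.0 SDK (the polling interval is an arbitrary choice); once the status is `succeeded`, the job's `fine_tuned_model` field holds the model name to use for evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Poll the fine-tuning job until it reaches a terminal state (sketch).\n",
    "import time\n",
    "\n",
    "while True:\n",
    "    job = openai.FineTuningJob.retrieve(ft.id)\n",
    "    print(job.status)\n",
    "    if job.status in (\"succeeded\", \"failed\", \"cancelled\"):\n",
    "        break\n",
    "    time.sleep(60)\n",
    "\n",
    "print(job.fine_tuned_model)\n"
   ]
  },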
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Evaluations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Great! We trained a fine-tuned model for function calling. Let's see how it does on our evaluation set for prompts that the drone assistant\n",
    "should automatically reject."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for eval_question in challenging_prompts:\n",
    "    messages = []\n",
    "    messages.append({\"role\": \"system\", \"content\": DRONE_SYSTEM_PROMPT})\n",
    "    messages.append({\"role\": \"user\", \"content\": eval_question})\n",
    "    # Substitute the name of your own fine-tuned model from the job above\n",
    "    completion = get_chat_completion(model=\"ft:gpt-3.5-turbo-0613:openai-internal::8DloQKS2\", messages=messages, functions=function_list)\n",
    "    print(eval_question)\n",
    "    print(completion.function_call, '\\n')\n"
   ]
  },
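  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's also worth checking that the fine-tuned model hasn't regressed on requests it should still fulfil. The sketch below reuses the earlier `straightforward_prompts`; substitute the name of your own fine-tuned model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Regression check: the fine-tuned model should still call the right functions\n",
    "# for feasible requests. Substitute your own fine-tuned model name below.\n",
    "fine_tuned_model = \"ft:gpt-3.5-turbo-0613:openai-internal::8DloQKS2\"\n",
    "for prompt in straightforward_prompts:\n",
    "    messages = []\n",
    "    messages.append({\"role\": \"system\", \"content\": DRONE_SYSTEM_PROMPT})\n",
    "    messages.append({\"role\": \"user\", \"content\": prompt})\n",
    "    completion = get_chat_completion(model=fine_tuned_model, messages=messages, functions=function_list)\n",
    "    print(prompt)\n",
    "    print(completion.function_call, '\\n')\n"
   ]
  },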
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Great! While the original model only rejected 1 of the 5 requests, the fine-tuned model rejected all 5 requests."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Conclusion"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Congratulations! You are now ready to fine tune your model for function calling. We can't wait to see what you build."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}