{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Function Calling with OpenAI APIs"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import openai\n",
"import json\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()\n",
"\n",
"# set openai api key\n",
"openai.api_key = os.environ['OPENAI_API_KEY']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define a Get Completion Function"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"def get_completion(messages, model=\"gpt-3.5-turbo-1106\", temperature=0, max_tokens=300, tools=None, tool_choice=None):\n",
" response = openai.chat.completions.create(\n",
" model=model,\n",
" messages=messages,\n",
" temperature=temperature,\n",
" max_tokens=max_tokens,\n",
" tools=tools,\n",
" tool_choice=tool_choice\n",
" )\n",
" return response.choices[0].message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define Dummy Function"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Defines a dummy function to get the current weather\n",
"def get_current_weather(location, unit=\"fahrenheit\"):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
" weather = {\n",
" \"location\": location,\n",
" \"temperature\": \"50\",\n",
" \"unit\": unit,\n",
" }\n",
"\n",
" return json.dumps(weather)"
]
},
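{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, you can call the dummy function directly; note that it returns a JSON-encoded string, which is the format we will pass back to the model later:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sanity check: the dummy function returns a JSON-encoded string\n",
"get_current_weather(\"San Francisco, CA\", unit=\"celsius\")"
]
},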
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define Functions\n",
"\n",
"As demonstrated in the OpenAI documentation, here is a simple example of how to define the functions that are going to be part of the request. \n",
"\n",
"The descriptions are important because these are passed directly to the LLM and the LLM will use the description to determine whether to use the functions or how to use/call."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# define a function as tools\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\n",
" \"type\": \"string\", \n",
" \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
" },\n",
" \"required\": [\"location\"],\n",
" },\n",
" }, \n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# define a list of messages\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"What is the weather like in London?\"\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_GYg3yhSh1bbMabne9brwTqmA', function=Function(arguments='{\"location\":\"London\",\"unit\":\"celsius\"}', name='get_current_weather'), type='function')])\n"
]
}
],
"source": [
"response = get_completion(messages, tools=tools)\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"response.tool_calls[0].function.arguments"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now capture the arguments:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"args = json.loads(response.tool_calls[0].function.arguments)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"location\": \"London\", \"temperature\": \"50\", \"unit\": \"celsius\"}'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_current_weather(**args)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Controlling Function Calling Behavior"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's say we were interested in designing this `function_calling` functionality in the context of an LLM-powered conversational agent. Your solution should then know what function to call or if it needs to be called at all. Let's try a simple example of a greeting message:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"Hello! How are you?\",\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content=\"Hello! I'm here and ready to assist you. How can I help you today?\", role='assistant', function_call=None, tool_calls=None)"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_completion(messages, tools=tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can specify the behavior you want from function calling, which is desired to control the behavior of your system. By default, the model decide on its own whether to call a function and which function to call. This is achieved by setting `tool_choice: \"auto\"` which is the default setting. "
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content=\"Hello! I'm here and ready to assist you. How can I help you today?\", role='assistant', function_call=None, tool_calls=None)"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_completion(messages, tools=tools, tool_choice=\"auto\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Setting `tool_choice: \"none\"` forces the model to not use any of the functions provided. "
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content=\"Hello! I'm here and ready to assist you. How can I help you today?\", role='assistant', function_call=None, tool_calls=None)"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_completion(messages, tools=tools, tool_choice=\"none\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content='I will check the current weather in London for you.', role='assistant', function_call=None, tool_calls=None)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"What's the weather like in London?\",\n",
" }\n",
"]\n",
"get_completion(messages, tools=tools, tool_choice=\"none\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also force the model to choose a function if that's the behavior you want in your application. Example:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_ctUCZtwxZYhfF3sirByNY2qC', function=Function(arguments='{\"location\":\"London\",\"unit\":\"celsius\"}', name='get_current_weather'), type='function')])"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"What's the weather like in London?\",\n",
" }\n",
"]\n",
"get_completion(messages, tools=tools, tool_choice={\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\"}})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The OpenAI APIs also support parallel function calling that can call multiple functions in one turn. "
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_GhyrfrY9QWkrJs9vKqxpGW22', function=Function(arguments='{\"location\": \"London\", \"unit\": \"celsius\"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_Cl9fqGh1AZgi4yD8n5Lrc6NE', function=Function(arguments='{\"location\": \"Belmopan\", \"unit\": \"celsius\"}', name='get_current_weather'), type='function')])"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"What's the weather like in London and Belmopan in the coming days?\",\n",
" }\n",
"]\n",
"get_completion(messages, tools=tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see in the response above that the response contains information from the function calls for the two locations queried. "
]
},
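{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (using the `tools`, `messages`, and helper functions already defined above), you can capture the parallel response and execute `get_current_weather` once per tool call:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# capture the response and execute each of the parallel tool calls\n",
"response = get_completion(messages, tools=tools)\n",
"\n",
"for tool_call in response.tool_calls or []:\n",
"    call_args = json.loads(tool_call.function.arguments)\n",
"    print(tool_call.function.name, \"->\", get_current_weather(**call_args))"
]
},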
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Function Calling Response for Model Feedback\n",
"\n",
"You might also be interested in developing an agent that passes back the result obtained after calling your APIs with the inputs generated from function calling. Let's look at an example next:\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"messages = []\n",
"messages.append({\"role\": \"user\", \"content\": \"What's the weather like in Boston!\"})\n",
"assistant_message = get_completion(messages, tools=tools, tool_choice=\"auto\")\n",
"assistant_message = json.loads(assistant_message.model_dump_json())\n",
"assistant_message[\"content\"] = str(assistant_message[\"tool_calls\"][0][\"function\"])\n",
"\n",
"#a temporary patch but this should be handled differently\n",
"# remove \"function_call\" from assistant message\n",
"del assistant_message[\"function_call\"]"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"messages.append(assistant_message)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then append the results of the `get_current_weather` function and pass it back to the model using a `tool` role."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"# get the weather information to pass back to the model\n",
"weather = get_current_weather(messages[1][\"tool_calls\"][0][\"function\"][\"arguments\"])\n",
"\n",
"messages.append({\"role\": \"tool\",\n",
" \"tool_call_id\": assistant_message[\"tool_calls\"][0][\"id\"],\n",
" \"name\": assistant_message[\"tool_calls\"][0][\"function\"][\"name\"],\n",
" \"content\": weather})"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"final_response = get_completion(messages, tools=tools)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletionMessage(content='The current temperature in Boston, MA is 50°F.', role='assistant', function_call=None, tool_calls=None)"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"final_response"
]
}
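,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To wrap up, here is a minimal end-to-end sketch that ties the pieces together, assuming the `get_completion`, `get_current_weather`, and `tools` objects defined above (the `run_conversation` helper below is illustrative, not part of the OpenAI API): the model decides whether to call the function, we execute any tool calls, feed the results back, and return the final answer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def run_conversation(user_query):\n",
"    msgs = [{\"role\": \"user\", \"content\": user_query}]\n",
"    reply = get_completion(msgs, tools=tools, tool_choice=\"auto\")\n",
"    if not reply.tool_calls:\n",
"        # no tool call needed; return the model's direct answer\n",
"        return reply.content\n",
"\n",
"    # append the assistant's tool-call message, then one `tool` message per call\n",
"    msgs.append({\"role\": \"assistant\", \"tool_calls\": [tc.model_dump() for tc in reply.tool_calls]})\n",
"    for tc in reply.tool_calls:\n",
"        result = get_current_weather(**json.loads(tc.function.arguments))\n",
"        msgs.append({\"role\": \"tool\", \"tool_call_id\": tc.id, \"name\": tc.function.name, \"content\": result})\n",
"\n",
"    # second call: the model now answers using the tool results\n",
"    return get_completion(msgs, tools=tools).content\n",
"\n",
"run_conversation(\"What's the weather like in London and Belmopan?\")"
]
}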
],
"metadata": {
"kernelspec": {
"display_name": "papersql",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}