mirror of
https://github.com/openai/openai-cookbook
synced 2024-11-11 13:11:02 +00:00
2c441ab9a2
Co-authored-by: ayush rajgor <ayushrajgorar@gmail.com>
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Azure functions example\n",
    "\n",
    "This notebook shows how to use the function calling capability with the Azure OpenAI service. Functions allow a caller of chat completions to define capabilities that the model can use to extend its\n",
    "functionality into external tools and data sources.\n",
    "\n",
    "You can read more about chat functions on OpenAI's blog: https://openai.com/blog/function-calling-and-other-api-updates\n",
    "\n",
    "**NOTE**: Chat functions require the `-0613` (or later) versions of `gpt-4` and `gpt-35-turbo`. They are not supported by older versions of the models."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "First, we install the necessary dependencies and import the libraries we will be using."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install \"openai>=1.0.0,<2.0.0\"\n",
    "! pip install python-dotenv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import openai\n",
    "import dotenv\n",
    "\n",
    "dotenv.load_dotenv()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Authentication\n",
    "\n",
    "The Azure OpenAI service supports multiple authentication mechanisms, including API keys and Azure Active Directory token credentials."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "use_azure_active_directory = False  # Set this flag to True if you are using Azure Active Directory"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Authentication using API key\n",
    "\n",
    "To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "if not use_azure_active_directory:\n",
    "    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
    "    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
    "\n",
    "    client = openai.AzureOpenAI(\n",
    "        azure_endpoint=endpoint,\n",
    "        api_key=api_key,\n",
    "        api_version=\"2023-09-01-preview\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Authentication using Azure Active Directory\n",
    "Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library will provide the token credentials we need to authenticate and help us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
    "\n",
    "For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install \"azure-identity>=1.15.0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
    "\n",
    "if use_azure_active_directory:\n",
    "    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
    "\n",
    "    client = openai.AzureOpenAI(\n",
    "        azure_endpoint=endpoint,\n",
    "        azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
    "        api_version=\"2023-09-01-preview\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
    "\n",
    "- `api_key` from `AZURE_OPENAI_API_KEY`\n",
    "- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
    "- `api_version` from `OPENAI_API_VERSION`\n",
    "- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
   ]
  },
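  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, assuming all of the corresponding environment variables above are set, the client can be constructed with no explicit arguments (a minimal sketch, equivalent to the explicit construction used elsewhere in this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assumes AZURE_OPENAI_API_KEY (or AZURE_OPENAI_AD_TOKEN), AZURE_OPENAI_ENDPOINT,\n",
    "# and OPENAI_API_VERSION are set; the constructor raises an error for any missing value.\n",
    "client_from_env = openai.AzureOpenAI()"
   ]
  },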
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deployments\n",
    "\n",
    "In this section we are going to create a deployment of a GPT model that we can use to call functions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deployments: Create in the Azure OpenAI Studio\n",
    "Let's deploy a model to use with chat completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for chat completions. The deployment name that you give the model will be used in the code below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "deployment = \"\"  # Fill in the deployment name from the portal here"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Functions\n",
    "\n",
    "With setup and authentication complete, you can now use functions with the Azure OpenAI service. This will be split into a few steps:\n",
    "\n",
    "1. Define the function(s)\n",
    "2. Pass function definition(s) into chat completions API\n",
    "3. Call function with arguments from the response\n",
    "4. Feed function response back into chat completions API"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1. Define the function(s)\n",
    "\n",
    "A list of functions can be defined, each containing the name of the function, an optional description, and the parameters the function accepts (described as a JSON schema)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each entry wraps a function definition in the format expected by the `tools` parameter\n",
    "functions = [\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_current_weather\",\n",
    "            \"description\": \"Get the current weather\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"location\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
    "                    },\n",
    "                    \"format\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"enum\": [\"celsius\", \"fahrenheit\"],\n",
    "                        \"description\": \"The temperature unit to use. Infer this from the user's location.\",\n",
    "                    },\n",
    "                },\n",
    "                \"required\": [\"location\"],\n",
    "            },\n",
    "        },\n",
    "    }\n",
    "]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2. Pass function definition(s) into chat completions API\n",
    "\n",
    "Now we can pass the function into the chat completions API. If the model determines it should call the function, a `finish_reason` of \"tool_calls\" will be populated on the choice and the details of which function to call and its arguments will be present in the `message`'s `tool_calls`. Optionally, you can set the `tool_choice` keyword argument to force the model to call a particular function (e.g. `{\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\"}}`). By default, this is set to `\"auto\"`, allowing the model to choose whether to call the function or not."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": \"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.\"},\n",
    "    {\"role\": \"user\", \"content\": \"What's the weather like today in Seattle?\"}\n",
    "]\n",
    "\n",
    "chat_completion = client.chat.completions.create(\n",
    "    model=deployment,\n",
    "    messages=messages,\n",
    "    tools=functions,\n",
    ")\n",
    "print(chat_completion)"
   ]
  },
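  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the `tool_choice` option mentioned above (reusing the `client`, `messages`, and `functions` already defined), the model can be forced to call `get_current_weather`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Force the model to call get_current_weather instead of letting it decide\n",
    "forced_completion = client.chat.completions.create(\n",
    "    model=deployment,\n",
    "    messages=messages,\n",
    "    tools=functions,\n",
    "    tool_choice={\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\"}},\n",
    ")\n",
    "\n",
    "print(forced_completion.choices[0].message.tool_calls[0].function.name)"
   ]
  },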
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3. Call function with arguments from the response\n",
    "\n",
    "The name of the function call will be one that was provided initially and the arguments will include JSON matching the schema included in the function definition."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "def get_current_weather(request):\n",
    "    \"\"\"\n",
    "    This function is for illustrative purposes.\n",
    "    The location and unit should be used to determine weather\n",
    "    instead of returning a hardcoded response.\n",
    "    \"\"\"\n",
    "    location = request.get(\"location\")\n",
    "    unit = request.get(\"format\")  # matches the \"format\" parameter in the schema\n",
    "    return {\"temperature\": \"22\", \"unit\": \"celsius\", \"description\": \"Sunny\"}\n",
    "\n",
    "function_call = chat_completion.choices[0].message.tool_calls[0].function\n",
    "print(function_call.name)\n",
    "print(function_call.arguments)\n",
    "\n",
    "if function_call.name == \"get_current_weather\":\n",
    "    response = get_current_weather(json.loads(function_call.arguments))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4. Feed function response back into chat completions API\n",
    "\n",
    "The response from the function should be serialized into a new message with the role set to \"tool\" and the `tool_call_id` of the call it answers, appended after the assistant message that requested the call. Now the model will use the response data to formulate its answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# First append the assistant message containing the tool call,\n",
    "# then the tool's result with the matching tool_call_id\n",
    "messages.append(chat_completion.choices[0].message)\n",
    "messages.append(\n",
    "    {\n",
    "        \"role\": \"tool\",\n",
    "        \"tool_call_id\": chat_completion.choices[0].message.tool_calls[0].id,\n",
    "        \"content\": json.dumps(response)\n",
    "    }\n",
    ")\n",
    "\n",
    "function_completion = client.chat.completions.create(\n",
    "    model=deployment,\n",
    "    messages=messages,\n",
    "    tools=functions,\n",
    ")\n",
    "\n",
    "print(function_completion.choices[0].message.content.strip())"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  },
  "vscode": {
   "interpreter": {
    "hash": "3a5103089ab7e7c666b279eeded403fcec76de49a40685dbdfe9f9c78ad97c17"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}