{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Azure chat completions example\n", "\n", "This example will cover chat completions using the Azure OpenAI service. It also includes information on content filtering." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "\n", "First, we install the necessary dependencies and import the libraries we will be using." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! pip install \"openai>=1.0.0,<2.0.0\"\n", "! pip install python-dotenv" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import openai\n", "import dotenv\n", "\n", "dotenv.load_dotenv()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Authentication\n", "\n", "The Azure OpenAI service supports multiple authentication mechanisms, including API keys and Azure Active Directory token credentials." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "use_azure_active_directory = False  # Set this flag to True if you are using Azure Active Directory" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Authentication using API key\n", "\n", "To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoint\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "if not use_azure_active_directory:\n", " endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n", " api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n", "\n", " client = openai.AzureOpenAI(\n", " azure_endpoint=endpoint,\n", " api_key=api_key,\n", " api_version=\"2023-09-01-preview\"\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Authentication using Azure Active Directory\n", "\n", "Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library provides the token credentials we need to authenticate and helps us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n", "\n", "For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! 
pip install \"azure-identity>=1.15.0\"" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n", "\n", "if use_azure_active_directory:\n", " endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n", "\n", " client = openai.AzureOpenAI(\n", " azure_endpoint=endpoint,\n", " azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n", " api_version=\"2023-09-01-preview\"\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Note: the AzureOpenAI infers the following arguments from their corresponding environment variables if they are not provided:\n", "\n", "- `api_key` from `AZURE_OPENAI_API_KEY`\n", "- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n", "- `api_version` from `OPENAI_API_VERSION`\n", "- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Deployments\n", "\n", "In this section we are going to create a deployment of a GPT model that we can use to create chat completions." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Deployments: Create in the Azure OpenAI Studio\n", "Let's deploy a model to use with chat completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for chat completions. The deployment name that you give the model will be used in the code below." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "deployment = \"\" # Fill in the deployment name from the portal here" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Create chat completions\n", "\n", "Now let's create a chat completion using the client we built." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create\n", "response = client.chat.completions.create(\n", " model=deployment,\n", " messages=[\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": \"Knock knock.\"},\n", " {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n", " {\"role\": \"user\", \"content\": \"Orange.\"},\n", " ],\n", " temperature=0,\n", ")\n", "\n", "print(f\"{response.choices[0].message.role}: {response.choices[0].message.content}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Create a streaming chat completion\n", "\n", "We can also stream the response." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = client.chat.completions.create(\n", " model=deployment,\n", " messages=[\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": \"Knock knock.\"},\n", " {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n", " {\"role\": \"user\", \"content\": \"Orange.\"},\n", " ],\n", " temperature=0,\n", " stream=True\n", ")\n", "\n", "for chunk in response:\n", " if len(chunk.choices) > 0:\n", " delta = chunk.choices[0].delta\n", "\n", " if delta.role:\n", " print(delta.role + \": \", end=\"\", flush=True)\n", " if delta.content:\n", " print(delta.content, end=\"\", flush=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Content filtering\n", "\n", "The Azure OpenAI service includes content filtering of prompts and completion responses. You can learn more about content filtering and how to configure it [here](https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter).\n", "\n", "If the prompt is flagged by the content filter, the library will raise a `BadRequestError` exception with a `content_filter` error code. Otherwise, you can access the `prompt_filter_results` and `content_filter_results` on the response to see the results of the content filtering and what categories were flagged."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prompt flagged by content filter" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "\n", "messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": \"\"}  # Replace the empty string with a prompt that would be flagged by the content filter\n", "]\n", "\n", "try:\n", " completion = client.chat.completions.create(\n", " messages=messages,\n", " model=deployment,\n", " )\n", "except openai.BadRequestError as e:\n", " err = json.loads(e.response.text)\n", " if err[\"error\"][\"code\"] == \"content_filter\":\n", " print(\"Content filter triggered!\")\n", " content_filter_result = err[\"error\"][\"innererror\"][\"content_filter_result\"]\n", " for category, details in content_filter_result.items():\n", " print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Checking the result of the content filter" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "messages = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n", " {\"role\": \"user\", \"content\": \"What's the biggest city in Washington?\"}\n", "]\n", "\n", "completion = client.chat.completions.create(\n", " messages=messages,\n", " model=deployment,\n", ")\n", "print(f\"Answer: {completion.choices[0].message.content}\")\n", "\n", "# Prompt content filter results are stored in \"model_extra\" for Azure\n", "prompt_filter_result = completion.model_extra[\"prompt_filter_results\"][0][\"content_filter_results\"]\n", "print(\"\\nPrompt content filter results:\")\n", "for category, details in prompt_filter_result.items():\n", " print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")\n", "\n", "# Completion content filter results\n", "print(\"\\nCompletion content filter results:\")\n", "completion_filter_result = 
completion.choices[0].model_extra[\"content_filter_results\"]\n", "for category, details in completion_filter_result.items():\n", " print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.0" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }