mirror of
https://github.com/openai/openai-cookbook
synced 2024-11-15 18:13:18 +00:00
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Azure completions example\n",
    "\n",
    "This example covers completions using the Azure OpenAI service. It also includes information on content filtering."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "First, we install the necessary dependencies and import the libraries we will be using."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install \"openai>=1.0.0,<2.0.0\"\n",
    "! pip install python-dotenv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import openai\n",
    "import dotenv\n",
    "\n",
    "dotenv.load_dotenv()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Authentication\n",
    "\n",
    "The Azure OpenAI service supports multiple authentication mechanisms, including API keys and Azure Active Directory token credentials."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "use_azure_active_directory = False  # Set this flag to True if you are using Azure Active Directory"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Authentication using API key\n",
    "\n",
    "To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "if not use_azure_active_directory:\n",
    "    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
    "    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
    "\n",
    "    client = openai.AzureOpenAI(\n",
    "        azure_endpoint=endpoint,\n",
    "        api_key=api_key,\n",
    "        api_version=\"2023-09-01-preview\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Authentication using Azure Active Directory\n",
    "\n",
    "Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library provides the token credentials we need to authenticate, and it helps us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
    "\n",
    "For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install \"azure-identity>=1.15.0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
    "\n",
    "if use_azure_active_directory:\n",
    "    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
    "\n",
    "    client = openai.AzureOpenAI(\n",
    "        azure_endpoint=endpoint,\n",
    "        azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
    "        api_version=\"2023-09-01-preview\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
    ">\n",
    "> - `api_key` from `AZURE_OPENAI_API_KEY`\n",
    "> - `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
    "> - `api_version` from `OPENAI_API_VERSION`\n",
    "> - `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deployments\n",
    "\n",
    "In this section we are going to create a deployment of a model that we can use to create completions."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deployments: Create in the Azure OpenAI Studio\n",
    "\n",
    "Let's deploy a model to use with completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for completions. The deployment name that you give the model will be used in the code below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "deployment = \"\"  # Fill in the deployment name from the portal here"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Completions\n",
    "\n",
    "Now let's create a completion using the client we built."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"The food was delicious and the waiter\"\n",
    "completion = client.completions.create(\n",
    "    model=deployment,\n",
    "    prompt=prompt,\n",
    "    stop=\".\",\n",
    "    temperature=0\n",
    ")\n",
    "\n",
    "print(f\"{prompt}{completion.choices[0].text}.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create a streaming completion\n",
    "\n",
    "We can also stream the response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"The food was delicious and the waiter\"\n",
    "response = client.completions.create(\n",
    "    model=deployment,\n",
    "    prompt=prompt,\n",
    "    stream=True,\n",
    ")\n",
    "for completion in response:\n",
    "    if len(completion.choices) > 0:\n",
    "        print(completion.choices[0].text, end=\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Content filtering\n",
    "\n",
    "The Azure OpenAI service includes content filtering of prompts and completion responses. You can learn more about content filtering and how to configure it [here](https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter).\n",
    "\n",
    "If the prompt is flagged by the content filter, the library will raise a `BadRequestError` exception with a `content_filter` error code. Otherwise, you can access the `prompt_filter_results` and `content_filter_results` on the response to see the results of the content filtering and what categories were flagged."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Prompt flagged by content filter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "try:\n",
    "    completion = client.completions.create(\n",
    "        prompt=\"<text violating the content policy>\",\n",
    "        model=deployment,\n",
    "    )\n",
    "except openai.BadRequestError as e:\n",
    "    err = json.loads(e.response.text)\n",
    "    if err[\"error\"][\"code\"] == \"content_filter\":\n",
    "        print(\"Content filter triggered!\")\n",
    "        content_filter_result = err[\"error\"][\"innererror\"][\"content_filter_result\"]\n",
    "        for category, details in content_filter_result.items():\n",
    "            print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Checking the result of the content filter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "completion = client.completions.create(\n",
    "    prompt=\"What's the biggest city in Washington?\",\n",
    "    model=deployment,\n",
    ")\n",
    "\n",
    "print(f\"Answer: {completion.choices[0].text}\")\n",
    "\n",
    "# The prompt content filter results are in \"model_extra\" for Azure\n",
    "prompt_filter_result = completion.model_extra[\"prompt_filter_results\"][0][\"content_filter_results\"]\n",
    "print(\"\\nPrompt content filter results:\")\n",
    "for category, details in prompt_filter_result.items():\n",
    "    print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")\n",
    "\n",
    "# Completion content filter results\n",
    "print(\"\\nCompletion content filter results:\")\n",
    "completion_filter_result = completion.choices[0].model_extra[\"content_filter_results\"]\n",
    "for category, details in completion_filter_result.items():\n",
    "    print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  },
  "vscode": {
   "interpreter": {
    "hash": "3a5103089ab7e7c666b279eeded403fcec76de49a40685dbdfe9f9c78ad97c17"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}