{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure chat completions example (preview)\n",
"\n",
"> Note: There is a newer version of the openai library available. See https://github.com/openai/openai-python/discussions/742\n",
"\n",
"This example will cover chat completions using the Azure OpenAI service."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we install the necessary dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install \"openai>=0.28.1,<1.0.0\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the following sections to work properly we first have to setup some things. Let's start with the `api_base` and `api_version`. To find your `api_base` go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for the \"Endpoint\" value."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"openai.api_version = '2023-05-15'\n",
"openai.api_base = '' # Please add your endpoint here"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We next have to setup the `api_type` and `api_key`. We can either get the key from the portal or we can get it through Microsoft Active Directory Authentication. Depending on this the `api_type` is either `azure` or `azure_ad`."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup: Portal\n",
"Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for one of the \"Keys\" values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"openai.api_type = 'azure'\n",
"openai.api_key = os.environ[\"OPENAI_API_KEY\"]\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"> Note: In this example, we configured the library to use the Azure API by setting the variables in code. For development, consider setting the environment variables instead:\n",
"\n",
"```\n",
"OPENAI_API_BASE\n",
"OPENAI_API_KEY\n",
"OPENAI_API_TYPE\n",
"OPENAI_API_VERSION\n",
"```"
]
},
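{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer to drive the configuration from those environment variables, a minimal sketch (assuming the variables above are already exported in your shell) could look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: read the Azure OpenAI settings from environment variables.\n",
"# Assumes OPENAI_API_BASE, OPENAI_API_KEY, OPENAI_API_TYPE and OPENAI_API_VERSION\n",
"# have been exported beforehand; adjust the fallback values as needed.\n",
"openai.api_type = os.environ.get(\"OPENAI_API_TYPE\", \"azure\")\n",
"openai.api_base = os.environ.get(\"OPENAI_API_BASE\", \"\")\n",
"openai.api_key = os.environ.get(\"OPENAI_API_KEY\", \"\")\n",
"openai.api_version = os.environ.get(\"OPENAI_API_VERSION\", \"2023-05-15\")"
]
},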
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### (Optional) Setup: Microsoft Active Directory Authentication\n",
"Let's now see how we can get a key via Microsoft Active Directory Authentication. Uncomment the following code if you want to use Active Directory Authentication instead of keys from the portal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from azure.identity import DefaultAzureCredential\n",
"\n",
"# default_credential = DefaultAzureCredential()\n",
"# token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
"\n",
"# openai.api_type = 'azure_ad'\n",
"# openai.api_key = token.token"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"A token is valid for a period of time, after which it will expire. To ensure a valid token is sent with every request, you can refresh an expiring token by hooking into requests.auth:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import typing\n",
"import time\n",
"import requests\n",
"if typing.TYPE_CHECKING:\n",
" from azure.core.credentials import TokenCredential\n",
"\n",
"class TokenRefresh(requests.auth.AuthBase):\n",
"\n",
" def __init__(self, credential: \"TokenCredential\", scopes: typing.List[str]) -> None:\n",
" self.credential = credential\n",
" self.scopes = scopes\n",
" self.cached_token: typing.Optional[str] = None\n",
"\n",
" def __call__(self, req):\n",
" if not self.cached_token or self.cached_token.expires_on - time.time() < 300:\n",
" self.cached_token = self.credential.get_token(*self.scopes)\n",
" req.headers[\"Authorization\"] = f\"Bearer {self.cached_token.token}\"\n",
" return req\n",
"\n",
"session = requests.Session()\n",
"session.auth = TokenRefresh(default_credential, [\"https://cognitiveservices.azure.com/.default\"])\n",
"\n",
"openai.requestssession = session"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deployments\n",
"In this section we are going to create a deployment using the `gpt-35-turbo` model that we can then use to create chat completions."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deployments: Create manually\n",
"Let's create a deployment using the `gpt-35-turbo` model. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Model deployments\" create a new `gpt-35-turbo` deployment. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"deployment_id = '' # Fill in the deployment id from the portal here"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create chat completion\n",
"Now let's send a sample chat completion to the deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create\n",
"response = openai.ChatCompletion.create(\n",
" deployment_id=deployment_id,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
" {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
" {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
" {\"role\": \"user\", \"content\": \"Orange.\"},\n",
" ],\n",
" temperature=0,\n",
")\n",
"\n",
"print(f\"{response.choices[0].message.role}: {response.choices[0].message.content}\")"
]
},
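{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to keep an eye on cost, the non-streaming response also reports token usage. A minimal sketch, assuming the `response` object from the cell above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: inspect the token usage reported on the non-streaming response.\n",
"# Assumes `response` is the ChatCompletion result from the previous cell.\n",
"usage = response[\"usage\"]\n",
"print(f\"prompt tokens:     {usage['prompt_tokens']}\")\n",
"print(f\"completion tokens: {usage['completion_tokens']}\")\n",
"print(f\"total tokens:      {usage['total_tokens']}\")"
]
},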
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also stream the response.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = openai.ChatCompletion.create(\n",
" deployment_id=deployment_id,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
" {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
" {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
" {\"role\": \"user\", \"content\": \"Orange.\"},\n",
" ],\n",
" temperature=0,\n",
" stream=True\n",
")\n",
"\n",
"for chunk in response:\n",
" if len(chunk.choices) > 0:\n",
" delta = chunk.choices[0].delta\n",
"\n",
" if \"role\" in delta.keys():\n",
" print(delta.role + \": \", end=\"\", flush=True)\n",
" if \"content\" in delta.keys():\n",
" print(delta.content, end=\"\", flush=True)"
]
},
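{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"When streaming, each chunk only carries a delta, so you often also want the full reply as a single string. A minimal sketch (assuming the same deployment and messages as above) that accumulates the streamed content:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: accumulate the streamed deltas into one string.\n",
"# Assumes `deployment_id` is set as above.\n",
"response = openai.ChatCompletion.create(\n",
"    deployment_id=deployment_id,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
"        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
"    ],\n",
"    temperature=0,\n",
"    stream=True,\n",
")\n",
"\n",
"collected = []\n",
"for chunk in response:\n",
"    if len(chunk.choices) > 0:\n",
"        delta = chunk.choices[0].delta\n",
"        if \"content\" in delta.keys():\n",
"            collected.append(delta.content)\n",
"\n",
"full_reply = \"\".join(collected)\n",
"print(full_reply)"
]
}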
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}