mirror of https://github.com/openai/openai-cookbook
synced 2024-11-19 15:25:37 +00:00
[Azure] Chat completions example (#271)
* Added notebook for chat completions on Azure
* Added pointer from README
This commit is contained in:
parent 1d17959dd7
commit 9d4e6e31c6
@@ -52,6 +52,7 @@ Most code examples are written in Python, though the concepts can be applied in
- DALL-E
  - [How to generate and edit images with DALL-E](examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb)
- Azure OpenAI (alternative API from Microsoft Azure)
  - [How to use ChatGPT with Azure OpenAI](examples/azure/chat.ipynb)
  - [How to get completions from Azure OpenAI](examples/azure/completions.ipynb)
  - [How to get embeddings from Azure OpenAI](examples/azure/embeddings.ipynb)
  - [How to fine-tune GPT-3 with Azure OpenAI](examples/azure/finetuning.ipynb)
273 examples/azure/chat.ipynb Normal file
@@ -0,0 +1,273 @@
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Azure chat completions example (preview)\n",
    "In this example we'll walk through the operations needed to get chat completions working using the Azure endpoints. \\\n",
    "The focus is on chat completions, but we also touch on a few of the other operations available through the API. This is meant as a quick demonstration of simple operations, not as a full tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import openai"
   ]
  },
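  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook was written against the pre-1.0 `openai` Python package (roughly the 0.27.x series), which exposes the module-level `api_type`, `api_base`, `api_version` and `api_key` settings used below. If you want to double-check which version you have installed, one way to do so with only the standard library is:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from importlib import metadata\n",
    "\n",
    "# Print the installed version of the openai package (sketch; any recent\n",
    "# pre-1.0 release such as 0.27.x should work for this notebook).\n",
    "print(metadata.version(\"openai\"))"
   ]
  },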
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "For the following sections to work properly we first have to set up a few things. Let's start with the `api_base` and `api_version`. To find your `api_base`, go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for the \"Endpoint\" value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "openai.api_version = '2023-03-15-preview'\n",
    "openai.api_base = '' # Please add your endpoint here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we have to set up the `api_type` and `api_key`. The key can come either from the portal or from Microsoft Azure Active Directory (AAD) authentication; accordingly, the `api_type` is either `azure` or `azure_ad`."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup: Portal\n",
    "Let's first look at getting the key from the portal. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Keys and Endpoints\" look for one of the \"Keys\" values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "openai.api_type = 'azure'\n",
    "openai.api_key = '' # Please add your api key here"
   ]
  },
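  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, instead of hard-coding the key and endpoint in the notebook you may prefer to read them from environment variables. A minimal sketch, assuming you have exported `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY` (the variable names here are just a convention used for this sketch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Sketch: read the endpoint and key from environment variables instead of\n",
    "# hard-coding them (the variable names are an assumption, not required by the openai package).\n",
    "openai.api_type = 'azure'\n",
    "openai.api_base = os.environ.get(\"AZURE_OPENAI_ENDPOINT\", openai.api_base)\n",
    "openai.api_key = os.environ.get(\"AZURE_OPENAI_KEY\", \"\")"
   ]
  },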
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Setup: Microsoft Azure Active Directory Authentication\n",
    "Let's now see how we can get a key via Microsoft Azure Active Directory (AAD) authentication. Uncomment the following code if you want to use Active Directory authentication instead of keys from the portal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from azure.identity import DefaultAzureCredential\n",
    "\n",
    "# default_credential = DefaultAzureCredential()\n",
    "# token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
    "\n",
    "# openai.api_type = 'azure_ad'\n",
    "# openai.api_key = token.token"
   ]
  },
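  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that AAD access tokens expire (typically after roughly an hour), so a long-running process would need to request a fresh token periodically and update `openai.api_key`. A minimal commented-out sketch, assuming the same `default_credential` as above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# def refresh_openai_token():\n",
    "#     # Sketch: request a fresh AAD token and point the openai package at it.\n",
    "#     token = default_credential.get_token(\"https://cognitiveservices.azure.com/.default\")\n",
    "#     openai.api_key = token.token\n",
    "#     # token.expires_on is a Unix timestamp; call this again before that time.\n",
    "#     return token.expires_on"
   ]
  },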
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deployments\n",
    "In this section we are going to create a deployment using the `gpt-35-turbo` model that we can then use to create chat completions."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deployments: Create manually\n",
    "Let's create a deployment using the `gpt-35-turbo` model. Go to https://portal.azure.com, find your resource and then under \"Resource Management\" -> \"Model deployments\" create a new `gpt-35-turbo` deployment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "deployment_id = \"\" # Fill in the deployment id from the portal here"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Deployments: Create programmatically\n",
    "We can also create the deployment using code. Note that you can only create one deployment per model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = \"gpt-35-turbo\"\n",
    "\n",
    "# Now let's create the deployment\n",
    "print(f'Creating a new deployment with model: {model}')\n",
    "result = openai.Deployment.create(model=model, scale_settings={\"scale_type\":\"standard\"})\n",
    "deployment_id = result[\"id\"]\n",
    "print(f'Successfully created deployment with id: {deployment_id}')"
   ]
  },
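  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also list the deployments that already exist on the resource, for example to check whether a `gpt-35-turbo` deployment is already there. A small sketch (the exact fields returned may vary slightly):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: list the existing deployments on this resource.\n",
    "deployments = openai.Deployment.list()\n",
    "for deployment in deployments[\"data\"]:\n",
    "    print(deployment[\"id\"], deployment[\"model\"], deployment[\"status\"])"
   ]
  },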
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Deployments: Wait for deployment to succeed\n",
    "Now let's check the status of the newly created deployment and wait until it has succeeded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "print('Checking for deployment status.')\n",
    "resp = openai.Deployment.retrieve(id=deployment_id)\n",
    "status = resp[\"status\"]\n",
    "print(f'Deployment {deployment_id} has status: {status}')\n",
    "while status not in [\"succeeded\", \"failed\"]:\n",
    "    # Poll every couple of seconds rather than hammering the API in a tight loop\n",
    "    time.sleep(2)\n",
    "    resp = openai.Deployment.retrieve(id=deployment_id)\n",
    "    status = resp[\"status\"]\n",
    "    print(f'Deployment {deployment_id} has status: {status}')"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create chat completion\n",
    "Now let's send a sample chat completion to the deployment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create\n",
    "response = openai.ChatCompletion.create(\n",
    "    deployment_id=deployment_id,\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
    "        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
    "        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
    "    ],\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "print(f\"{response.choices[0].message.role}: {response.choices[0].message.content}\")"
   ]
  },
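  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides the message itself, the response also reports token usage, which is useful for keeping an eye on costs. A small sketch reading those fields from the response above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: inspect the token usage reported for the request above.\n",
    "usage = response[\"usage\"]\n",
    "print(f\"Prompt tokens: {usage['prompt_tokens']}\")\n",
    "print(f\"Completion tokens: {usage['completion_tokens']}\")\n",
    "print(f\"Total tokens: {usage['total_tokens']}\")"
   ]
  },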
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also stream the response.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "response = openai.ChatCompletion.create(\n",
    "    deployment_id=deployment_id,\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
    "        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
    "        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
    "    ],\n",
    "    temperature=0,\n",
    "    stream=True\n",
    ")\n",
    "\n",
    "for chunk in response:\n",
    "    delta = chunk.choices[0].delta\n",
    "\n",
    "    if \"role\" in delta.keys():\n",
    "        print(delta.role + \": \", end=\"\", flush=True)\n",
    "    if \"content\" in delta.keys():\n",
    "        print(delta.content, end=\"\", flush=True)"
   ]
  },
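  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you also need the complete reply after streaming, one option is to accumulate the content deltas into a single string. A minimal sketch that repeats the request above with `stream=True`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "response = openai.ChatCompletion.create(\n",
    "    deployment_id=deployment_id,\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
    "        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
    "        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
    "        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
    "    ],\n",
    "    temperature=0,\n",
    "    stream=True\n",
    ")\n",
    "\n",
    "# Sketch: collect the streamed content chunks and join them at the end.\n",
    "collected = []\n",
    "for chunk in response:\n",
    "    delta = chunk.choices[0].delta\n",
    "    if \"content\" in delta.keys():\n",
    "        collected.append(delta.content)\n",
    "\n",
    "print(\"\".join(collected))"
   ]
  },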
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Deployments: Delete\n",
    "Finally, let's delete the deployment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f'Deleting deployment: {deployment_id}')\n",
    "openai.Deployment.delete(sid=deployment_id)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
@@ -11,12 +11,11 @@
 },
 {
  "cell_type": "code",
  "execution_count": null,
  "execution_count": 2,
  "metadata": {},
  "outputs": [],
  "source": [
   "import openai\n",
   "from openai import cli"
   "import openai"
  ]
 },
 {
@@ -29,7 +28,7 @@
 },
 {
  "cell_type": "code",
  "execution_count": null,
  "execution_count": 3,
  "metadata": {},
  "outputs": [],
  "source": [
@@ -207,7 +206,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": "3.10.8"
  "version": "3.8.10"
 },
 "vscode": {
  "interpreter": {