mirror of
https://github.com/openai/openai-cookbook
synced 2024-11-09 19:10:56 +00:00
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to stream completions\n",
    "\n",
    "By default, when you send a prompt to the OpenAI Completions endpoint, it computes the entire completion and sends it back in a single response.\n",
    "\n",
    "If you're generating very long completions from a davinci-level model, waiting for the response can take many seconds. As of Aug 2022, responses from `text-davinci-002` typically take roughly 1 second plus about 2 seconds per 100 completion tokens.\n",
    "\n",
    "If you want to get the response faster, you can 'stream' the completion as it's being generated. This lets you start printing or otherwise processing the beginning of the completion before the entire completion is finished.\n",
    "\n",
    "To stream completions, set `stream=True` when calling the Completions endpoint. This returns an object that streams back text as [data-only server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format).\n",
    "\n",
    "## Downsides\n",
    "\n",
    "Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, which has implications for [approved usage](https://beta.openai.com/docs/usage-guidelines).\n",
    "\n",
    "Another small drawback of streaming responses is that the response no longer includes the `usage` field telling you how many tokens were consumed. After receiving and combining all of the responses, you can calculate this yourself using [`tiktoken`](How_to_count_tokens_with_tiktoken.ipynb).\n",
    "\n",
    "## Example code\n",
    "\n",
    "Below is a Python code example of how to receive streaming completions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "import openai  # for OpenAI API calls\n",
    "import time  # for measuring time savings"
   ]
  },
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### A typical completion request\n",
|
|
"\n",
|
|
"With a typical Completions API call, the text is first computed and then returned all at once."
|
|
]
|
|
},
|
|
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Full response received 7.32 seconds after request\n",
      "Full text received: 4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100\n"
     ]
    }
   ],
   "source": [
    "# Example of an OpenAI Completion request\n",
    "# https://beta.openai.com/docs/api-reference/completions/create\n",
    "\n",
    "# record the time before the request is sent\n",
    "start_time = time.time()\n",
    "\n",
    "# send a Completion request to count to 100\n",
    "response = openai.Completion.create(\n",
    "    model='text-davinci-002',\n",
    "    prompt='1,2,3,',\n",
    "    max_tokens=193,\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "# calculate the time it took to receive the response\n",
    "response_time = time.time() - start_time\n",
    "\n",
    "# extract the text from the response\n",
    "completion_text = response['choices'][0]['text']\n",
    "\n",
    "# print the time delay and text received\n",
    "print(f\"Full response received {response_time:.2f} seconds after request\")\n",
    "print(f\"Full text received: {completion_text}\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### A streaming completion request\n",
    "\n",
    "With a streaming Completions API call, the text is sent back via a series of events. In Python, you can iterate over these events with a `for` loop."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Text received: 4 (0.16 seconds after request)\n",
      "Text received: , (0.19 seconds after request)\n",
      "Text received: 5 (0.21 seconds after request)\n",
      "Text received: , (0.24 seconds after request)\n",
      "Text received: 6 (0.27 seconds after request)\n",
      "Text received: , (0.29 seconds after request)\n",
      "Text received: 7 (0.32 seconds after request)\n",
      "Text received: , (0.35 seconds after request)\n",
      "Text received: 8 (0.37 seconds after request)\n",
      "Text received: , (0.40 seconds after request)\n",
      "Text received: 9 (0.43 seconds after request)\n",
      "Text received: , (0.46 seconds after request)\n",
      "Text received: 10 (0.48 seconds after request)\n",
      "Text received: , (0.51 seconds after request)\n",
      "Text received: 11 (0.54 seconds after request)\n",
      "Text received: , (0.56 seconds after request)\n",
      "Text received: 12 (0.59 seconds after request)\n",
      "Text received: , (0.62 seconds after request)\n",
      "Text received: 13 (0.64 seconds after request)\n",
      "Text received: , (0.67 seconds after request)\n",
      "Text received: 14 (0.70 seconds after request)\n",
      "Text received: , (0.72 seconds after request)\n",
      "Text received: 15 (0.75 seconds after request)\n",
      "Text received: , (0.78 seconds after request)\n",
      "Text received: 16 (0.84 seconds after request)\n",
      "Text received: , (0.84 seconds after request)\n",
      "Text received: 17 (0.86 seconds after request)\n",
      "Text received: , (0.89 seconds after request)\n",
      "Text received: 18 (0.91 seconds after request)\n",
      "Text received: , (0.94 seconds after request)\n",
      "Text received: 19 (1.41 seconds after request)\n",
      "Text received: , (1.41 seconds after request)\n",
      "Text received: 20 (1.41 seconds after request)\n",
      "Text received: , (1.41 seconds after request)\n",
      "Text received: 21 (1.41 seconds after request)\n",
      "Text received: , (1.41 seconds after request)\n",
      "Text received: 22 (1.41 seconds after request)\n",
      "Text received: , (1.41 seconds after request)\n",
      "Text received: 23 (1.41 seconds after request)\n",
      "Text received: , (1.41 seconds after request)\n",
      "Text received: 24 (1.46 seconds after request)\n",
      "Text received: , (1.46 seconds after request)\n",
      "Text received: 25 (1.46 seconds after request)\n",
      "Text received: , (1.55 seconds after request)\n",
      "Text received: 26 (1.61 seconds after request)\n",
      "Text received: , (1.65 seconds after request)\n",
      "Text received: 27 (1.66 seconds after request)\n",
      "Text received: , (1.70 seconds after request)\n",
      "Text received: 28 (1.72 seconds after request)\n",
      "Text received: , (1.75 seconds after request)\n",
      "Text received: 29 (1.78 seconds after request)\n",
      "Text received: , (2.05 seconds after request)\n",
      "Text received: 30 (2.08 seconds after request)\n",
      "Text received: , (2.13 seconds after request)\n",
      "Text received: 31 (2.16 seconds after request)\n",
      "Text received: , (2.20 seconds after request)\n",
      "Text received: 32 (2.26 seconds after request)\n",
      "Text received: , (2.28 seconds after request)\n",
      "Text received: 33 (2.31 seconds after request)\n",
      "Text received: , (2.35 seconds after request)\n",
      "Text received: 34 (2.38 seconds after request)\n",
      "Text received: , (2.54 seconds after request)\n",
      "Text received: 35 (2.55 seconds after request)\n",
      "Text received: , (2.59 seconds after request)\n",
      "Text received: 36 (2.61 seconds after request)\n",
      "Text received: , (2.64 seconds after request)\n",
      "Text received: 37 (2.67 seconds after request)\n",
      "Text received: , (2.71 seconds after request)\n",
      "Text received: 38 (2.86 seconds after request)\n",
      "Text received: , (2.89 seconds after request)\n",
      "Text received: 39 (2.92 seconds after request)\n",
      "Text received: , (2.95 seconds after request)\n",
      "Text received: 40 (2.99 seconds after request)\n",
      "Text received: , (3.01 seconds after request)\n",
      "Text received: 41 (3.04 seconds after request)\n",
      "Text received: , (3.08 seconds after request)\n",
      "Text received: 42 (3.15 seconds after request)\n",
      "Text received: , (3.33 seconds after request)\n",
      "Text received: 43 (3.36 seconds after request)\n",
      "Text received: , (3.43 seconds after request)\n",
      "Text received: 44 (3.47 seconds after request)\n",
      "Text received: , (3.50 seconds after request)\n",
      "Text received: 45 (3.53 seconds after request)\n",
      "Text received: , (3.56 seconds after request)\n",
      "Text received: 46 (3.59 seconds after request)\n",
      "Text received: , (3.63 seconds after request)\n",
      "Text received: 47 (3.65 seconds after request)\n",
      "Text received: , (3.68 seconds after request)\n",
      "Text received: 48 (3.71 seconds after request)\n",
      "Text received: , (3.77 seconds after request)\n",
      "Text received: 49 (3.77 seconds after request)\n",
      "Text received: , (3.79 seconds after request)\n",
      "Text received: 50 (3.82 seconds after request)\n",
      "Text received: , (3.85 seconds after request)\n",
      "Text received: 51 (3.89 seconds after request)\n",
      "Text received: , (3.91 seconds after request)\n",
      "Text received: 52 (3.93 seconds after request)\n",
      "Text received: , (3.96 seconds after request)\n",
      "Text received: 53 (3.98 seconds after request)\n",
      "Text received: , (4.04 seconds after request)\n",
      "Text received: 54 (4.05 seconds after request)\n",
      "Text received: , (4.07 seconds after request)\n",
      "Text received: 55 (4.10 seconds after request)\n",
      "Text received: , (4.13 seconds after request)\n",
      "Text received: 56 (4.19 seconds after request)\n",
      "Text received: , (4.20 seconds after request)\n",
      "Text received: 57 (4.20 seconds after request)\n",
      "Text received: , (4.23 seconds after request)\n",
      "Text received: 58 (4.26 seconds after request)\n",
      "Text received: , (4.30 seconds after request)\n",
      "Text received: 59 (4.31 seconds after request)\n",
      "Text received: , (4.59 seconds after request)\n",
      "Text received: 60 (4.61 seconds after request)\n",
      "Text received: , (4.64 seconds after request)\n",
      "Text received: 61 (4.67 seconds after request)\n",
      "Text received: , (4.72 seconds after request)\n",
      "Text received: 62 (4.73 seconds after request)\n",
      "Text received: , (4.76 seconds after request)\n",
      "Text received: 63 (4.80 seconds after request)\n",
      "Text received: , (4.83 seconds after request)\n",
      "Text received: 64 (4.86 seconds after request)\n",
      "Text received: , (4.89 seconds after request)\n",
      "Text received: 65 (4.92 seconds after request)\n",
      "Text received: , (4.94 seconds after request)\n",
      "Text received: 66 (4.97 seconds after request)\n",
      "Text received: , (5.00 seconds after request)\n",
      "Text received: 67 (5.03 seconds after request)\n",
      "Text received: , (5.06 seconds after request)\n",
      "Text received: 68 (5.09 seconds after request)\n",
      "Text received: , (5.14 seconds after request)\n",
      "Text received: 69 (5.16 seconds after request)\n",
      "Text received: , (5.19 seconds after request)\n",
      "Text received: 70 (5.22 seconds after request)\n",
      "Text received: , (5.28 seconds after request)\n",
      "Text received: 71 (5.30 seconds after request)\n",
      "Text received: , (5.33 seconds after request)\n",
      "Text received: 72 (5.36 seconds after request)\n",
      "Text received: , (5.38 seconds after request)\n",
      "Text received: 73 (5.41 seconds after request)\n",
      "Text received: , (5.44 seconds after request)\n",
      "Text received: 74 (5.48 seconds after request)\n",
      "Text received: , (5.51 seconds after request)\n",
      "Text received: 75 (5.53 seconds after request)\n",
      "Text received: , (5.56 seconds after request)\n",
      "Text received: 76 (5.60 seconds after request)\n",
      "Text received: , (5.62 seconds after request)\n",
      "Text received: 77 (5.65 seconds after request)\n",
      "Text received: , (5.68 seconds after request)\n",
      "Text received: 78 (5.71 seconds after request)\n",
      "Text received: , (5.77 seconds after request)\n",
      "Text received: 79 (5.77 seconds after request)\n",
      "Text received: , (5.79 seconds after request)\n",
      "Text received: 80 (5.82 seconds after request)\n",
      "Text received: , (5.85 seconds after request)\n",
      "Text received: 81 (5.88 seconds after request)\n",
      "Text received: , (5.92 seconds after request)\n",
      "Text received: 82 (5.93 seconds after request)\n",
      "Text received: , (5.97 seconds after request)\n",
      "Text received: 83 (5.98 seconds after request)\n",
      "Text received: , (6.01 seconds after request)\n",
      "Text received: 84 (6.04 seconds after request)\n",
      "Text received: , (6.07 seconds after request)\n",
      "Text received: 85 (6.09 seconds after request)\n",
      "Text received: , (6.11 seconds after request)\n",
      "Text received: 86 (6.14 seconds after request)\n",
      "Text received: , (6.17 seconds after request)\n",
      "Text received: 87 (6.19 seconds after request)\n",
      "Text received: , (6.22 seconds after request)\n",
      "Text received: 88 (6.24 seconds after request)\n",
      "Text received: , (6.27 seconds after request)\n",
      "Text received: 89 (6.30 seconds after request)\n",
      "Text received: , (6.31 seconds after request)\n",
      "Text received: 90 (6.35 seconds after request)\n",
      "Text received: , (6.36 seconds after request)\n",
      "Text received: 91 (6.40 seconds after request)\n",
      "Text received: , (6.44 seconds after request)\n",
      "Text received: 92 (6.46 seconds after request)\n",
      "Text received: , (6.49 seconds after request)\n",
      "Text received: 93 (6.51 seconds after request)\n",
      "Text received: , (6.54 seconds after request)\n",
      "Text received: 94 (6.56 seconds after request)\n",
      "Text received: , (6.59 seconds after request)\n",
      "Text received: 95 (6.62 seconds after request)\n",
      "Text received: , (6.64 seconds after request)\n",
      "Text received: 96 (6.68 seconds after request)\n",
      "Text received: , (6.68 seconds after request)\n",
      "Text received: 97 (6.70 seconds after request)\n",
      "Text received: , (6.73 seconds after request)\n",
      "Text received: 98 (6.75 seconds after request)\n",
      "Text received: , (6.78 seconds after request)\n",
      "Text received: 99 (6.90 seconds after request)\n",
      "Text received: , (6.92 seconds after request)\n",
      "Text received: 100 (7.25 seconds after request)\n",
      "Full response received 7.25 seconds after request\n",
      "Full text received: 4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100\n"
     ]
    }
   ],
   "source": [
    "# Example of an OpenAI Completion request, using the stream=True option\n",
    "# https://beta.openai.com/docs/api-reference/completions/create\n",
    "\n",
    "# record the time before the request is sent\n",
    "start_time = time.time()\n",
    "\n",
    "# send a Completion request to count to 100\n",
    "response = openai.Completion.create(\n",
    "    model='text-davinci-002',\n",
    "    prompt='1,2,3,',\n",
    "    max_tokens=193,\n",
    "    temperature=0,\n",
    "    stream=True,  # this time, we set stream=True\n",
    ")\n",
    "\n",
    "# create variables to collect the stream of events\n",
    "collected_events = []\n",
    "completion_text = ''\n",
    "# iterate through the stream of events\n",
    "for event in response:\n",
    "    event_time = time.time() - start_time  # calculate the time delay of the event\n",
    "    collected_events.append(event)  # save the event response\n",
    "    event_text = event['choices'][0]['text']  # extract the text\n",
    "    completion_text += event_text  # append the text\n",
    "    print(f\"Text received: {event_text} ({event_time:.2f} seconds after request)\")  # print the delay and text\n",
    "\n",
    "# print the time delay and text received\n",
    "print(f\"Full response received {event_time:.2f} seconds after request\")\n",
    "print(f\"Full text received: {completion_text}\")"
   ]
  },
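  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Counting tokens in a streamed completion\n",
    "\n",
    "As noted above, streamed responses omit the `usage` field, but you can recount tokens yourself. The cell below is a minimal sketch, not part of the original example: it assumes [`tiktoken`](How_to_count_tokens_with_tiktoken.ipynb) is installed, reuses `completion_text` from the streaming cell above, and assumes the `p50k_base` encoding matches `text-davinci-002`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: recount tokens after streaming, since streamed responses\n",
    "# do not include a `usage` field.\n",
    "import tiktoken  # assumes tiktoken is installed\n",
    "\n",
    "encoding = tiktoken.get_encoding(\"p50k_base\")  # assumed encoding for text-davinci-002\n",
    "num_tokens = len(encoding.encode(completion_text))  # completion_text from the cell above\n",
    "print(f\"Completion used {num_tokens} tokens\")"
   ]
  },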
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Time comparison\n",
    "\n",
    "In the example above, both requests took about 7 seconds to fully complete.\n",
    "\n",
    "However, with the streaming request, you would have received the first token after 0.16 seconds, and subsequent tokens roughly every 0.035 seconds thereafter."
   ]
  }
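  ,
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-token timing claimed above can be checked with a short snippet like the one below (a self-contained sketch; the `arrival_times` values are illustrative, copied from the first few lines of the streamed output)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: average inter-token delay from a list of arrival times (in seconds).\n",
    "# These values are illustrative, taken from the streamed output above.\n",
    "arrival_times = [0.16, 0.19, 0.21, 0.24, 0.27]\n",
    "gaps = [later - earlier for earlier, later in zip(arrival_times, arrival_times[1:])]\n",
    "avg_gap = sum(gaps) / len(gaps)\n",
    "print(f\"average inter-token delay: {avg_gap:.3f} seconds\")"
   ]
  }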
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.9 ('openai')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.9 (main, Dec 7 2021, 18:04:56) \n[Clang 13.0.0 (clang-1300.0.29.3)]"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "365536dcbde60510dc9073d6b991cd35db2d9bac356a11f5b64279a5e6708b97"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}