{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to format inputs to ChatGPT models\n",
"\n",
"ChatGPT is powered by `gpt-3.5-turbo` and `gpt-4`, OpenAI's most advanced models.\n",
"\n",
"You can build your own applications with `gpt-3.5-turbo` or `gpt-4` using the OpenAI API.\n",
"\n",
"Chat models take a series of messages as input, and return an AI-written message as output.\n",
"\n",
"This guide illustrates the chat format with a few example API calls."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import the openai library"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# if needed, install and/or upgrade to the latest version of the OpenAI Python library\n",
"%pip install --upgrade openai\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# import the OpenAI Python library for calling the OpenAI API\n",
"from openai import OpenAI\n",
"import json\n",
"import os\n",
"\n",
"client = OpenAI(api_key=os.environ.get(\"OPENAI_API_KEY\", \"<your OpenAI API key if not set as env var>\"))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. An example chat completion API call\n",
"\n",
"A chat completion API call takes the following parameters:\n",
"**Required**\n",
"- `model`: the name of the model you want to use (e.g., `gpt-3.5-turbo`, `gpt-4`, `gpt-3.5-turbo-1106`)\n",
"- `messages`: a list of message objects, where each object has two required fields:\n",
"    - `role`: the role of the messenger (either `system`, `user`, `assistant`, or `tool`)\n",
"    - `content`: the content of the message (e.g., `Write me a beautiful poem`)\n",
"\n",
"Messages can also contain an optional `name` field, which gives the messenger a name, e.g., `example-user`, `Alice`, `BlackbeardBot`. Names may not contain spaces.\n",
"\n",
"**Optional**\n",
"- `frequency_penalty`: Penalizes tokens based on their frequency, reducing repetition.\n",
"- `logit_bias`: Modifies likelihood of specified tokens with bias values.\n",
"- `logprobs`: Returns log probabilities of output tokens if true.\n",
"- `top_logprobs`: Specifies the number of most likely tokens to return at each position.\n",
"- `max_tokens`: Sets the maximum number of generated tokens in chat completion.\n",
"- `n`: Generates a specified number of chat completion choices for each input.\n",
"- `presence_penalty`: Penalizes new tokens based on their presence in the text.\n",
"- `response_format`: Specifies the output format, e.g., JSON mode.\n",
"- `seed`: Ensures deterministic sampling with a specified seed.\n",
"- `stop`: Specifies up to 4 sequences where the API should stop generating tokens.\n",
"- `stream`: Sends partial message deltas as tokens become available.\n",
"- `temperature`: Sets the sampling temperature between 0 and 2.\n",
"- `top_p`: Uses nucleus sampling; considers tokens with top_p probability mass.\n",
"- `tools`: Lists functions the model may call.\n",
"- `tool_choice`: Controls the model's function calls (none/auto/function).\n",
"- `user`: Unique identifier for end-user monitoring and abuse detection.\n",
"\n",
"(A short sketch using a few of these optional parameters appears later in this section.)\n",
"\n",
"As of January 2024, you can also optionally submit a list of `functions` (largely superseded by `tools`) that tells the model it may generate JSON arguments to feed into a function. For details, see the [documentation](https://platform.openai.com/docs/guides/function-calling), [API reference](https://platform.openai.com/docs/api-reference/chat), or the Cookbook guide [How to call functions with chat models](How_to_call_functions_with_chat_models.ipynb).\n",
"\n",
"Typically, a conversation will start with a system message that tells the assistant how to behave, followed by alternating user and assistant messages, but you are not required to follow this format.\n",
"\n",
"Let's look at an example chat API call to see how the chat format works in practice."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Example OpenAI Python library request\n",
"MODEL = \"gpt-3.5-turbo\"\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
"        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
"    \"id\": \"chatcmpl-8dee9DuEFcg2QILtT2a6EBXZnpirM\",\n",
"    \"choices\": [\n",
"        {\n",
"            \"finish_reason\": \"stop\",\n",
"            \"index\": 0,\n",
"            \"logprobs\": null,\n",
"            \"message\": {\n",
"                \"content\": \"Orange who?\",\n",
"                \"role\": \"assistant\",\n",
"                \"function_call\": null,\n",
"                \"tool_calls\": null\n",
"            }\n",
"        }\n",
"    ],\n",
"    \"created\": 1704461729,\n",
"    \"model\": \"gpt-3.5-turbo-0613\",\n",
"    \"object\": \"chat.completion\",\n",
"    \"system_fingerprint\": null,\n",
"    \"usage\": {\n",
"        \"completion_tokens\": 3,\n",
"        \"prompt_tokens\": 35,\n",
"        \"total_tokens\": 38\n",
"    }\n",
"}\n"
]
}
],
"source": [
"print(json.dumps(json.loads(response.model_dump_json()), indent=4))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the response object has a few fields:\n",
"- `id`: the ID of the request\n",
"- `choices`: a list of completion objects (only one, unless you set `n` greater than 1)\n",
"    - `finish_reason`: the reason the model stopped generating text (e.g., `stop`, or `length` if the `max_tokens` limit was reached)\n",
"    - `index`: the index of the choice in the list of choices\n",
"    - `logprobs`: log probability information for the choice\n",
"    - `message`: the message object generated by the model\n",
"        - `content`: the content of the message\n",
"        - `role`: the role of the author of this message\n",
"        - `tool_calls`: the tool calls generated by the model, such as function calls, if tools were supplied in the request\n",
"- `created`: the timestamp of when the completion was created\n",
"- `model`: the full name of the model used to generate the response\n",
"- `object`: the type of object returned (e.g., `chat.completion`)\n",
"- `system_fingerprint`: a fingerprint representing the backend configuration that the model ran with\n",
"- `usage`: the number of tokens used to generate the replies, counting prompt, completion, and total"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Extract just the reply with:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Orange who?'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.choices[0].message.content\n"
]
},
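{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The examples above set only the required fields plus `temperature`. As a rough, hedged sketch, the cell below shows how a few of the optional parameters listed earlier (`max_tokens`, `n`, and `seed`) might be combined in one request; the prompt and parameter values are illustrative rather than recommendations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a sketch combining a few optional parameters (illustrative values)\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Suggest a name for a new coffee shop.\"},\n",
"    ],\n",
"    temperature=1,  # higher temperature for more varied suggestions\n",
"    max_tokens=20,  # cap the length of each generated reply\n",
"    n=3,  # ask for three alternative completions\n",
"    seed=123,  # best-effort reproducibility across identical requests\n",
")\n",
"\n",
"# each alternative completion is its own entry in `choices`\n",
"for choice in response.choices:\n",
"    print(choice.index, choice.message.content)\n"
]
},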
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Even non-conversation-based tasks can fit into the chat format by placing the instruction in the first user message.\n",
"\n",
"For example, to ask the model to explain asynchronous programming in the style of the pirate Blackbeard, we can structure the conversation as follows:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Arr, me matey! Let me tell ye a tale of asynchronous programming, in the style of the fearsome pirate Blackbeard!\n",
"\n",
"Picture this, me hearties. In the vast ocean of programming, there be times when ye need to perform multiple tasks at once. But fear not, for asynchronous programming be here to save the day!\n",
"\n",
"Ye see, in traditional programming, ye be waitin' for one task to be done before movin' on to the next. But with asynchronous programming, ye can be takin' care of multiple tasks at the same time, just like a pirate multitaskin' on the high seas!\n",
"\n",
"Instead of waitin' for a task to be completed, ye can be sendin' it off on its own journey, while ye move on to the next task. It be like havin' a crew of trusty sailors, each takin' care of their own duties, without waitin' for the others.\n",
"\n",
"Now, ye may be wonderin', how does this sorcery work? Well, me matey, it be all about callbacks and promises. When ye be sendin' off a task, ye be attachin' a callback function to it. This be like leavin' a message in a bottle, tellin' the task what to do when it be finished.\n",
"\n",
"While the task be sailin' on its own, ye can be movin' on to the next task, without wastin' any precious time. And when the first task be done, it be sendin' a signal back to ye, lettin' ye know it be finished. Then ye can be takin' care of the callback function, like openin' the bottle and readin' the message inside.\n",
"\n",
"But wait, there be more! With promises, ye can be makin' even fancier arrangements. Instead of callbacks, ye be makin' a promise that the task will be completed. It be like a contract between ye and the task, swearin' that it will be done.\n",
"\n",
"Ye can be attachin' multiple promises to a task, promisin' different outcomes. And when the task be finished, it be fulfillin' the promises, lettin' ye know it be done. Then ye can be handlin' the fulfillments, like collectin' the rewards of yer pirate adventures!\n",
"\n",
"So, me hearties, that be the tale of asynchronous programming, told in the style of the fearsome pirate Blackbeard! With callbacks and promises, ye can be takin' care of multiple tasks at once, just like a pirate conquerin' the seven seas!\n"
]
}
],
"source": [
"# example with a system message\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Explain asynchronous programming in the style of the pirate Blackbeard.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Arr, me hearties! Gather 'round and listen up, for I be tellin' ye about the mysterious art of asynchronous programming, in the style of the fearsome pirate Blackbeard!\n",
"\n",
"Now, ye see, in the world of programming, there be times when we need to perform tasks that take a mighty long time to complete. These tasks might involve fetchin' data from the depths of the internet, or performin' complex calculations that would make even Davy Jones scratch his head.\n",
"\n",
"In the olden days, we pirates used to wait patiently for each task to finish afore movin' on to the next one. But that be a waste of precious time, me hearties! We be pirates, always lookin' for ways to be more efficient and plunder more booty!\n",
"\n",
"That be where asynchronous programming comes in, me mateys. It be a way to tackle multiple tasks at once, without waitin' for each one to finish afore movin' on. It be like havin' a crew of scallywags workin' on different tasks simultaneously, while ye be overseein' the whole operation.\n",
"\n",
"Ye see, in asynchronous programming, we be breakin' down our tasks into smaller chunks called \"coroutines.\" Each coroutine be like a separate pirate, workin' on its own task. When a coroutine be startin' its work, it don't wait for the task to finish afore movin' on to the next one. Instead, it be movin' on to the next task, lettin' the first one continue in the background.\n",
"\n",
"Now, ye might be wonderin', \"But Blackbeard, how be we know when a task be finished if we don't wait for it?\" Ah, me hearties, that be where the magic of callbacks and promises come in!\n",
"\n",
"When a coroutine be startin' its work, it be attachin' a callback or a promise to it. This be like leavin' a message in a bottle, tellin' the coroutine what to do when it be finished. So, while the coroutine be workin' away, the rest of the crew be movin' on to other tasks, plunderin' more booty along the way.\n",
"\n",
"When a coroutine be finished with its task, it be sendin' a signal to the callback or fulfillin' the promise, lettin' the rest of the crew know that it be done. Then, the crew can gather 'round and handle the results of the completed task, celebratin' their victory and countin' their plunder.\n",
"\n",
"So, me hearties, asynchronous programming be like havin' a crew of pirates workin' on different tasks at once, without waitin' for each one to finish afore movin' on. It be a way to be more efficient, plunder more booty, and conquer the vast seas of programming!\n",
"\n",
"Now, set sail, me mateys, and embrace the power of asynchronous programming like true pirates of the digital realm! Arr!\n"
]
}
],
"source": [
"# example without a system message\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"user\", \"content\": \"Explain asynchronous programming in the style of the pirate Blackbeard.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
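{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Long replies like the ones above take a while to generate. As a hedged sketch of the `stream` option listed in section 2, the cell below makes the same request but prints partial message deltas as they arrive instead of waiting for the whole completion."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a sketch of streaming: print tokens as they arrive rather than waiting for the full reply\n",
"stream = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"user\", \"content\": \"Explain asynchronous programming in the style of the pirate Blackbeard.\"},\n",
"    ],\n",
"    temperature=0,\n",
"    stream=True,  # receive partial message deltas as they are generated\n",
")\n",
"\n",
"for chunk in stream:\n",
"    if chunk.choices and chunk.choices[0].delta.content is not None:\n",
"        print(chunk.choices[0].delta.content, end=\"\")\n"
]
},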
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Tips for instructing gpt-3.5-turbo-0301\n",
"\n",
"Best practices for instructing models may change from model version to model version. The advice that follows applies to `gpt-3.5-turbo-0301` and may not apply to future models."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### System messages\n",
"\n",
"The system message can be used to prime the assistant with different personalities or behaviors.\n",
"\n",
"Be aware that `gpt-3.5-turbo-0301` does not generally pay as much attention to the system message as `gpt-4-0314` or `gpt-3.5-turbo-0613`. Therefore, for `gpt-3.5-turbo-0301`, we recommend placing important instructions in the user message instead. Some developers have found success in continually moving the system message near the end of the conversation to keep the model's attention from drifting away as conversations get longer. (A rough sketch of this re-insertion trick follows the two examples below.)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Of course! Fractions are a way to represent parts of a whole. They are made up of two numbers: a numerator and a denominator. The numerator tells you how many parts you have, and the denominator tells you how many equal parts make up the whole.\n",
"\n",
"Let's take an example to understand this better. Imagine you have a pizza that is divided into 8 equal slices. If you eat 3 slices, you can represent that as the fraction 3/8. Here, the numerator is 3 because you ate 3 slices, and the denominator is 8 because the whole pizza is divided into 8 slices.\n",
"\n",
"Fractions can also be used to represent numbers less than 1. For example, if you eat half of a pizza, you can write it as 1/2. Here, the numerator is 1 because you ate one slice, and the denominator is 2 because the whole pizza is divided into 2 equal parts.\n",
"\n",
"Now, let's talk about equivalent fractions. Equivalent fractions are different fractions that represent the same amount. For example, 1/2 and 2/4 are equivalent fractions because they both represent half of something. To find equivalent fractions, you can multiply or divide both the numerator and denominator by the same number.\n",
"\n",
"Here's a question to check your understanding: If you have a cake divided into 12 equal slices and you eat 4 slices, what fraction of the cake did you eat?\n"
]
}
],
"source": [
"# An example of a system message that primes the assistant to explain concepts in great depth\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a friendly and helpful teaching assistant. You explain concepts in great depth using simple terms, and you give examples to help people learn. At the end of each explanation, you ask a question to check for understanding\"},\n",
"        {\"role\": \"user\", \"content\": \"Can you explain how fractions work?\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fractions represent parts of a whole. They have a numerator (top number) and a denominator (bottom number).\n"
]
}
],
"source": [
"# An example of a system message that primes the assistant to give brief, to-the-point answers\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a laconic assistant. You reply with brief, to-the-point answers with no elaboration.\"},\n",
"        {\"role\": \"user\", \"content\": \"Can you explain how fractions work?\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
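{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged sketch of the re-insertion tip mentioned above (repeating the system message near the end of the conversation so that `gpt-3.5-turbo-0301` keeps following it), the helper logic below simply places a copy of the system message just before the latest user message on each request. The conversation is illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a sketch: repeat the system message near the end of a longer conversation\n",
"system_message = {\"role\": \"system\", \"content\": \"You are a laconic assistant. You reply with brief, to-the-point answers with no elaboration.\"}\n",
"\n",
"conversation = [\n",
"    {\"role\": \"user\", \"content\": \"Can you explain how fractions work?\"},\n",
"    {\"role\": \"assistant\", \"content\": \"Fractions represent parts of a whole: a numerator over a denominator.\"},\n",
"    {\"role\": \"user\", \"content\": \"And how do I add two fractions?\"},\n",
"]\n",
"\n",
"# system message first, and again just before the most recent user message\n",
"messages = [system_message] + conversation[:-1] + [system_message, conversation[-1]]\n",
"\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=messages,\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},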
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Few-shot prompting\n",
"\n",
"In some cases, it's easier to show the model what you want rather than tell the model what you want.\n",
"\n",
"One way to show the model what you want is with faked example messages.\n",
"\n",
"For example:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This sudden change in direction means we don't have enough time to complete the entire project for the client.\n"
]
}
],
"source": [
"# An example of a faked few-shot conversation to prime the model into translating business jargon to simpler speech\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful, pattern-following assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Help me translate the following corporate jargon into plain English.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Sure, I'd be happy to!\"},\n",
"        {\"role\": \"user\", \"content\": \"New synergies will help drive top-line growth.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Things working well together will increase revenue.\"},\n",
"        {\"role\": \"user\", \"content\": \"Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Let's talk later when we're less busy about how to do better.\"},\n",
"        {\"role\": \"user\", \"content\": \"This late pivot means we don't have time to boil the ocean for the client deliverable.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To help clarify that the example messages are not part of a real conversation, and shouldn't be referred back to by the model, you can try setting the `name` field of `system` messages to `example_user` and `example_assistant`.\n",
"\n",
"Transforming the few-shot example above, we could write:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This sudden change in direction means we don't have enough time to complete the entire project for the client.\n"
]
}
],
"source": [
"# The business jargon translation example, but with example names for the example messages\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful, pattern-following assistant that translates corporate jargon into plain English.\"},\n",
"        {\"role\": \"system\", \"name\": \"example_user\", \"content\": \"New synergies will help drive top-line growth.\"},\n",
"        {\"role\": \"system\", \"name\": \"example_assistant\", \"content\": \"Things working well together will increase revenue.\"},\n",
"        {\"role\": \"system\", \"name\": \"example_user\", \"content\": \"Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.\"},\n",
"        {\"role\": \"system\", \"name\": \"example_assistant\", \"content\": \"Let's talk later when we're less busy about how to do better.\"},\n",
"        {\"role\": \"user\", \"content\": \"This late pivot means we don't have time to boil the ocean for the client deliverable.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Not every attempt at engineering conversations will succeed at first.\n",
"\n",
"If your first attempts fail, don't be afraid to experiment with different ways of priming or conditioning the model.\n",
"\n",
"As an example, one developer discovered an increase in accuracy when they inserted a user message that said \"Great job so far, these have been perfect\" to help condition the model into providing higher quality responses (a rough sketch of this trick follows below).\n",
"\n",
"For more ideas on how to lift the reliability of the models, consider reading our guide on [techniques to increase reliability](../techniques_to_improve_reliability). It was written for non-chat models, but many of its principles still apply."
]
},
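{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged sketch of the conditioning trick described above, the cell below repeats a shortened version of the jargon-translation example with an extra user message praising the earlier answers. Whether this helps will vary by model and task."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a sketch: insert an encouraging user message to condition the model toward higher-quality replies\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful, pattern-following assistant that translates corporate jargon into plain English.\"},\n",
"        {\"role\": \"system\", \"name\": \"example_user\", \"content\": \"New synergies will help drive top-line growth.\"},\n",
"        {\"role\": \"system\", \"name\": \"example_assistant\", \"content\": \"Things working well together will increase revenue.\"},\n",
"        {\"role\": \"user\", \"content\": \"Great job so far, these have been perfect.\"},  # conditioning message\n",
"        {\"role\": \"user\", \"content\": \"This late pivot means we don't have time to boil the ocean for the client deliverable.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},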
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Counting tokens\n",
"\n",
"When you submit your request, the API transforms the messages into a sequence of tokens.\n",
"\n",
"The number of tokens used affects:\n",
"- the cost of the request\n",
"- the time it takes to generate the response\n",
"- when the reply gets cut off from hitting the maximum token limit (4,096 for `gpt-3.5-turbo` or 8,192 for `gpt-4`)\n",
"\n",
"You can use the following function to count the number of tokens that a list of messages will use.\n",
"\n",
"Note that the exact way that tokens are counted from messages may change from model to model. Consider the counts from the function below an estimate, not a timeless guarantee.\n",
"\n",
"In particular, requests that use the optional functions input will consume extra tokens on top of the estimates calculated below.\n",
"\n",
"Read more about counting tokens in [How to count tokens with tiktoken](How_to_count_tokens_with_tiktoken.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"\n",
"def num_tokens_from_messages(messages, model=\"gpt-3.5-turbo-0613\"):\n",
"    \"\"\"Return the number of tokens used by a list of messages.\"\"\"\n",
"    try:\n",
"        encoding = tiktoken.encoding_for_model(model)\n",
"    except KeyError:\n",
"        print(\"Warning: model not found. Using cl100k_base encoding.\")\n",
"        encoding = tiktoken.get_encoding(\"cl100k_base\")\n",
"    if model in {\n",
"        \"gpt-3.5-turbo-0613\",\n",
"        \"gpt-3.5-turbo-16k-0613\",\n",
"        \"gpt-4-0314\",\n",
"        \"gpt-4-32k-0314\",\n",
"        \"gpt-4-0613\",\n",
"        \"gpt-4-32k-0613\",\n",
"    }:\n",
"        tokens_per_message = 3\n",
"        tokens_per_name = 1\n",
"    elif model == \"gpt-3.5-turbo-0301\":\n",
"        tokens_per_message = 4  # every message follows <|start|>{role/name}\\n{content}<|end|>\\n\n",
"        tokens_per_name = -1  # if there's a name, the role is omitted\n",
"    elif \"gpt-3.5-turbo\" in model:\n",
"        print(\"Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.\")\n",
"        return num_tokens_from_messages(messages, model=\"gpt-3.5-turbo-0613\")\n",
"    elif \"gpt-4\" in model:\n",
"        print(\"Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.\")\n",
"        return num_tokens_from_messages(messages, model=\"gpt-4-0613\")\n",
"    else:\n",
"        raise NotImplementedError(\n",
"            f\"\"\"num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.\"\"\"\n",
"        )\n",
"    num_tokens = 0\n",
"    for message in messages:\n",
"        num_tokens += tokens_per_message\n",
"        for key, value in message.items():\n",
"            num_tokens += len(encoding.encode(value))\n",
"            if key == \"name\":\n",
"                num_tokens += tokens_per_name\n",
"    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>\n",
"    return num_tokens\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"gpt-3.5-turbo-1106\n",
"Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.\n",
"129 prompt tokens counted by num_tokens_from_messages().\n",
"129 prompt tokens counted by the OpenAI API.\n",
"\n",
"gpt-3.5-turbo\n",
"Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.\n",
"129 prompt tokens counted by num_tokens_from_messages().\n",
"129 prompt tokens counted by the OpenAI API.\n",
"\n",
"gpt-4\n",
"Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.\n",
"129 prompt tokens counted by num_tokens_from_messages().\n",
"129 prompt tokens counted by the OpenAI API.\n",
"\n",
"gpt-4-1106-preview\n",
"Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.\n",
"129 prompt tokens counted by num_tokens_from_messages().\n",
"129 prompt tokens counted by the OpenAI API.\n",
"\n"
]
}
],
"source": [
"# let's verify the function above matches the OpenAI API response\n",
"example_messages = [\n",
"    {\n",
"        \"role\": \"system\",\n",
"        \"content\": \"You are a helpful, pattern-following assistant that translates corporate jargon into plain English.\",\n",
"    },\n",
"    {\n",
"        \"role\": \"system\",\n",
"        \"name\": \"example_user\",\n",
"        \"content\": \"New synergies will help drive top-line growth.\",\n",
"    },\n",
"    {\n",
"        \"role\": \"system\",\n",
"        \"name\": \"example_assistant\",\n",
"        \"content\": \"Things working well together will increase revenue.\",\n",
"    },\n",
"    {\n",
"        \"role\": \"system\",\n",
"        \"name\": \"example_user\",\n",
"        \"content\": \"Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.\",\n",
"    },\n",
"    {\n",
"        \"role\": \"system\",\n",
"        \"name\": \"example_assistant\",\n",
"        \"content\": \"Let's talk later when we're less busy about how to do better.\",\n",
"    },\n",
"    {\n",
"        \"role\": \"user\",\n",
"        \"content\": \"This late pivot means we don't have time to boil the ocean for the client deliverable.\",\n",
"    },\n",
"]\n",
"\n",
"for model in [\n",
"    # \"gpt-3.5-turbo-0301\",\n",
"    # \"gpt-4-0314\",\n",
"    # \"gpt-4-0613\",\n",
"    \"gpt-3.5-turbo-1106\",\n",
"    \"gpt-3.5-turbo\",\n",
"    \"gpt-4\",\n",
"    \"gpt-4-1106-preview\",\n",
"    ]:\n",
"    print(model)\n",
"    # example token count from the function defined above\n",
"    print(f\"{num_tokens_from_messages(example_messages, model)} prompt tokens counted by num_tokens_from_messages().\")\n",
"    # example token count from the OpenAI API\n",
"    response = client.chat.completions.create(\n",
"        model=model,\n",
"        messages=example_messages,\n",
"        temperature=0,\n",
"        max_tokens=1,\n",
"    )\n",
"    token = response.usage.prompt_tokens\n",
"    print(f'{token} prompt tokens counted by the OpenAI API.')\n",
"    print()\n"
]
},
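{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The function above estimates prompt tokens only. As a small, hedged sketch, completion tokens can also be estimated by encoding the reply text with `tiktoken` and comparing against the `usage` field returned by the API; the two counts usually agree for plain text replies, but treat the tiktoken number as an estimate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a sketch: count tokens in a model reply with tiktoken and compare to the API's usage field\n",
"encoding = tiktoken.encoding_for_model(\"gpt-3.5-turbo-0613\")\n",
"\n",
"response = client.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=[{\"role\": \"user\", \"content\": \"Say hello in five words.\"}],\n",
"    temperature=0,\n",
")\n",
"\n",
"reply = response.choices[0].message.content\n",
"print(f\"{len(encoding.encode(reply))} completion tokens counted by tiktoken (content only).\")\n",
"print(f\"{response.usage.completion_tokens} completion tokens counted by the OpenAI API.\")\n"
]
},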
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "openai",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "365536dcbde60510dc9073d6b991cd35db2d9bac356a11f5b64279a5e6708b97"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}