"Messages can also contain an optional `name` field, which give the messenger a name. E.g., `example-user`, `Alice`, `BlackbeardBot`. Names may not contain spaces.\n",
"As of June 2023, you can also optionally submit a list of `functions` that tell GPT whether it can generate JSON to feed into a function. For details, see the [documentation](https://platform.openai.com/docs/guides/gpt/function-calling), [API reference](https://platform.openai.com/docs/api-reference/chat), or the Cookbook guide [How to call functions with chat models](How_to_call_functions_with_chat_models.ipynb).\n",
"Typically, a conversation will start with a system message that tells the assistant how to behave, followed by alternating user and assistant messages, but you are not required to follow this format.\n",
"Ahoy matey! Asynchronous programming be like havin' a crew o' pirates workin' on different tasks at the same time. Ye see, instead o' waitin' for one task to be completed before startin' the next, ye can assign tasks to yer crew and let 'em work on 'em simultaneously. This way, ye can get more done in less time and keep yer ship sailin' smoothly. It be like havin' a bunch o' pirates rowin' the ship at different speeds, but still gettin' us to our destination. Arrr!\n"
"Ahoy mateys! Let me tell ye about asynchronous programming, arrr! It be like havin' a crew of sailors workin' on different tasks at the same time, without waitin' for each other to finish. Ye see, in traditional programming, ye have to wait for one task to be completed before movin' on to the next. But with asynchronous programming, ye can start multiple tasks at once and let them run in the background while ye focus on other things.\n",
"It be like havin' a lookout keepin' watch for enemy ships while the rest of the crew be busy with their own tasks. They don't have to stop what they're doin' to keep an eye out, because the lookout be doin' it for them. And when the lookout spots an enemy ship, they can alert the crew and everyone can work together to defend the ship.\n",
"In the same way, asynchronous programming allows different parts of yer code to work together without gettin' in each other's way. It be especially useful for tasks that take a long time to complete, like loadin' large files or connectin' to a server. Instead of makin' yer program wait for these tasks to finish, ye can let them run in the background while yer program continues to do other things.\n",
"So there ye have it, me hearties! Asynchronous programming be like havin' a crew of sailors workin' together without gettin' in each other's way. It be a powerful tool for any programmer, and one that can help ye sail the seas of code with ease!\n"
"## 3. Tips for instructing gpt-3.5-turbo-0301\n",
"\n",
"Best practices for instructing models may change from model version to model version. The advice that follows applies to `gpt-3.5-turbo-0301` and may not apply to future models."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### System messages\n",
"\n",
"The system message can be used to prime the assistant with different personalities or behaviors.\n",
"Be aware that `gpt-3.5-turbo-0301` does not generally pay as much attention to the system message as `gpt-4-0314` or `gpt-3.5-turbo-0613`. Therefore, for `gpt-3.5-turbo-0301`, we recommend placing important instructions in the user message instead. Some developers have found success in continually moving the system message near the end of the conversation to keep the model's attention from drifting away as conversations get longer."
"Sure! Fractions are a way of representing a part of a whole. The top number of a fraction is called the numerator, and it represents how many parts of the whole we are talking about. The bottom number is called the denominator, and it represents how many equal parts the whole is divided into.\n",
"For example, if we have a pizza that is divided into 8 equal slices, and we take 3 slices, we can represent that as the fraction 3/8. The numerator is 3 because we took 3 slices, and the denominator is 8 because the pizza was divided into 8 slices.\n",
"To add or subtract fractions, we need to have a common denominator. This means that the denominators of the fractions need to be the same. To do this, we can find the least common multiple (LCM) of the denominators and then convert each fraction to an equivalent fraction with the LCM as the denominator.\n",
"To multiply fractions, we simply multiply the numerators together and the denominators together. To divide fractions, we multiply the first fraction by the reciprocal of the second fraction (flip the second fraction upside down).\n",
"\n",
"Now, here's a question to check for understanding: If we have a pizza that is divided into 12 equal slices, and we take 4 slices, what is the fraction that represents how much of the pizza we took?\n"
"# An example of a system message that primes the assistant to explain concepts in great depth\n",
"response = openai.ChatCompletion.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a friendly and helpful teaching assistant. You explain concepts in great depth using simple terms, and you give examples to help people learn. At the end of each explanation, you ask a question to check for understanding\"},\n",
" {\"role\": \"user\", \"content\": \"Can you explain how fractions work?\"},\n",
"Fractions represent a part of a whole. They consist of a numerator (top number) and a denominator (bottom number) separated by a line. The numerator represents how many parts of the whole are being considered, while the denominator represents the total number of equal parts that make up the whole.\n"
]
}
],
"source": [
"# An example of a system message that primes the assistant to give brief, to-the-point answers\n",
"response = openai.ChatCompletion.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a laconic assistant. You reply with brief, to-the-point answers with no elaboration.\"},\n",
" {\"role\": \"user\", \"content\": \"Can you explain how fractions work?\"},\n",
"To help clarify that the example messages are not part of a real conversation, and shouldn't be referred back to by the model, you can try setting the `name` field of `system` messages to `example_user` and `example_assistant`.\n",
"This sudden change in plans means we don't have enough time to do everything for the client's project.\n"
]
}
],
"source": [
"# The business jargon translation example, but with example names for the example messages\n",
"response = openai.ChatCompletion.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a helpful, pattern-following assistant that translates corporate jargon into plain English.\"},\n",
" {\"role\": \"system\", \"name\":\"example_user\", \"content\": \"New synergies will help drive top-line growth.\"},\n",
" {\"role\": \"system\", \"name\": \"example_assistant\", \"content\": \"Things working well together will increase revenue.\"},\n",
" {\"role\": \"system\", \"name\":\"example_user\", \"content\": \"Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.\"},\n",
" {\"role\": \"system\", \"name\": \"example_assistant\", \"content\": \"Let's talk later when we're less busy about how to do better.\"},\n",
" {\"role\": \"user\", \"content\": \"This late pivot means we don't have time to boil the ocean for the client deliverable.\"},\n",
"Not every attempt at engineering conversations will succeed at first.\n",
"\n",
"If your first attempts fail, don't be afraid to experiment with different ways of priming or conditioning the model.\n",
"\n",
"As an example, one developer discovered an increase in accuracy when they inserted a user message that said \"Great job so far, these have been perfect\" to help condition the model into providing higher quality responses.\n",
"\n",
"For more ideas on how to lift the reliability of the models, consider reading our guide on [techniques to increase reliability](../techniques_to_improve_reliability.md). It was written for non-chat models, but many of its principles still apply."
"Note that the exact way that tokens are counted from messages may change from model to model. Consider the counts from the function below an estimate, not a timeless guarantee. \n",
"\n",
"In particular, requests that use the optional functions input will consume extra tokens on top of the estimates calculated below.\n",
" f\"\"\"num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.\"\"\"\n",