mirror of
https://github.com/openai/openai-cookbook
synced 2024-11-04 06:00:33 +00:00
updates rate limit notebook with links to parallel processing script
This commit is contained in:
parent a52ecd28ee
commit 5c757fe90d
@@ -1,6 +1,7 @@
 {
 "cells": [
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -8,24 +9,29 @@
 "\n",
 "When you call the OpenAI API repeatedly, you may encounter error messages that say `429: 'Too Many Requests'` or `RateLimitError`. These error messages come from exceeding the API's rate limits.\n",
 "\n",
 "This guide shares tips for avoiding and handling rate limit errors.\n",
 "\n",
+"To see an example script for throttling parallel requests to avoid rate limit errors, see [api_request_parallel_processor.py](api_request_parallel_processor.py).\n",
+"\n",
 "## Why rate limits exist\n",
 "\n",
 "Rate limits are a common practice for APIs, and they're put in place for a few different reasons.\n",
 "\n",
 "- First, they help protect against abuse or misuse of the API. For example, a malicious actor could flood the API with requests in an attempt to overload it or cause disruptions in service. By setting rate limits, OpenAI can prevent this kind of activity.\n",
 "- Second, rate limits help ensure that everyone has fair access to the API. If one person or organization makes an excessive number of requests, it could bog down the API for everyone else. By throttling the number of requests that a single user can make, OpenAI ensures that everyone has an opportunity to use the API without experiencing slowdowns.\n",
 "- Lastly, rate limits can help OpenAI manage the aggregate load on its infrastructure. If requests to the API increase dramatically, it could tax the servers and cause performance issues. By setting rate limits, OpenAI can help maintain a smooth and consistent experience for all users.\n",
 "\n",
-"Although hitting rate limits can be frustrating, rate limits exist to protect the reliable operation of the API for its users.\n",
-"\n",
-"In this guide, we'll share some tips for avoiding and handling rate limit errors."
+"Although hitting rate limits can be frustrating, rate limits exist to protect the reliable operation of the API for its users."
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Default rate limits\n",
 "\n",
-"As of Sep 2022, the default rate limits are:\n",
+"As of Jan 2023, the default rate limits are:\n",
 "\n",
 "<table>\n",
 "<thead>\n",
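The notebook's opening cell names the two failure modes (`429: 'Too Many Requests'` and `RateLimitError`) and promises tips for handling them. The standard handling tip, retrying with exponential backoff, can be sketched in plain Python; the exception class and helper below are hypothetical stand-ins, not the OpenAI library's actual API:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for the API's 429 'Too Many Requests' error."""

def call_with_backoff(make_request, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a request on rate-limit errors, waiting exponentially longer each time."""
    delay = base_delay
    for _ in range(max_retries):
        try:
            return make_request()
        except RateLimitError:
            time.sleep(delay * (1 + random.random()))  # random jitter de-synchronizes clients
            delay *= 2  # exponential backoff
    return make_request()  # final attempt; the error propagates if still rate limited
```

The jitter matters: if many clients back off on the same schedule, their retries arrive in synchronized bursts and hit the limit together again.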
@@ -56,7 +62,7 @@
 " <td>\n",
 " <ul>\n",
 " <li>60 requests / minute</li>\n",
-" <li>250,000 davinci tokens / minute (and proportionally more for smaller models)</li>\n",
+" <li>250,000 davinci tokens / minute (and proportionally more for cheaper models)</li>\n",
 " </ul>\n",
 " </td>\n",
 " <td>\n",
@@ -71,7 +77,7 @@
 " <td>\n",
 " <ul>\n",
 " <li>3,000 requests / minute</li>\n",
-" <li>250,000 davinci tokens / minute (and proportionally more for smaller models)</li>\n",
+" <li>250,000 davinci tokens / minute (and proportionally more for cheaper models)</li>\n",
 " </ul>\n",
 " </td>\n",
 " <td>\n",
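The limits quoted in the table rows above imply a trade-off worth making explicit: using the full request budget shrinks the average token budget per request. A back-of-the-envelope calculation (the numbers are the ones from the table, not anything beyond it):

```python
# Arithmetic from the quoted limits: 3,000 requests / minute and
# 250,000 davinci tokens / minute. Spending the full request budget
# leaves this average token budget per request:
requests_per_minute = 3_000
davinci_tokens_per_minute = 250_000

avg_tokens_per_request = davinci_tokens_per_minute / requests_per_minute
print(round(avg_tokens_per_request, 1))  # ~83.3 davinci tokens per request
```

In other words, at maximum request throughput only very short completions fit under the token limit; longer prompts mean fewer requests per minute.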
@@ -88,16 +94,17 @@
 "\n",
 "### Other rate limit resources\n",
 "\n",
-"Read more about OpenAI's rate limits in the [OpenAI Help Center](https://help.openai.com/en/):\n",
+"Read more about OpenAI's rate limits in these other resources:\n",
 "\n",
-"- [Is API usage subject to any rate limits?](https://help.openai.com/en/articles/5955598-is-api-usage-subject-to-any-rate-limits)\n",
-"- [How can I solve 429: 'Too Many Requests' errors?](https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors)\n",
+"- [Guide: Rate limits](https://beta.openai.com/docs/guides/rate-limits/overview)\n",
+"- [Help Center: Is API usage subject to any rate limits?](https://help.openai.com/en/articles/5955598-is-api-usage-subject-to-any-rate-limits)\n",
+"- [Help Center: How can I solve 429: 'Too Many Requests' errors?](https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors)\n",
 "\n",
 "### Requesting a rate limit increase\n",
 "\n",
 "If you'd like your organization's rate limit increased, please fill out the following form:\n",
 "\n",
 "- [OpenAI Rate Limit Increase Request form](https://forms.gle/56ZrwXXoxAN1yt6i9)\n"
 ]
 },
 {
@@ -379,6 +386,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -392,7 +400,7 @@
 "\n",
 "If you are constantly hitting the rate limit, then backing off, then hitting the rate limit again, then backing off again, it's possible that a good fraction of your request budget will be 'wasted' on requests that need to be retried. This limits your processing throughput, given a fixed rate limit.\n",
 "\n",
-"Here, one potential solution is to calculate your rate limit and add a delay equal to its reciprocal (e.g., if your rate limit 20 requests per minute, add a delay of 3 seconds to each request). This can help you operate near the rate limit ceiling without hitting it and incurring wasted requests.\n",
+"Here, one potential solution is to calculate your rate limit and add a delay equal to its reciprocal (e.g., if your rate limit 20 requests per minute, add a delay of 3–6 seconds to each request). This can help you operate near the rate limit ceiling without hitting it and incurring wasted requests.\n",
 "\n",
 "#### Example of adding delay to a request"
 ]
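The reciprocal-delay idea edited in the hunk above is simple arithmetic: at 20 requests per minute, each request gets at least 60 / 20 = 3 seconds. A minimal sketch of the pattern (the `make_request` callable is a placeholder, not a real API client):

```python
import time

# Reciprocal of the rate limit: 20 requests / minute -> 3.0 s between requests.
rate_limit_per_minute = 20
delay_seconds = 60.0 / rate_limit_per_minute

def throttled(make_request, delay: float = delay_seconds):
    """Sleep before each request so sustained traffic stays at or under the limit."""
    time.sleep(delay)
    return make_request()
```

Padding the delay a little above the exact reciprocal (as the updated text's "3–6 seconds" suggests) leaves headroom for jitter in request timing.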
@@ -570,6 +578,25 @@
 "for story in stories:\n",
 "    print(story)\n"
 ]
 },
+{
+"attachments": {},
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Example parallel processing script\n",
+"\n",
+"We've written an example script for parallel processing large quantities of API requests: [api_request_parallel_processor.py](api_request_parallel_processor.py).\n",
+"\n",
+"The script combines some handy features:\n",
+"- Streams requests from file, to avoid running out of memory for giant jobs\n",
+"- Makes requests concurrently, to maximize throughput\n",
+"- Throttles both request and token usage, to stay under rate limits\n",
+"- Retries failed requests, to avoid missing data\n",
+"- Logs errors, to diagnose problems with requests\n",
+"\n",
+"Feel free to use it as is or modify it to suit your needs."
+]
+}
 ],
 "metadata": {
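The core pattern in the added cell's feature list, making requests concurrently while throttling how fast they start, can be sketched with asyncio. The names and the per-second cap below are illustrative assumptions, not the actual interface of api_request_parallel_processor.py:

```python
import asyncio

async def run_throttled(request_fns, max_per_second: float = 10.0):
    """Run request coroutines concurrently, staggering starts to cap the request rate.

    request_fns: callables that each return a coroutine (hypothetical stand-ins
    for real API calls). Results are returned in submission order.
    """
    interval = 1.0 / max_per_second

    async def worker(fn, start_delay: float):
        await asyncio.sleep(start_delay)  # delay only the start, not the whole batch
        return await fn()

    tasks = [worker(fn, i * interval) for i, fn in enumerate(request_fns)]
    return await asyncio.gather(*tasks)  # gather preserves submission order
```

Unlike the fixed per-request sleep shown earlier, this keeps many requests in flight at once, so slow responses don't serialize the whole job.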
@@ -588,7 +615,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.9"
+"version": "3.9.9 (main, Dec 7 2021, 18:04:56) \n[Clang 13.0.0 (clang-1300.0.29.3)]"
 },
 "orig_nbformat": 4,
 "vscode": {