Merge pull request #85 from openai/ted/various-updates-and-additions

Ted/various updates and additions
Ted Sanders 1 year ago committed by GitHub
commit 57024c70cf

@ -1,653 +1,70 @@
# OpenAI Cookbook
The OpenAI Cookbook shares example code for accomplishing common tasks with the [OpenAI API].
To run these examples, you'll need an OpenAI account and associated API key ([create a free account][API Signup]).
Most code examples are written in Python, though the concepts can be applied in any language.
In the same way that a cookbook's recipes don't span all possible meals or techniques, these examples don't span all possible use cases or methods. Use them as starting points upon which to elaborate, discover, and invent.
## Guides & examples
* API usage
* [How to handle rate limits](examples/How_to_handle_rate_limits.ipynb)
* [Example parallel processing script that avoids hitting rate limits](examples/api_request_parallel_processor.py)
* [How to count tokens with tiktoken](examples/How_to_count_tokens_with_tiktoken.ipynb)
* [How to stream completions](examples/How_to_stream_completions.ipynb)
* GPT-3
* [Guide: How to work with large language models](how_to_work_with_large_language_models.md)
* [Guide: Techniques to improve reliability](techniques_to_improve_reliability.md)
* [How to use a multi-step prompt to write unit tests](examples/Unit_test_writing_using_a_multi-step_prompt.ipynb)
* [Text writing examples](text_writing_examples.md)
* [Text explanation examples](text_explanation_examples.md)
* [Text editing examples](text_editing_examples.md)
* [Code writing examples](code_writing_examples.md)
* [Code explanation examples](code_explanation_examples.md)
* [Code editing examples](code_editing_examples.md)
* Embeddings
* [Text comparison examples](text_comparison_examples.md)
* [How to get embeddings](examples/Get_embeddings.ipynb)
* [Question answering using embeddings](examples/Question_answering_using_embeddings.ipynb)
* [Semantic search using embeddings](examples/Semantic_text_search_using_embeddings.ipynb)
* [Recommendations using embeddings](examples/Recommendation_using_embeddings.ipynb)
* [Clustering embeddings](examples/Clustering.ipynb)
* [Visualizing embeddings in 2D](examples/Visualizing_embeddings_in_2D.ipynb) or [3D](examples/Visualizing_embeddings_in_3D.ipynb)
* [Embedding long texts](examples/Embedding_long_inputs.ipynb)
* Fine-tuning GPT-3
* [Guide: best practices for fine-tuning GPT-3 to classify text](https://docs.google.com/document/d/1rqj7dkuvl7Byd5KQPUJRxc19BJt8wo0yHNwK84KfU3Q/edit)
* [Fine-tuned classification](examples/Fine-tuned_classification.ipynb)
* DALL-E
* [How to generate and edit images with DALL-E](examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb)
* Azure OpenAI (alternative API from Microsoft Azure)
* [How to get completions from Azure OpenAI](examples/azure/completions.ipynb)
* [How to get embeddings from Azure OpenAI](examples/azure/embeddings.ipynb)
* [How to fine-tune GPT-3 with Azure OpenAI](examples/azure/finetuning.ipynb)
## Related resources
Beyond the code examples here, you can learn about the [OpenAI API] from the following resources:
* Try out the API in the [OpenAI Playground]
* Read about the API in the [OpenAI Documentation]
* Discuss the API in the [OpenAI Community Forum]
* Look for help in the [OpenAI Help Center]
* See example prompts in the [OpenAI Examples]
* Play with a free research preview of [ChatGPT]
* Stay up to date with the [OpenAI Blog]
## Examples, organized by capability
<table id="verticalalign">
<thead>
<tr>
<th></th>
<th>Text</th>
<th>Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>Write</td>
<td>
<li><a href='#1-write-text'>Copywriting</a></li>
<li><a href='#1-write-text'>Blog posts</a></li>
<li><a href='#1-write-text'>Product descriptions</a></li>
<li><a href='#1-write-text'>Question generation</a></li>
</td>
<td>
<li><a href='#1-write-code'>Code completion (e.g., GitHub Copilot)</a></li>
<li><a href='#1-write-code'>Natural language software interfaces</a></li>
<li><a href='#1-write-code'>Text to code</a></li>
<li><a href='#1-write-code'>Unit tests</a></li>
</td>
</tr>
<tr>
<td>Explain</td>
<td>
<li><a href='#answering-questions-about-a-piece-of-text'>Q&A about a doc</a></li>
<li><a href='#entity-extraction'>Entity extraction</a></li>
<li><a href='#summarization'>Summarization</a></li>
<li><a href='#classification'>Classification</a></li>
</td>
<td>
<li><a href='#2-explain-code'>Code documentation</a></li>
<li><a href='#2-explain-code'>Code explanation</a></li>
<li><a href='#2-explain-code'>Docstrings</a></li>
</td>
</tr>
<tr>
<td>Edit</td>
<td>
<li><a href='#3-edit-text'>Editing</a></li>
<li><a href='#translation'>Translation</a></li>
</td>
<td>
<li><a href='#3-edit-code'>Conversion between languages or styles</a></li>
<li><a href='#3-edit-code'>Bug fixing</a></li>
</td>
</tr>
<tr>
<td>Compare</td>
<td>
<li><a href='#semantic-search'>Semantic search</a></li>
<li><a href='#recommendations'>Recommendations</a></li>
<li><a href='#4-compare-text'>Clustering</a></li>
<li><a href='#4-compare-text'>Near-duplicate detection</a></li>
</td>
<td>
<li><a href='#4-compare-code'>Code search</a></li>
<li><a href='#4-compare-code'>Code clustering</a></li>
</td>
</tr>
</tbody>
</table>
## How large language models work
[Large language models][Large language models Blog Post] are functions that map text to text. Given an input string of text, a large language model tries to predict the text that will come next.
The magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. For example, they learn concepts like:
* how to spell
* how grammar works
* how to paraphrase
* how to answer questions
* how to hold a conversation
* how to write in many languages
* how to code
* etc.
None of these capabilities are explicitly programmed in - they all emerge as a result of training.
GPT-3's capabilities now power [hundreds of different software products][GPT3 Apps Blog Post], including productivity apps, education apps, games, and more.
## How to control a large language model
Of all the inputs to a large language model, by far the most influential is the text prompt.
Large language models can be prompted to produce output in a few ways:
* **Instruction**: Tell the model what you want
* **Completion**: Induce the model to complete the beginning of what you want
* **Demonstration**: Show the model what you want, with either:
* A few examples in the prompt
* Many hundreds or thousands of examples in a fine-tuning training dataset
An example of each is shown below.
### Instruction prompts
Instruction-following models (e.g., `text-davinci-003` or any model beginning with `text-`) are specially designed to follow instructions. Write your instruction at the top of the prompt (or at the bottom, or both), and the model will do its best to follow the instruction and then stop. Instructions can be detailed, so don't be afraid to write a paragraph explicitly detailing the output you want.
Example instruction prompt:
```text
Extract the name of the author from the quotation below.
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
```
Output:
```text
Ted Chiang
```
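As a rough sketch, here is how that instruction prompt could be sent with the `openai` Python library (the pre-v1 `Completion` interface; the API key placeholder is illustrative):
```python
import openai  # the pre-v1 openai Python package

openai.api_key = "sk-..."  # placeholder; use your own API key

prompt = """Extract the name of the author from the quotation below.

“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation"""

response = openai.Completion.create(
    model="text-davinci-003",  # an instruction-following model
    prompt=prompt,
    max_tokens=10,
    temperature=0,  # deterministic output suits extraction tasks
)
print(response["choices"][0]["text"].strip())  # expected: Ted Chiang
```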
### Completion prompt example
Completion-style prompts take advantage of how large language models try to write text they think is most likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.
Example completion prompt:
```text
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
The author of this quote is
```
Output:
```text
Ted Chiang
```
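Because completion-style output does not stop on its own, requests typically include a `stop` sequence. A minimal sketch, reusing the prompt above:
```python
import openai

prompt = """“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation

The author of this quote is"""

response = openai.Completion.create(
    model="davinci",  # a base model works for completion-style prompts
    prompt=prompt,
    max_tokens=5,
    temperature=0,
    stop=["\n"],  # cut off anything the model would generate past the answer
)
print(response["choices"][0]["text"].strip())
```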
### Demonstration prompt example (few-shot learning)
Similar to completion-style prompts, demonstrations can show the model what you want it to do. This approach is sometimes called few-shot learning, as the model learns from a few examples provided in the prompt.
Example demonstration prompt:
```text
Quote:
“When the reasoning mind is forced to confront the impossible again and again, it has no choice but to adapt.”
― N.K. Jemisin, The Fifth Season
Author: N.K. Jemisin
Quote:
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
Author:
```
Output:
```text
Ted Chiang
```
### Fine-tuned prompt example
With enough training examples, you can [fine-tune][Fine Tuning Docs] a custom model. In this case, instructions become unnecessary, as the model can learn the task from the training data provided. However, it can be helpful to include separator sequences (e.g., `->` or `###` or any string that doesn't commonly appear in your inputs) to tell the model when the prompt has ended and the output should begin. Without separator sequences, there is a risk that the model continues elaborating on the input text rather than starting on the answer you want to see.
Example fine-tuned prompt (for a model that has been custom trained on similar prompt-completion pairs):
```text
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
###
```
Output:
```text
Ted Chiang
```
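As a sketch of what the training data behind such a model might look like (the quote, filename, and pairs here are illustrative), each prompt ends with the separator and each completion begins with a space:
```python
import json

# illustrative prompt-completion pairs; "\n###\n" marks where each prompt ends
training_examples = [
    {
        "prompt": "“We are all in the gutter, but some of us are looking at the stars.”\n― Oscar Wilde, Lady Windermere's Fan\n###\n",
        "completion": " Oscar Wilde",  # a leading space helps tokenization
    },
    # ...hundreds or thousands more examples...
]

with open("quote_author_training_data.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```
The resulting jsonl file could then be uploaded for fine-tuning per the [fine-tuning docs][Fine Tuning Docs].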
### More prompt advice
For more prompt examples, visit [OpenAI Examples][OpenAI Examples].
In general, the input prompt is the best lever for improving model outputs. You can try tricks like:
* **Give more explicit instructions.** E.g., if you want the output to be a comma-separated list, ask it to return a comma-separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.'
* **Supply better examples.** If you're demonstrating examples in your prompt, make sure that your examples are diverse and high quality.
* **Ask the model to answer as if it were an expert.** Explicitly asking the model to produce high-quality output, or output as if it were written by an expert, can induce the model to give the higher-quality answers it thinks an expert would write. E.g., "The following answer is correct, high-quality, and written by an expert."
* **Prompt the model to write down the series of steps explaining its reasoning.** E.g., prepend your answer with something like "[Let's think step by step](https://arxiv.org/pdf/2205.11916v1.pdf)." Prompting the model to give an explanation of its reasoning before its final answer can increase the likelihood that its final answer is consistent and correct.
## Text Capabilities
### 1. Write text
Large language models are excellent at writing text. They can assist with:
* Blog posts
* Email copy
* Ad copy
* Website copy
* Product descriptions
* Memos
* Storytelling
* Brainstorming
* Question generation
* etc.
An example prompt for an instruction-following model:
```text
Write an email to a colleague named Jill congratulating her on her promotion. The tone should be warm yet professional. Mention how you admire the work she's been putting in. Include a joke about how her pet lizard Max enjoys eating grasshoppers. Mention how you're looking forward to the team off-site next week.
```
Output:
```text
Dear Jill,
Congratulations on your promotion! I've been admiring the great work you've been putting in and I'm really happy for your success. Max the lizard must be thrilled too - I bet he's looking forward to feasting on even more grasshoppers!
I'm really looking forward to next week's team off-site. It's going to be great to catch up with everyone and brainstorm some new ideas.
Best,
[Your Name]
```
In general, writing can work with any style of prompt. Experiment to see what works best for your use case.
| | Advantages | Disadvantages |
| ---------------------------------------------------------- | ----------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| Instruction-following models<br>(e.g., `text-davinci-003`) | Easiest to use | Less creative; less diverse; harder to control tone, length, etc. |
| Base models<br>(e.g., `davinci`)                           | More creative                                                                 | More expensive (as including example demonstrations in the prompt costs tokens)  |
| Fine-tuned models | Can train off of many examples; cheaper than including examples in the prompt | Hard to gather training data; training makes iteration slower and more expensive |
### 2. Explain text
One capability of large language models is distilling information from a piece of text. This can include:
* Answering questions about a piece of text, e.g.:
* Querying a knowledge base to help people look up things they don't know
* Querying an unfamiliar document to understand what it contains
* Querying a document with structured questions in order to extract tags, classes, entities, etc.
* Summarizing text, e.g.:
* Summarizing long documents
* Summarizing back-and-forth emails or message threads
* Summarizing detailed meeting notes with key points and next steps
* Classifying text, e.g.:
* Classifying customer feedback messages by topic or type
* Classifying documents by topic or type
* Classifying the tone or sentiment of text
* Extracting entities, e.g.:
* Extracting contact information from a customer message
* Extracting names of people or companies or products from a document
* Extracting things mentioned in customer reviews or feedback
#### Answering questions about a piece of text
Example prompt for answering questions about a piece of text:
```text
Using the following text, answer the following question. If the answer is not contained within the text, say "I don't know."
Text:
"""
Oklo Mine (sometimes Oklo Reactor or Oklo Mines), located in Oklo, Gabon on the west coast of Central Africa, is believed to be the only natural nuclear fission reactor. Oklo consists of 16 sites at which self-sustaining nuclear fission reactions are thought to have taken place approximately 1.7 billion years ago, and ran for hundreds of thousands of years. It is estimated to have averaged under 100 kW of thermal power during that time.
"""
Question: How many natural fission reactors have ever been discovered?
Answer:
```
Output:
```text
One
```
If the text you wish to ask about is longer than the token limit (~4,000 tokens for `text-davinci-003` and ~2,000 tokens for earlier models), we recommend splitting the text into smaller pieces, ranking those pieces by relevance, and then asking your question using only the most-relevant-looking pieces.
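A minimal sketch of that split-rank-ask recipe, using embedding similarity to rank the pieces (chunking is assumed to have happened already; see [Embedding long texts](examples/Embedding_long_inputs.ipynb)):
```python
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

def rank_by_relevance(chunks: list, question: str) -> list:
    """Rank text chunks by cosine similarity to the question's embedding."""
    q = embed(question)
    def score(chunk: str) -> float:
        c = embed(chunk)
        return float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)))
    return sorted(chunks, key=score, reverse=True)

chunks = ["...first piece of a long document...", "...second piece..."]  # illustrative
question = "How many natural fission reactors have ever been discovered?"
best_chunk = rank_by_relevance(chunks, question)[0]
# then insert `best_chunk` into a question-answering prompt like the one above
```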
#### Summarization
An example prompt for summarization:
```text
Summarize the following text.
Text:
"""
Two independent experiments reported their results this morning at CERN, Europe's high-energy physics laboratory near Geneva in Switzerland. Both show convincing evidence of a new boson particle weighing around 125 gigaelectronvolts, which so far fits predictions of the Higgs previously made by theoretical physicists.
"As a layman I would say: 'I think we have it'. Would you agree?" Rolf-Dieter Heuer, CERN's director-general, asked the packed auditorium. The physicists assembled there burst into applause.
"""
Summary:
```
Output:
```text
CERN has announced the discovery of a new particle, the Higgs boson. This particle has been predicted by theoretical physicists and is a major step forward in our understanding of the universe.
```
#### Classification
The best approach for classifying text depends on whether the classes are known in advance or not.
If your classes are known in advance, classification is best done with a fine-tuned model, as demonstrated in [Fine-tuned_classification.ipynb](examples/Fine-tuned_classification.ipynb).
If your classes are not known in advance (e.g., they are set by a user or generated on the fly), you can try zero-shot classification, either by giving an instruction that contains the classes or by using embeddings to see which class label (or which other classified texts) is most similar to the text ([Zero-shot_classification_with_embeddings.ipynb](examples/Zero-shot_classification_with_embeddings.ipynb)).
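A minimal sketch of the embeddings approach (the label phrasings are illustrative; the notebook above gives a fuller treatment):
```python
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    return np.array(openai.Embedding.create(model="text-embedding-ada-002", input=text)["data"][0]["embedding"])

labels = ["positive", "negative"]  # classes supplied at runtime, not known in advance
text = "The battery life is outstanding and setup took two minutes."

text_embedding = embed(text)
label_embeddings = [embed(f"This is a {label} review.") for label in labels]
scores = [
    np.dot(text_embedding, e) / (np.linalg.norm(text_embedding) * np.linalg.norm(e))
    for e in label_embeddings
]
print(labels[int(np.argmax(scores))])  # expected: positive
```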
#### Entity extraction
An example prompt for entity extraction:
```text
From the text below, extract the following entities in the following format:
Companies: <comma-separated list of companies mentioned>
People & titles: <comma-separated list of people mentioned (with their titles or roles appended in parentheses)>
Text:
"""
In March 1981, United States v. AT&T came to trial under Assistant Attorney General William Baxter. AT&T chairman Charles L. Brown thought the company would be gutted. He realized that AT&T would lose and, in December 1981, resumed negotiations with the Justice Department. Reaching an agreement less than a month later, Brown agreed to divestiture—the best and only realistic alternative. AT&T's decision allowed it to retain its research and manufacturing arms. The decree, titled the Modification of Final Judgment, was an adjustment of the Consent Decree of 14 January 1956. Judge Harold H. Greene was given the authority over the modified decree....
In 1982, the U.S. government announced that AT&T would cease to exist as a monopolistic entity. On 1 January 1984, it was split into seven smaller regional companies, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, and Pacific Telesis, to handle regional phone services in the U.S. AT&T retains control of its long distance services, but was no longer protected from competition.
"""
```
Output:
```text
Companies: United States v. AT&T, AT&T, Justice Department, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, Pacific Telesis
People & titles: William Baxter (Assistant Attorney General), Charles L. Brown (AT&T chairman), Harold H. Greene (Judge)
```
### 3. Edit text
In addition to the [completion API endpoint][Completion API Docs], OpenAI now offers an [edit API endpoint][Edit API Docs] ([blog post][GPT3 Edit Blog Post]). In contrast to completions, which only take a single text input, edits take two text inputs: the instruction and the text to be modified.
An example edit prompt:
Instruction input:
```text
Fix the OCR errors
```
Text input:
```text
Therewassomehostilityntheenergybehindthe researchreportedinPerceptrons....Part of ourdrivecame,aswequiteplainlyacknoweldgednourbook,fromhe facthatfundingndresearchnergywerebeingdissipatedon. . .misleadingttemptsouseconnectionistmethodsnpracticalappli-cations.
```
Output:
```text
There was some hostility in the energy behind the research reported in Perceptrons....Part of our drive came, as we quite plainly acknowledged in our book, from the fact that funding and research energy were being dissipated on...misleading attempts to use connectionist methods in practical applications.
```
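In code, the OCR example above maps onto an edits call like this (pre-v1 `openai` Python interface; the input is truncated here for brevity):
```python
import openai

response = openai.Edit.create(
    model="text-davinci-edit-001",
    instruction="Fix the OCR errors",
    input="Therewassomehostilityntheenergybehindthe researchreportedinPerceptrons....",
)
print(response["choices"][0]["text"])
```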
#### Translation
Translation is another emergent capability of large language models. In 2021, [GPT-3 was used](https://arxiv.org/abs/2110.05448) to set a new state-of-the-art record in unsupervised translation on the WMT14 English-French benchmark.
Example translation prompt using the edits endpoint:
Instruction input:
```text
translation into French
```
Text input:
```text
That's life.
```
Output:
```text
C'est la vie.
```
Example translation prompt using the completions endpoint:
```text
Translate the following text from English to French.
English: That's life.
French:
```
Output:
```text
C'est la vie.
```
Tips for translation:
* Performance is best in the most common languages
* We've seen better performance when the instruction is given in the final language (so if translating into French, give the instruction `Traduire le texte de l'anglais au français.` rather than `Translate the following text from English to French.`)
* Backtranslation (as described [here](https://arxiv.org/abs/2110.05448)) can also increase performance
* Text with colons and heavy punctuation can trip up the instruction-following models, especially if the instruction is using colons (e.g., `English: {english text} French:`)
* The edits endpoint has been seen to sometimes repeat the text input alongside the translation
When it comes to translation, large language models particularly shine at combining other instructions alongside translation. For example, you can ask GPT-3 to translate Slovenian to English but keep all LaTeX typesetting commands unchanged. The following notebook details how we translated a Slovenian math book into English:
[Translation of a Slovenian math book into English](examples/book_translation/translate_latex_book.ipynb)
### 4. Compare text
The [OpenAI API embeddings endpoint][Embeddings Docs] can be used to measure similarity between pieces of text ([blog post][Embeddings Blog Post]). By leveraging GPT-3's understanding of text, these embeddings [achieved state-of-the-art results](https://arxiv.org/abs/2201.10005) on benchmarks in both unsupervised learning and transfer learning settings.
Embeddings can be used for semantic search, recommendations, cluster analysis, near-duplicate detection, and more.
#### Semantic search
Embeddings can be used for search either by themselves or as a feature in a larger system.
The simplest way to use embeddings for search is as follows:
* Before the search (precompute):
* Split your text corpus into chunks smaller than the token limit (e.g., <8,000 tokens)
* Embed each chunk
* Store those embeddings in your own database or in a vector search provider like [Pinecone](https://www.pinecone.io) or [Weaviate](https://weaviate.io)
* At the time of the search (live compute):
* Embed the search query
* Find the closest embeddings in your database
* Return the top results, ranked by cosine similarity
An example of how to use embeddings for search is shown in [Semantic_text_search_using_embeddings.ipynb](examples/Semantic_text_search_using_embeddings.ipynb).
In more advanced search systems, the cosine similarity of embeddings can be used as one feature among many in ranking search results.
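A minimal in-memory sketch of the precompute/live-compute split described above (a production system would store the precomputed embeddings in a database or vector store):
```python
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    return np.array(openai.Embedding.create(model="text-embedding-ada-002", input=text)["data"][0]["embedding"])

# precompute: embed each corpus chunk once and keep the results
corpus = ["First chunk of document A...", "Second chunk of document A...", "A chunk of document B..."]
corpus_embeddings = [embed(chunk) for chunk in corpus]

# live compute: embed the query and rank chunks by cosine similarity
q = embed("example search query")
scores = [float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c))) for c in corpus_embeddings]
top_results = [corpus[i] for i in np.argsort(scores)[::-1][:3]]
```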
#### Recommendations
Recommendations are quite similar to search, except that instead of a free-form text query, the inputs are items in a set.
An example of how to use embeddings for recommendations is shown in [Recommendation_using_embeddings.ipynb](examples/Recommendation_using_embeddings.ipynb).
Similar to search, these cosine similarity scores can either be used on their own to rank items or as features in larger ranking algorithms.
#### Customizing Embeddings
Although OpenAI's embedding model weights cannot be fine-tuned, you can still use training data to customize embeddings to your application.
In the following notebook, we provide an example method for customizing your embeddings using training data. The idea is to train a custom matrix by which to multiply embedding vectors in order to get new, customized embeddings. With good training data, this custom matrix will highlight the features relevant to your training labels and suppress the rest. Equivalently, you can view the matrix multiplication as (a) a modification of the embeddings or (b) a modification of the distance function used to measure distances between embeddings.
* [Customizing_embeddings.ipynb](examples/Customizing_embeddings.ipynb)
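The heart of the method fits in a few lines; the matrix below is an identity placeholder standing in for one actually trained on labeled pairs, as in the notebook:
```python
import numpy as np

EMBEDDING_DIM = 1536  # dimension of text-embedding-ada-002 embeddings

# placeholder: in practice, train this matrix so embeddings of same-label pairs
# move closer together and embeddings of different-label pairs move apart
custom_matrix = np.eye(EMBEDDING_DIM)

def customize(embedding: np.ndarray) -> np.ndarray:
    """Map a raw embedding into the customized space."""
    return custom_matrix @ embedding

# measuring cosine distance between customized embeddings is equivalent to
# measuring a customized distance between the raw embeddings
```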
## Code Capabilities
Large language models aren't only great at text - they can be great at code too. OpenAI's specialized code model is called [Codex].
Codex powers [more than 70 products][Codex Apps Blog Post], including:
* [GitHub Copilot] (autocompletes code in VS Code and other IDEs)
* [Pygma](https://pygma.app/) (turns Figma designs into code)
* [Replit](https://replit.com/) (has an 'Explain code' button and other features)
* [Warp](https://www.warp.dev/) (a smart terminal with AI command search)
* [Machinet](https://machinet.net/) (writes Java unit test templates)
Note that unlike instruction-following text models (e.g., `text-davinci-003`), Codex is *not* trained to follow instructions. As a result, designing good prompts can take more care.
### 1. Write code
An example prompt for writing code with `code-davinci-002`:
````text
SQL tables (and columns):
* Customers(customer_id, signup_date)
* Streaming(customer_id, video_id, watch_date, watch_minutes)
A well-written SQL query that lists customers who signed up during March 2020 and watched more than 50 hours of video in their first 30 days:
```
````
Output:
```sql
SELECT c.customer_id
FROM Customers c
JOIN Streaming s
ON c.customer_id = s.customer_id
WHERE c.signup_date BETWEEN '2020-03-01' AND '2020-03-31'
AND s.watch_date BETWEEN c.signup_date AND DATE_ADD(c.signup_date, INTERVAL 30 DAY)
GROUP BY c.customer_id
HAVING SUM(s.watch_minutes) > 50 * 60
```
`code-davinci-002` is able to make inferences from variable names; for example, it infers that `watch_minutes` has units of minutes and therefore needs to be converted by a factor of 60 before being compared with 50 hours.
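Because `code-davinci-002` is not instruction-tuned (see the note above), the request typically supplies a stop sequence matching the closing code fence opened in the prompt; a sketch:
````python
import openai

prompt = '''SQL tables (and columns):
* Customers(customer_id, signup_date)
* Streaming(customer_id, video_id, watch_date, watch_minutes)

A well-written SQL query that lists customers who signed up during March 2020 and watched more than 50 hours of video in their first 30 days:
```'''

response = openai.Completion.create(
    model="code-davinci-002",
    prompt=prompt,
    max_tokens=200,
    temperature=0,
    stop=["```"],  # stop when the model closes the code fence
)
print(response["choices"][0]["text"])
````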
### 2. Explain code
Code explanation can be applied to many use cases:
* Generating in-code documentation (e.g., Python docstrings, git commit messages)
* Generating out-of-code documentation (e.g., man pages)
* In an interactive code exploration tool
* Communicating program results back to users via a natural language interface
An example prompt for explaining code with `code-davinci-002`:
````text
A SQL query:
```
SELECT c.customer_id
FROM Customers c
JOIN Streaming s
ON c.customer_id = s.customer_id
WHERE c.signup_date BETWEEN '2020-03-01' AND '2020-03-31'
AND s.watch_date BETWEEN c.signup_date AND DATE_ADD(c.signup_date, INTERVAL 30 DAY)
GROUP BY c.customer_id
HAVING SUM(s.watch_minutes) > 50 * 60
```
Questions:
1. What does the SQL query do?
2. Why might someone be interested in this time period?
3. Why might a company be interested in this SQL query?
Answers:
````
Output:
```text
1. The SQL query finds all customers who signed up in March 2020 and watched more than 50 hours of content in the first 30 days after signing up.
2. The time period is interesting because it is the first month of the COVID-19 pandemic.
3. A company might be interested in this SQL query because it can help them understand how the pandemic has affected their business.
```
### 3. Edit code
OpenAI's edit endpoint is particularly useful for editing code.
Example text input to `code-davinci-edit-001`:
```python
def tribonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
elif n == 2:
return 1
elif n == 3:
return 2
else:
return tribonacci(n-1) + tribonacci(n-2) + tribonacci(n-3)
```
Example instruction inputs:
```text
Add a docstring
```
```text
Add typing
```
```text
Improve the runtime
```
```text
Add a test
```
```text
Translate to JavaScript (or Rust or Lisp or any language you like)
```
Example output after improving the runtime and translating to JavaScript:
```JavaScript
function tribonacci(n) {
let a = 0;
let b = 1;
let c = 1;
for (let i = 0; i < n; i++) {
[a, b, c] = [b, c, a + b + c];
}
return a;
}
```
As you can see, `code-davinci-edit-001` was able to successfully reduce the function's runtime from exponential down to linear, as well as convert from Python to JavaScript.
### 4. Compare code
The OpenAI API also features code search embeddings, which can measure the relevance of a section of code to a text query, or the similarity between two sections of code.
OpenAI code search embeddings significantly improved the state-of-the-art on the [CodeSearchNet] evaluation suite, scoring 93.5% versus the previous record of 77.4%.
Read more about OpenAI's code embeddings in the [blog post announcement][Embeddings Blog Post] or [documentation][Embeddings Docs].
Code embeddings can be useful for use cases such as:
* Code search
* Codebase clustering & analysis
An example of code search is shown in [Code_search.ipynb](examples/Code_search.ipynb).
We haven't written an example of code clustering, but the idea is the same as the text clustering in [Clustering.ipynb](examples/Clustering.ipynb).
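As a rough sketch of what code clustering could look like (the snippets and cluster count here are illustrative), the same recipe applies:
```python
import numpy as np
import openai
from sklearn.cluster import KMeans

code_snippets = [
    "def add(a, b): return a + b",
    "def multiply(a, b): return a * b",
    "SELECT * FROM users WHERE id = 1",
]

embeddings = np.array([
    openai.Embedding.create(model="text-embedding-ada-002", input=snippet)["data"][0]["embedding"]
    for snippet in code_snippets
])

kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)
print(kmeans.labels_)  # cluster assignment for each snippet
```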
## Contributing
If there are examples or guides you'd like to see, feel free to suggest them on the [issues page].
[ChatGPT]: https://chat.openai.com/
[OpenAI API]: https://openai.com/api/
[Embeddings Docs]: https://beta.openai.com/docs/guides/embeddings
[Edit API Docs]: https://beta.openai.com/docs/api-reference/edits
[Completion API Docs]: https://beta.openai.com/docs/api-reference/completions
[Fine Tuning Docs]: https://beta.openai.com/docs/guides/fine-tuning
[CodeSearchNet]: https://github.com/github/CodeSearchNet
[Embeddings Blog Post]: https://openai.com/blog/introducing-text-and-code-embeddings/
[Codex Apps Blog Post]: https://openai.com/blog/codex-apps/
[GPT3 Edit Blog Post]: https://openai.com/blog/gpt-3-edit-insert/
[Large language models Blog Post]: https://openai.com/blog/better-language-models/
[GitHub Copilot]: https://copilot.github.com/
[Codex]: https://openai.com/blog/openai-codex/
[API Signup]: https://beta.openai.com/signup
[GPT3 Apps Blog Post]: https://openai.com/blog/gpt-3-apps/
[OpenAI Playground]: https://beta.openai.com/playground
[OpenAI Documentation]: https://beta.openai.com/docs/introduction
[OpenAI Community Forum]: https://community.openai.com/top?period=monthly
[OpenAI Help Center]: https://help.openai.com/en/
[OpenAI Examples]: https://beta.openai.com/examples
[OpenAI Blog]: https://openai.com/blog/
[issues page]: https://github.com/openai/openai-cookbook/issues

@ -0,0 +1,63 @@
# Code editing example
OpenAI's [edits](https://openai.com/blog/gpt-3-edit-insert/) endpoint is particularly useful for editing code.
Unlike completions, edits takes two inputs: the text to edit and an instruction.
For example, if you wanted to edit a Python function, you could supply the text of the function and an instruction like "add a docstring".
Example text input to `code-davinci-edit-001`:
```python
def tribonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
elif n == 2:
return 1
elif n == 3:
return 2
else:
return tribonacci(n-1) + tribonacci(n-2) + tribonacci(n-3)
```
Example instruction inputs:
```text
add a docstring
```
```text
Add typing, using Python 3.9 conventions
```
```text
improve the runtime
```
```text
Add a test.
```
```text
Translate to JavaScript (or Rust or Lisp or any language you like)
```
Example output after improving the runtime and translating to JavaScript:
```JavaScript
function tribonacci(n) {
let a = 0;
let b = 1;
let c = 1;
for (let i = 0; i < n; i++) {
[a, b, c] = [b, c, a + b + c];
}
return a;
}
```
As you can see, `code-davinci-edit-001` was able to successfully reduce the function's runtime from exponential down to linear, as well as convert from Python to JavaScript.
Experiment with code editing using `code-davinci-edit-001` in the [OpenAI Playground](https://beta.openai.com/playground?mode=edit&model=code-davinci-edit-001).

@ -0,0 +1,41 @@
# Code explanation examples
GPT's understanding of code can be applied to many use cases, e.g.:
* Generating in-code documentation (e.g., Python docstrings, git commit messages)
* Generating out-of-code documentation (e.g., man pages)
* An interactive code exploration tool
* Communicating program results back to users via a natural language interface
For example, if you wanted to understand a SQL query, you could give `code-davinci-002` the following example prompt:
````text
A SQL query:
```
SELECT c.customer_id
FROM Customers c
JOIN Streaming s
ON c.customer_id = s.customer_id
WHERE c.signup_date BETWEEN '2020-03-01' AND '2020-03-31'
AND s.watch_date BETWEEN c.signup_date AND DATE_ADD(c.signup_date, INTERVAL 30 DAY)
GROUP BY c.customer_id
HAVING SUM(s.watch_minutes) > 50 * 60
```
Questions:
1. What does the SQL query do?
2. Why might someone be interested in this time period?
3. Why might a company be interested in this SQL query?
Answers:
````
[Output](https://beta.openai.com/playground/p/Sv1VQKbJV1TZKmiTK9r6nlj3):
```text
1. The SQL query finds all customers who signed up in March 2020 and watched more than 50 hours of content in the first 30 days after signing up.
2. The time period is interesting because it is the first month of the COVID-19 pandemic.
3. A company might be interested in this SQL query because it can help them understand how the pandemic has affected their business.
```
Note that `code-davinci-002` is not trained to follow instructions and therefore usually needs examples or other structure to help steer its output, as well as stop sequences to stop generating. For easier prompting, try `text-davinci-003`.

@ -0,0 +1,31 @@
# Code writing examples
GPT-3 is able to write code as well as text.
Here's an example of `code-davinci-002` writing a SQL query:
````text
SQL tables (and columns):
* Customers(customer_id, signup_date)
* Streaming(customer_id, video_id, watch_date, watch_minutes)
A well-written SQL query that lists customers who signed up during March 2020 and watched more than 50 hours of video in their first 30 days:
```
````
[Output](https://beta.openai.com/playground/p/r2mw99cANoa0TJHok725CeaC):
```sql
SELECT c.customer_id
FROM Customers c
JOIN Streaming s
ON c.customer_id = s.customer_id
WHERE c.signup_date BETWEEN '2020-03-01' AND '2020-03-31'
AND s.watch_date BETWEEN c.signup_date AND DATE_ADD(c.signup_date, INTERVAL 30 DAY)
GROUP BY c.customer_id
HAVING SUM(s.watch_minutes) > 50 * 60
```
Helpfully, `code-davinci-002` is able to make inferences from variable names; for example, it infers that `watch_minutes` has units of minutes and therefore needs to be converted by a factor of 60 before being compared with 50 hours.
For easier prompting, you can also try `text-davinci-003`.

@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -8,24 +9,29 @@
"\n",
"When you call the OpenAI API repeatedly, you may encounter error messages that say `429: 'Too Many Requests'` or `RateLimitError`. These error messages come from exceeding the API's rate limits.\n",
"\n",
"This guide shares tips for avoiding and handling rate limit errors.\n",
"\n",
"To see an example script for throttling parallel requests to avoid rate limit errors, see [api_request_parallel_processor.py](api_request_parallel_processor.py).\n",
"\n",
"## Why rate limits exist\n",
"\n",
"Rate limits are a common practice for APIs, and they're put in place for a few different reasons.\n",
"\n",
"- First, they help protect against abuse or misuse of the API. For example, a malicious actor could flood the API with requests in an attempt to overload it or cause disruptions in service. By setting rate limits, OpenAI can prevent this kind of activity.\n",
"- Second, rate limits help ensure that everyone has fair access to the API. If one person or organization makes an excessive number of requests, it could bog down the API for everyone else. By throttling the number of requests that a single user can make, OpenAI ensures that everyone has an opportunity to use the API without experiencing slowdowns.\n",
"- Lastly, rate limits can help OpenAI manage the aggregate load on its infrastructure. If requests to the API increase dramatically, it could tax the servers and cause performance issues. By setting rate limits, OpenAI can help maintain a smooth and consistent experience for all users.\n",
"\n",
"Although hitting rate limits can be frustrating, rate limits exist to protect the reliable operation of the API for its users.\n",
"\n",
"In this guide, we'll share some tips for avoiding and handling rate limit errors."
"Although hitting rate limits can be frustrating, rate limits exist to protect the reliable operation of the API for its users."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Default rate limits\n",
"\n",
"As of Sep 2022, the default rate limits are:\n",
"As of Jan 2023, the default rate limits are:\n",
"\n",
"<table>\n",
"<thead>\n",
@ -56,7 +62,7 @@
" <td>\n",
" <ul>\n",
" <li>60 requests / minute</li>\n",
" <li>250,000 davinci tokens / minute (and proportionally more for smaller models)</li>\n",
" <li>250,000 davinci tokens / minute (and proportionally more for cheaper models)</li>\n",
" </ul>\n",
" </td>\n",
" <td>\n",
@ -71,7 +77,7 @@
" <td>\n",
" <ul>\n",
" <li>3,000 requests / minute</li>\n",
" <li>250,000 davinci tokens / minute (and proportionally more for smaller models)</li>\n",
" <li>250,000 davinci tokens / minute (and proportionally more for cheaper models)</li>\n",
" </ul>\n",
" </td>\n",
" <td>\n",
@ -88,16 +94,17 @@
"\n",
"### Other rate limit resources\n",
"\n",
"Read more about OpenAI's rate limits in the [OpenAI Help Center](https://help.openai.com/en/):\n",
"Read more about OpenAI's rate limits in these other resources:\n",
"\n",
"- [Is API usage subject to any rate limits?](https://help.openai.com/en/articles/5955598-is-api-usage-subject-to-any-rate-limits)\n",
"- [How can I solve 429: 'Too Many Requests' errors?](https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors)\n",
"- [Guide: Rate limits](https://beta.openai.com/docs/guides/rate-limits/overview)\n",
"- [Help Center: Is API usage subject to any rate limits?](https://help.openai.com/en/articles/5955598-is-api-usage-subject-to-any-rate-limits)\n",
"- [Help Center: How can I solve 429: 'Too Many Requests' errors?](https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors)\n",
"\n",
"### Requesting a rate limit increase\n",
"\n",
"If you'd like your organization's rate limit increased, please fill out the following form:\n",
"\n",
"- [OpenAI Rate Limit Increase Request form](https://forms.gle/56ZrwXXoxAN1yt6i9)\n"
"- [OpenAI Rate Limit Increase Request form](https://forms.gle/56ZrwXXoxAN1yt6i9)\n"
]
},
{
@ -379,6 +386,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -392,7 +400,7 @@
"\n",
"If you are constantly hitting the rate limit, then backing off, then hitting the rate limit again, then backing off again, it's possible that a good fraction of your request budget will be 'wasted' on requests that need to be retried. This limits your processing throughput, given a fixed rate limit.\n",
"\n",
"Here, one potential solution is to calculate your rate limit and add a delay equal to its reciprocal (e.g., if your rate limit 20 requests per minute, add a delay of 3 seconds to each request). This can help you operate near the rate limit ceiling without hitting it and incurring wasted requests.\n",
"Here, one potential solution is to calculate your rate limit and add a delay equal to its reciprocal (e.g., if your rate limit 20 requests per minute, add a delay of 36 seconds to each request). This can help you operate near the rate limit ceiling without hitting it and incurring wasted requests.\n",
"\n",
"#### Example of adding delay to a request"
]
@ -570,6 +578,25 @@
"for story in stories:\n",
" print(story)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example parallel processing script\n",
"\n",
"We've written an example script for parallel processing large quantities of API requests: [api_request_parallel_processor.py](api_request_parallel_processor.py).\n",
"\n",
"The script combines some handy features:\n",
"- Streams requests from file, to avoid running out of memory for giant jobs\n",
"- Makes requests concurrently, to maximize throughput\n",
"- Throttles both request and token usage, to stay under rate limits\n",
"- Retries failed requests, to avoid missing data\n",
"- Logs errors, to diagnose problems with requests\n",
"\n",
"Feel free to use it as is or modify it to suit your needs."
]
}
],
"metadata": {
@ -588,7 +615,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.9"
"version": "3.9.9 (main, Dec 7 2021, 18:04:56) \n[Clang 13.0.0 (clang-1300.0.29.3)]"
},
"orig_nbformat": 4,
"vscode": {

@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -14,8 +15,14 @@
"\n",
"To stream completions, set `stream=True` when calling the Completions endpoint. This will return an object that streams back text as [data-only server-sent events](https://app.mode.com/openai/reports/4fce5ba22b5b/runs/f518a0be4495).\n",
"\n",
"## Downsides\n",
"\n",
"Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, which has implications for [approved usage](https://beta.openai.com/docs/usage-guidelines).\n",
"\n",
"Another small drawback of streaming responses is that the response no longer includes the `usage` field to tell you how many tokens were consumed. After receiving and combining all of the responses, you can calculate this yourself using [`tiktoken`](How_to_count_tokens_with_tiktoken.ipynb).\n",
"\n",
"## Example code\n",
"\n",
"Below is a Python code example of how to receive streaming completions."
]
},
@ -31,10 +38,11 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## A typical completion request\n",
"### A typical completion request\n",
"\n",
"With a typical Completions API call, the text is first computed and then returned all at once."
]
@ -80,10 +88,11 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## A streaming completion request\n",
"### A streaming completion request\n",
"\n",
"With a streaming Completions API call, the text is sent back via a series of events. In Python, you can iterate over these events with a `for` loop."
]
@ -328,10 +337,11 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Time comparison\n",
"### Time comparison\n",
"\n",
"In the example above, both requests took about 7 seconds to fully complete.\n",
"\n",
@ -355,7 +365,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.9"
"version": "3.9.9 (main, Dec 7 2021, 18:04:56) \n[Clang 13.0.0 (clang-1300.0.29.3)]"
},
"orig_nbformat": 4,
"vscode": {

@ -0,0 +1,428 @@
"""
API REQUEST PARALLEL PROCESSOR
Using the OpenAI API to process lots of text quickly takes some care.
If you trickle in a million API requests one by one, they'll take days to complete.
If you flood a million API requests in parallel, they'll exceed the rate limits and fail with errors.
To maximize throughput, parallel requests need to be throttled to stay under rate limits.
This script parallelizes requests to the OpenAI API while throttling to stay under rate limits.
Features:
- Streams requests from file, to avoid running out of memory for giant jobs
- Makes requests concurrently, to maximize throughput
- Throttles request and token usage, to stay under rate limits
- Retries failed requests up to {max_attempts} times, to avoid missing data
- Logs errors, to diagnose problems with requests
Example command to call script:
```
python examples/api_request_parallel_processor.py \
--requests_filepath examples/data/example_requests_to_parallel_process.jsonl \
--save_filepath examples/data/example_requests_to_parallel_process_results.jsonl \
--request_url https://api.openai.com/v1/embeddings \
--max_requests_per_minute 1500 \
--max_tokens_per_minute 6250000 \
--token_encoding_name cl100k_base \
--max_attempts 5 \
--logging_level 20
```
Inputs:
- requests_filepath : str
- path to the file containing the requests to be processed
- file should be a jsonl file, where each line is a json object with API parameters
- e.g., {"model": "text-embedding-ada-002", "input": "embed me"}
- as with all jsonl files, take care that newlines in the content are properly escaped (json.dumps does this automatically)
- an example file is provided at examples/data/example_requests_to_parallel_process.jsonl
- the code to generate the example file is appended to the bottom of this script
- save_filepath : str, optional
- path to the file where the results will be saved
- file will be a jsonl file, where each line is an array with the original request plus the API response
- e.g., [{"model": "text-embedding-ada-002", "input": "embed me"}, {...}]
- if omitted, results will be saved to {requests_filename}_results.jsonl
- request_url : str, optional
- URL of the API endpoint to call
- if omitted, will default to "https://api.openai.com/v1/embeddings"
- api_key : str, optional
- API key to use
- if omitted, the script will attempt to read it from an environment variable {os.getenv("OPENAI_API_KEY")}
- max_requests_per_minute : float, optional
- target number of requests to make per minute (will make less if limited by tokens)
- leave headroom by setting this to 50% or 75% of your limit
- if requests are limiting you, try batching multiple embeddings or completions into one request
- if omitted, will default to 1,500
- max_tokens_per_minute : float, optional
- target number of tokens to use per minute (will use less if limited by requests)
- leave headroom by setting this to 50% or 75% of your limit
- if omitted, will default to 125,000
- token_encoding_name : str, optional
- name of the token encoding used, as defined in the `tiktoken` package
- if omitted, will default to "cl100k_base" (used by `text-embedding-ada-002`)
- max_attempts : int, optional
- number of times to retry a failed request before giving up
- if omitted, will default to 5
- logging_level : int, optional
- level of logging to use; higher numbers will log fewer messages
- 40 = ERROR; will log only when requests fail after all retries
    - 30 = WARNING; will log when requests hit rate limits or other errors
- 20 = INFO; will log when requests start and the status at finish
- 10 = DEBUG; will log various things as the loop runs to see when they occur
- if omitted, will default to 20 (INFO).
The script is structured as follows:
- Imports
- Define main()
- Initialize things
- In main loop:
- Get next request if one is not already waiting for capacity
- Update available token & request capacity
- If enough capacity available, call API
- The loop pauses if a rate limit error is hit
- The loop breaks when no tasks remain
- Define dataclasses
- StatusTracker (stores script metadata counters; only one instance is created)
- APIRequest (stores API inputs, outputs, metadata; one method to call API)
- Define functions
- api_endpoint_from_url (extracts API endpoint from request URL)
- append_to_jsonl (writes to results file)
- num_tokens_consumed_from_request (bigger function to infer token usage from request)
    - task_id_generator_function (yields 0, 1, 2, ...)
- Run main()
"""
# imports
import aiohttp # for making API calls concurrently
import argparse # for running script from command line
import asyncio # for running API calls concurrently
import json # for saving results to a jsonl file
import logging # for logging rate limit warnings and other messages
import os # for reading API key
import tiktoken # for counting tokens
import time # for sleeping after rate limit is hit
from dataclasses import dataclass, field  # for storing API inputs, outputs, and metadata
async def process_api_requests_from_file(
requests_filepath: str,
save_filepath: str,
request_url: str,
api_key: str,
max_requests_per_minute: float,
max_tokens_per_minute: float,
token_encoding_name: str,
max_attempts: int,
logging_level: int,
):
"""Processes API requests in parallel, throttling to stay under rate limits."""
# constants
seconds_to_pause_after_rate_limit_error = 15
seconds_to_sleep_each_loop = 0.001 # 1 ms limits max throughput to 1,000 requests per second
# initialize logging
logging.basicConfig(level=logging_level)
logging.debug(f"Logging initialized at level {logging_level}")
# infer API endpoint and construct request header
api_endpoint = api_endpoint_from_url(request_url)
request_header = {"Authorization": f"Bearer {api_key}"}
# initialize trackers
queue_of_requests_to_retry = asyncio.Queue()
    task_id_generator = task_id_generator_function()  # generates integer IDs of 0, 1, 2, ...
status_tracker = StatusTracker() # single instance to track a collection of variables
next_request = None # variable to hold the next request to call
# initialize available capacity counts
available_request_capacity = max_requests_per_minute
available_token_capacity = max_tokens_per_minute
last_update_time = time.time()
    # initialize flags
file_not_finished = True # after file is empty, we'll skip reading it
logging.debug(f"Initialization complete.")
# initialize file reading
with open(requests_filepath) as file:
# `requests` will provide requests one at a time
requests = file.__iter__()
logging.debug(f"File opened. Entering main loop")
while True:
# get next request (if one is not already waiting for capacity)
if next_request is None:
if queue_of_requests_to_retry.empty() is False:
next_request = queue_of_requests_to_retry.get_nowait()
logging.debug(f"Retrying request {next_request.task_id}: {next_request}")
elif file_not_finished:
try:
# get new request
                        request_json = json.loads(next(requests))  # parse the jsonl line (safer than eval)
next_request = APIRequest(
task_id=next(task_id_generator),
request_json=request_json,
token_consumption=num_tokens_consumed_from_request(request_json, api_endpoint, token_encoding_name),
attempts_left=max_attempts,
)
status_tracker.num_tasks_started += 1
status_tracker.num_tasks_in_progress += 1
logging.debug(f"Reading request {next_request.task_id}: {next_request}")
except StopIteration:
# if file runs out, set flag to stop reading it
logging.debug("Read file exhausted")
file_not_finished = False
# update available capacity
current_time = time.time()
seconds_since_update = current_time - last_update_time
available_request_capacity = min(
available_request_capacity + max_requests_per_minute * seconds_since_update / 60.0,
max_requests_per_minute,
)
available_token_capacity = min(
available_token_capacity + max_tokens_per_minute * seconds_since_update / 60.0,
max_tokens_per_minute,
)
last_update_time = current_time
# if enough capacity available, call API
if next_request:
next_request_tokens = next_request.token_consumption
if (
available_request_capacity >= 1
and available_token_capacity >= next_request_tokens
):
# update counters
available_request_capacity -= 1
available_token_capacity -= next_request_tokens
next_request.attempts_left -= 1
# call API
asyncio.create_task(
next_request.call_API(
request_url=request_url,
request_header=request_header,
retry_queue=queue_of_requests_to_retry,
save_filepath=save_filepath,
status_tracker=status_tracker,
)
)
next_request = None # reset next_request to empty
# if all tasks are finished, break
if status_tracker.num_tasks_in_progress == 0:
break
# main loop sleeps briefly so concurrent tasks can run
await asyncio.sleep(seconds_to_sleep_each_loop)
# if a rate limit error was hit recently, pause to cool down
seconds_since_rate_limit_error = (time.time() - status_tracker.time_of_last_rate_limit_error)
if seconds_since_rate_limit_error < seconds_to_pause_after_rate_limit_error:
remaining_seconds_to_pause = (seconds_to_pause_after_rate_limit_error - seconds_since_rate_limit_error)
await asyncio.sleep(remaining_seconds_to_pause)
# ^e.g., if pause is 15 seconds and final limit was hit 5 seconds ago
logging.warn(f"Pausing to cool down until {time.ctime(status_tracker.time_of_last_rate_limit_error + seconds_to_pause_after_rate_limit_error)}")
# after finishing, log final status
logging.info(f"""Parallel processing complete. Results saved to {save_filepath}""")
if status_tracker.num_tasks_failed > 0:
logging.warning(f"{status_tracker.num_tasks_failed} / {status_tracker.num_tasks_started} requests failed. Errors logged to {save_filepath}.")
if status_tracker.num_rate_limit_errors > 0:
logging.warning(f"{status_tracker.num_rate_limit_errors} rate limit errors received. Consider running at a lower rate.")
# dataclasses
@dataclass
class StatusTracker:
"""Stores metadata about the script's progress. Only one instance is created."""
num_tasks_started: int = 0
num_tasks_in_progress: int = 0 # script ends when this reaches 0
num_tasks_succeeded: int = 0
num_tasks_failed: int = 0
num_rate_limit_errors: int = 0
num_api_errors: int = 0 # excluding rate limit errors, counted above
num_other_errors: int = 0
time_of_last_rate_limit_error: int = 0 # used to cool off after hitting rate limits
@dataclass
class APIRequest:
"""Stores an API request's inputs, outputs, and other metadata. Contains a method to make an API call."""
task_id: int
request_json: dict
token_consumption: int
attempts_left: int
    result: list = field(default_factory=list)  # per-instance list; a bare [] would be shared across all instances
async def call_API(
self,
request_url: str,
request_header: dict,
retry_queue: asyncio.Queue,
save_filepath: str,
status_tracker: StatusTracker,
):
"""Calls the OpenAI API and saves results."""
logging.info(f"Starting request #{self.task_id}")
error = None
try:
async with aiohttp.ClientSession() as session:
async with session.post(
url=request_url, headers=request_header, json=self.request_json
) as response:
response = await response.json()
if "error" in response:
logging.warning(
f"Request {self.task_id} failed with error {response['error']}"
)
status_tracker.num_api_errors += 1
error = response
if "Rate limit" in response["error"].get("message", ""):
status_tracker.time_of_last_rate_limit_error = time.time()
status_tracker.num_rate_limit_errors += 1
status_tracker.num_api_errors -= 1 # rate limit errors are counted separately
except Exception as e: # catching naked exceptions is bad practice, but in this case we'll log & save them
logging.warning(f"Request {self.task_id} failed with Exception {e}")
status_tracker.num_other_errors += 1
error = e
if error:
self.result.append(error)
if self.attempts_left:
retry_queue.put_nowait(self)
else:
logging.error(f"Request {self.request_json} failed after all attempts. Saving errors: {self.result}")
append_to_jsonl([self.request_json, self.result], save_filepath)
status_tracker.num_tasks_in_progress -= 1
status_tracker.num_tasks_failed += 1
else:
append_to_jsonl([self.request_json, response], save_filepath)
status_tracker.num_tasks_in_progress -= 1
status_tracker.num_tasks_succeeded += 1
logging.debug(f"Request {self.task_id} saved to {save_filepath}")
# functions
def api_endpoint_from_url(request_url):
"""Extract the API endpoint from the request URL."""
return request_url.split("/")[-1]
def append_to_jsonl(data, filename: str) -> None:
"""Append a json payload to the end of a jsonl file."""
json_string = json.dumps(data)
with open(filename, "a") as f:
f.write(json_string + "\n")
def num_tokens_consumed_from_request(
request_json: dict,
api_endpoint: str,
token_encoding_name: str,
):
"""Count the number of tokens in the request. Only supports completion and embedding requests."""
encoding = tiktoken.get_encoding(token_encoding_name)
# if completions request, tokens = prompt + n * max_tokens
if api_endpoint == "completions":
prompt = request_json["prompt"]
max_tokens = request_json.get("max_tokens", 15)
n = request_json.get("n", 1)
completion_tokens = n * max_tokens
if isinstance(prompt, str): # single prompt
prompt_tokens = len(encoding.encode(prompt))
num_tokens = prompt_tokens + completion_tokens
return num_tokens
elif isinstance(prompt, list): # multiple prompts
prompt_tokens = sum([len(encoding.encode(p)) for p in prompt])
num_tokens = prompt_tokens + completion_tokens
return num_tokens
else:
raise TypeError('Expecting either string or list of strings for "prompt" field in completion request')
# if embeddings request, tokens = input tokens
elif api_endpoint == "embeddings":
input = request_json["input"]
if isinstance(input, str): # single input
num_tokens = len(encoding.encode(input))
return num_tokens
elif isinstance(input, list): # multiple inputs
num_tokens = sum([len(encoding.encode(i)) for i in input])
return num_tokens
else:
raise TypeError('Expecting either string or list of strings for "inputs" field in embedding request')
# more logic needed to support other API calls (e.g., edits, inserts, DALL-E)
else:
raise NotImplementedError(f'API endpoint "{api_endpoint}" not implemented in this script')
def task_id_generator_function():
"""Generate integers 0, 1, 2, and so on."""
task_id = 0
while True:
yield task_id
task_id += 1
# run script
if __name__ == "__main__":
# parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument("--requests_filepath")
parser.add_argument("--save_filepath", default=None)
parser.add_argument("--request_url", default="https://api.openai.com/v1/embeddings")
parser.add_argument("--api_key", default=os.getenv("OPENAI_API_KEY"))
parser.add_argument("--max_requests_per_minute", type=int, default=3_000 * 0.5)
parser.add_argument("--max_tokens_per_minute", type=int, default=250_000 * 0.5)
parser.add_argument("--token_encoding_name", default="cl100k_base")
parser.add_argument("--max_attempts", type=int, default=5)
parser.add_argument("--logging_level", default=logging.INFO)
args = parser.parse_args()
if args.save_filepath is None:
args.save_filepath = args.requests_filepath.replace(".jsonl", "_results.jsonl")
# run script
asyncio.run(
process_api_requests_from_file(
requests_filepath=args.requests_filepath,
save_filepath=args.save_filepath,
request_url=args.request_url,
api_key=args.api_key,
max_requests_per_minute=float(args.max_requests_per_minute),
max_tokens_per_minute=float(args.max_tokens_per_minute),
token_encoding_name=args.token_encoding_name,
max_attempts=int(args.max_attempts),
logging_level=int(args.logging_level),
)
)
"""
APPENDIX
The example requests file at openai-cookbook/examples/data/example_requests_to_parallel_process.jsonl contains 10,000 requests to text-embedding-ada-002.
It was generated with the following code:
```python
import json
filename = "data/example_requests_to_parallel_process.jsonl"
n_requests = 10_000
jobs = [{"model": "text-embedding-ada-002", "input": str(x) + "\n"} for x in range(n_requests)]
with open(filename, "w") as f:
for job in jobs:
json_string = json.dumps(job)
f.write(json_string + "\n")
```
As with all jsonl files, take care that newlines in the content are properly escaped (json.dumps does this automatically).
"""


@ -0,0 +1,152 @@
# How to work with large language models
## How large language models work
[Large language models][Large language models Blog Post] are functions that map text to text. Given an input string of text, a large language model predicts the text that should come next.
The magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. For example, they learn:
* how to spell
* how grammar works
* how to paraphrase
* how to answer questions
* how to hold a conversation
* how to write in many languages
* how to code
* etc.
None of these capabilities are explicitly programmed in—they all emerge as a result of training.
GPT-3 powers [hundreds of software products][GPT3 Apps Blog Post], including productivity apps, education apps, games, and more.
## How to control a large language model
Of all the inputs to a large language model, by far the most influential is the text prompt.
Large language models can be prompted to produce output in a few ways:
* **Instruction**: Tell the model what you want
* **Completion**: Induce the model to complete the beginning of what you want
* **Demonstration**: Show the model what you want, with either:
* A few examples in the prompt
* Many hundreds or thousands of examples in a fine-tuning training dataset
An example of each is shown below.
### Instruction prompts
Instruction-following models (e.g., `text-davinci-003` or any model beginning with `text-`) are specially designed to follow instructions. Write your instruction at the top of the prompt (or at the bottom, or both), and the model will do its best to follow the instruction and then stop. Instructions can be detailed, so don't be afraid to write a paragraph explicitly detailing the output you want.
Example instruction prompt:
```text
Extract the name of the author from the quotation below.
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
```
Output:
```text
Ted Chiang
```
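If you're calling the API from code, an instruction prompt like the one above can be sent in a few lines of Python. Here's a minimal sketch, assuming the `openai` package is installed and `OPENAI_API_KEY` is set in your environment (the parameter values are illustrative):

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = """Extract the name of the author from the quotation below.

“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation"""

response = openai.Completion.create(
    model="text-davinci-003",  # an instruction-following model
    prompt=prompt,
    temperature=0,  # deterministic output suits extraction tasks
    max_tokens=10,
)
print(response["choices"][0]["text"].strip())  # expected: Ted Chiang
```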
### Completion prompt example
Completion-style prompts take advantage of how large language models try to write the text they think is most likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can require more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.
Example completion prompt:
```text
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
The author of this quote is
```
Output:
```text
Ted Chiang
```
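Because the model won't necessarily know where to stop, the `stop` parameter is handy for completion-style prompts. Here's a minimal sketch of the example above, again assuming the `openai` package and an `OPENAI_API_KEY` environment variable (the model and stop sequence are illustrative):

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = """“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation

The author of this quote is"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,
    max_tokens=10,
    stop=["\n"],  # cut off the completion at the first newline
)
print(response["choices"][0]["text"].strip())  # expected: Ted Chiang
```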
### Demonstration prompt example (few-shot learning)
Similar to completion-style prompts, demonstrations can show the model what you want it to do. This approach is sometimes called few-shot learning, as the model learns from a few examples provided in the prompt.
Example demonstration prompt:
```text
Quote:
“When the reasoning mind is forced to confront the impossible again and again, it has no choice but to adapt.”
― N.K. Jemisin, The Fifth Season
Author: N.K. Jemisin
Quote:
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
Author:
```
Output:
```text
Ted Chiang
```
### Fine-tuned prompt example
With enough training examples, you can [fine-tune][Fine Tuning Docs] a custom model. In this case, instructions become unnecessary, as the model can learn the task from the training data provided. However, it can be helpful to include separator sequences (e.g., `->` or `###` or any string that doesn't commonly appear in your inputs) to tell the model when the prompt has ended and the output should begin. Without separator sequences, there is a risk that the model continues elaborating on the input text rather than starting on the answer you want to see.
Example fine-tuned prompt (for a model that has been custom trained on similar prompt-completion pairs):
```text
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
###
```
Output:
```text
Ted Chiang
```
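For reference, here's a sketch of what training data with a separator sequence might look like, in the JSONL format that the fine-tuning endpoint accepts (the `###` separator and ` END` stop token are illustrative choices, not requirements):

```python
import json

# Each line pairs a prompt (ending in the separator) with a completion
# (beginning with a space and ending in a stop token).
training_examples = [
    {
        "prompt": "“When the reasoning mind is forced to confront the impossible again and again, it has no choice but to adapt.”\n― N.K. Jemisin, The Fifth Season\n###\n",
        "completion": " N.K. Jemisin END",
    },
    # ... hundreds or thousands more examples ...
]

with open("fine_tune_training_data.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```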
## Code Capabilities
Large language models aren't only great at text - they can be great at code too. OpenAI's specialized code model is called [Codex].
Codex powers [more than 70 products][Codex Apps Blog Post], including:
* [GitHub Copilot] (autocompletes code in VS Code and other IDEs)
* [Pygma](https://pygma.app/) (turns Figma designs into code)
* [Replit](https://replit.com/) (has an 'Explain code' button and other features)
* [Warp](https://www.warp.dev/) (a smart terminal with AI command search)
* [Machinet](https://machinet.net/) (writes Java unit test templates)
Note that unlike instruction-following text models (e.g., `text-davinci-002`), Codex is *not* trained to follow instructions. As a result, designing good prompts can take more care.
### More prompt advice
For more prompt examples, visit [OpenAI Examples][OpenAI Examples].
In general, the input prompt is the best lever for improving model outputs. You can try tricks like:
* **Give more explicit instructions.** E.g., if you want the output to be a comma-separated list, ask it to return a comma-separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.'
* **Supply better examples.** If you're demonstrating examples in your prompt, make sure that your examples are diverse and high quality.
* **Ask the model to answer as if it were an expert.** Explicitly asking the model to produce high-quality output, or output as if it were written by an expert, can induce the model to give the higher-quality answers it thinks an expert would write. E.g., "The following answer is correct, high-quality, and written by an expert."
* **Prompt the model to write down the series of steps explaining its reasoning.** E.g., prepend your answer with something like "[Let's think step by step](https://arxiv.org/pdf/2205.11916v1.pdf)." Prompting the model to give an explanation of its reasoning before its final answer can increase the likelihood that its final answer is consistent and correct.
[Fine Tuning Docs]: https://beta.openai.com/docs/guides/fine-tuning
[Codex Apps Blog Post]: https://openai.com/blog/codex-apps/
[Large language models Blog Post]: https://openai.com/blog/better-language-models/
[GitHub Copilot]: https://copilot.github.com/
[Codex]: https://openai.com/blog/openai-codex/
[GPT3 Apps Blog Post]: https://openai.com/blog/gpt-3-apps/
[OpenAI Examples]: https://beta.openai.com/examples

@ -0,0 +1,49 @@
# Text comparison examples
The [OpenAI API embeddings endpoint](https://beta.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text.
By leveraging GPT-3's understanding of text, these embeddings [achieved state-of-the-art results](https://arxiv.org/abs/2201.10005) on benchmarks in unsupervised learning and transfer learning settings.
Embeddings can be used for semantic search, recommendations, cluster analysis, near-duplicate detection, and more.
For more information, read OpenAI's blog post announcements:
* [Introducing Text and Code Embeddings (Jan 2022)](https://openai.com/blog/introducing-text-and-code-embeddings/)
* [New and Improved Embedding Model (Dec 2022)](https://openai.com/blog/new-and-improved-embedding-model/)
## Semantic search
Embeddings can be used for search either by themselves or as a feature in a larger system.
The simplest way to use embeddings for search is as follows:
* Before the search (precompute):
* Split your text corpus into chunks smaller than the token limit (8,191 tokens for `text-embedding-ada-002`)
* Embed each chunk of text
* Store those embeddings in your own database or in a vector search provider like [Pinecone](https://www.pinecone.io) or [Weaviate](https://weaviate.io)
* At the time of the search (live compute):
* Embed the search query
* Find the closest embeddings in your database
* Return the top results
An example of how to use embeddings for search is shown in [Semantic_text_search_using_embeddings.ipynb](examples/Semantic_text_search_using_embeddings.ipynb).
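Here's a minimal in-memory sketch of that procedure, using numpy for cosine similarity instead of a vector database (the toy corpus and query are illustrative):

```python
import os

import numpy as np
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def embed(text: str) -> np.ndarray:
    """Embed a single piece of text with the embeddings endpoint."""
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Before the search (precompute): embed each chunk of the corpus
corpus = [
    "Cats are small, carnivorous mammals often kept as pets.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
    "Python is a popular general-purpose programming language.",
]
corpus_embeddings = [embed(chunk) for chunk in corpus]

# At search time (live compute): embed the query and rank chunks by similarity
query_embedding = embed("Where is the Eiffel Tower?")
ranked = sorted(
    zip(corpus, corpus_embeddings),
    key=lambda pair: cosine_similarity(query_embedding, pair[1]),
    reverse=True,
)
print(ranked[0][0])  # the most relevant chunk
```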
In more advanced search systems, the cosine similarity of embeddings can be used as one feature among many in ranking search results.
## Question answering
The best way to get reliably honest answers from GPT-3 is to give it source documents in which it can locate correct answers. Using the semantic search procedure above, you can cheaply search a corpus of documents for relevant information and then give that information to GPT-3, via the prompt, to answer a question. We demonstrate this in [Question_answering_using_embeddings.ipynb](examples/Question_answering_using_embeddings.ipynb).
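As a sketch of the final step, here's one way the retrieved text might be stuffed into a prompt (the helper and its wording are illustrative; the retrieval itself is covered under semantic search above):

```python
def build_qa_prompt(question: str, relevant_chunks: list) -> str:
    """Assemble a prompt that asks the model to answer only from the sources given."""
    sources = "\n\n".join(relevant_chunks)
    return (
        "Using the following text, answer the question below. "
        'If the answer is not contained within the text, say "I don\'t know."\n\n'
        f'Text:\n"""\n{sources}\n"""\n\n'
        f"Question: {question}\n\nAnswer:"
    )
```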
## Recommendations
Recommendations are quite similar to search, except that instead of a free-form text query, the inputs are items in a set.
An example of how to use embeddings for recommendations is shown in [Recommendation_using_embeddings.ipynb](examples/Recommendation_using_embeddings.ipynb).
Similar to search, these cosine similarity scores can either be used on their own to rank items or as features in larger ranking algorithms.
## Customizing Embeddings
Although OpenAI's embedding model weights cannot be fine-tuned, you can nevertheless use training data to customize embeddings to your application.
In [Customizing_embeddings.ipynb](examples/Customizing_embeddings.ipynb), we provide an example method for customizing your embeddings using training data. The idea of the method is to train a custom matrix to multiply embedding vectors by in order to get new customized embeddings. With good training data, this custom matrix will help emphasize the features relevant to your training labels. You can equivalently consider the matrix multiplication as (a) a modification of the embeddings or (b) a modification of the distance function used to measure the distances between embeddings.
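As a sketch of the idea, suppose you've already learned such a matrix `W` (e.g., via the notebook above; the identity matrix below is a stand-in, not a trained result):

```python
import numpy as np

EMBEDDING_DIM = 1536  # dimension of text-embedding-ada-002 embeddings
W = np.eye(EMBEDDING_DIM)  # placeholder for a matrix learned from training data

def customize(embedding: np.ndarray) -> np.ndarray:
    """View (a): transform the raw embedding with the learned matrix."""
    return W @ embedding

def customized_cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """View (b): equivalently, a modified distance function on raw embeddings."""
    ca, cb = customize(a), customize(b)
    return float(np.dot(ca, cb) / (np.linalg.norm(ca) * np.linalg.norm(cb)))
```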

@ -0,0 +1,86 @@
# Text editing examples
In addition to the [completions API endpoint][Completion API Docs], OpenAI offers an [edits API endpoint][Edit API Docs]. Read more at:
* [Blog post announcement (Mar 2022)][GPT3 Edit Blog Post]
* [Edit API documentation][Edit API Docs]
In contrast to completions, which only take a single text input, edits take two text inputs: the instruction and the text to be modified. For example:
Instruction input:
```text
Fix the OCR errors
```
Text input:
```text
Therewassomehostilityntheenergybehindthe researchreportedinPerceptrons....Part of ourdrivecame,aswequiteplainlyacknoweldgednourbook,fromhe facthatfundingndresearchnergywerebeingdissipatedon. . .misleadingttemptsouseconnectionistmethodsnpracticalappli-cations.
```
[Output](https://beta.openai.com/playground/p/5W5W6HHlHrGsLu1cpx0VF4qu):
```text
There was some hostility in the energy behind the research reported in Perceptrons....Part of our drive came, as we quite plainly acknowledged in our book, from the fact that funding and research energy were being dissipated on...misleading attempts to use connectionist methods in practical applications.
```
In general, instructions can be imperative, present tense, or past tense. Experiment to see what works best for your use case.
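Calling the edits endpoint from Python looks like the following minimal sketch (assuming the `openai` package and an `OPENAI_API_KEY` environment variable; `text-davinci-edit-001` was one of the edit-capable models at the time of writing):

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Edit.create(
    model="text-davinci-edit-001",
    instruction="Fix the OCR errors",
    input="Therewassomehostilityntheenergybehindthe researchreportedinPerceptrons....",
)
print(response["choices"][0]["text"])
```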
## Translation
One application of the edit API is translation.
Large language models are excellent at translating across common languages. In 2021, [GPT-3 set](https://arxiv.org/abs/2110.05448) a new state-of-the-art record in unsupervised translation on the WMT14 English-French benchmark.
Here's an example of how to translate text using the edits endpoint:
Instruction input:
```text
translation into French
```
Text input:
```text
That's life.
```
[Output](https://beta.openai.com/playground/p/6JWAH8a4ZbEafSDyRsSVdgKr):
```text
C'est la vie.
```
Of course, many tasks that can be accomplished with the edits endpoint can also be done with the completions endpoint. For example, you can request a translation by prepending an instruction as follows:
```text
Translate the following text from English to French.
English: That's life.
French:
```
[Output](https://beta.openai.com/playground/p/UgaPfgjBNTRRPeNcMSNtGzcu):
```text
C'est la vie.
```
Tips for translation:
* Performance is best on the most common languages
* We've seen better performance when the instruction is given in the target language (so if translating into French, give the instruction `Traduire le texte de l'anglais au français.` rather than `Translate the following text from English to French.`)
* Backtranslation (as described [here](https://arxiv.org/abs/2110.05448)) can also increase performance
* Text with colons and heavy punctuation can trip up the instruction-following models, especially if the instruction uses colons (e.g., `English: {english text} French:`)
* The edits endpoint sometimes repeats the original text input alongside the translation, which can be monitored and filtered
When it comes to translation, large language models particularly shine at combining other instructions alongside translation. For example, you can ask GPT-3 to translate Slovenian to English but keep all LaTeX typesetting commands unchanged. The following notebook details how we translated a Slovenian math book into English:
[Translation of a Slovenian math book into English](examples/book_translation/translate_latex_book.ipynb)
[Edit API Docs]: https://beta.openai.com/docs/api-reference/edits
[Completion API Docs]: https://beta.openai.com/docs/api-reference/completions
[GPT3 Edit Blog Post]: https://openai.com/blog/gpt-3-edit-insert/

@ -0,0 +1,108 @@
# Text explanation examples
Large language models are useful for distilling information from long texts. Applications include:
* Answering questions about a piece of text, e.g.:
* Querying a knowledge base to help people look up things they don't know
* Querying an unfamiliar document to understand what it contains
* Querying a document with structured questions in order to extract tags, classes, entities, etc.
* Summarizing text, e.g.:
* Summarizing long documents
* Summarizing back-and-forth emails or message threads
* Summarizing detailed meeting notes with key points and next steps
* Classifying text, e.g.:
* Classifying customer feedback messages by topic or type
* Classifying documents by topic or type
* Classifying the tone or sentiment of text
* Extracting entities, e.g.:
* Extracting contact information from a customer message
* Extracting names of people or companies or products from a document
* Extracting things mentioned in customer reviews or feedback
Below are some simple examples of each.
## Answering questions about a piece of text
Here's an example prompt for answering questions about a piece of text:
```text
Using the following text, answer the following question. If the answer is not contained within the text, say "I don't know."
Text:
"""
Oklo Mine (sometimes Oklo Reactor or Oklo Mines), located in Oklo, Gabon on the west coast of Central Africa, is believed to be the only natural nuclear fission reactor. Oklo consists of 16 sites at which self-sustaining nuclear fission reactions are thought to have taken place approximately 1.7 billion years ago, and ran for hundreds of thousands of years. It is estimated to have averaged under 100 kW of thermal power during that time.
"""
Question: How many natural fission reactors have ever been discovered?
Answer:
```
[Output](https://beta.openai.com/playground/p/c8ZL7ioqKK7zxrMT2T9Md3gJ):
```text
One. Oklo Mine is believed to be the only natural nuclear fission reactor.
```
If the text you wish to ask about is longer than the token limit (~4,000 tokens for `text-davinci-002`/`-003` and ~2,000 tokens for earlier models), you can split the text into smaller pieces, rank them by relevance, and then ask your question only using the most-relevant-looking pieces. This is demonstrated in [Question_answering_using_embeddings.ipynb](examples/Question_answering_using_embeddings.ipynb).
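A minimal sketch of the splitting step, using `tiktoken` to chunk by token count (the chunk size and encoding name are illustrative; note that this naive approach can split mid-sentence):

```python
import tiktoken

def split_by_tokens(text, max_tokens_per_chunk=1000, encoding_name="cl100k_base"):
    """Split text into chunks of at most max_tokens_per_chunk tokens each."""
    encoding = tiktoken.get_encoding(encoding_name)
    tokens = encoding.encode(text)
    return [
        encoding.decode(tokens[i : i + max_tokens_per_chunk])
        for i in range(0, len(tokens), max_tokens_per_chunk)
    ]
```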
In the same way that students do better on tests when allowed to access notes, GPT-3 does better at answering questions when it's given text containing the answer.
Without notes, GPT-3 has to rely on its own long-term memory (i.e., its internal weights), which is more prone to produce confabulated or hallucinated answers.
## Summarization
Here's a simple example prompt to summarize a piece of text:
```text
Summarize the following text.
Text:
"""
Two independent experiments reported their results this morning at CERN, Europe's high-energy physics laboratory near Geneva in Switzerland. Both show convincing evidence of a new boson particle weighing around 125 gigaelectronvolts, which so far fits predictions of the Higgs previously made by theoretical physicists.
"As a layman I would say: 'I think we have it'. Would you agree?" Rolf-Dieter Heuer, CERN's director-general, asked the packed auditorium. The physicists assembled there burst into applause.
"""
Summary:
```
[Output](https://beta.openai.com/playground/p/pew7DNB908TkUYiF0ZOdaIGc):
```text
CERN's director-general asked a packed auditorium if they agreed that two independent experiments had found convincing evidence of a new boson particle that fits predictions of the Higgs, to which the physicists assembled there responded with applause.
```
The triple quotation marks `"""` used in these example prompts aren't special; GPT-3 can recognize most delimiters, including `<>`, `{}`, or `###`. For long pieces of text, we recommend using some kind of delimiter to help disambiguate where one section of text ends and the next begins.
## Classification
If you want to classify the text, the best approach depends on whether the classes are known in advance.
If your classes _are_ known in advance, classification is often best done with a fine-tuned model, as demonstrated in [Fine-tuned_classification.ipynb](examples/Fine-tuned_classification.ipynb).
If your classes are not known in advance (e.g., they are set by a user or generated on the fly), you can try zero-shot classification, either by giving an instruction containing the classes or by using embeddings to see which class label (or other classified texts) is most similar to the text (as demonstrated in [Zero-shot_classification.ipynb](examples/Zero-shot_classification_with_embeddings.ipynb)).
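Here's a minimal sketch of the embeddings-based variant, ranking candidate labels by similarity to the text (the labels, example text, and model are illustrative):

```python
import os

import numpy as np
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def embed(text: str) -> np.ndarray:
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

labels = ["positive", "negative"]  # classes supplied at runtime, not known in advance
label_embeddings = [embed(label) for label in labels]

text_embedding = embed("I absolutely loved this product!")
scores = [cosine_similarity(text_embedding, le) for le in label_embeddings]
print(labels[int(np.argmax(scores))])  # expected: positive
```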
## Entity extraction
Here's an example prompt for entity extraction:
```text
From the text below, extract the following entities in the following format:
Companies: <comma-separated list of companies mentioned>
People & titles: <comma-separated list of people mentioned (with their titles or roles appended in parentheses)>
Text:
"""
In March 1981, United States v. AT&T came to trial under Assistant Attorney General William Baxter. AT&T chairman Charles L. Brown thought the company would be gutted. He realized that AT&T would lose and, in December 1981, resumed negotiations with the Justice Department. Reaching an agreement less than a month later, Brown agreed to divestiture—the best and only realistic alternative. AT&T's decision allowed it to retain its research and manufacturing arms. The decree, titled the Modification of Final Judgment, was an adjustment of the Consent Decree of 14 January 1956. Judge Harold H. Greene was given the authority over the modified decree....
In 1982, the U.S. government announced that AT&T would cease to exist as a monopolistic entity. On 1 January 1984, it was split into seven smaller regional companies, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, and Pacific Telesis, to handle regional phone services in the U.S. AT&T retains control of its long distance services, but was no longer protected from competition.
"""
```
[Output](https://beta.openai.com/playground/p/of47T7N5CtHF4RlvwFkTu3pN):
```text
Companies: AT&T, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, Pacific Telesis
People & titles: William Baxter (Assistant Attorney General), Charles L. Brown (AT&T chairman), Harold H. Greene (Judge)
```

@ -0,0 +1,48 @@
# Text writing examples
Large language models are excellent at writing. They can assist with all sorts of tasks:
* Blog posts
* Email copy
* Ad copy
* Website copy
* Product descriptions
* Memos
* Storytelling
* Brainstorming
* Question generation
* etc.
The easiest way to prompt GPT-3 is to tell it what you'd like. For example, if you want it to write an email, you could use a prompt like this:
```text
Write an email to a colleague named Jill congratulating her on her promotion. The tone should be warm yet professional. Mention how you admire the work she's been putting in. Include a joke about how her pet lizard Max enjoys eating grasshoppers. Mention how you're looking forward to the team off-site next week.
```
[Output](https://beta.openai.com/playground/p/fUeh5saVLI2dhU0WU3nZtIiU):
```text
Subject: Congratulations on Your Promotion, Jill!
Dear Jill,
Congratulations on your promotion! I'm so proud of you and all the hard work you've been putting in. It's been inspiring to watch you grow in your career.
I'm sure Max the lizard is also celebrating with you. I hear he loves eating grasshoppers, so you should treat him to a special snack!
I'm looking forward to seeing you at the team off-site next week. I'm sure it will be a great opportunity to celebrate your success.
All the best,
[Your Name]
```
In general, writing can work with any style of prompt, whether it's an instruction or a series of examples to follow. Experiment to see what works best for your use case.
Writing also works with any type of model, though they each have strengths and weaknesses.
| | Advantages | Disadvantages |
| ---------------------------------------------------------- | ----------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| Instruction-following models<br>(e.g., `text-davinci-003`) | Easiest to use | Less diverse; less creative; sometimes harder to steer tone, style, etc. |
| Base models<br>(e.g., `davinci`) | Potentially more creative and diverse | Harder to prompt well; more expensive (as examples in the prompt cost extra tokens) |
| Fine-tuned models | Can train off of many examples; cheaper than including examples in the prompt | Hard to gather training data; training makes iteration slower and more expensive |