Mirror of https://github.com/dair-ai/Prompt-Engineering-Guide (synced 2024-11-10 01:13:36 +00:00)

Commit 9c35ed44ee (parent 8a6a3c32a1): finetuning gpt4o
```diff
@@ -1,4 +1,5 @@
 {
+    "finetuning-gpt4o": "Fine-tuning GPT-4o",
     "function_calling": "Function Calling",
     "context-caching": "Context Caching with LLMs",
     "generating": "Generating Data",
```
pages/applications/finetuning-gpt4o.en.mdx (new file, 31 lines):
# Fine-Tuning with GPT-4o Models
OpenAI recently [announced](https://openai.com/index/gpt-4o-fine-tuning/) the availability of fine-tuning for its latest models, GPT-4o and GPT-4o mini. This capability lets developers customize the GPT-4o models for specific use cases, improving performance on targeted tasks and tailoring the structure and tone of outputs.
## Fine-Tuning Details and Costs
Developers can now access the `gpt-4o-2024-08-06` checkpoint for fine-tuning through the dedicated [fine-tuning dashboard](https://platform.openai.com/finetune). Fine-tuning makes it possible to customize response structure and tone and to improve adherence to complex, domain-specific instructions.
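Below is a minimal sketch of starting such a job with the official `openai` Python SDK (v1). It assumes a chat-formatted JSONL training file; the file name `training_data.jsonl` and the `suffix` value are placeholders for this example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training data for fine-tuning.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the GPT-4o checkpoint.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
    suffix="emotion-classifier",  # placeholder tag for the resulting model name
)

print(job.id, job.status)
```

The job runs asynchronously; its status can be polled with `client.fine_tuning.jobs.retrieve(job.id)` until it reports the name of the fine-tuned model.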
Fine-tuning GPT-4o costs \$25 per million training tokens. Inference with the fine-tuned model then costs \$3.75 per million input tokens and \$15 per million output tokens. This feature is available only to developers on paid usage tiers.
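To make the rates concrete, here is a small back-of-the-envelope calculation; the token volumes are invented for illustration.

```python
# Quoted rates, in dollars per million tokens.
TRAINING_PER_M = 25.00
INPUT_PER_M = 3.75
OUTPUT_PER_M = 15.00

# Hypothetical volumes: one small training run plus a month of inference.
training_tokens = 2_000_000
input_tokens = 500_000
output_tokens = 100_000

training_cost = training_tokens / 1e6 * TRAINING_PER_M      # $50.00
inference_cost = (input_tokens / 1e6 * INPUT_PER_M          # $1.875
                  + output_tokens / 1e6 * OUTPUT_PER_M)     # + $1.50
print(f"training: ${training_cost:.2f}, inference: ${inference_cost:.2f}")
```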
## Free Training Tokens for Experimentation
To encourage exploration of this new feature, OpenAI is offering a limited-time promotion through September 23, 2024: 1 million free training tokens per day for GPT-4o and 2 million free training tokens per day for GPT-4o mini. This is a good opportunity to experiment and discover innovative applications for fine-tuned models.
## Use Case: Emotion Classification
<iframe width="100%"
  height="415px"
  src="https://www.youtube.com/embed/UJ7ry7Qp2Js?si=ZU3K0ZVNfQjnlZgo"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
/>
The video above walks through a practical fine-tuning example: training a model for emotion classification. Using a [JSONL-formatted dataset](https://github.com/dair-ai/datasets/tree/main/openai) of text samples labeled with their corresponding emotions, GPT-4o mini can be fine-tuned to classify text by emotional tone.
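For reference, each line of such a dataset is a self-contained chat example in the format OpenAI's fine-tuning API expects; the system prompt and emotion labels below are illustrative rather than taken from the linked dataset.

```json
{"messages": [{"role": "system", "content": "Classify the emotion of the text. Reply with one label."}, {"role": "user", "content": "I finally got the job offer!"}, {"role": "assistant", "content": "joy"}]}
{"messages": [{"role": "system", "content": "Classify the emotion of the text. Reply with one label."}, {"role": "user", "content": "Nothing has felt right since she moved away."}, {"role": "assistant", "content": "sadness"}]}
```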
This demonstration highlights the potential of fine-tuning to improve model performance on specific tasks, achieving notable accuracy gains over the corresponding base models.
## Accessing and Evaluating Fine-Tuned Models
Once the fine-tuning process is complete, developers can access and evaluate their custom models through the OpenAI playground. The playground allows for interactive testing with various inputs and provides insights into the model's performance. For more comprehensive evaluation, developers can integrate the fine-tuned model into their applications via the OpenAI API and conduct systematic testing.
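As a sketch of the API route, a completed job reports a fine-tuned model name (names look like `ft:gpt-4o-mini-2024-07-18:my-org::abc123`), which can be used anywhere a base model name is accepted; the name below is a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder fine-tuned model name; use the one reported by your job.
MODEL = "ft:gpt-4o-mini-2024-07-18:my-org::abc123"

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "Classify the emotion of the text. Reply with one label."},
        {"role": "user", "content": "I keep checking the door, sure someone is out there."},
    ],
)

print(response.choices[0].message.content)  # e.g. "fear"
```

Running the same held-out labeled examples through both the base and fine-tuned models is a simple way to quantify the accuracy gain.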
OpenAI's introduction of fine-tuning for GPT-4o models unlocks new possibilities for developers seeking to leverage the power of LLMs for specialized tasks.