OpenAI has announced the highly anticipated launch of fine-tuning for its powerful GPT-4o model. Developers can now customise GPT-4o with their own datasets to achieve superior performance and accuracy for specific use cases. To further encourage adoption, OpenAI is offering 1 million free training tokens per day for every organisation until September 23rd.
Fine-tuning empowers developers to refine the model's responses, adapt its tone and structure, and even train it to follow complex, domain-specific instructions. Early results indicate that significant improvements can be achieved with relatively small training datasets.
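As a concrete illustration, chat-model fine-tuning data is supplied as a JSONL file in which each line is one complete example conversation. The sketch below assumes a hypothetical tone-adaptation use case; the file name and example content are purely illustrative:

```python
import json

# Each line of the JSONL training file is one complete example conversation:
# a system prompt, a user message, and the ideal assistant reply.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise, friendly support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
    # ...more examples; per the early results above, even small datasets can shift tone and structure
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```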
"From coding to creative writing, fine-tuning can have a large impact on model performance across a variety of domains," OpenAI highlighted in its announcement. "This is just the start—we’ll continue to invest in expanding our model customisation options for developers."
Accessing Fine-Tuning
GPT-4o fine-tuning is available to all developers on paid usage tiers through OpenAI's fine-tuning dashboard. Training costs are set at $25 per million tokens, with inference priced at $3.75 per million input tokens and $15 per million output tokens.
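For developers who prefer code to the dashboard, a minimal sketch of launching a job with the OpenAI Python SDK looks like the following. The model snapshot name is an assumption based on the GPT-4o snapshot available at launch, and the file name carries over from the example above:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file prepared earlier
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against a GPT-4o snapshot
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed snapshot name; check the dashboard for current options
)
print(job.id, job.status)
```

Once the job completes, the returned fine-tuned model identifier (of the form ft:gpt-4o-...) can be passed as the model parameter in ordinary chat completion requests, billed at the inference rates above.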
Fine-tuning is also available for the GPT-4o mini model, with 2 million free training tokens per day offered until September 23rd.
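For a rough sense of training cost, billed training tokens typically scale with dataset size multiplied by the number of passes (epochs) over the data. The worked example below uses hypothetical numbers, not figures from OpenAI's announcement:

```python
# Illustrative GPT-4o training-cost estimate; dataset size and epochs are hypothetical
dataset_tokens = 2_000_000      # tokens in the training file
epochs = 3                      # passes over the dataset
price_per_million_usd = 25.0    # GPT-4o training price per million tokens

billed_tokens = dataset_tokens * epochs
cost_usd = billed_tokens / 1_000_000 * price_per_million_usd
print(f"{billed_tokens:,} billed tokens -> ${cost_usd:.2f}")  # 6,000,000 -> $150.00
```

During the promotional period, the daily allowance of free training tokens would reduce such a bill further.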
Success Stories: Cosine and Distyl Achieve State-of-the-Art Results
OpenAI showcased two impressive examples of GPT-4o fine-tuning in action:
Cosine's Genie: An AI software engineering assistant powered by a fine-tuned GPT-4o model, Genie achieves state-of-the-art results on SWE-bench, a benchmark built from real-world GitHub issues that tests code generation and bug fixing.
Distyl's Text-to-SQL Model: By fine-tuning GPT-4o, Distyl secured first place on the BIRD-SQL benchmark, demonstrating exceptional performance in converting natural language to SQL queries.