OpenAI’s latest model, o3-mini, is now available in GitHub Copilot and GitHub Models, bringing OpenAI’s newest reasoning model to your coding workflow.
The o3-mini reasoning model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, meaning you’ll get improved quality at nearly the same latency.
This cutting-edge model is rolling out gradually, starting today, to GitHub Copilot Pro, Business, and Enterprise users via the model picker in Visual Studio Code and in chat on github.com (support in Visual Studio and JetBrains IDEs is coming soon). To accelerate your workflow, whether you’re debugging, refactoring, modernizing, testing, or just getting started, simply select “o3-mini (Preview)” to begin using it.
Paid Copilot subscribers get up to 50 messages every 12 hours. Business and Enterprise admins can enable o3-mini access for members of their organization through the organization or enterprise admin settings pages.
GitHub Models users with a paid Copilot plan will also be able to leverage the o3-mini model to enhance their AI applications and projects later today. In the GitHub Models playground, you can explore o3-mini’s versatility as you experiment with sample prompts, refine your ideas, and iterate as you build. You can also try it alongside other models available on GitHub Models including models from Cohere, DeepSeek, Meta, and Mistral.
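For a sense of what building on GitHub Models looks like, here is a minimal sketch of calling o3-mini through the OpenAI-compatible chat completions endpoint, authenticating with a GitHub personal access token. The endpoint URL, payload shape, and the `GITHUB_TOKEN` environment variable are assumptions drawn from the public GitHub Models documentation, not from this announcement; check the docs for the current details.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint for GitHub Models (see the
# GitHub Models documentation for the authoritative URL).
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_request(prompt: str, model: str = "o3-mini") -> urllib.request.Request:
    """Assemble a chat-completions request for the GitHub Models API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Authenticate with a GitHub personal access token (assumption:
    # read from the GITHUB_TOKEN environment variable).
    token = os.environ.get("GITHUB_TOKEN", "")
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )


req = build_request("Explain the difference between a list and a tuple.")
body = json.loads(req.data)
print(body["model"])  # o3-mini

# Only send the request when a token is actually configured.
if os.environ.get("GITHUB_TOKEN"):
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Because the endpoint speaks the same chat-completions dialect as other hosted models, swapping `model="o3-mini"` for another model available in GitHub Models is a one-line change.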
To learn more, check out the product documentation for GitHub Models. You can also join our community discussions.