GPT-5 mini, OpenAI’s faster, more cost-efficient variant of GPT-5, is now rolling out in public preview in GitHub Copilot.

Designed to provide quick, accurate responses across a variety of coding tasks, GPT-5 mini is optimized for precise prompts and well-defined work. It delivers lower latency at a lower cost while maintaining strong performance on focused coding and editing tasks. For more details about the model’s capabilities and recommended usage patterns, see OpenAI’s model documentation.

Availability in GitHub Copilot

GPT-5 mini is rolling out to all Copilot plans, including Copilot Free. You can access the model in GitHub Copilot Chat on github.com, in Visual Studio Code through the chat model picker, and in GitHub Mobile on iOS and Android. Support for other IDEs is coming soon.

GPT-5 mini will be offered as an included model and will not consume premium requests on paid plans. For more details, see our documentation on model multipliers.

Enabling access

Copilot Enterprise and Business administrators must opt in by enabling the new GPT-5 mini policy in Copilot settings. Once enabled, users in that organization will see GPT-5 mini (Preview) in the model picker in VS Code, on github.com, and in GitHub Mobile.

To learn more about the models available in Copilot, see our documentation on models and get started with Copilot today.

Share your feedback

Have feedback or questions? Join the community discussion to share feedback and tips.