Copilot Chat and pull request summary generation now use GPT-4o, bringing the performance of OpenAI’s latest flagship model to all developers.
Copilot Chat is available in Visual Studio, VS Code, JetBrains IDEs, GitHub Mobile apps, and GitHub.com.
To use the new GPT-4o model in your IDE, make sure you are running at least the minimum required version of the Copilot Chat extension for your editor.
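If you use VS Code, one quick way to confirm which extension version you have installed is the `code` CLI. The snippet below is a minimal, illustrative sketch only: it assumes the `code` command is on your PATH and that the extension ID is `GitHub.copilot-chat` (check the listing in your own editor to confirm).

```python
# Minimal sketch: look up the installed Copilot Chat extension version via the VS Code CLI.
# Assumptions: the `code` CLI is on PATH; the extension ID is "GitHub.copilot-chat".
import subprocess

def copilot_chat_version() -> str | None:
    """Return the installed Copilot Chat version, or None if it is not installed."""
    output = subprocess.run(
        ["code", "--list-extensions", "--show-versions"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line is formatted as "publisher.extension@version".
    for line in output.splitlines():
        name, _, version = line.partition("@")
        if name.lower() == "github.copilot-chat":
            return version
    return None

if __name__ == "__main__":
    version = copilot_chat_version()
    print(f"Copilot Chat version: {version or 'not installed'}")
```

Other IDEs list installed plugin versions in their own extension or plugin managers.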
What this means for Copilot users
With this upgrade to GPT-4o, Copilot users will experience the following benefits:
- Faster response times – up to 55% faster TTFT (time to first token)
- More accurate and reliable Copilot Chat responses – user satisfaction increased by 60% in our testing
Commitment to quality
Throughout the upgrade we maintained our commitment to quality, safety, and security. Here’s what that entailed:
- Offline and online evaluation: We performed rigorous offline and online testing to ensure the model brings tangible benefits to users. This involved thorough benchmarking and running simulations of real-world software development scenarios to validate the improved performance and accuracy of GPT-4o.
- Red teaming: To preemptively address any potential safety issues, we conducted extensive red teaming exercises. These tests challenged the model to ensure it meets our high standards for safety and reliability in diverse coding environments.
We can’t wait to see what you create with the new GPT-4o-powered Copilot!
Share your feedback and join the discussion in the GitHub Community.
Happy coding!