Under the hood: Exploring the AI models powering GitHub Copilot
Learn how GitHub Copilot’s evolving models and infrastructure center developer choice and power agentic workflows.

Since its initial launch in 2021, GitHub Copilot has evolved a lot — and so have the AI models that power it.
When we first announced GitHub Copilot as a technical preview, OpenAI hadn’t yet launched ChatGPT. Today, AI dominates headlines and workflows alike. Amid this rapid change, our focus has remained the same: help developers stay in flow and get more done.
That focus has meant re-evaluating which models power Copilot and building agentic workflows into its core experience.
In this article, we’ll look at the models that drive different parts of GitHub Copilot and the powerful infrastructure that supports Copilot’s agentic capabilities. We’ll also discuss how model selection works across various features, like agent mode, code completions, and chat.
Now, let’s take a look under the hood. ✨
From Codex to multi-model: The evolution of GitHub Copilot
When GitHub Copilot launched in 2021, it was powered by a single model: Codex, a descendant of GPT-3.
At the time, Codex was a revelation. Able to understand and generate code in the IDE with surprising fluency, it helped prove that AI could be a valuable tool for developers and pointed toward a future where AI could become a true coding companion.
Since then, Copilot has transitioned away from Codex and now defaults to the latest frontier models, while also giving developers access to their choice of advanced models.
Where it once lived firmly in the IDE as an extension to help developers with autocomplete and code generation, Copilot has evolved to become part of the GitHub platform available across developer workflows.
Copilot can answer questions, generate tests, debug code, get assigned an issue, generate a pull request, assist with code reviews, analyze codebases, and even fix security vulnerabilities, among other things.
Throughout all of these changes, we have focused on helping developers accomplish more, do less boilerplate work, stay in the flow, focus on the big picture, and ship higher-quality code faster.
Why offer multiple models?
Moving Copilot to a multi-model architecture wasn’t just about keeping up with AI advancements. It was about allowing developers to choose their preferred LLM for the task at hand, giving them flexibility in a rapidly changing environment.
Different models excel at different tasks, and by integrating a variety of them, GitHub Copilot can now deliver more tailored, powerful experiences through features like these:
- Baseline intelligence: GitHub Copilot now defaults to GPT-4.1 across chat, agent mode, and code completions. Optimized for speed, reasoning, and context handling, GPT-4.1 is tuned for developer workflows and supports more than 30 programming languages.
- Frontier model access: In Pro+, Business, and Enterprise tiers, developers can choose from a wide range of advanced models via the model picker, including:
  - Anthropic: Claude Sonnet 3.5, Claude Sonnet 3.7, Claude Sonnet 3.7 Thinking, Claude Sonnet 4, Claude Opus 4 (preview), Claude Opus 4.1 (preview)
  - OpenAI: GPT-4.1, GPT-5 (preview), GPT-5 mini (preview), o3 (preview), o3-mini, o4-mini (preview)
  - Google: Gemini 2.5 Pro
Each option offers different trade-offs between speed, reasoning depth, and multimodal capabilities.
Why developer choice matters in agentic workflows
Because Copilot supports multiple models, developers have the autonomy to choose exactly how they build, whether they’re prioritizing speed, precision, or creativity. This flexibility lets developers tailor their experience based on their preferences — and these developer experience (DevEx) improvements translate into real productivity gains.
Copilot’s agentic capabilities mean that:
- Developers no longer need to switch editors or even leave GitHub. Copilot is GitHub native, so it operates directly inside your IDE and in GitHub, which makes it easy to delegate tasks without breaking your flow.
- Developers can work exactly how they prefer, whether that means automating tasks with Copilot, accepting suggested fixes, or stepping back and letting the Copilot coding agent take over.
- Copilot can operate with full context into your repositories, analyze and index your codebases, respect branch protections, and fit seamlessly into your existing review cycles.
- Copilot handles the busywork — from triaging comments to patching vulnerabilities or chasing down cross-repo blockers — so developers can stay focused on what matters most.
Agentic workflows help reduce complexity and prioritize developer choice at every step, leading to higher-quality code and fewer to-dos. This empowers developers to work the way they want: faster, safer, and with more confidence.
Delivering real-world impact through better DevEx
As AI continues to evolve, its role in shaping the developer experience will only grow. From reducing context switching to automating repetitive tasks, AI tools like Copilot are increasingly becoming a “second brain” for developers.
Having a choice of models lets developers customize exactly how they work. This lets them build with confidence, drive even more impact, and find greater satisfaction in their work.
How model selection works in Copilot
GitHub Copilot is more than just one single AI model. It’s a dynamic platform that uses intelligence to match the right model with the right task. This flexibility is central to delivering a seamless DevEx, and it’s guided by a deep understanding of how developers work, what they need, and when they need it.
Choosing the right model for the job
Development tasks vary in complexity and context. That’s why GitHub Copilot empowers users to select the model that best suits their needs, especially in chat and agent mode.
Whether you’re optimizing for speed, reasoning depth, or multimodal input, there’s a model for you:
| Model | Best for |
| --- | --- |
| o4-mini (OpenAI) | Speed, low-latency completions |
| GPT-4.1 (OpenAI) | Balanced performance and multimodal support |
| GPT-5 mini (OpenAI) | Lightweight reasoning |
| GPT-5 (OpenAI) | High-end reasoning for complex tasks |
| o3 (OpenAI) | Advanced planning and multi-step reasoning |
| Claude Sonnet 3.5 | Reliable, everyday coding tasks |
| Claude Sonnet 3.7 | Deeper reasoning for large codebases |
| Claude Sonnet 3.7 Thinking | Long-horizon, structured problem-solving |
| Claude Sonnet 4 | Higher reasoning depth |
| Claude Opus 4 | Premium reasoning power |
| Claude Opus 4.1 | Most advanced Anthropic option |
| Gemini 2.5 Pro | Advanced multimodal reasoning |
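To make the trade-offs in the table concrete, here is a toy sketch of matching a task to a preferred model. The mapping keys and the `pick_model` function are purely illustrative assumptions for this article, not Copilot’s actual selection mechanism — in Copilot, you choose the model yourself via the model picker, with GPT-4.1 as the default.

```python
# Hypothetical task-to-model lookup based on the table above.
# This is an illustration of the "right model for the job" idea,
# not Copilot internals.
PREFERRED_MODEL = {
    "low_latency_completion": "o4-mini (OpenAI)",
    "balanced_multimodal": "GPT-4.1 (OpenAI)",
    "lightweight_reasoning": "GPT-5 mini (OpenAI)",
    "complex_reasoning": "GPT-5 (OpenAI)",
    "multi_step_planning": "o3 (OpenAI)",
    "everyday_coding": "Claude Sonnet 3.5",
    "large_codebase_reasoning": "Claude Sonnet 3.7",
    "multimodal_reasoning": "Gemini 2.5 Pro",
}

def pick_model(task: str, default: str = "GPT-4.1 (OpenAI)") -> str:
    """Return the preferred model for a task, falling back to the default.

    GPT-4.1 is used as the fallback because it is Copilot's baseline model.
    """
    return PREFERRED_MODEL.get(task, default)

print(pick_model("complex_reasoning"))  # GPT-5 (OpenAI)
print(pick_model("unknown_task"))       # GPT-4.1 (OpenAI)
```

The fallback mirrors how Copilot behaves when you don’t pick a model: the baseline (GPT-4.1) handles the request.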
Take this with you
As the world of AI keeps evolving, so will the models that power GitHub Copilot. We’re committed to continuously refining and updating our AI infrastructure to provide you with the best possible developer experience.
We encourage you to explore all the different models available within Copilot and discover how they can enhance your coding journey. Happy building!