The Llama 4 herd is now generally available in GitHub Models


The latest AI models from Meta, Llama-4-Scout-17B-16E-Instruct and Llama-4-Maverick-17B-128E-Instruct-FP8, are now available on GitHub Models.

Llama-4-Scout-17B is a 17B parameter Mixture-of-Experts (MoE) model optimized for tasks like summarization, personalization, and reasoning. Its ability to handle extensive context makes it well-suited for tasks that require complex and detailed reasoning.

Llama-4-Maverick-17B is a 17B parameter Mixture-of-Experts (MoE) model designed for high-quality chat, creative writing, and precise image analysis. With its conversational fine-tuning and support for text and image understanding, Maverick is ideal for building AI assistants and applications.

Try, compare, and implement these models in your code for free in the playground (Llama-4-Scout-17B-16E-Instruct and Llama-4-Maverick-17B-128E-Instruct-FP8) or through the GitHub API.
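For example, you can call one of the new models from Python. This is a minimal sketch: the endpoint URL, model identifier format, and token requirements here are assumptions based on the OpenAI-compatible chat completions shape that GitHub Models exposes, so check the docs for the exact values before using it.

```python
import json
import os
import urllib.request

# Assumed GitHub Models inference endpoint (verify against the docs).
GITHUB_MODELS_URL = "https://models.github.ai/inference/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for GitHub Models."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(model: str, prompt: str) -> str:
    """Send the request; requires a GitHub token with models access
    in the GITHUB_TOKEN environment variable."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        GITHUB_MODELS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Example (model ID format is an assumption):
# ask("meta/Llama-4-Scout-17B-16E-Instruct", "Summarize this changelog.")
```

Because the same request shape works for both models, switching between Scout and Maverick is just a matter of changing the `model` string.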

To learn more about GitHub Models, check out the docs. You can also join our community discussions.

GitHub Codespaces has introduced a new agentic AI feature: you can now open a codespace with VS Code's Copilot agent mode enabled, directly from a GitHub issue. With a single click, you can go from issue to implementation!

When you're in a GitHub issue, the right-hand side of the view now displays a Code with Copilot Agent Mode button in the Development section. Clicking this button initializes a new codespace, opens it in a new tab, and enables VS Code's Copilot agent mode, using the issue body as context. Copilot then gets to work on the issue, thoroughly analyzing the codebase and considering dependencies to suggest appropriate file changes. You can then work with Copilot to fine-tune your code and make modifications as required.

VS Code agent mode in Codespaces is in public preview, and we'll be iterating on the experience over the coming months. Stay tuned for updates!


Gemini 2.5 Pro is now available to all GitHub Copilot customers. The latest Gemini model is Google's most advanced model for complex tasks: it shows strong reasoning and coding capabilities, and leads on common coding, math, and science benchmarks.


Get started today!

Copilot Pro/Pro+ users

You can start using the new Gemini 2.5 Pro model today through the model selectors in Copilot Chat in VS Code and immersive chat on github.com.

Copilot Business or Enterprise users

Copilot Business and Enterprise organization administrators will need to grant access to Gemini 2.5 Pro in Copilot through a new policy in Copilot settings. Once enabled, users will see the model selector in VS Code and in chat on github.com. You can verify the model's availability by checking individual Copilot settings and confirming that the policy for Gemini 2.5 Pro is set to enabled.

Share your feedback

Join the Community discussion to share feedback and tips.

For additional information, check out the docs on Gemini 2.5 Pro in Copilot.

Learn more about the models available in Copilot in our documentation on models and get started with Copilot today.
