Windows arm64 hosted runners now available in public preview

Now in public preview, Windows arm64 hosted runners are available for free in public repositories. These runners come with a Windows 11 Desktop image, fully equipped with the tooling you need to get started running your workflows quickly. Following the release of Linux arm64 hosted runners in January, this extends arm64 support on Windows to the open source community. These four-vCPU runners provide a power-efficient compute layer for your Windows workloads, and Arm-native developers can now build, test, and deploy entirely within the arm64 architecture, without the need for virtualization, in their Actions runs.

How to use the runners

To use the arm64 hosted runners, add the following label to your workflows in public repositories:

  • windows-11-arm

Please note that this label will not work in private repositories—the workflow will fail if you add it. All runs in public repositories will adhere to our standard runner usage limits, with maximum concurrencies based on your plan type. While the arm64 runners are in public preview, you may experience longer queue times during peak usage hours.
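For example, a minimal workflow job targeting the new runner might look like this (the workflow name, checkout step, and echo command are illustrative, not prescribed by this announcement):

```yaml
# Illustrative workflow: names and steps are examples, not requirements.
name: build-on-arm64
on: push
jobs:
  build:
    runs-on: windows-11-arm   # the new public-preview label
    steps:
      - uses: actions/checkout@v4
      - name: Show processor architecture
        run: echo $env:PROCESSOR_ARCHITECTURE   # PowerShell, the default shell on Windows runners
```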

Images for arm64 larger runners

In partnership with Arm, there is now a Windows 11 desktop arm64 image with preinstalled tools available for all GitHub runner sizes, including both the new free offering and our existing arm64 larger runners. To use the new image on larger runners, you can create a new runner and select the Microsoft Windows 11 Desktop by Arm Limited image in the Images console.

To view the list of installed software, give feedback on the image, or report issues, visit the partner-runner-images repository.

Get started today!

To get started building Windows workloads on arm64 for free, simply add the new label to the runs-on syntax in your public Actions workflow file. For more information on arm64 runners and how to use them, see our documentation and join the conversation in the Community discussion.

GPT-4.1 release in GitHub Copilot and GitHub Models

OpenAI’s latest model, GPT-4.1, is now available in GitHub Copilot and GitHub Models, bringing OpenAI’s newest model to your coding workflow. This model outperforms GPT-4o across the board, with major gains in coding, instruction following, and long-context understanding. It has a larger context window and features a refreshed knowledge cutoff of June 2024.

OpenAI has optimized GPT-4.1 for real-world use based on direct developer feedback in areas including frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, and consistent tool usage. This model is a strong default choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning.

Copilot

OpenAI GPT-4.1 is rolling out for all Copilot plans, including Copilot Free. You can access it through the model picker in Visual Studio Code and in Copilot Chat on github.com. To accelerate your workflow, whether you’re debugging, refactoring, modernizing, testing, or just getting started, select “GPT-4.1 (Preview)” to begin using it.

Enabling access

Copilot Enterprise administrators will need to enable access to GPT-4.1 through a new policy in Copilot settings. As an administrator, you can verify availability by checking your individual Copilot settings and confirming the policy for GPT-4.1 is set to enabled. Once enabled, users will see GPT-4.1 in the Copilot Chat model selector in VS Code and on github.com.

To learn more about the models available in Copilot, see our documentation on models and get started with Copilot today.

GitHub Models

GitHub Models users can now harness the power of GPT-4.1 to enhance their AI applications and projects. In the GitHub Models playground, you can experiment with sample prompts, refine your ideas, and iterate as you build. You can also try it alongside other models including those from Cohere, DeepSeek, Meta, and Microsoft.

To learn more about GitHub Models, check out the GitHub Models documentation.
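Beyond the playground, you can call the model from code. The sketch below builds a chat-completions request with only the standard library; the endpoint URL, the `openai/gpt-4.1` model id, and the use of a `GITHUB_TOKEN` environment variable are assumptions based on the GitHub Models API conventions, so verify them against the documentation before relying on them:

```python
import json
import os
import urllib.request

# Assumed GitHub Models inference endpoint -- confirm in the docs.
ENDPOINT = "https://models.github.ai/inference/chat/completions"

def build_request(prompt, model="openai/gpt-4.1"):
    """Build a chat-completions payload for the GitHub Models API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, token=None):
    """Send the prompt and return the model's reply text."""
    token = token or os.environ["GITHUB_TOKEN"]  # a token with Models access
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Separating payload construction from the network call keeps the request shape easy to test and to reuse with other models in the catalog.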

Share your feedback

Join the Community discussion to share feedback and tips.


Llama 4 release on GitHub Models

The latest AI models from Meta, Llama-4-Scout-17B-16E-Instruct and Llama-4-Maverick-17B-128E-Instruct-FP8, are now available on GitHub Models.

Llama-4-Scout-17B is a 17B parameter Mixture-of-Experts (MoE) model optimized for tasks like summarization, personalization, and reasoning. Its ability to handle extensive context makes it well-suited for tasks that require complex and detailed reasoning.

Llama-4-Maverick-17B is a 17B parameter Mixture-of-Experts (MoE) model designed for high-quality chat, creative writing, and precise image analysis. With its conversational fine-tuning and support for text and image understanding, Maverick is ideal for creating AI assistants and applications.

Try, compare, and implement these models in your code for free in the playground (Llama-4-Scout-17B-16E-Instruct and Llama-4-Maverick-17B-128E-Instruct-FP8) or through the GitHub API.
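As a sketch of comparing the two variants programmatically, the snippet below sends the same prompt to each model through the chat-completions endpoint; the endpoint URL and the `meta/...` model ids follow GitHub Models naming conventions but are assumptions, so check them in the playground first:

```python
import json
import os
import urllib.request

# Assumed endpoint and model ids for GitHub Models -- verify in the playground.
ENDPOINT = "https://models.github.ai/inference/chat/completions"
MODELS = [
    "meta/Llama-4-Scout-17B-16E-Instruct",
    "meta/Llama-4-Maverick-17B-128E-Instruct-FP8",
]

def compare(prompt, token=None):
    """Send the same prompt to each Llama 4 model and collect the replies."""
    token = token or os.environ["GITHUB_TOKEN"]
    replies = {}
    for model in MODELS:
        body = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode()
        req = urllib.request.Request(
            ENDPOINT,
            data=body,
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            replies[model] = json.load(resp)["choices"][0]["message"]["content"]
    return replies
```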

To learn more about GitHub Models, check out the docs. You can also join our community discussions.
