Advancing responsible practices for open source AI
Outcomes from the Partnership on AI and GitHub workshop.

Today, the Partnership on AI (PAI) published a report, Risk Mitigation Strategies for the Open Foundation Model Value Chain. The report provides guidance for actors building, hosting, adapting, and serving AI that relies on open source and other weights-available foundation models. It is an important step forward for responsible practices in the open source AI value chain.
The report is based on a workshop that GitHub recently co-hosted with PAI, as part of our work to support a vibrant and responsible open source ecosystem. Developers build and share open source components at every level of the AI stack on GitHub, amounting to some 1.6 million repositories. These projects range from foundational frameworks like PyTorch, to agent orchestration software like LangChain, to models like Grok, to responsible AI tooling like AI Verify. Our platform and open data efforts work to make this innovation more accessible and understandable to developers, researchers, and policymakers alike. We evaluate and periodically update our platform policies to encourage responsible development, and we recently joined the Munich Tech Accord to address AI risks in this year’s elections. We also work to educate policymakers on the practices, risks, and benefits of open source AI, including in the United States to inform implementation of the Biden Administration’s Executive Order and in the EU to secure an improved AI Act.
Reports like Risk Mitigation Strategies for the Open Foundation Model Value Chain are important resources to inform policy and practice. Policymakers often have a better understanding of vertically integrated AI stacks and the governance affordances of API access than they do of open source and distributed AI collaborations. In addition to consolidating emerging best practices, the report delineates the open value chain (as pictured below) to give policymakers a clearer understanding of how roles and responsibilities are distributed in the creation of AI systems today. We look forward to continuing to support responsible open source development and informed AI policy.