Advancing responsible practices for open source AI
Outcomes from the Partnership on AI and GitHub workshop.
Today, the Partnership on AI (PAI) published a report, Risk Mitigation Strategies for the Open Foundation Model Value Chain. The report provides guidance for actors building, hosting, adapting, and serving AI that relies on open source and other weights-available foundation models. It is an important step forward for responsible practices in the open source AI value chain.
The report is based on a workshop that GitHub recently co-hosted with PAI, as part of our work to support a vibrant and responsible open source ecosystem. Developers build and share open source components at every level of the AI stack on GitHub, amounting to some 1.6 million repositories. These projects range from foundational frameworks like PyTorch, to agent orchestration software like LangChain, to models like Grok, to responsible AI tooling like AI Verify. Our platform and open data efforts work to make this innovation more accessible and understandable to developers, researchers, and policymakers alike. We evaluate and periodically update our platform policies to encourage responsible development, and we recently joined the Munich Tech Accord to address AI risks in this year’s elections. We also work to educate policymakers on the practices, risks, and benefits of open source AI, including in the United States to inform implementation of the Biden Administration’s Executive Order and in the EU to secure an improved AI Act.
Reports like Risk Mitigation Strategies for the Open Foundation Model Value Chain are important resources to inform policy and practice. Policymakers often have a better understanding of vertically integrated AI stacks and the governance affordances of API access than they do of open source and distributed AI collaborations. In addition to beginning to consolidate best practices, the report delineates the open foundation model value chain, giving policymakers a clearer understanding of how roles and responsibilities are distributed in the creation of AI systems today. We look forward to continuing to support responsible open source development and informed AI policy.