Helping policymakers weigh the benefits of open source AI

Policymakers are increasingly focusing on software components of AI systems, and how developers are making AI model weights available for downstream use. GitHub enables developer collaboration on innovative software projects, and we’re committed to ensuring policymakers understand developer needs when crafting AI regulation. We support AI governance that empowers developers to build more responsibly, securely, and effectively, to accelerate human progress.

GitHub submitted a filing in response to the U.S. NTIA’s request for comment on the potential risks, benefits, and policy implications of widely available model weights and of open source AI, which makes not only weights available to developers, but also code and other components under terms allowing developers to inspect, modify, (re)distribute, and use AI components for any purpose. Our submission can be found here, but there are a few important ideas we want to highlight.

Open source AI presents clear benefits

It is important to consider the myriad benefits of open source AI. Open source is a public good, designed for all to use: hobbyists, professional developers, companies, governments, and anyone looking to make an impact with code. The broad availability of open source has already generated tremendous value for society, accelerating innovation, competition, and the wide use of software and AI across the global economy. Open source AI advances the responsible development of AI systems, the use of AI in research across disciplines, developer education, and government capacity.

Evaluation and regulation should prioritize AI systems, not models

Evaluation and regulation are better focused on the full AI system and the policies governing its use, rather than on subcomponents such as AI models. Policies that focus on restricting the model are likely to inhibit beneficial use more than prevent criminal abuse. Such policies also risk missing the forest for the trees: orchestration and safety software included in AI systems can expand or constrain AI capabilities. Current evidence does not support government restrictions on sharing AI models. Policymakers should instead, irrespective of model type, prioritize AI regulation for high-risk AI systems and prepare plans to address abuse by bad actors. Security through obscurity is not a winning strategy.

The path to societal resilience is not open or closed

Governments have an important role to play in steering the technological frontier and building societal resilience that allows us to seize the benefits enabled by AI while reducing its risks. From accelerating needed AI measurement science and safety research, to supporting public education and protective measures, civic institutions are well-positioned to usher in a new era of AI governed by our values. The open availability, diversity, and diffusion of AI models can support this societal resilience and flourishing. With this in mind, GitHub looks forward to continuing policy collaboration to accelerate human progress.
