Safe harbors for software collaboration, part 2


The modern internet was built on a legal framework of safe harbors for user-generated content. These safe harbors are widely credited with having enabled global internet innovation by protecting online platforms from liability for user content they host. Despite this, they are increasingly at risk around the world as regulators look to curb objectionable content online and otherwise regulate some platforms.

In a previous post, we described the history of these safe harbors in the U.S., which were among the earliest examples and embody a common approach that many other countries follow today. This time, we’ll share some current examples of policy and regulatory developments from around the world where developers have an opportunity to help shape the outcome.

Although the specific proposals we mention below vary, they pose some of the same risks to developers’ ability to collaborate online. Most proposals are geared toward traditional social media platforms that have a more general purpose than software development platforms. Overbroad regulation that fails to recognize the special role of software development platforms can harm software development in three ways:

  • Despite the much lower risk profile that software collaboration platforms have for most kinds of objectionable content, regulation of online platforms rarely differentiates software collaboration platforms from other kinds of user-generated content platforms.
  • Narrowing safe harbors leads to less access to software code, which not only undermines developers’ ability to build and ship software, but also developers’ rights to free expression, assembly, and association.
  • Preventing access also endangers the broader software ecosystem, because the unavailability of a single project can break every other project or solution that depends on it.

Given these risks to online collaborative development, we need to ensure that as policymakers work to address important societal challenges like disinformation, they consider the lower risk profile of software hosting platforms and ensure that changes to safe harbors do not unintentionally harm software collaboration.

We support regulation that makes the digital ecosystem safer, healthier, more inclusive, and more constructive—while also respecting fundamental human rights. With that in mind, regulation should be calibrated to the different levels and types of risks on the wide variety of platforms that involve user-generated content.

How you can contribute

Developers’ voices are important as numerous countries consider policy reforms that may impact software collaboration. Technically informed perspectives are particularly valuable to the policymakers behind these proposals. Policymakers rarely hear from people who are familiar with the specifics of distributed software collaboration, but when they do, we have seen that they welcome the technical perspectives developers can offer. Real impact can come out of this type of dialogue: developers have already been successful in shaping new laws to protect collaboration on software platforms. If you live somewhere that is considering intermediary liability reform, please consider reaching out to your elected representatives and encouraging them to support rules that do not disproportionately or unnecessarily affect developers’ rights to innovation, collaboration, and opportunity.

To help ensure that regulations do not harm collaborative software development, we need to work together to educate policymakers:

  1. Not every problem related to online content is best addressed by focusing on platform liability. Many problems have root causes in offline activity that need attention. Where a policy solution could address the problem more directly than platform liability, focusing on that underlying issue is more effective.
  2. Where platform liability is relevant, regulatory solutions should avoid one-size-fits-all solutions and take into account a platform’s relative risk profile for that particular kind of content.

For example, in some cases, like the EU’s Copyright Directive, software collaboration platforms are irrelevant to the stated purpose of the legislation (in that case because software developers who post their code publicly want it to be shared, and neither they nor the hosting platforms make money when it’s shared). In other cases, objectionable content may be possible, but unlikely, on a platform, given the purpose of the platform and the potential for content to spread virally. For instance, disinformation poses a higher risk on general purpose consumer media platforms than on code collaboration platforms. It therefore often makes more sense to focus reform on the kinds of platforms where that content is more likely to be an issue.

Let us know if you’d like help looking at a particular proposal and thinking about talking points that could help your policymakers better understand the implications for you as a developer. You can get in touch with our Developer Policy team on Twitter and via email.

Current platform policy proposals

Australia: Online Safety Bill

Australia introduced the Online Safety Bill 2021 in February, after a consultation period on the bill from December 2020 to February 2021. The legislation could pass in the coming weeks. More from Digital Rights Watch.

Brazil: Internet Freedom, Responsibility, and Transparency Act

Last June, Brazil’s Senate passed the Internet Freedom, Responsibility, and Transparency Act, known as the “fake news” bill. In July, the Senate sent it to the House (Câmara dos Deputados), where it remains under consideration. More from Derechos Digitales.

EU: Digital Services Act

The EU’s main intermediary liability protection, the E-Commerce Directive, which dates back 20 years, is now under review in the form of the proposed Digital Services Act (DSA). GitHub contributed to OpenForum Europe’s submissions on last year’s DSA consultation and on the Commission’s proposal this year.

EU Member States: Laws on hosting unlawful content and hate speech

Several European countries have passed or proposed laws that narrow platform liability for hate speech and other unlawful content.

Austria: The Austrian Parliament released the draft Hate on the Net law last September with a public consultation that ended October 15. More from Epicenter.Works.

France: France’s Constitutional Court struck down part of the law to combat hateful content on the internet last June. The bill’s sponsor, Laetitia Avia, considered how to revive it, particularly after another terror attack near Paris in October. Its provisions are now being taken up as part of the bill against separatism, which was presented to the Council of Ministers in December and adopted by the National Assembly in February.

Germany: Germany’s parliament is seeking to amend the Network Enforcement Act (NetzDG) but Germany’s Federal President has voiced concerns about new requirements to hand over sensitive user information and has not yet signed the new bill.

South Africa: Films and Publications Act

South Africa opened a public comment period last August on proposed amendments to the Films and Publications Amendment Act 11 of 2019, which has been dubbed the “Internet Censorship” bill. More from Randles and Lexology.

U.K.: Online Safety Bill

The United Kingdom is expected to publish an Online Safety Bill this summer, stemming from an Online Harms White Paper it published in 2019 and public consultations. More from Global Partners Digital.

U.S.

We mentioned two U.S. laws in our previous post: Section 512 of the Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act (CDA). As the United States inaugurated a new president and a new Congress in 2021, we’re monitoring new bills and watching how developments unfold.

Recent developments in platform regulation

Here are some recently passed laws to be aware of:

India: Information Technology Act

In February, India published new rules for online intermediaries. These rules, Intermediary Guidelines and Digital Media Ethics Code, are already in effect, with additional obligations going into force at the end of May for online platforms with five million or more registered users in India (which qualifies them as “significant social media intermediaries”). GitHub joined with Mozilla and Wikipedia in 2019 and Mozilla and Cloudflare in 2020 in voicing concerns and making recommendations to the Ministry of Electronics and IT.

Pakistan: Citizens Protection (Against Online Harm) Rules 2020

Pakistan passed the Citizens Protection (Against Online Harm) Rules in January 2020 and published a revised version, the Removal and Blocking of Unlawful Online Content, in October 2020. More from Digital Rights Foundation on the initial and revised rules.

Singapore: Protection from Online Falsehoods and Manipulation Bill

In 2019, Singapore passed a “fake news” law, called the Protection from Online Falsehoods and Manipulation Bill. More from Human Rights Watch.

Turkey: Social Media Law, 2020

Turkey’s Social Media Law, formally an amendment to the Law on the Regulation of Publications on the Internet and Combating Crimes Committed by Means of Such Publications, took effect in October after its parliament passed the law in July. More from the Electronic Frontier Foundation.

In order to develop effective and technologically-informed regulation, policymakers need to hear your perspectives. Developers can help policymakers better understand the impacts that new rules can have on software development and developer collaboration. As you think about how you can get involved, ping our Developer Policy team on Twitter or via email. We’re happy to help!


Follow GitHub Policy on Twitter for updates about the laws and regulations that impact developers.