In celebrating the GitHub Security Lab’s one-year anniversary, we explained that we’re expanding our research focus. Why did we make this decision? It stemmed from our work with the Open Source Security Coalition (OSSC), where we worked alongside other industry partners to help secure open source software. As we came to understand the challenges shared by security researchers across the industry, and the proposals to address them, we realized there was a gap running through the coalition’s otherwise strong work: a communication gap.
This communication gap piqued our interest in socio-technical research not only as it pertains to industry at large but also to the Security Lab’s research processes. As a senior security program manager and researcher within the Security Lab, I will be working closely with open source maintainers and researchers as we investigate how to encourage and foster effective communication between the security research community and open source maintainers.
We’re interested in speaking with open source maintainers and security researchers about the security vulnerability disclosure process. We are currently interviewing a range of stakeholders in that process, including maintainers and researchers, with the goal of finding ways to improve it for everyone involved.
Security research is a sprawling field with many subsets, including secure coding, and it is not just technical but inherently social. As an example, consider the Security Lab’s own research:
The Security Lab’s research focuses predominantly on identifying vulnerabilities in open source projects and executing a four-step remediation process to address the issues. In our recently released Octoverse 2020 report, we detail this process, which includes:
- Identifying and reporting a vulnerability to open source maintainers
- Maintainers fixing the vulnerability
- Security tooling alerting end users of a security update
- Developers updating to the fixed version
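The four steps above can be sketched as a simple, ordered lifecycle. This is purely an illustrative model; the stage names and helper below are hypothetical and do not correspond to any GitHub tooling:

```python
from enum import Enum, auto
from typing import Optional

class DisclosureStage(Enum):
    """The four remediation stages described in the Octoverse 2020 report."""
    REPORTED = auto()  # researcher identifies and reports the vulnerability
    FIXED = auto()     # maintainers fix the vulnerability
    ALERTED = auto()   # security tooling alerts end users of the update
    UPDATED = auto()   # developers update to the fixed version

# Enum members preserve definition order, giving us the lifecycle sequence.
LIFECYCLE = list(DisclosureStage)

def next_stage(current: DisclosureStage) -> Optional[DisclosureStage]:
    """Return the stage that follows `current`, or None once complete."""
    i = LIFECYCLE.index(current)
    return LIFECYCLE[i + 1] if i + 1 < len(LIFECYCLE) else None
```

The point of the model is that each transition depends on a hand-off between different people: researchers to maintainers, maintainers to tooling, tooling to developers. Every hand-off is a place where communication can break down.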
While communication plays an integral role throughout this life cycle, the Security Lab is homing in on the first step: the interaction between security researchers and open source maintainers. At a fundamental level, this step hinges on effective communication between security researchers and software developers, two often-isolated communities with different perspectives that speak different languages. It is therefore natural for misalignments and misunderstandings to arise as these two groups interact. The US National Telecommunications and Information Administration’s 2016 report, for example, found that the quality of communication between a security researcher and a vendor could be a determining factor in when the researcher chose to disclose a vulnerability publicly.
We intend to build on this work and similar efforts as we begin to explore this gap and ways to encourage effective communication in the disclosure process. If you are an open source maintainer or a security researcher and you’re interested in sharing your experience with vulnerability disclosure processes with us, please reach out to us at email@example.com.
While the Security Lab is moving towards socio-technical research to complement its existing vulnerability research, we understand the importance of collective effort in this work. So we are continuing to engage with key industry stakeholders to understand their views on socio-technical research as it relates to security. Their positions underline how socio-technical research in the security space can account for social constructs, stakeholder motivations, and human factors to create a safer and healthier software ecosystem.
Jennifer Fernick, global head of research at NCC Group, shares her take on how socio-technical work directly relates to people:
“The socio-technical work that I have done with members of the GitHub Security Lab is motivated by the important reality that computer systems obey the laws of physics but are ultimately created by and for humans, and therefore are always a little bit approximate and imperfect and broken in the ways that humans, the things they engineer, and human-in-the-loop systems can be.
Considering the human element is not a foray into mere social engineering – the human aspect of security runs far deeper than that. Strong encryption questions are ones of policy and institutional weaknesses. Supply chain security problems are ones of geopolitics and economics. Usable security problems are ones of accessibility, communication, and cognitive engineering. Protecting vulnerable populations online invokes sociology, history, and social justice, to help realize that “your threat model is not my threat model.”
When you work to protect users at scale, what you are really doing is accepting the responsibility to consider the needs of billions of people who are in many ways unlike you. We must humble ourselves with the prospect of this before we can achieve meaningful online security – and the related psychological and geopolitical safety therein – at global scale.”
To help protect society against future threats, Reed Loden, chief open source security evangelist at HackerOne, explains that socio-technical research will play a key role in how security researchers partner with organizations:
“Against a backdrop of unparalleled obstacles, it is more important than ever to look into the socio-technical aspects of security research to ensure the partnership between hackers and organizations can help our connected society face evolving threats.
Today, miscommunication in the vulnerability disclosure process can lead to early disclosures, revealing unresolved security issues to the world. On the other hand, the consequences of ignoring security research out of fear aren’t measured in downtime, they’re measured in billions of dollars and reputational damage and result in CSOs being shown the door. To reap the benefits of proactive security research, it is vital to provide best practices, improve communication, and align expectations between both groups. This socio-technical research can make the internet a safer place.”
Art Manion of the CERT Coordination Center recognizes that socio-technical research will help improve the coordinated disclosure process, which may decrease the likelihood of disclosures occurring through other means:
“While coordinated vulnerability disclosure (CVD) is a response to largely technical problems, the outcomes of CVD are heavily influenced by the social constructs and motivations of stakeholders. More work is needed to make CVD a safe, effective, and attractive alternative to higher risk disclosure options.”
Securing software, particularly open source software, will require integrating technical and socio-technical research to improve both our understanding and our existing security research processes. As a first step towards improving the Security Lab’s own disclosure processes, we want to better understand the challenges and communication gaps faced by stakeholders in the vulnerability disclosure process. If you’re an open source maintainer or security researcher and you want to engage with us in this conversation, please reach out to firstname.lastname@example.org.