
What we learned from the Security Lab’s Community Office Hours

The GitHub Security Lab provided office hours for open source projects looking to improve their security posture and reduce the risk of breach. Here’s what we learned and how you can also participate.


Earlier this year, the GitHub Security Lab kicked off an initiative to provide office hours for open source projects looking to improve their security posture and reduce the risk of breach. The initiative aligned with our mission to inspire and enable the community to secure the open source software we all depend on. It also helped to address a well-expressed need in the open source community for security expertise.

After extending an invitation to the community, we were able to connect with six open source projects, and the results were incredible! Maintainers who participated in the initiative saw several immediate benefits that drastically improved their projects' security.

For example, following our discussion, Guzzle, a widely-used PHP HTTP client with 22k stars and 2.3k forks on GitHub, reported a significant reduction in the time required to process vulnerability reports. In the past, acknowledging and confirming the bug, implementing and reviewing the fix, and notifying their user base took the Guzzle team several weeks. With the Security Lab’s help, they were able to manage five separate vulnerabilities in just a few hours!

Another team of open source maintainers was inspired by the conversation to write an article about third-party GitHub Actions. This provided an opportunity for collaboration and allowed the team to share best practices regarding permission escalation with other maintainers facing similar concerns.


They kindly reviewed our workflow configuration and collaborated with us by reviewing an article we wrote to help other people with the same doubts. Thank you very much for helping to make the whole OS ecosystem safer. It was a pleasure to work with you!

- Team, OS Ecosystems Facilitator Project

The chat helped me to identify some ‘low effort–quick wins’ in security, through code scanning, and security advisories.

- Maintainer, High-Profile Testing Framework

How the Community Office Hours worked

We first asked interested maintainers to complete a short questionnaire with more information about their projects, including any security concerns. We then matched the projects up with internal security experts from GitHub based on the topics mentioned in the questionnaire and the programming languages used.

Leading up to the conversations with the project teams, the experts were asked to familiarize themselves with each project’s codebase and prepare initial observations about its security practices. This preparation phase allowed the team to jump right into useful conversations to maximize the value for each of the maintainers.

Common patterns that we observed

1. Maintainers struggle to define their attack surface

The top concern maintainers expressed was about their ability to define their project’s attack surface. Simply put, everyone asked how we would hack them. The attack surface is a collection of areas in a project where there is a higher risk of attack or malicious activity. This includes places where user input enters the codebase or where user-controlled data can be used in a critical way, such as code execution or a file system operation.
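To make this concrete, here is a minimal sketch (hypothetical code, not taken from any of the participating projects) of one classic attack surface: a user-controlled filename that reaches a file system operation, and a mitigation that confines resolved paths to a base directory.

```python
import os

BASE_DIR = "/var/app/uploads"  # hypothetical upload directory

def read_upload_unsafe(filename: str) -> bytes:
    # Attack surface: `filename` is user-controlled and flows straight
    # into a file system operation. A value like "../../etc/passwd"
    # escapes BASE_DIR entirely (path traversal).
    with open(os.path.join(BASE_DIR, filename), "rb") as f:
        return f.read()

def resolve_upload_path(filename: str) -> str:
    # Mitigation: normalize the joined path, then verify it still lies
    # inside BASE_DIR before touching the file system.
    base = os.path.realpath(BASE_DIR)
    path = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([base, path]) != base:
        raise ValueError("path traversal attempt blocked")
    return path
```

Spotting places like `read_upload_unsafe`, where untrusted data meets a sensitive operation, is exactly what defining your attack surface means.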

Everyone should also be aware that attack surfaces can extend well beyond code. We found countless examples of attack vectors involving a project’s supply chain, confidential information, or CI/CD pipeline. Even maintainers themselves can be attack vectors through cleverly designed social engineering attacks and account takeovers.

Developers without cybersecurity knowledge may not be aware of all of the examples above. That’s why we’re here! Since every project is different, the first step is to define the most pressing attack vectors in order to identify weak points and design theoretical attacks against them. This practice is known as a threat modeling exercise. Through a threat modeling exercise, team members brainstorm ideas, present evidence of weak points, and explain how they would exploit these weaknesses.

It’s also important to understand that each attack differs in terms of impact on users, amount of effort, time required, likelihood of success, and skills needed to execute. By considering these factors, maintainers can prioritize their mitigation efforts and address the most pressing dangers through a data-driven approach. This cheat sheet from OWASP is a great starting point for any readers interested in learning more.

2. Adopting a few simple practices can significantly improve your project’s security

Some maintainers weren’t aware of simple best practices that bring huge benefits and are easy to implement. The following five practices provided the most value to our participants.

  1. Enable an additional factor (or several factors) of authentication (2FA or MFA) to safeguard against account takeover and impersonation.
  2. Activate automated code scanning in your CI/CD workflow to quickly flag bugs in your code during the software development lifecycle.
  3. Activate Dependabot to keep dependencies up to date.
  4. Publish a security policy to instruct users on how to disclose vulnerabilities in your code.
  5. Create security advisories to alert users about vulnerabilities in your code and the safest, updated version following a vulnerability disclosure.
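As a sketch of what practices 2 and 3 can look like in a repository, here is a minimal CodeQL code scanning workflow (the file names and fields follow GitHub’s documented formats; the branch name and language are placeholders you should adjust to your project):

```yaml
# .github/workflows/codeql.yml — minimal CodeQL code scanning workflow
name: "CodeQL"
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload scan results
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
        with:
          languages: javascript   # adjust to your project's languages
      - uses: github/codeql-action/analyze@v2
```

Dependabot is enabled similarly, through a `.github/dependabot.yml` file that names your package ecosystem and an update schedule.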

The advisory process was surprisingly easy. We also had a misconception that a CVE needs three days to be issued, which wasn’t true.

- Graham Campbell, Guzzle

3. Imbalance between functionality testing and security testing

The final pattern we observed was that some projects were far more focused (or solely focused) on functionality testing rather than balancing it with security testing. In one case, a project had implemented input sanitization to prevent injections, but it was never tested to ensure it was working properly. This non-functional requirement could have easily been verified through unit tests that intentionally feed in malicious inputs. Another option would have been fuzzing: a dynamic, automated security testing method that runs the program with invalid, malformed, or unexpected inputs to reveal vulnerabilities through crashes and information leakage.
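The unit-test approach can be sketched as follows (the sanitizer and payloads here are hypothetical stand-ins, not the project’s real code): alongside functional tests that check normal strings pass through, security tests assert that known-malicious inputs never survive sanitization.

```python
import html

def sanitize(user_input: str) -> str:
    # Hypothetical sanitizer (stand-in for a project's real one):
    # escape HTML metacharacters so user input cannot inject markup.
    return html.escape(user_input, quote=True)

# Security tests feed intentionally malicious inputs. A functional test
# would only confirm that ordinary strings come through unchanged; these
# check the non-functional requirement that dangerous characters are
# always neutralized.
MALICIOUS_INPUTS = [
    "<script>alert(1)</script>",
    '"><img src=x onerror=alert(1)>',
    "';DROP TABLE users;--",
]

def test_sanitizer_neutralizes_malicious_input():
    for payload in MALICIOUS_INPUTS:
        out = sanitize(payload)
        assert "<" not in out and ">" not in out and '"' not in out

test_sanitizer_neutralizes_malicious_input()
```

Running a handful of such assertions in CI costs almost nothing and catches regressions in exactly the code paths attackers target.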

If you are an open source maintainer, you can immediately improve the security of your project by following the tips above. We encourage you to also consider participating in our Community Office Hours. If you are interested, please fill out this form and we’ll get back to you!
