How we threat model
At GitHub, we spend a lot of time thinking about and building secure products—and one key facet of that is threat modeling. This practice brings security and engineering teams together to discuss systems, ultimately generating action items that improve the security of those systems. Threat modeling has improved communication between our security and engineering teams, shifted the security review process to be more proactive, and led to more reliable and more secure system designs.
What is threat modeling?
Before we dive into how we approach threat modeling at GitHub, let’s first agree on what threat modeling is. Defining the goals of the process helps everyone involved set expectations for what comes out.
At GitHub, threat modeling isn’t necessarily a specific tool or set of deliverables—it’s a process to help foster ongoing discussions between security and engineering teams around a new or existing system. A threat model is a collaborative security exercise where we evaluate and validate the design and task planning for a new or existing service. This exercise involves structured thinking about potential security vulnerabilities that could adversely affect your service.
Every threat modeling conversation should have at least the following goals:
- First, dig into the proposed or existing architecture, ensuring everyone understands how the system works. (If nothing else, now you know! While you’re at it, document it.)
- Then, holistically evaluate the entire surface area and identify the most likely points of compromise. This is the key deliverable.
- Develop mitigation strategies to be implemented for each point of compromise. Since no one has infinite resources, these should be prioritized.
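To make that deliverable concrete, here is a minimal sketch (our own illustration, not a format GitHub prescribes) of how a prioritized list of points of compromise and their mitigations could be recorded:

```python
from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3


@dataclass
class PointOfCompromise:
    """One likely point of compromise identified during the threat model."""
    component: str      # e.g. "public webhook endpoint" (hypothetical example)
    threat: str         # what an attacker could plausibly do here
    mitigation: str     # the remediation the teams agreed on
    priority: Priority  # drives ordering, since no one has infinite resources


def prioritized(findings: list[PointOfCompromise]) -> list[PointOfCompromise]:
    """Order findings so the highest-priority work surfaces first."""
    return sorted(findings, key=lambda f: f.priority.value)
```

A plain document or spreadsheet works just as well; the point is that every point of compromise leaves the room paired with a mitigation and a priority.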
What is our threat modeling process?
Decide when to threat model
At GitHub, we typically do threat modeling on a set cadence with each of the feature teams, and before the release of any new features that make major changes to the architecture. Depending on the amount of engineering taking place on a feature, you may need a faster cadence (every couple of months) or a slower one (once per year). If you already have a cadence of software review, we’ve found that folding threat modeling into those existing processes helps everyone adapt to adding a new security step. Whatever timing you choose, set guidelines and be flexible.
Build the threat model
Threat modeling is usually a collaborative exercise, so the engineering team for the product and the security team will get together to talk through the architecture and potential security concerns. Ahead of time, our security team will provide documentation and examples to the engineering teams on effective threat modeling. We typically ask each engineering team to generate a model in advance, covering a significant part of a system to review as part of a single threat modeling conversation. Setting these expectations early (and doing the homework) helps to ensure that the meeting is effective.
Though the process and discussions matter more than the specific output, at GitHub we ask the engineering team to bring a threat model developed either in Microsoft’s Threat Modeling Tool or OWASP’s Threat Dragon (both are entirely free!). These tools let teams clearly present important information for the threat model, such as APIs, trust boundaries, dependencies, datastores, and authentication mechanisms. In addition to providing consistency between teams, these files also act as important collateral to share with auditors if you need to meet security compliance requirements.
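The file formats are specific to those tools, so rather than reproduce either schema, here is a rough sketch (the system names and components are hypothetical) of the kind of information such a model captures:

```python
# Illustrative only: a lightweight stand-in for the elements a threat model
# diagram records. Threat Dragon and the Threat Modeling Tool have their own
# file formats; this just shows the information the conversation relies on.
system_model = {
    "processes": ["web frontend", "billing API"],
    "datastores": ["primary database", "audit log bucket"],
    "external_entities": ["end user", "payment provider"],
    "data_flows": [
        ("end user", "web frontend", "HTTPS"),
        ("web frontend", "billing API", "internal RPC"),
        ("billing API", "primary database", "SQL over TLS"),
    ],
    "trust_boundaries": ["internet / frontend", "frontend / internal network"],
    "authentication": {
        "web frontend": "session cookie",
        "billing API": "mutual TLS",
    },
}
```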
Review the threat model
When it’s time to review the threat model, we typically schedule a one-hour session, broken into two parts. The first five to 10 minutes of every session are spent with the engineering team walking through the design of the system being reviewed. This time ensures that everyone is on the same page and helps clarify any ambiguities in the previously prepared threat model—including which technologies are being used and any design quirks. After everyone is aligned, we can jump right into the security discussion.
At this point in the security discussion, we’ve found it helpful to use a framework to methodically address different vulnerability classes. One of the methodologies we frequently use is Microsoft’s STRIDE model—Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege—a mnemonic covering common attack vectors that may be found in an application. Stepping through these classes enables the security team to look at the system holistically and ensure that the most likely threats are covered. Following STRIDE fills the remainder of the hour as the conversation expands and more parts of the system get unpacked.
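As a rough sketch of how STRIDE structures that walk-through (the components and prompts below are simplified, hypothetical examples, not our internal checklist), pairing every component with every category yields a list of questions to step through during the hour:

```python
from itertools import product

# The six STRIDE categories, each with a simplified prompt.
STRIDE = {
    "Spoofing": "Can anyone pretend to be this component or its callers?",
    "Tampering": "Can data handled here be modified in transit or at rest?",
    "Repudiation": "Could an action here be denied later? Is it logged?",
    "Information Disclosure": "Can data leak to someone who shouldn't see it?",
    "Denial of Service": "Can this be overwhelmed or made unavailable?",
    "Elevation of Privilege": "Can a caller gain rights it shouldn't have?",
}


def stride_checklist(components: list[str]) -> list[tuple[str, str, str]]:
    """Pair every component with every STRIDE category and its prompt."""
    return [(component, category, prompt)
            for component, (category, prompt) in product(components, STRIDE.items())]


for component, category, prompt in stride_checklist(["webhook receiver", "job queue"]):
    print(f"[{category}] {component}: {prompt}")
```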
As potential security vulnerabilities or design flaws are found, the security team notes them along with potential remediations, so we can give the engineering team a list of changes to consider after the session. We found that as threat modeling became more common across GitHub, teams learned to engage the security team while developing the system—which is better, as it gets ahead of potential issues and addresses major architectural changes before hands hit keyboards. This in turn helped the security teams deploy better defense in depth through secure design principles.
As the session draws to an end, we recount the key findings and improvements the teams should make and generate tracking items for them. A summary is distributed to the participants, and anyone is free to ask follow-up questions to better flesh out the action items.
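Where those tracking items live is up to you. As one hedged example of automating that last step, the sketch below files each action item as an issue through GitHub’s public REST API (the repository name, label, and token handling are placeholders, not our internal tooling):

```python
import os

import requests

# Hypothetical repository and label; point these at wherever your team tracks work.
REPO = "example-org/example-service"
TOKEN = os.environ["GITHUB_TOKEN"]


def file_action_item(title: str, body: str) -> int:
    """Create a tracking issue for one threat-model action item."""
    response = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "body": body, "labels": ["threat-model"]},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["number"]


issue_number = file_action_item(
    "Threat model: rate-limit the webhook receiver",
    "Identified during the threat model review; see the session summary for context.",
)
print(f"Filed issue #{issue_number}")
```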
What we learned from threat modeling
As I mentioned above, we’ve found a number of benefits to threat modeling that drive an organization’s security-minded culture. In our case, we’ve seen three notable benefits:
System-wide security improvements
The simple act of sitting down and walking through the system holistically provided a great opportunity for everyone to learn about the underlying system. Knowledge sharing between the teams deepened everyone’s understanding of the systems in the environment. It also contributed to the development of mitigation strategies for issues discovered during the threat model review, which improved the security posture of the entire organization.
Proactive design guidance
As threat modeling matured, we worked to “shift left,” or do it earlier in the development process and set up sessions before a product ships. Often, security teams need to respond to issues as they are discovered. But as an organization shifts the timing of threat modeling to be earlier in the process, security teams can, at times, help guide engineering teams from system designs that might present future vulnerabilities.
Improved communication between security and engineering teams
This benefit has been an incredible shift for engineering and security teams. Since it brings them together on a regular cadence, threat modeling has helped build connections between these teams that make it easier to reach out—in both directions.
Bringing it all together
To recap, we’ve covered a number of key activities that helped improve threat modeling at GitHub. Here’s a quick summary of our process:
- Merge threat modeling processes into existing development lifecycle processes and automate if possible
- Ensure everyone comes prepared
- Use a defined mechanism to methodically cover risk areas, such as STRIDE
- Leave with specific action items
- Follow up
By following these steps, we found that we could improve the security of the system being developed, proactively engage with engineering teams (before shipping!), and forge strong connections between the teams involved in developing software.
How do you threat model?
We’d love to hear from you about your best (or worst) threat modeling practices! Tweet us your ideas with the #GitHubThreatModeling hashtag.