Defining your security requirements is the most important proactive control you can implement for your project. Here's how.
This is part two of GitHub Security Lab’s series on the OWASP Top 10 Proactive Controls, where I provide practical guidance for OSS developers and maintainers on improving your security posture.
Defining your security requirements is the most important proactive control you can implement for your project. This prompts you to establish a base standard for your project to comply with and helps you get into a security mindset even before writing a single line of code.
“Security requirements are derived from industry standards, applicable laws, and a history of past vulnerabilities.”
– OWASP C1
If you’re writing software as part of a bigger project or organization, project management and security teams might define security requirements for you, so you probably won’t need to worry about them. But if you’re the one in charge of your code from start to finish, read on!
Are you still with me? You’re most likely working on open source projects on your own or with the help of other OSS contributors. Since most projects start small, it’s tempting to skip the planning process and jump right into an IDE. Yet taking time to plan can save you a lot of pain in the long run: you’re less likely to miss essential steps, perform them out of order, or defer them until they become more costly to implement.
The thing is, we often associate standards and controls with work that is hard and boring, and well, that’s not cool, right? But standards can help you more than you think, especially in producing robust and secure code, which can in turn save you time down the line.
For many project maintainers, identifying specific standards you should comply with (or uncovering individual security requirements for your application) can be daunting. Fortunately, OWASP comes to the rescue. You don’t have to create a custom approach to security from the ground up for every application. OWASP has developed a community-driven collection of security requirements in the form of an open standard. Using this standard makes it a lot easier for you to identify which security controls and best practices apply to your project.
The OWASP Application Security Verification Standard (ASVS) is a catalog of security requirements and verification criteria that you can use as a source of detailed requirements for yourself or your development team, no matter their size.
The aim of the ASVS is to show you what a secure modern application looks like and to take some of the ambiguity out of what you need to do to make it secure.
Security requirements are categorized into 14 different domains based on a shared higher-order security function. But don’t panic! For most projects, you won’t need to pay attention to all 14 domains:

- V1: Architecture, Design and Threat Modeling
- V2: Authentication
- V3: Session Management
- V4: Access Control
- V5: Validation, Sanitization and Encoding
- V6: Stored Cryptography
- V7: Error Handling and Logging
- V8: Data Protection
- V9: Communication
- V10: Malicious Code
- V11: Business Logic
- V12: Files and Resources
- V13: API and Web Service
- V14: Configuration
Each of the 14 domains contains a collection of requirements that represent the best practices for that category, drafted as verifiable statements. For example, from V5 (Validation, Sanitization, and Encoding Verification Requirements):
“Verify that URL redirects and forwards only allow known destinations, or show a warning when redirecting to potentially untrusted content.”

Across all 14 domains, there are more than 200 requirements, which is daunting for a small-to-medium-sized project. Yet you can use the domains as a blueprint to create a Secure Coding Checklist specific to your project.
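As an illustration of the redirect requirement quoted above, a minimal allowlist check might look like the sketch below. The `ALLOWED_HOSTS` set, the `safeRedirectTarget` helper, and the `example.com` origin are all illustrative assumptions, not part of ASVS:

```javascript
// Only follow redirects to known destinations (ASVS V5-style check).
const ALLOWED_HOSTS = new Set(['example.com', 'docs.example.com']);

function safeRedirectTarget(target, fallback = '/') {
  let url;
  try {
    // Resolve relative targets against our own (assumed) origin.
    url = new URL(target, 'https://example.com');
  } catch {
    return fallback; // unparsable target: don't redirect blindly
  }
  // Unknown host? Fall back instead of forwarding the user off-site.
  return ALLOWED_HOSTS.has(url.hostname) ? url.href : fallback;
}

console.log(safeRedirectTarget('https://evil.test/phish')); // '/'
console.log(safeRedirectTarget('/account')); // 'https://example.com/account'
```

Checks like this one are exactly the kind of item you would put on your project-specific Secure Coding Checklist.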
Each domain is further divided into three levels, where each level increases the number of controls required for your project:

- Level 1: the minimum bar for all applications
- Level 2: for applications that handle sensitive data (recommended for most projects)
- Level 3: for the most critical applications, such as those performing high-value transactions or handling sensitive medical data
This tiering makes it easier to find the right requirements for your project and enables you to start small and grow your requirements alongside the scope of your project.
Together, domains and risk levels allow you to filter out requirements based on your project characteristics in a straightforward way.
Let’s walk through a concrete example: a small Node.js library that performs URL parsing. You can immediately strike out any domains that don’t apply to the library at hand. In this case, you can strike out V2, V3, V4, V6, V8, V9, V11, V12, V13, and V14, which reduces the initial count of more than 200 requirements to only 89. If you then decide to start with risk level 1, these 89 requirements are further reduced to 30, which is a much more manageable number. Let’s see what kind of requirements we’ll find in the remaining domains.
This domain (V1: Architecture, Design and Threat Modeling) covers the primary aspects of any sound security architecture: availability, confidentiality, processing integrity, non-repudiation, and privacy. To identify which requirements are applicable to your application, you need to think about how your library could be used. Assume that your library will be used to parse user-controlled, and therefore untrusted, URLs. What are the possible threats and misuses? What can go wrong if your library fails to correctly identify URL parts, such as the host? The process of identifying these threats is called threat modeling.
Threat modeling is a complex discipline that is out of scope for this blog post, but at the highest level, it tries to answer the following four questions:

1. What are we working on?
2. What can go wrong?
3. What are we going to do about it?
4. Did we do a good enough job?
You can read more in the Threat Modeling Manifesto, or learn about specific methodologies, such as STRIDE.
Since you’re parsing a URL, you need to understand the URL specifications and carefully identify the different URL components, as well as which metacharacters could break out of them. For example, your library might expose a function to craft a URL out of its most basic components, such as scheme, user, password, host, port, path, and query parameters. Now, let’s say that an application using your library sets a hardcoded hostname (`goodhost`) but allows users to specify the authority component (username and password). What would be the effect of a user specifying a username such as `badhost#`? Should your library encode `#` in any special way? Appendix A of RFC 3986 specifies that the only characters allowed in the userinfo part of the authority component are `unreserved / pct-encoded / sub-delims / ":"`. If your application does not correctly encode the username and password for that context, you may end up not encoding `#`, allowing a user to effectively change the meaning of the final URL (for example, `https://badhost#@goodhost/` is parsed with `badhost` as the host and everything after the `#` as a fragment).
Errors parsing the URL should not be silently ignored, since they can change the meaning of the URL.
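The userinfo-encoding pitfall above can be sketched with Node’s built-in WHATWG `URL` parser; the `buildUrlNaive`/`buildUrlSafe` helpers and the `goodhost` hostname are hypothetical names for illustration:

```javascript
// Naively interpolating userinfo into a URL string: a crafted username
// containing '#' truncates the authority and changes the effective host.
function buildUrlNaive(user, pass, host) {
  return `https://${user}:${pass}@${host}/login`;
}

// Percent-encoding the userinfo keeps metacharacters like '#' literal,
// so the intended host survives parsing.
function buildUrlSafe(user, pass, host) {
  return `https://${encodeURIComponent(user)}:${encodeURIComponent(pass)}@${host}/login`;
}

console.log(new URL(buildUrlNaive('badhost#', 'pw', 'goodhost')).hostname);
// → 'badhost'  (attacker-controlled!)
console.log(new URL(buildUrlSafe('badhost#', 'pw', 'goodhost')).hostname);
// → 'goodhost'
```

The fix is context-aware encoding: each URL component has its own set of allowed characters, so encode values for the specific component they land in rather than for the URL as a whole.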
This domain (V10: Malicious Code) is about malicious code that may be introduced into your project. Keep in mind that even if your project starts off as a simple library that only you use, it can quickly gain traction. As it does, you’ll probably get contributions from external developers, and your project can also become a link in a larger supply chain attack. You need to account for potential attack vectors such as a malicious pull request or a compromised dependency.
Identifying security requirements doesn’t need to be a daunting task. OWASP ASVS offers a list of curated security requirements for you to choose from and create your own list. By following this model, you no longer have to lie awake at night worrying that you’ve overlooked something essential.
Once these requirements are identified and set, you’ll need to be able to test that they are satisfied. In future blog posts, we’ll see some examples of how static or dynamic analysis can be used to verify that the requirements are met. In the next post, I’ll talk about security libraries and frameworks, along with the importance of setting secure defaults. Until then, stay secure!
Follow GitHub Security Lab on Twitter for the latest in security research.