How to define security requirements for your OSS project
Defining your security requirements is the most important proactive control you can implement for your project. Here’s how.
This is part two of GitHub Security Lab’s series on the OWASP Top 10 Proactive Controls, where I provide practical guidance for OSS developers and maintainers on improving your security posture.
Defining your security requirements is the most important proactive control you can implement for your project. This prompts you to establish a base standard for your project to comply with and helps you get into a security mindset even before writing a single line of code.
“Security requirements are derived from industry standards, applicable laws, and a history of past vulnerabilities.” – OWASP C1
If you’re writing software as part of a bigger project or organization, project management and security teams might define security requirements for you, so you probably won’t need to worry about them. But if you’re the one in charge of your code from start to finish, then read on!
Are you still with me? You’re most likely working on open source projects on your own or with the help of other OSS contributors. Since most projects start small, it’s tempting to skip the planning process and jump right into an IDE. Yet taking time to plan can save you a lot of pain in the long run. You’re less likely to miss essential steps, to perform steps out of order, or to put them off until they become more costly to implement.
The thing is, we often associate standards and controls with work that is hard and boring, and well, that’s not cool, right? However, standards can help you more than you think, especially in producing robust and secure code, which in turn can save you time down the line.
Use the OWASP ASVS to identify the right security requirements for your project
For many project maintainers, identifying specific standards you should comply with (or uncovering individual security requirements for your application) can be daunting. Fortunately, OWASP comes to the rescue. You don’t have to create a custom approach to security from the ground up for every application. OWASP has developed a community-driven collection of security requirements in the form of an open standard. Using this standard makes it a lot easier for you to identify which security controls and best practices apply to your project.
The OWASP Application Security Verification Standard (ASVS) is a catalog of security requirements and verification criteria that you can use as a source of detailed requirements for yourself or your development team, no matter their size.
The aim of the ASVS is to show you what a secure modern application looks like and to take some of the ambiguity out of what you need to do to make your application secure.
Security requirements are categorized into 14 different domains based on a shared higher-order security function. But don’t panic! For most projects, you won’t need to pay attention to all 14 domains:
- V1: Architecture, Design, and Threat Modeling Requirements
- V2: Authentication Verification Requirements
- V3: Session Management Verification Requirements
- V4: Access Control Verification Requirements
- V5: Validation, Sanitization and Encoding Verification Requirements
- V6: Stored Cryptography Verification Requirements
- V7: Error Handling and Logging Verification Requirements
- V8: Data Protection Verification Requirements
- V9: Communications Verification Requirements
- V10: Malicious Code Verification Requirements
- V11: Business Logic Verification Requirements
- V12: File and Resources Verification Requirements
- V13: API and Web Service Verification Requirements
- V14: Configuration Verification Requirements
Filter your list based on domain relevance and the risk level of your project.
Each of the 14 domains contains a collection of requirements that represent the best practices for that category, drafted as verifiable statements. For example, from V5 (Validation, Sanitization, and Encoding Verification Requirements): “Verify that URL redirects and forwards only allow known destinations, or show a warning when redirecting to potentially untrusted content.” Across all 14 domains, there are more than 200 requirements, which can be daunting for a small-to-medium-sized project. Yet you can use the domains as a blueprint to create a Secure Coding Checklist specific to your project.
Each domain is further divided into three levels, where each level increases the number of controls required for your project:
- Level 1: For low assurance levels, and a good starting point for most projects that are not handling sensitive data.
- Level 2: For applications that contain sensitive data that requires protection.
- Level 3: For applications that perform high-value transactions, contain sensitive medical data, or any application that requires the highest level of trust. Such applications could include the control systems of power plants responsible for delivering energy to millions, or of waste treatment plants. Level 3 is the highest level and, as such, is not needed for most applications.
This tiering makes it easier to find the right requirements for your project and enables you to start small and grow your requirements alongside the scope of your project.
Together, domains and risk levels allow you to filter out requirements based on your project characteristics in a straightforward way.
Example scenario: Identifying security requirements for a Node.js library that performs URL parsing
Let’s walk through a concrete example: a small Node.js library that performs URL parsing. You can immediately strike out any domains that don’t apply to the library at hand. In this case, you can strike out V2, V3, V4, V6, V8, V9, V11, V12, V13 and V14, which reduces the initial number of more than 200 requirements to only 89. Now, if you decide to start with risk level 1, these 89 requirements can be further reduced to 30, which is a much more manageable number. Let’s see what kind of requirements we’ll find in the selected domains.
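For the example library, that filtering leaves V1, V5, V7, and V10 at Level 1. One lightweight way to keep track of the result is a small checklist committed alongside the code. The sketch below is a hypothetical structure of my own, not an official ASVS format, and the abbreviated domain titles are just labels:

```typescript
// Hypothetical secure-coding checklist for the URL-parsing library.
// Domain IDs follow ASVS 4.x; the individual requirement IDs you track
// would come from your own filtering of the standard.
interface ChecklistEntry {
  domain: string;   // ASVS domain, e.g. "V5"
  title: string;    // human-readable domain title
  level: 1 | 2 | 3; // ASVS level chosen for this project
  done: boolean;    // whether the project currently satisfies this domain
}

const checklist: ChecklistEntry[] = [
  { domain: "V1", title: "Architecture, Design and Threat Modeling", level: 1, done: false },
  { domain: "V5", title: "Validation, Sanitization and Encoding", level: 1, done: false },
  { domain: "V7", title: "Error Handling and Logging", level: 1, done: false },
  { domain: "V10", title: "Malicious Code", level: 1, done: false },
];

// Example: list the domains still left to address.
const open = checklist.filter((entry) => !entry.done).map((entry) => entry.domain);
console.log(`Open ASVS domains: ${open.join(", ")}`);
```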
V1: Architecture, Design and Threat Modeling Requirements
This domain covers the primary aspects of any sound security architecture: availability, confidentiality, processing integrity, non-repudiation, and privacy. In order to identify which requirements are applicable to your application, you need to think about how your library could be used. Assume that your library will be used to parse user-controlled, and therefore untrusted, URLs. What are the possible threats and misuses? What can go wrong if your library fails to identify URL parts, such as the host? The process of identifying these threats is called threat modeling.
Threat modeling is a complex discipline that is out of scope for this blog post, but at the highest levels, it tries to answer the following four questions:
- What are you working on?
- What can go wrong? (What’s the worst that could happen?)
- What are you going to do about it?
- Did you do a good enough job?
You can read more in the Threat Modeling Manifesto, or learn about specific methodologies, such as STRIDE.
V5: Validation, Sanitization, and Encoding Verification Requirements
Since you’re parsing a URL, you need to understand the URL specifications and carefully identify the different URL components, as well as which metacharacters could break out of them. As an example, your library might expose a function to craft a URL out of the most basic components, such as scheme, user, password, host, port, path, and query parameters. Now, let’s say that an application using your library sets a hardcoded hostname (`goodhost`) but allows users to specify the authority component (username and password). What would be the effect of a user specifying a username such as `badhost#`? Should your library encode `#` in any special way? Appendix A of RFC 3986 specifies that the only allowed characters for the authority component are:
- a-z, A-Z
- 0-9
- `-`
- . _ ~ ! $ & ' ( ) * + , ; = :
- any percent-encoded character

If your library does not correctly encode the username and password for the particular authority component context, you may end up not encoding `#` and allowing a user to effectively change the meaning of the final URL (`https://badhost#@goodhost`).
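To make this concrete, here’s a minimal sketch using Node’s built-in WHATWG `URL` parser; the two builder functions are hypothetical helpers, not part of any real library. It shows how an unencoded `#` in the username silently changes which host the final URL points to, and how percent-encoding the userinfo prevents that:

```typescript
// Hypothetical helpers for illustration only.

// Naive: interpolates the username/password without encoding them.
function buildUrlNaive(username: string, password: string, host: string): string {
  return `https://${username}:${password}@${host}/`;
}

// Safer: percent-encodes the userinfo so metacharacters such as `#`
// cannot break out of their component and change the URL's structure.
function buildUrlEncoded(username: string, password: string, host: string): string {
  return `https://${encodeURIComponent(username)}:${encodeURIComponent(password)}@${host}/`;
}

const naive = buildUrlNaive("badhost#", "secret", "goodhost");
console.log(new URL(naive).host); // "badhost" -- the `#` starts a fragment and swallows the real host

const encoded = buildUrlEncoded("badhost#", "secret", "goodhost");
console.log(new URL(encoded).host); // "goodhost" -- the username is now "badhost%23"
```

Note that `encodeURIComponent` is used here only to illustrate the principle; a real URL library would need to encode each component according to its own allowed character set from RFC 3986.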
V7: Error Handling and Logging Verification Requirements
Errors when parsing the URL should not be silently ignored, since silently recovering from them can change the meaning of the URL.
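As a sketch (with a hypothetical function name, not an API from any particular library), the parsing entry point can fail loudly rather than hand back a partial result that callers might silently trust:

```typescript
// Hypothetical entry point for illustration: throw on invalid input instead of
// returning null/undefined or a half-populated result object.
function parseUrlStrict(input: string): URL {
  try {
    return new URL(input);
  } catch {
    // Fail loudly, but avoid echoing the raw input, which may contain
    // credentials or other sensitive data you don't want in logs.
    throw new TypeError("parseUrlStrict: input is not a valid absolute URL");
  }
}

parseUrlStrict("https://example.com/path"); // ok
parseUrlStrict("http//missing-colon");      // throws instead of guessing
```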
V10: Malicious Code Verification Requirements
This domain is about malicious code that may be introduced into your project. Take into account that even if your project starts off as a simple library that only you use, it can quickly gain traction. As such, you’ll probably get contributions from external developers, and your project can become a link in a larger supply chain attack. You can account for these potential attacks by:
- Implementing branch protection in your repository (a minimal sketch follows this list).
- Requiring verified commits.
- Securing your CI/CD workflows.
- Reviewing every external contribution for potential vulnerabilities or code smells. This can be partially automated with SAST tools integrated in your workflow.
- Aiming for at least two independent code reviews for commits coming from new contributors to the project.
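As one way to cover the first and last points above, here’s a minimal sketch that enables branch protection with two required reviews through the GitHub REST API. The owner, repository name, and token handling are placeholders; you can achieve the same thing through the repository settings UI, and your exact protection rules will depend on your project:

```typescript
// Minimal sketch: enable branch protection on `main` via the GitHub REST API.
// OWNER/REPO and the token source are placeholders -- adapt to your project.
// Requires Node 18+ (global fetch) and a token with repository administration rights.
const OWNER = "your-org";
const REPO = "your-url-parser";

async function protectMainBranch(token: string): Promise<void> {
  const res = await fetch(
    `https://api.github.com/repos/${OWNER}/${REPO}/branches/main/protection`,
    {
      method: "PUT",
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({
        required_status_checks: { strict: true, contexts: [] }, // require CI to pass
        enforce_admins: true,                                   // no exceptions for admins
        required_pull_request_reviews: {
          required_approving_review_count: 2,                   // two independent reviews
        },
        restrictions: null,                                     // no push restrictions
      }),
    },
  );
  if (!res.ok) {
    throw new Error(`Branch protection request failed: ${res.status}`);
  }
}
```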
Wrap up
Identifying security requirements doesn’t need to be a daunting task. The OWASP ASVS offers a curated catalog of security requirements from which you can build your own list. By following this model, you no longer have to lie awake at night worrying that you’ve overlooked something essential.
Once these requirements are identified and set, you will need to be able to test that they are satisfied. In upcoming blog posts, we’ll look at examples of how static or dynamic analysis can be used to verify that these requirements are satisfied. In the next post, I’ll talk about security libraries and frameworks, along with the importance of setting secure defaults. Until then, stay secure!
Follow GitHub Security Lab on Twitter for the latest in security research.