How to leverage security frameworks and libraries for secure code
In this post, I’ll discuss how to apply OWASP Proactive Control C2: Leverage security frameworks and libraries.
This is part three of GitHub Security Lab’s series on the OWASP Top 10 Proactive Controls, where I provide practical guidance for OSS developers and maintainers on improving their security posture.
You should normally avoid implementing security-related controls from scratch unless you really know what you’re doing: doing it reliably and securely requires deep knowledge and expertise. Attackers targeting your application or library will use techniques that abuse even tiny flaws in your code. Even if you get it right for 99% of abuse cases and known payloads, the remaining 1% can leave your application as vulnerable as if you had implemented no protection at all.
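To see why this is risky, here is a deliberately naive sketch in Ruby (the method name and payloads are made up for illustration): a hand-rolled filter that strips the best-known payload and nothing else.

```ruby
# A hypothetical, hand-rolled "XSS filter". Do NOT ship anything like this.
# It removes <script> tags, which handles the most obvious payload...
def naive_sanitize(input)
  input.gsub(%r{<\s*/?\s*script[^>]*>}i, "")
end

naive_sanitize('<script>alert(1)</script>')    # => "alert(1)" (looks "safe")
naive_sanitize('<img src=x onerror=alert(1)>') # => unchanged: still executes in a browser
```

The second payload sails straight through, which is exactly the kind of gap a well-exercised encoding library is built to avoid.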
Fortunately, there’s an alternative. In this third post, I’ll discuss OWASP Proactive Control C2: Leverage security frameworks and libraries. Here’s the official description:
“Secure coding libraries and software frameworks with embedded security help software developers guard against security-related design and implementation flaws. A developer writing an application from scratch might not have sufficient knowledge, time, or budget to properly implement or maintain security features. Leveraging security frameworks helps accomplish security goals more efficiently and accurately.” – OWASP Proactive Control C2
This proactive control is about using libraries and frameworks to implement security features. That includes not just the authentication and authorization of your application, but also libraries that protect against common types of attacks.
Leveraging security-focused libraries and frameworks lets you benefit from established security expertise and from improvements driven by past failures, which makes your code sounder and harder to bypass. A security-focused library with a large user base across many applications will likely be exercised far more thoroughly than a purpose-written solution for a single application.
How to decide what libraries to use
I’ve seen many developers pick a security-related library just because it was among the first hits in a web search. That only tells you the library was popular at some point; it doesn’t tell you whether it’s still a good fit for your current use case. When choosing a security library or framework, don’t just use the first one that pops up in your search results. Look carefully at how actively it is maintained and how complete it is by evaluating the following characteristics:
1. Is the package broadly used?
A package that is broadly used has likely been audited by multiple members of the community, and so it carries a higher standard of trust than one that is not broadly used. You can gauge usage by the number of stars on GitHub or the number of downloads on the package manager’s website.
2. Does the package have a good reputation?
“Reputation” is a subjective metric that can be garnered from past experience, online sentiment, or rankings from other reviewers in the industry (such as Snyk and Microsoft).
3. Is the package actively maintained?
A package that is not actively maintained cannot be trusted to push out security fixes in a timely manner.
While this measure is subjective, a good standard is whether issues and pull requests against the dependency have been closed or merged within the last nine months.
4. Is the package mature?
An immature project may indicate that certain functionality, especially security-related functionality, isn’t implemented yet. While an immature project is not always a hard no, this is a criterion you should consider when evaluating different packages. Some questions you may want to ask:
- Does the package appear to be mature, with most features implemented, and with the most current RFCs/specs implemented?
- Or, is it experimental? Are there many TODOs in the codebase?
- Also, bonus points if the package has a clear roadmap. If it does, how far along in the roadmap is it?
5. Has the package had good security stewardship historically?
Good security stewardship means that a package has maintainers that fix security issues in a timely manner and notify users of the issues in vulnerable versions.
To ascertain this, look through issues and/or security advisories on the source repository to see whether maintainers are actively closing security findings and publishing them somewhere users will see them. Security advisories surface in a variety of places, such as package-manager tooling (npm audit, Dependabot, etc.) as well as vulnerability tracking services like MITRE’s CVE list and the GitHub Advisory Database.
Another useful signal is whether GitHub Dependabot reports many open findings against the source repository, and how quickly the maintainers address them.
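As a rough, illustrative shortcut for gathering some of these signals (not a substitute for actually reading the issues and advisories), the sketch below uses only the Ruby standard library and the public GitHub REST API. Unauthenticated requests are rate-limited, so treat it as a starting point.

```ruby
require "net/http"
require "json"

# Fetch a few rough health signals for a GitHub-hosted package from the
# public GitHub REST API (GET /repos/{owner}/{repo}).
def repo_health(owner, repo)
  uri = URI("https://api.github.com/repos/#{owner}/#{repo}")
  data = JSON.parse(Net::HTTP.get(uri))
  {
    stars:       data["stargazers_count"],  # rough proxy for "broadly used"
    last_push:   data["pushed_at"],         # rough proxy for "actively maintained"
    open_issues: data["open_issues_count"], # worth reading, not just counting
    archived:    data["archived"]           # an archived repo will not ship security fixes
  }
end

p repo_health("rails", "rails")
```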
Web frameworks with security batteries included
When talking about security libraries and frameworks, I’m not just referring to authentication/authorization libraries or libraries that perform security-specific tasks, such as Cross-Site Scripting (XSS) output encoding or input validation. It is also critical to choose the right framework to build your applications on top of. When dealing with XSS issues, for example, you can address them in a number of ways. You could use an encoding library, as mentioned above, to encode a given set of user-controlled data before it gets rendered in an HTTP response. This is a correct approach, and if applied consistently and correctly, it can help you mitigate XSS issues. However, you may forget to apply the encoding to some piece of data that wasn’t obviously user-controlled, or you could apply the wrong encoding. Nested contexts, such as `<a onclick="<user-controlled data here>" href="/foo">link</a>` (that is, a JavaScript event context within an HTML attribute context), can be tricky: should you encode for the JS context? For the HTML attribute context? Get one of these cases wrong and your application is vulnerable, no matter how many other places you applied the encoding correctly.
A much better approach is to use a web framework that transparently handles the data encoding for you. For example, if you are using Ruby on Rails, you can use the ActionView helpers (e.g., `link_to`), which automatically encode the data for you. Is XSS still possible in those frameworks? Yes: you can still use directives such as Rails’ `html_safe` or `raw` to disable the automatic context-aware encoding. However, it is much simpler to review and audit the places where these dangerous directives are used than to review every place where user-controlled data is rendered into an HTTP response. The same goes for SQL injection: an ORM-based approach makes all queries safe except for those that use native queries and therefore bypass the ORM abstraction layer, and those cases, again, can be audited and reviewed more carefully and easily. Whenever possible, potentially insecure behavior should be an explicit and carefully considered choice, not a default.
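To make the contrast concrete, here is a minimal sketch (assuming a Rails application with a `User` model and a `params` hash; the query itself is illustrative). The first calls stay inside ActiveRecord’s safe defaults, while the last one opts out of them and becomes exactly the kind of call site you want to be able to grep for and audit.

```ruby
# In an ERB view, <%= user.name %> is auto-escaped by Rails, while
# <%= raw user.name %> or <%= user.name.html_safe %> opts out of that escaping.

# Inside the ORM: ActiveRecord parameterizes the value, so these stay safe
# even when params[:name] is attacker-controlled.
User.where(name: params[:name])
User.where("name = ?", params[:name])

# Outside the ORM: interpolating into raw SQL bypasses that protection.
ActiveRecord::Base.connection.execute(
  "SELECT * FROM users WHERE name = '#{params[:name]}'" # SQL injection
)
```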
Secure defaults + invariant enforcement
When writing secure code, you should define a clear list of things that should always be true for your code. This allows you to clearly define the security barriers of your codebase, as well as which behaviors you need to check and test for on new commits. We refer to this process as defining secure invariants, that is, the security properties that must always hold for your code. For example, for the cases mentioned above, we could define the following invariants:
- Rails’ `html_safe` or `raw` should not be used
- ActiveRecord’s `ActiveRecord::Base.connection.execute` should not be used
These invariants can also be defined as part of your security requirements and can include things such as:
- The framework’s CSRF protection should not be disabled
- Standard security headers (such as `X-Frame-Options`, `X-XSS-Protection`, and `X-Content-Type-Options`) are set
- Communication must occur over TLS
- `dangerouslySetInnerHTML` should not be used in React applications
- …
Then, you can use lightweight static analysis, such as semantic CodeQL queries built directly into your CI/CD pipeline (for example, with GitHub code scanning), to enforce those invariants and look for anti-patterns.
CodeQL is a very fast and flexible static analysis solution that can operate in different parts of the SAST spectrum. When needed, it can perform full program analysis by running complex data flow plus control flow queries, but on the other end of the spectrum, it can also run simple and blazingly fast semantic queries that will match against the AST of your application.
For example, a CodeQL query to detect uses of `html_safe` or `raw` in Ruby on Rails code would look like the following:
```ql
import ruby

from MethodCall call
where call.getMethodName() = ["html_safe", "raw"]
select call
```
And a query to look for calls to `execute` on `ActiveRecord::Base.connection` (which returns an `ActiveRecord::ConnectionAdapters` adapter) would look like:
```ql
import ruby
import codeql.ruby.ApiGraphs

select API::root()
    .getMember("ActiveRecord")
    .getMember("Base")
    .getReturn("connection")
    .getReturn("execute")
```
Running these queries on every commit or pull request will promptly raise an alarm 🚨 if any of your defined security invariants are violated.
Note: these queries are just examples and do not account for every Rails method that could lead to XSS or every ActiveRecord method that could lead to SQL injection.
Security headers
Most web frameworks with secure defaults add security headers to every response, giving you another layer of security almost for free. For example, Ruby on Rails adds the following:
| Header | Value |
| --- | --- |
| `X-Frame-Options` | `SAMEORIGIN` |
| `X-XSS-Protection` | `1; mode=block` |
| `X-Content-Type-Options` | `nosniff` |
Note that `X-XSS-Protection` is questionable: it enables client-side XSS filters that have proven so problematic in the past as to be nearly useless, or even abusable to enable other attacks. I would suggest not enabling this header and relying on server-side, context-aware output encoding instead.
If you are not using such a framework, make sure to at least set `X-Frame-Options` to `DENY` or `SAMEORIGIN` to prevent UI redress attacks, and `X-Content-Type-Options` to `nosniff` to prevent MIME sniffing and hotlinking.
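If you are on a bare Rack stack (or a similarly minimal Ruby setup), a tiny middleware like the sketch below (the class name and defaults are illustrative) can set those two headers on every response; in Rails, you would instead adjust `config.action_dispatch.default_headers`.

```ruby
# Minimal Rack middleware that adds baseline security headers to every response.
class SecurityHeaders
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers["X-Frame-Options"]        ||= "DENY"    # prevent UI redress / framing
    headers["X-Content-Type-Options"] ||= "nosniff" # prevent MIME sniffing
    [status, headers, body]
  end
end

# In config.ru:
#   use SecurityHeaders
#   run MyApp
```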
You should definitely take the time to read more about security headers to better understand their meaning, use cases, and implications. Ultimately, security headers are just another layer in your defense-in-depth approach to secure development; as with most security barriers, you should not rely on them exclusively.
How to use these libraries
It’s a good idea to encapsulate these libraries behind your own API wrappers. This way, consistent use of the library is easier to enforce, and the library can be swapped out if needed (for example, if it becomes unmaintained).
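As a minimal sketch of that pattern (the module name is made up, and `ERB::Util.html_escape` simply stands in for whichever encoding library you pick), the wrapper below is the only place that knows which library does the work, so enforcing its use and replacing it later both come down to a single file.

```ruby
require "erb"

# Application-wide wrapper around whichever HTML-encoding library we use today.
# Callers depend on MyApp::Encoder, not on the library behind it.
module MyApp
  module Encoder
    def self.html(text)
      ERB::Util.html_escape(text)
    end
  end
end

MyApp::Encoder.html('<script>alert(1)</script>')
# => "&lt;script&gt;alert(1)&lt;/script&gt;"
```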
How to keep them up-to-date
As explained before, small flaws in the implementation of these security controls or features can render them completely ineffective. Using an OSS library with a history of many reported vulnerabilities and associated CVEs may seem like a bad idea at first. However, you can also think of that history as expertise gained: fixing those issues not only addressed the specific bugs, but also gave the developers a better understanding of how to prevent similar variants in their codebase going forward. Instead of focusing on the number of past CVEs, focus on how those CVEs were actually addressed and whether the same types of vulnerabilities keep reappearing. Security-focused projects will consistently check for known vulnerability patterns in their code to prevent those patterns from being reintroduced. They will also communicate transparently about any issues that were fixed, via public security advisories and CVE IDs, to ensure downstream users are aware of them. If a project has a history of not marking its security updates as security-related, that can be a sign of a less-than-mature security posture, and you may want to consider alternatives for your security-critical dependencies.
You will want to make sure that you keep your security dependencies up-to-date using some form of software composition analysis (SCA) tool, such as GitHub Dependabot.
Wrap up
In summary, this OWASP proactive control is mostly about not reinventing the wheel. Use well-established frameworks that come with “security batteries” included and, if needed, complement them with existing proven components and libraries wherever possible. Encapsulate those libraries in your own classes, and use static analysis to find violations of your security requirement invariants.
Stay secure!
Follow GitHub Security Lab on Twitter for the latest in security research.