
AppSec is harder than you think. Here’s how AI can help.

In practice, shifting left has been more about shifting the burden rather than the ability. But AI is bringing its promise closer to reality. Here’s how.


Find vulnerabilities earlier, ship software faster. These are the good intentions behind the drive to shift application security workflows from security teams to developers: a “shift left” move in the software development lifecycle. But does it really work?

In practice, shifting left has been more about shifting the burden than shifting the ability. Most AppSec tools, even those that claim to be “developer-first,” require a certain degree of security expertise to deploy and use. By interrupting the coding process and taking developers out of their flow, shifting left can exacerbate the very problems it was meant to address. Here’s a sobering statistic: 81% of developers admit to shipping software with vulnerabilities just to meet a deadline. When human nature and business pressures align, good intentions can hardly compete.

“Most developers are not trained security experts,” says GitHub’s Chief Security Officer and SVP of Engineering Mike Hanley. “But with AI, we’re radically transforming the traditional definition of shift left by bringing security directly to developers as they’re introducing their ideas to code, fundamentally preventing vulnerabilities from ever being written.”

In this post, we’ll explore the challenges in application security today and where AI can make a significant impact in keeping software secure from day one.


Shifting the burden, not the benefit

Director of Field Services for GitHub Advanced Security, Nick Liffen, explored this challenge with an audience of developers at GitHub Universe. First, he asked everyone in the room to stand. Then, he gave his audience a choice: stay standing if developers at their organizations loved remediating vulnerabilities, or resume their seats if they didn’t. As you might expect, almost everyone sat down. Coders prefer to write code.

While development teams sprint to ship new features that solve customer problems, open new markets, and leapfrog competitors, security teams are charged with protecting an organization’s data and reputation. Go fast, but reduce risk. It’s hard enough for an entire organization to do both, but is it reasonable to ask that of a development team or of a single developer?

Let’s dig a little deeper into those pain points.

  • When developers and security teams clash, everyone loses. The goal for most developers is to ship great products by getting code into production as fast as they can. The goal for the security team is to make sure their organization’s software is as secure as possible by addressing high-severity and high-impact vulnerabilities quickly, then prioritizing other vulnerabilities as they arise. These dueling incentives often lead to friction between the two teams, as developers want to get their code out and security engineers want to ensure that code changes are fully vetted before they’re shipped.
  • Shifting left shifts responsibility, but not expertise. Integrating a security tool into a developer’s workflow doesn’t always mean developers will use it (or even know how to use it). In fact, an abundance of false positive alerts can make it less likely that developers pay attention to potential vulnerabilities. This means missed deadlines, unfixed vulnerabilities, and a fruitless blame game that estranges developers and security teams.
  • Unaffordable context switching is inherent in the security process. Oftentimes, developers don’t have the knowledge or full context to fix the vulnerabilities and bugs they’re receiving. This means they have to leave their environment, go to Google, and break their flow to figure out what to do. This is the type of context-switching that creates a terrible developer experience, which slows down overall productivity and velocity across engineering teams.

Security at the expense of usability comes at the expense of security.

- Avi Douglen, OWASP Board of Directors

With constantly evolving threats and not enough time or documentation for training to manage them—not to mention the monotony of addressing vulnerabilities, as opposed to the creativity and freedom of writing code—it’s no wonder so many developers sat down after Liffen’s question.

When it comes to organizations as a whole, there are two pivotal challenges. The first is that applications are the number one attack vector for malicious actors. The second is that security breaches are getting more and more expensive, growing 15% over the last three years, according to a recent IBM report. That same report says enterprises that use security AI and automation extensively can save $1.76 million when compared to organizations that don’t.

In order for your enterprise to innovate at scale and save money, developers can’t be forced to choose between security and velocity. And increasingly with AI, they don’t have to.

AI to the rescue?

AI tools like GitHub Copilot already make security more developer-centric with code suggestions and context around vulnerabilities within the developer workflow. Even though we’re at the beginning of the journey with AI, new products, tools, and platforms are already helping developers write more secure code from the start, and making issues easier to address as they come up.

“Developers need the ability to proactively secure their code right where it’s created—instead of testing for and remediating vulnerabilities after the fact,” says GitHub’s Director of Product Marketing, Laura Paine. “Embedded security is critical to delivering secure applications.”

Let’s take a look at where AI can help embed security within the developer workflow.

Improved detection

Because almost 80% of code in today’s applications relies on open source packages, software composition analysis (SCA) tools need to scan and understand those third-party packages. When vendors don’t supply threat-modeling information for your packages, developers have to build it out manually. But with a tool like CodeQL used in tandem with AI tools, developers can automate the threat-modeling process, saving time and energy and helping ensure compliance with industry standards.
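At its core, an SCA check compares a project’s declared dependencies against a database of known-vulnerable versions. Here’s a minimal sketch of that idea in Python; the package names, versions, and advisory data are all invented for illustration and are not a real advisory feed.

```python
# Illustrative sketch of what an SCA check does: compare a project's
# declared dependencies against a (hypothetical) advisory database.
# All package names and versions below are made up.

ADVISORIES = {
    # package -> set of versions known to be vulnerable
    "leftpadx": {"1.0.0", "1.0.1"},
    "fastyaml": {"2.3.0"},
}

def scan_dependencies(deps):
    """Return (package, version) pairs that match a known advisory."""
    findings = []
    for package, version in deps.items():
        if version in ADVISORIES.get(package, set()):
            findings.append((package, version))
    return findings

deps = {"leftpadx": "1.0.1", "fastyaml": "2.4.0", "requestz": "0.9.0"}
print(scan_dependencies(deps))  # [('leftpadx', '1.0.1')]
```

Real SCA tools add a lot on top of this lookup, such as version-range matching, transitive dependency resolution, and reachability analysis, but the matching step is the same shape.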

When it comes to secrets, for instance, at GitHub we recently integrated AI into our secret scanning technology to help detect unstructured and human-generated secrets like passwords and credentials. Plus, if you enable secret scanning’s AI-powered features, GitHub can generate custom patterns for you, and you can test these patterns before saving to make sure they work. Once secrets are detected, security managers and repository owners can view the alerts, and if they determine that an alert is legitimate, they can work with developers to resolve the issue. This saves developers time, makes collaboration more seamless, and helps ensure your secrets stay safe.
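A custom secret-scanning pattern is, under the hood, a regular expression describing your organization’s token format. The sketch below shows the idea in Python; the `acme_live_` prefix and 16-hex-character format are invented for illustration, not a real token scheme or the actual patterns secret scanning generates.

```python
import re

# Hypothetical custom pattern in the spirit of secret scanning's
# generated patterns: match tokens shaped like "acme_live_" followed
# by 16 hex characters. The prefix and format are invented.
CUSTOM_PATTERN = re.compile(r"\bacme_live_[0-9a-f]{16}\b")

def find_secrets(text):
    """Return all substrings of `text` that match the custom pattern."""
    return CUSTOM_PATTERN.findall(text)

sample = 'config = {"token": "acme_live_0123456789abcdef"}'
print(find_secrets(sample))  # ['acme_live_0123456789abcdef']
```

Testing a pattern against known-good and known-bad samples before saving it, as the UI encourages, is what keeps a custom pattern from flooding developers with false positives.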

Found means fixed

“Picture this,” says GitHub’s VP of Product Management, Asha Chakrabarty. “You receive a security alert, but instead of just getting guidance on how to do the fix yourself, you get an AI-generated fix right in your pull request. And this isn’t just any fix, but a precise actionable suggestion that helps you resolve the issue faster and prevent new vulnerabilities from creeping into your codebase.”

Developers can try this kind of AI-powered remediation with the public beta of code scanning autofix. What makes it so powerful—aside from the fact that it supports over 90% of the queries we have—is that it provides both the findings and the fix, so developers can remediate vulnerabilities as they code. This means faster fixes, less context-switching, and more productivity. Not to mention more secure code.
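To make the “findings plus fix” idea concrete, here’s the kind of before-and-after change an AI remediation tool might suggest for a classic SQL injection finding. This is an illustrative sketch, not output from code scanning autofix; the schema and function names are invented.

```python
import sqlite3

# Illustrative before/after for a SQL injection finding. The "unsafe"
# version interpolates user input into the query string; the "safe"
# version uses a parameterized query so input is bound as data.

def find_user_unsafe(conn, username):
    # Vulnerable: user input becomes part of the SQL text.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: the driver binds the parameter, so input can't alter the SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# The injection payload defeats the WHERE clause in the unsafe version...
print(find_user_unsafe(conn, "x' OR '1'='1"))  # [(1,), (2,)]
# ...but is treated as a literal string in the fixed version.
print(find_user_safe(conn, "x' OR '1'='1"))    # []
```

Delivering a diff like this directly in the pull request is what removes the context switch: the developer reviews and merges a fix instead of researching one.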

Doing application security at scale with AI

We often hear that AI will democratize software development by making it easier for more people to write and understand code. The truth is, it already is. With AI-powered tools, more developers can write secure code faster.

That democratization starts with learning, and AI can provide developers of all levels with AppSec knowledge. For example, when a developer gets a security alert, they can use AI coding tools in their IDE to figure out the issue (instead of having to interrupt their flow and search online for the answer). They can also learn how a particular security issue might arise by using AI tools to generate vulnerability examples tailored to their codebase.

And new AI-powered AppSec tools aren’t just helping developers—they’re helping security professionals, too. There are new products, for example, that offer overviews of repository and project security with actionable insights for administrators and simple ways to assign work across engineering and security teams. The more knowledge that can be shared and the more data your teams can study, the easier it’ll be to address security findings and fine-tune your AppSec program as your organization grows.

At GitHub, our newly released security overview dashboards make it simple for everyone from developers to administrators to get a clear view into their organization’s security efforts, from historical trend analysis to your overall mean time to remediation. With these dashboards, you can easily gauge your security posture and filter data to find trends in dates, repositories, and more. If executives want to know how effective your remediation efforts are, you can tell them with just a few clicks.
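Mean time to remediation is a simple metric under the hood: the average time between an alert being opened and being closed, over resolved alerts. Here’s a minimal sketch of the calculation; the alert records are fabricated for illustration and don’t reflect the dashboard’s actual data model.

```python
from datetime import datetime

# Sketch of the "mean time to remediation" metric a security dashboard
# reports: average days between an alert opening and closing.
# The alert records below are fabricated for illustration.

alerts = [
    {"opened": datetime(2024, 1, 1), "closed": datetime(2024, 1, 3)},
    {"opened": datetime(2024, 1, 2), "closed": datetime(2024, 1, 8)},
    {"opened": datetime(2024, 1, 5), "closed": None},  # still open
]

def mean_time_to_remediation(alerts):
    """Average days from open to close, counting resolved alerts only."""
    resolved = [a for a in alerts if a["closed"] is not None]
    total_days = sum((a["closed"] - a["opened"]).days for a in resolved)
    return total_days / len(resolved)

print(mean_time_to_remediation(alerts))  # 4.0
```

Tracking this number over time, and sliced by repository or severity, is what turns raw alert data into the historical trend view executives ask about.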

Where we go from here

Over the next five years, it’s projected that 500 million more applications will be written. That’s more applications than developers have created in the last 40 years combined. With all this immense growth, security is only going to get harder if we keep forcing a shift-left mentality without addressing its critical pain points. Improving alert relevancy, speeding up remediation, and reducing friction are going to be key in keeping these applications safe, and AI will help us get there.

What we really need to do to make developers love (or at least like) security is to meet them where they are and provide them with tools they want to use. Liffen’s goal: “Hopefully security will become so unconscious and frictionless in the developer workflow, security will just be the way developers work.”

Ready to harness our newly launched AI-powered security tools? Learn more or get started now.
