Industry Report: The True Costs of False Positives in Software Security (2023)

False positives: A wild goose chase (Photo: Chandler Cruttenden)

You’ve been told that there’s a vulnerability in your software application. You’ve spent hours trying to find the problem. You’ve double-checked the alert. You’ve double-checked the software. No patches are available. 

Finally, after ruling out every other option, you’re forced to conclude that there was never a vulnerability in the first place. You’ve just wasted your whole afternoon trying to fix a problem that wasn’t there. 

This is the problem with false positives in software security.

False positives are false alarms—and each one can send you on a wild goose chase that costs your organization time, money, and emotional overhead. 

That’s why it’s important to invest in security tools that reduce false positives. The more accurate your security software, the more easily you can mitigate the very real costs that false positives represent to your business.

We wanted to understand how false positives affect security teams, so in August 2022, we surveyed cyber security professionals on the subject. Twenty-nine individuals shared their thoughts on how false positives impact organizations.

Here’s what we’re going to cover in this article:

  • Survey results: How software security professionals feel about false positives
  • False positives, true positives, and the confusion matrix
  • False positives take longer to resolve
  • False positives damage team morale, leading to vulnerability fatigue
  • False positives can damage relationships between teams
  • Drive down false positives with MergeBase SCA

Survey results: How software security professionals feel about false positives

We surveyed software security professionals on the costs of false positives, and the results indicate that false alarms are not only economically costly but also have long-term negative effects on the teams that deal with them.

In fact, given their businesses’ resources and constraints, most of our respondents would rather reduce false positives than increase true positives. False positives are a big problem in software security today.

Survey results: which metric respondents would rather improve

This report will dig into these findings one by one, but here’s a summary of what our survey found:

  • 72% of respondents believe that false positives damage team productivity.
  • 62% of respondents would rather immediately reduce false positives than immediately catch more true positives. 
  • 62% of respondents also believe false positives damage overall team morale.
  • 59% of respondents claim that, on average, false positives take more time to resolve than true positives.
  • 55% of respondents believe false positives damage relationships between teams in their organization.

False positives, true positives, and the confusion matrix

If you’re not familiar with statistical classification, it might be helpful to take a moment to get an idea of what we mean by “false positives.” In short, a false positive occurs when your tool detects a vulnerability that isn’t really present. (If you already know the difference between a false positive and a true positive, feel free to skip this section.)

You monitor the security of your software using various scanning tools, which check for vulnerabilities in your product. These tools might include static application security testing (SAST), dynamic application security testing (DAST), and/or software composition analysis (SCA). If a tool doesn’t find any vulnerabilities, it reports a negative—but if a tool detects a vulnerability, it reports a positive.


However, that’s just what the tool says. There’s the tool report, and then there’s the actual truth: either there is a vulnerability (a positive), or there isn’t (a negative).


In an ideal world, our tools would always match reality, but tools can make mistakes. If the tool report matches reality, then the report is true. If the tool report doesn’t match reality, then the report is false.

To illustrate this mismatch, programmers use what’s (appropriately) called a confusion matrix. This compares the tool reports to the actual condition of your software, and it looks like this:

The Confusion Matrix in Software Security

So there are two kinds of positives your software security tools are going to give you: true positives and false positives. True positives alert you to real vulnerabilities in your software. A false positive is a false alarm. 
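To make the four cells of the matrix concrete, here’s a minimal sketch in Python (ours, not part of the survey); the alerts, component names, and field names are invented purely for illustration.

```python
# Minimal sketch: sorting scanner alerts into confusion-matrix cells.
# All data and field names here are hypothetical, for illustration only.
from collections import Counter

# Each entry pairs what the tool reported with the actual state of the code:
#   reported=True -> the tool flagged a vulnerability (a positive report)
#   actual=True   -> a real vulnerability is present
alerts = [
    {"component": "lib-a", "reported": True,  "actual": True},   # true positive
    {"component": "lib-b", "reported": True,  "actual": False},  # false positive
    {"component": "lib-c", "reported": False, "actual": True},   # false negative
    {"component": "lib-d", "reported": False, "actual": False},  # true negative
]

def classify(reported: bool, actual: bool) -> str:
    """Map one (tool report, reality) pair to its confusion-matrix cell."""
    if reported and actual:
        return "true positive"
    if reported and not actual:
        return "false positive"
    if actual:
        return "false negative"
    return "true negative"

tally = Counter(classify(a["reported"], a["actual"]) for a in alerts)
print(dict(tally))
# {'true positive': 1, 'false positive': 1, 'false negative': 1, 'true negative': 1}
```

In practice you rarely see the “actual” column directly; establishing it is exactly the investigative work the rest of this report is about.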

We queried software security professionals on this matter, and the responses indicated a consistent sentiment: false positives waste valuable dev time and damage team morale.

False positives take longer to resolve

When a scanner detects a vulnerability, the security team gets to work fixing it. If it’s a true positive, the issue is resolved when the vulnerability is patched. 

However, if it’s a false positive, the issue can only be resolved when the security team can demonstrate that there’s no real vulnerability present. In other words, a false positive is only resolved once you can prove it’s a false positive.

We asked software security professionals which task, on average, takes more time to resolve: true positives or false positives. Of the people we surveyed, 58% said false positives take more time to resolve than true positives.


It follows that false positives put a drag on team productivity. Time spent resolving them is time taken away from more useful activities, like building your software or fixing real vulnerabilities.

Our survey responses reflect this: 72% of respondents agreed with the statement that “False positives damage team productivity,” including 28% who said they “strongly agree.”


Reducing false positives frees up dev time for things that actually contribute to your business.

False positives damage team morale, leading to vulnerability fatigue

In addition to the economic costs discussed above, false positives come with softer, hidden costs. Our survey brought several of these to the surface, one of which was the sentiment that false positives erode team morale.

While this sentiment isn’t as pronounced as the sentiments discussed earlier, responses do skew toward agreement with the statement, “False positives damage overall team morale.”


This erosion of morale can lead to “vulnerability fatigue” or “patching fatigue,” which sets in when a team becomes desensitized to vulnerability alerts, whether those alerts are true or false positives. An abundance of false positives can weaken even the most zealous team’s motivation to patch vulnerabilities.

And once that attitude works its way into your operations, it can be very, very difficult to reverse.

False positives can damage relationships between teams

A slight majority of respondents agreed that false positives damage relationships between their teams and other teams within their organizations. 


From an internal political perspective, false positives can easily undermine a security team’s credibility. 

Security teams are already fighting an uphill political battle:

  • Teams dedicated to security usually aren’t building the product. In some organizations, “builders” can see the security team as lower-status contributors, non-contributors, or even a net negative to the business. False positives can reinforce this mindset.
  • Security teams have the thankless task of pointing out where other teams made mistakes. People generally don’t like hearing about how they messed up, so your other teams have a built-in psychological incentive to discredit your security team. False positives perpetuate the narrative that the security team is unreliable or incompetent, which makes it harder for that team to be taken seriously when a true positive comes along.

Unfortunately, because false positives take more time to resolve than true positives, other teams are more likely to remember false positives. Even if most alerts turn out to be true positives, it doesn’t take many false positives for a security individual or team to get labeled as the proverbial boy who cried wolf in the office.

Drive down false positives with MergeBase SCA

False positives are an unfortunate side effect of vigilant tools, but you can reduce them by investing in high-accuracy vulnerability scanners.

At MergeBase, we’ve built our software composition analysis tool with accuracy in mind. When customers switch to us, they commonly report a significant decrease in false positives and an increase in true positives.

For example, many of our customers are former users of OWASP Dependency-Check, a free SCA solution. Dependency-Check was built to be a super-vigilant vulnerability checker and therefore has a bias toward positives—including false positives. When companies upgrade to MergeBase (a premium SCA), they see an immediate decrease in false positives. 

If you compare MergeBase’s performance against OWASP Dependency-Check’s on a set of libraries pre-seeded with publicly-known vulnerabilities (like this one), the difference in false positives (and true positives) becomes abundantly clear: MergeBase catches more vulnerabilities while reducing false positives.
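As a rough illustration of how such a comparison can be scored (this is our own sketch, not MergeBase’s or OWASP’s methodology, and the vulnerability IDs are placeholders), you can treat the benchmark’s seeded vulnerabilities as ground truth and diff each scanner’s findings against it:

```python
# Illustrative scoring sketch: compare one scanner's findings against a
# benchmark of pre-seeded, publicly known vulnerabilities. The IDs below are
# placeholders; in a real evaluation they would come from the benchmark's
# documentation and from the tool's scan report.

seeded   = {"VULN-001", "VULN-002", "VULN-003"}   # ground truth: seeded vulnerabilities
reported = {"VULN-001", "VULN-002", "VULN-900"}   # what the scanner reported

true_positives  = reported & seeded    # seeded vulnerabilities the tool caught
false_positives = reported - seeded    # alerts with nothing seeded behind them
false_negatives = seeded - reported    # seeded vulnerabilities the tool missed

precision = len(true_positives) / len(reported) if reported else 0.0
recall    = len(true_positives) / len(seeded) if seeded else 0.0

print(f"TP={len(true_positives)} FP={len(false_positives)} FN={len(false_negatives)}")
print(f"precision={precision:.2f} recall={recall:.2f}")
# TP=2 FP=1 FN=1
# precision=0.67 recall=0.67
```

On the same benchmark, a more accurate scanner is simply one that pushes both of these numbers closer to 1.0: more of the seeded vulnerabilities caught, with fewer alerts that have nothing behind them.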


We’ve built a tool that catches more vulnerabilities and sounds fewer false alarms when scanning your software supply chain. If you’d like to see how it can help mitigate the costs of false positives for your team, try MergeBase now!


About the Author

Oscar van der Meer

Inspiring leadership and innovative technology expertise in Digital, Payments, Finance and Artificial Intelligence.