A Healthy Skepticism of Security Tools

There is an overwhelming number of tools in the security field today, from open source to proprietary, covering everything from software composition analysis and secrets scanning to network vulnerability scanning, IDS/IPS, EDR, DAST, SIEM, and so on. Some work well out of the box, while others need a lot of tuning either to flag real issues or to avoid false positives.

One of the more troubling aspects is the amount of trust that’s often placed in these tools. Tools are generally a good thing: they make it easier to spot potential problems, and in most cases they lower the bar of technical expertise required, allowing almost anyone to run a scan for potential problems. Notice the emphasis on potential. Unfortunately, all too often these tools are trusted to such an extent that anything other than a clean scan is seen as an unacceptable risk. In the view of such people and organisations, if the tool detects a potential issue, then it must be addressed until the scan comes back clean, no matter the cost. No effort is made to review, analyse or validate the results. From vulnerability scans that flag “invalid” TLS certificates for .local domains, to dependency scanners that misidentify a library altogether, it’s evident that vulnerability scanners are not always the sophisticated tools that some vendors make them out to be. At best, they point to potential problems that almost always require human review and analysis.

Exploiting a vulnerability requires a certain set of conditions to hold. If only one of three conditions is met, should it still be viewed as a vulnerability? This comes up often with software dependencies, where the mere presence of a certain library version is flagged because that version has been assigned a CVE; yet in almost every instance there are highly specific conditions that must be met for the issue to be exploitable, e.g. calling a certain function or enabling a certain feature. A number of advisories go as far as requiring the endpoint running the software to have already been compromised (in which case there’s a lot more to be concerned about!).
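To make that concrete, here is a minimal sketch (in Python, with an entirely hypothetical finding and function name) of the kind of reachability question a human would ask before treating such a flag as actionable: is the function named in the advisory ever actually called in the codebase?

```python
#!/usr/bin/env python3
"""Naive reachability check for an SCA finding. Illustrative sketch only:
the finding data and the "vulnerable" function name below are hypothetical."""

import re
from pathlib import Path

# Hypothetical scanner finding: the library is present, but the advisory
# only applies when parse_untrusted() is called on attacker-controlled input.
FINDING = {
    "package": "examplelib",
    "cve": "CVE-0000-00000",
    "vulnerable_call": "parse_untrusted",
}

def is_referenced(source_root: str, symbol: str) -> bool:
    """Return True if any .py file under source_root appears to call the symbol."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\s*\(")
    for path in Path(source_root).rglob("*.py"):
        if pattern.search(path.read_text(errors="ignore")):
            return True
    return False

if __name__ == "__main__":
    if is_referenced("src", FINDING["vulnerable_call"]):
        print(f"{FINDING['cve']}: vulnerable call is referenced -- prioritise review")
    else:
        print(f"{FINDING['cve']}: library present but vulnerable call never referenced -- "
              "likely not exploitable here, though still worth a human sanity check")
```

A check like this is obviously crude (it says nothing about transitive calls or dynamic dispatch), which is rather the point: even deciding whether a flagged CVE matters at all takes analysis that the scanner itself does not do.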

While I’ve mostly touched on false positives, false negatives are also a significant concern, and they again highlight the importance of a healthy skepticism of tools. On numerous occasions during pen-testing engagements, I have come across vulnerability scans that missed SQL injection and XSS vulnerabilities (among other issues), and these are hardly the most complex attack vectors. A DAST scanner also can’t tell whether a particular set of returned data should be part of the application logic or not. These are just a few small examples, but there are many more.
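As an illustration of that last point, here is a minimal sketch (assuming a small Flask app with hypothetical data) of a broken-authorisation flaw that a DAST scanner has no real basis to flag: every response it sees is a well-formed 200, and only someone who knows the intended logic can say the data should never have been returned.

```python
# Sketch of an authorisation flaw that looks perfectly healthy to a scanner:
# the endpoint returns well-formed JSON with a 200 status, but nothing ties
# the requested invoice to the logged-in user. (App and data are hypothetical.)
from flask import Flask, jsonify

app = Flask(__name__)

INVOICES = {
    1: {"owner": "alice", "amount": 120},
    2: {"owner": "bob", "amount": 45},
}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return jsonify(error="not found"), 404
    # Missing check: does this invoice belong to the authenticated user?
    # A DAST tool only sees a valid 200 response either way; deciding that
    # alice should never receive bob's invoice requires knowing the
    # application's intended logic.
    return jsonify(invoice)
```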

The reality is that none of these tools are a replacement for manual testing, analysis and review. No penetration tester should ever place absolute trust in their tools, but of course the sheer scope of work in most cases makes it impossible to manually test every potential threat vector. As with audits, a limited sample can be tested in the hope that it will be somewhat representative of the whole, but miss just one vulnerability and you have left an opening for a compromise.

Running at least one other tool might provide some measure of validation or corroboration, but which business is going to pay the steep cost of licensing two or more different products that are supposed to be doing the same thing?
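For what it’s worth, the corroboration itself is the easy part. Here is a rough sketch of cross-checking two scanners’ findings by CVE identifier; the JSON layout and file names are hypothetical, since real products each use their own export formats.

```python
# Cross-check two scanners' exports: findings reported by both tools get
# more weight, findings unique to one go to the manual-review queue.
# The "findings" list keyed by CVE id is a hypothetical export format.
import json

def load_cves(path: str) -> set[str]:
    with open(path) as f:
        report = json.load(f)
    return {finding["cve"] for finding in report.get("findings", [])}

scanner_a = load_cves("scanner_a.json")
scanner_b = load_cves("scanner_b.json")

corroborated = scanner_a & scanner_b   # reported by both tools
only_one     = scanner_a ^ scanner_b   # reported by exactly one tool

print(f"Reported by both tools ({len(corroborated)}): prioritise these")
print(f"Reported by only one tool ({len(only_one)}): validate manually before acting")
```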

Ultimately, there will always be a need for better tools, but it is more important to understand the limitations that apply to all of them.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.