Never send a human to do a machine's job.
- Agent Smith, The Matrix (1999)
Recently I found myself in a discussion about which web application scanner is the "best". There is a tendency among network security people to reach for scanners and firewalls in response to most problem sets. In the world of web applications, security development life cycles (SDLCs) sometimes consist of scanning the app the day before it goes live and perhaps installing a shiny new web application firewall (WAF) to protect it. These measures inevitably come up short because they amount to a network security approach to what is actually a code security problem.
Scanners might be sufficient for assessing security posture if we lived in a world where we only had to defend against scanning tools. In today's threat landscape, we're not fighting scanners; we're fighting humans. Specifically, we're defending against structured threats: intelligent human adversaries who have the skill and motivation to tease out the subtlest of security flaws. Internet-facing apps, and apps that process financial transactions, therefore need a security review process that endeavors to find everything that can be found with a level of effort comparable to what structured threats are willing to expend, scaled to the value of the target. Scanners are useful for finding certain classes of security bugs, but they will never find everything.
Take recently reported breaches, for example; reports suggest authorization bypass or insecure direct object reference defects are implicated. Logic errors of this kind cannot be found by a programmatic scanner, and this is one way security programs fail: by relying on scanners that can only ever find perhaps half of what needs to be found to harden an app against sustained attack.
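To make the point concrete, here is a minimal sketch (hypothetical code, not from any reported breach) of an insecure direct object reference. The vulnerable handler returns a valid response whether or not the caller owns the record, so a scanner sees nothing anomalous; only a human who understands the business rule notices that changing the ID exposes someone else's data.

```python
# Hypothetical IDOR sketch: a record store keyed by invoice ID.
INVOICES = {
    1001: {"owner": "alice", "amount": 250.00},
    1002: {"owner": "bob", "amount": 975.50},
}

def get_invoice_vulnerable(session_user: str, invoice_id: int) -> dict:
    # BUG: no ownership check -- any authenticated user can fetch
    # any invoice just by guessing or incrementing the ID.
    return INVOICES[invoice_id]

def get_invoice_fixed(session_user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]
    # The fix is an explicit authorization check tied to a business
    # rule -- exactly what a generic scanner has no way to infer.
    if invoice["owner"] != session_user:
        raise PermissionError("not your invoice")
    return invoice
```

Both versions return well-formed responses for well-formed requests, which is why this class of defect survives automated scanning and surfaces only under manual review.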
A better model is to combine coordinated static analysis with manual penetration testing by humans; manual testing informed by static and dynamic scan results tends to make security test cycles far more productive. This is not as expensive as you might think with Veracode, for example, who offer a SaaS service for static analysis of binary code as well as dynamic analysis of web applications. The analysis of compiled applications is a recent development in security testing. Like source code reviews, binary reviews fall under the category of static analysis, commonly called "white-box" testing, and share its distinct advantages: they can evaluate both web and non-web applications and, through advanced modeling, can detect flaws in a program's inputs and outputs that cannot be seen through penetration testing alone. Veracode take this approach, as they write, because "through examining a compiled form of an application in its run time environment, static binary scanning can provide a more comprehensive picture of real-world vulnerabilities. While integrating other forms of security testing requires significant process modifications, analyzing binaries requires very few such modifications. Binary analysis creates a behavioral model by analyzing an application's control and data flow through executable machine code - the way an attacker sees it. Unlike source code tools, this approach accurately detects issues in the core application and extends coverage to vulnerabilities found in 3rd party libraries, pre-packaged components, and code introduced by compiler or platform specific interpretations."
Static analysis, while possibly the cheapest way to find bugs in quantity, obviously cannot find logic errors like authorization bypass conditions any more than web scanners can - at least not until the machines can think. For that reason, it remains important to use manual testing, informed by the static and dynamic results, for maximum productivity.
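The distinction is worth illustrating. In the sketch below (hypothetical code, not any particular tool's output), the two functions have identical control flow: a static analyzer can flag the first because tainted input flowing into a SQL string is a data-flow property visible in the code itself, but it has no basis for flagging the second-style omission of an authorization rule, because the *absence* of a business check is not a property of the code's data flow.

```python
import sqlite3

def find_user_injectable(conn, username: str):
    # Data-flow flaw a static analyzer CAN see: untrusted input
    # concatenated directly into a SQL statement.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_parameterized(conn, username: str):
    # Parameterized query -- the remediation pattern static
    # analysis steers you toward.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Feed the first function the classic payload `' OR '1'='1` and it returns every row; the second treats the same payload as an ordinary (non-matching) string. A logic error like a missing ownership check, by contrast, produces no such signature for a tool to match on.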
Veracode are a Boston-area security company founded by members of the L0pht Heavy Industries group, famous in the 1990s, who also previously founded the @stake security consultancy that was acquired by Symantec. Chris Wysopal, who was @WeldPond in the L0pht, is CTO, and Christien Rioux, who was @dildog, is Chief Scientist. Both are original thinkers in security and bring tremendous knowledge to bear; their presence also tends to build confidence and raise interest in security programs, which can otherwise be somewhat dry at times.
Veracode scores your apps and shows how your scores compare within your industry, which can be appealing to managers and execs hungry for objective metrics. Driving toward a high score and obtaining a "Verified" quality rating is also a useful motivational goal.
If you have never seen a Veracode technical talk at an OWASP meeting or other conference, they're worth seeing. The case study I most recall @chriseng presenting involved an elaborate homebrewed transposition scheme a developer had devised to obfuscate sensitive data. The algorithm apparently enjoyed a reputation as "unbreakable", but Veracode researchers painstakingly reversed it through mathematical analysis - just as a structured threat would - probably much to the shock of its creators. This sort of "structured threat" experience is exactly what apps need to go through before we send them out into the world of the modern Internet.
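The details of that scheme were not published, but a generic columnar transposition (sketched below purely for illustration) shows why this family of obfuscation fails against a determined reviewer: every character of the plaintext survives unchanged, only its position moves, so character frequencies leak through and the key space is small enough to search by hand.

```python
def transpose(text: str, cols: int) -> str:
    # Write the text row-wise into `cols` columns, then read it
    # out column-wise. Padding keeps the grid rectangular.
    padded = text + "_" * (-len(text) % cols)
    rows = [padded[i:i + cols] for i in range(0, len(padded), cols)]
    return "".join(row[c] for c in range(cols) for row in rows)

def untranspose(cipher: str, cols: int) -> str:
    # Exact inverse: rebuild the columns, read row-wise.
    # An attacker who guesses `cols` (a tiny search space)
    # recovers the plaintext with no secret knowledge at all.
    rows = len(cipher) // cols
    grid = [cipher[c * rows:(c + 1) * rows] for c in range(cols)]
    return "".join(
        grid[c][r] for r in range(rows) for c in range(cols)
    ).rstrip("_")
```

Because the ciphertext is just a permutation of the plaintext, `sorted(transpose(t, k)) == sorted(t + padding)` always holds - a property any reverser spots immediately, and one reason transposition is obfuscation rather than encryption.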