“Fixed fortifications are a monument to the stupidity of man.” - General George S. Patton
I, for one, have never quite grasped the school of thought that a well-designed security program consists of bringing in a (possibly technically unsophisticated) audit team once a year to run through a checklist of what are essentially basic requirements for an infosec architecture - with the expectation that this will be sufficient until the next year.
Detecting structured threats is not impossible with a well-designed detection and response capability. Even a semi-well-designed program can often detect these sorts of “data breach” incidents – if the organization thoughtfully deploys and uses good tools and technologies in its security program. Why is it, then, that so many victims only build security capabilities after they have had their first incident?
One aspect of the problem seems to be a large gap, in many organizations, between perception and reality as to just how structured and sophisticated for-profit threats are. Some of these victims would have done more in advance if they had really understood the scope of the problem – and they might never have been victims at all. Maybe we need more information sharing - or even more of what Schneier calls "security theater" - in order for stakeholders to better appreciate how big the problem is.
Some organizations are unwilling or unable to confront risk until it materializes in a dramatic way. There is psychological science to support this; some people and organizations are risk-seeking in the face of a possible loss even though they are risk-averse about potential gains (see Schneier's 2007 Blackhat keynote, it's pretty interesting). The result is that the temptation to save money by neglecting security and assuming some risk is irresistible to many organizations until they have their first incident.
Fundamentally, the economics are against the breach victims; structured threats probably out-match corporate security programs by orders of magnitude in resources. Security programs are often seen as overhead cost centers and fight a continuous battle to justify their existence and obtain sufficient funding to build meaningful capabilities. The specter of data "breach" costs cannot always be relied upon to justify funding a security program - particularly when a breach has never been known to occur. These costs are assigned to the responsible parties, for the most part, by the banks and credit card companies now, and this does not seem to slow down the breach phenomenon (or maybe it has, and it could be worse).

Perhaps an economic incentive - pushing the costs of the breach onto the responsible parties - is the only thing that will make a difference, as with many aspects of the marketplace. Maybe a mechanism for capping the amount of fraud losses that can be passed on to consumers in rate / price increases would help... though that might be too difficult to design and implement, and might fundamentally break too many risk models. Levying fines and penalties is tricky as well; while some cases may be clear-cut examples of negligence, other cases will inevitably be found which feature structured threats that are sufficiently sophisticated to evade detection by state-of-the-art security tools. Can we fairly assign responsibility to the victim organization in a case like that?