The year, and much of the decade, has seen a series of historic public-record incidents featuring deception, client-side exploits and/or purpose-built malware. These exploit and malware payloads are often invoked by an unwitting end-user after being delivered by deception - crafted email messages designed to appear routine and harmless. Securing the client endpoints tends to be expensive and complex; the operational cost of a large population of host intrusion prevention tools, for example, can easily push endpoint TCO outside budgets and expectations. Part of this cost spiral is due to the stateful complexity inherent to many client environments and their large populations of legacy products. Network defenses tend to be easier, and therefore more plentiful, but today's complex client-side document format and runtime exploits are not easily detected by traditional network monitoring tools.
I have not personally lived through one of these large public incidents, though I have, of course, worked on and observed any number of incidents. Watching their rise over the years, I find it curious how often our initial reaction is that these incidents are the result of a simple failure of user education - that if we just spent a few minutes training users not to click the wrong things, the whole problem would vanish like a bad dream. This prognosis is probably less relevant today, in the face of well-executed phishing and deception operations, than it ever was. Looking at the parade of public incidents across well-resourced organizations and concluding that we have a simple user-education problem seems unlikely. If we could teach users to discern the "bad" objects from the "good" ones, wouldn't organizations choose to do that rather than risk major incidents? If these incidents could be prevented with a bit of user training, I think we would probably be doing that already.
I have been reading Vice President Cheney’s memoirs and there is a good illustration in chapter 14 of the importance of developing actionable plans that can be effectively implemented and operationalized. In the chapter, a debate is raging circa 2006 as to whether and how deeply Americans should become involved in Iraqi sectarian violence and potential civil war. One school of thought, advocated by State, was supposedly that our troops should “engage only if they think they are witnessing a massacre, the kind of violence that had happened in 1995 when Serbian forces had slaughtered thousands of Bosnians in Srebrenica.” The chairman of the Joint Chiefs pointed out that this was impossible to operationalize: “How do I write that into an order for the troops?” he asked. “Hold your fire unless you think it looks like Srebrenica?” That just doesn’t work, he said. “Either we’re in or we’re not in. Either we’re operating or not operating.” A policy that users should “not click attachments” is also an example of a vague idea that cannot be operationalized.
The “don’t click the attachment” strategy is not clearly actionable once you consider that many knowledge workers’ jobs involve opening, reading, editing and generally working with attached files by the hundreds. The malicious attachments tend to arrive in increasingly well-forged emails that can be difficult to distinguish from genuine messages – particularly for the busy knowledge worker who is multitasking, and partially distracted, most of the time. The false messages may appear to originate from a colleague when the target has been researched on social networking outlets. They may even resemble a follow-up to a pre-existing conversation, in cases where networks or mail systems have been penetrated.
Let's consider, for a moment, what it would mean to train knowledge workers to recognize and avoid “bad” attachments the way security people protect themselves. We routinely examine email headers forensically; we open attachments in hex editors and debuggers; we open URLs in primitive tools like wget and lynx; we outfit browsers with any number of debuggers, plugins and tools for inspecting and controlling JavaScript and SWF content; and we use multilevel security environments built from virtual and physical machines of different trust levels, with separation mechanisms matched to different risk levels. All of this, as you might imagine, tends to slow things down. It is unlikely that an ordinary user could work this way - even if we could somehow train them to do all of it - while maintaining the productivity expected of a modern knowledge worker.
Even if we could somehow train users to protect themselves this way – and I think this is unlikely – it’s important to consider the psychological reasons people don’t work this way outside security organizations. Security people survive these sorts of attacks (mostly) because of our well-developed trust issues, and associated paranoia, that some would say make us about as much fun as Marvin, the paranoid android in The Hitchhiker's Guide to the Galaxy. We maintain a suspicious mindset, and a level of vigilance, that is quite beyond the ordinary line worker. Ordinary people don’t think this way, and they have been falling victim to fraud, deception and trickery for as long as human communities have existed. Most of the population have personality types that are neither antisocial nor criminal, and they assume everyone else is similar; they fail to recognize deception much of the time simply because they’re not expecting to see it. Fraud and subterfuge do occur, of course, but only in a minority of transactions between a minority of parties, so assuming good faith is safe most of the time. People assume they can conduct their business without encountering frequent subterfuge because, frankly, human civilization is largely predicated on that assumption. Treating a majority, or even a plurality, of transactions with suspicion would slow people down, reduce their output and hurt their productivity. So people go about their business and assume that a security department exists which will intercede and save the day if something is amiss.
Another argument sometimes made is that the victims of these sorts of deception campaigns are simply less intelligent. In my experience over the last decade, the intelligence of the target is not a significant factor. I've personally seen countless highly intelligent people compromised by client-side exploits and phishing campaigns - brilliant software engineers and scientists with IQs near 200, the very people I would have bet on. Questioning the intelligence of end-users is simply a distraction, and the emotion it invokes derails any meaningful consideration of the problem. If anyone still espouses this simplistic view, we might settle it by asking them to wager on it: they could agree to run an authorized deception campaign against their own users, without warning.
There is something of a tradition, in IT, of blaming users - and of blaming victim organizations - for security incidents. In prior decades, when zero-day vulnerabilities were less common, it was sometimes said that security incidents were a failure of patch management. If organizations were less lazy, the argument went, and patched their systems quickly, there would be no security incidents. This advice usually came from people with little or no operational experience.

The reality, we have learned, is that patch management is a Sisyphean task - an expensive, infinite task with no beginning or end - that drains resources which could otherwise be spent on more productive and worthwhile endeavors (like improving client security). Patch convergence at scale in a modern multi-vendor enterprise behaves a bit like the speed of light: as you approach 100%, the cost curve becomes asymptotic and the goal unattainable. Einstein famously remarked that a problem cannot be solved at the same level of consciousness that created it, and this is the nature of the problem in automated patch management. Patch management tools, and sometimes the patches themselves, are imperfect creations of the same vendors who created the products being patched; they will never perform at 100% effectiveness, leaving some number of patching processes in a state of failure requiring manual intervention - and manual patch management at scale is terrifically expensive and disruptive. IT, like Sisyphus, is condemned to roll the patch management boulder uphill for eternity, only to see it roll back down before reaching the summit. Those who actually approached patch convergence were disappointed to learn that we had it wrong all along: thanks to the rise of zero-day exploits, the hilltop was not a place of safety after all.

This model, and its inability to scale, is one of the reasons for the rise of software-as-a-service, which will probably prove superior to the first-generation boxed software model in the end. The present model - where resources must be devoted to tens of thousands of instances of dozens of products, each served a monthly helping of patches interspersed with the occasional emergency patch that must be installed at once, at a maintenance cost that may approach or even exceed the purchase price - resembles something designed by a madman.

It's also interesting that the security community seems to struggle with patching as much as anyone. Earlier this year, for example, a number of security leaders with large followings tweeted links to my blog posts, producing traffic bursts to some websites I maintain - presumably from security professionals who read the tweets. Analysis of the web log data revealed that a majority of the readers' browsers did not have current versions of major runtime plugins that are frequently patched for security vulnerabilities. Patch convergence, it seems, is hard enough that we in the security community struggle with it ourselves.
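I won't detail the analysis method here, but as a rough illustration only, a minimal sketch of that kind of log review might look like the following. It assumes a combined-format access log, that version information is recoverable from the User-Agent string (browser versions generally are; plugin versions often are not), and purely hypothetical "current enough" thresholds:

```python
# Minimal sketch: flag outdated browser versions in a combined-format access log.
# Assumes the User-Agent is the last double-quoted field on each line, and uses
# illustrative (hypothetical) minimum-version thresholds, not the author's data.
import re
from collections import Counter

# Hypothetical "current enough" major versions at the time of the analysis.
MIN_MAJOR = {"Firefox": 7, "Chrome": 14}

UA_FIELD = re.compile(r'"([^"]*)"\s*$')           # last quoted field = User-Agent
VERSION = re.compile(r'(Firefox|Chrome)/(\d+)')   # product/major-version token

def audit(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            ua_match = UA_FIELD.search(line)
            if not ua_match:
                continue
            for product, major in VERSION.findall(ua_match.group(1)):
                status = "current" if int(major) >= MIN_MAJOR[product] else "outdated"
                counts[(product, status)] += 1
    return counts

if __name__ == "__main__":
    for (product, status), n in sorted(audit("access.log").items()):
        print(f"{product:8s} {status:8s} {n}")
```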
I think we are going to have to move past the "don't click attachments" debate and start thinking about the problem in more depth - and about finding strategies that actually work. There are better questions we could be asking and more useful discussions we could be having. Why, for example, do we need SWF code inside Office documents, where it is harder to find and inspect? Why don't mail services do a better job of identifying forged messages and filtering content? A message that claims to come from a user inside the organization, but originated on a foreign network, should be detected. Email services - and endpoints - could be smarter about handling executable code embedded in binary data formats that blur the line between code and data, particularly in organizations that have no need to receive such things from outside their borders.
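As a rough sketch of that kind of check - and only a sketch, since production systems would lean on SPF, DKIM and DMARC rather than hand-rolled header inspection - the following flags a message whose From: address claims the organization's domain while its earliest identifiable Received: hop lies outside the organization's address space. The domain and netblocks are hypothetical placeholders:

```python
# Minimal sketch: flag a message whose From: claims an internal domain but whose
# earliest external Received: hop lies outside the organization's address space.
# ORG_DOMAIN and ORG_NETWORKS are hypothetical placeholders for illustration.
import re
import ipaddress
from email import message_from_string
from email.utils import parseaddr

ORG_DOMAIN = "example.com"
ORG_NETWORKS = [ipaddress.ip_network("203.0.113.0/24"),
                ipaddress.ip_network("10.0.0.0/8")]

IP_IN_RECEIVED = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def looks_forged(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    if not sender.lower().endswith("@" + ORG_DOMAIN):
        return False                      # does not claim to be internal
    # Received: headers are prepended in transit, so the last one is the earliest hop.
    for received in reversed(msg.get_all("Received", [])):
        match = IP_IN_RECEIVED.search(received)
        if not match:
            continue
        hop = ipaddress.ip_address(match.group(1))
        if hop.is_private or any(hop in net for net in ORG_NETWORKS):
            return False                  # earliest identifiable hop is internal
        return True                       # claims internal origin, arrived from outside
    return False                          # no usable Received data; defer to other checks

if __name__ == "__main__":
    with open("suspect.eml", encoding="utf-8", errors="replace") as handle:
        print("possible forgery" if looks_forged(handle.read()) else "no verdict")
```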
Chrome's automatic patching model has been praised for both its effectiveness and its usability. The SaaS/PaaS models tend to have saner cost and survivability characteristics and may eventually supplant some of the boxed software products we use today. These models will take time to implement and have their own challenges - and we're obviously not going to replace everything with a service - but the economics are interesting. Desktop virtualization also offers attractive patch management and maintenance cost advantages; I believe this is another future direction for those willing to migrate to a more stateless endpoint. Whenever I hear about a network with hundreds of infected endpoints, I tend to think desktop virtualization would be the only way to undergo a fast and efficient re-provisioning cycle for organizations that prefer to "nuke infected systems from orbit" and re-provision, rather than take on the time and effort required to remove today's elaborate malware.
There are also changes we could make today. Vendors could do a better job of signing all binaries, and client endpoints could do a better job of making policy or reputational decisions about safely handling code based on that data. Executable and binary whitelisting tools are becoming available for endpoints; whitelisting has a cost of ownership, to be sure, but it is probably still preferable to the cost of some of the major incidents in the public record. Email services could get smarter about detecting false messages, particularly those with binary data or code attached, and about feeding the necessary event log data to threat management systems. And if we had not abandoned the PKI projects of the 1990s, we could be forging agreements with peers to exchange signed documents - making it possible to detect a document that does not originate from the organization it claims to, and reducing the inbound volume of fake documents.
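To make the whitelisting idea concrete, here is a minimal sketch of hash-based allow/deny decisions: a binary is approved only if its SHA-256 digest appears on a locally maintained list. The allowlist file name is hypothetical, and real whitelisting products add signer and reputation checks plus kernel-level enforcement that this sketch does not attempt:

```python
# Minimal sketch of hash-based executable whitelisting: a binary is allowed only if
# its SHA-256 digest appears on an approved list. "approved_hashes.txt" is a
# hypothetical allowlist file used purely for illustration.
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as binary:
        for chunk in iter(lambda: binary.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_allowlist(path: str) -> set:
    # One lowercase hex SHA-256 digest per line; '#' starts a comment.
    with open(path, encoding="utf-8") as handle:
        return {line.split("#", 1)[0].strip().lower()
                for line in handle if line.split("#", 1)[0].strip()}

def is_approved(binary_path: str, allowlist: set) -> bool:
    return sha256_of(binary_path) in allowlist

if __name__ == "__main__":
    approved = load_allowlist("approved_hashes.txt")
    for candidate in sys.argv[1:]:
        verdict = "allow" if is_approved(candidate, approved) else "block"
        print(f"{verdict}\t{candidate}")
```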
1 comment:
Well said, Craig - glad you could work Sisyphean in after all these years.
I tell you, it is a tough task filtering e-mail so that only the right stuff gets through and none of the bad stuff; it is an ongoing battle. If everyone was on board with SPF records, that would help, but there are motivation issues there.