TRINITY: Did you hear that?
CYPHER: Hear what?
SCREEN: Trace complete. Call origin: #312-555-0690
TRINITY: Are you sure this line is clean?
CYPHER: Yeah, course I'm sure.
- The Matrix, 1999
When you start a web debugging proxy with a *.* SSL proxying rule, as I recently did, all of your SSL sessions are rerouted through the proxy. The result is a mid-session certificate change warning like the one shown here from Gmail. SSL interception attempts of this kind have been common on public networks for years. In a perfect world, any such cert change would be noticed and rejected by alert, well-trained users.
I recently helped report and test CVE-2011-2874, "Failure to pin a self-signed cert for a session," in Chrome 13. This bug (95917), fixed in Chrome 14, meant that no cert warning was presented during a mid-session cert change when self-signed certs were in use, which could yield silent SSL interception (the bug did not affect SSL sessions using valid certs). Apparently there has been at least one similar bug in Firefox in the past. This is interesting to me because it tends to confirm two things I have long suspected: that SSL bugs involving self-signed certs take longer to find than bugs involving valid certs, which presumably undergo higher-priority testing, and that using self-signed certs was never a great idea.
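The fix amounts to remembering which certificate a session started with and refusing to continue if it silently changes. A minimal sketch of that idea in Python follows; the host name is a placeholder, and a real browser of course tracks this per session with far more state than a single dictionary:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(host, port=443):
    """Fetch the server's leaf certificate and return its SHA-256 fingerprint."""
    # Chain validation is deliberately off: we want the raw cert
    # even if it is self-signed.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

# Pin the cert seen at the start of the "session"...
pins = {}
host = "intranet.example.com"   # placeholder host
pins[host] = cert_fingerprint(host)

# ...and refuse to continue if it changes mid-session.
if cert_fingerprint(host) != pins[host]:
    raise RuntimeError("certificate changed mid-session: possible interception")
```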
While discussing the bug, someone asked, "Realistically, I wonder who actually pulls up the certificate chain and validates the SHA fingerprint of self-signed certs?" The answer, in all likelihood, is nobody - apart from SSL researchers and possibly the occasional security analyst with reason to be suspicious. Moxie Marlinspike recounted in his most recent talk that during the DigiNotar incident, in which fake certs for Google properties were issued through the compromised CA, Chrome users apparently noticed only because Chrome ships with Google's certificates embedded. Moxie also observed that the perpetrators could have fingerprinted and avoided Chrome browsers had they chosen to, and made the point that a nation-state probably does not need to subvert a CA at all: it can stand up a CA of its own for around $30,000 US, and many nations have already done so.
During the same bug discussion someone observed, "A more likely model might be to add the self-signed cert to the system's cert store (this would sidestep the bug as the user wouldn't be clicking through anything)." This is how self-signed certs should be used, of course, but in many cases it is not done, for the same reasons valid certs are not used: IT staff do not fully appreciate the consequences of skipping the step, or the time and effort involved appears too large. Where self-signed certificates are deployed without it, users learn to click through cert errors in order to get their work done. Whole populations of users have learned to work this way, and would probably never distinguish a routine cert warning from a warning that they are actually being intercepted. And, of course, if the browser never throws a warning because of a bug, the user isn't going to notice anything.
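Done properly, the client trusts that one specific self-signed certificate and nothing else, so there is nothing to click through and an interposed cert fails validation outright. A hedged sketch, assuming the server's self-signed cert has been exported ahead of time to a file I'll call selfsigned.pem, and that the cert names the host it serves:

```python
import socket
import ssl

# Trust exactly one certificate: the server's own self-signed cert.
# The file name and host are assumptions for illustration.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="selfsigned.pem")

host = "intranet.example.com"   # placeholder host
with socket.create_connection((host, 443)) as sock:
    # The handshake fails loudly if anything other than the trusted
    # self-signed cert is presented - no warning to click through.
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version(), "verified peer:", tls.getpeercert()["subject"])
```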
Why do we insist that applications and servers use HTTPS, yet tolerate self-signed certs? The answer, like many things in security, is that the cost in dollars, time, and effort is significant, and the consequences of the problem are invisible: interception of a roaming user's SSL session to a site with a self-signed cert is probably never detected. Fighting for funding takes energy, and it is harder when self-signed certs are perceived as common practice or there is a history of using them.

In the 1990s many IT organizations set about building their own PKI, and later abandoned the projects after discovering a steep learning curve on top of the significant time and effort cost. PKI should not have been given up so easily. When organizations run their own CAs, as some do, certs become cheap enough that every server can have one; even clients can have certs for authentication. If everything had a cert, including users, we could learn to sign documents - signing Office documents and PDFs, two of the most exploited binary document types, is easy to do. If we were signing documents, we could explore agreements with peers and supply-chain organizations to exchange only signed documents, reducing the inbound volume of fake documents that do not originate where they claim to. Many of these fakes are rigged with malware delivery systems and are therefore worth detecting and filtering. This would not eliminate the problem of malicious documents attached to deceptive messages, but it would make a sizable dent, and any significant reduction in such incidents frees up resources for further security improvements.
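For a sense of how cheap certs become once a private CA exists, here is a sketch of minting one using the third-party Python cryptography package. All names, hosts, and lifetimes are illustrative; a production CA would also need key protection, revocation, and audit around this core:

```python
# Issue a server cert from a private CA (third-party "cryptography" package).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

now = datetime.datetime.utcnow()

# One-time cost: the CA's own self-signed root.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("Example Corp Private CA"))
    .issuer_name(make_name("Example Corp Private CA"))
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Marginal cost of each additional server (or user) cert: near zero.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("intranet.example.com"))
    .issuer_name(ca_cert.subject)
    .public_key(server_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("intranet.example.com")]),
        critical=False,
    )
    .sign(ca_key, hashes.SHA256())
)
```

Once clients trust the root, every cert it issues validates cleanly, which is what makes per-server and even per-user certs practical.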
Of course, the SSL system is itself imperfect, and abuse does take place, as we have seen this year. That may be another reason to revisit building a private CA, provided resources are available to operate it and monitor it for abuse.
The CA system itself is receiving considerable scrutiny this year. At the Qualys conference last week, Moxie gave a detailed talk on his proposed improvements to the CA trust model and announced the Convergence plugin, which is based on work done by the Perspectives project. Qualys also announced support for the project by operating two notaries, which are online as of this writing.