The Trouble with Encryption
Invariably, it seems, whenever a new data breach is revealed – and that appears to be an almost daily, or at least more-than-weekly, event – a number of so-called gurus and pundits will bemoan the fact that the data was unencrypted. There has always been one good reason why this complaint is largely irrelevant, but recent news has uncovered a second.
The first reason, one I’ve gone on about for quite some time, is that data breaches are no longer “smash and grab” operations. The miscreants use social engineering to obtain legitimate credentials from users, then use those credentials to log in and accumulate the data, which is then sent out to a site friendly to the hacker – usually without triggering any alarms, and with no need to circumvent any encryption that may be in place. That’s because authorized users see unencrypted data, and logging in as an authorized user – with stolen or forged credentials – gives the bad guys that same unencrypted access.
Still, social engineering can be a long game, and some attackers are impatient and stick with “smash and grab” techniques. There are also the state security services (NSA, GCHQ, etc.), which intercept communications in an attempt to monitor evildoers. In both of these cases encryption might help – or at least you’d think so.
However, a recent paper, “Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice”, reveals that the most widely used key exchange mechanism is seriously flawed. Written by a consortium of scientists from the French Institute for Research in Computer Science (INRIA), the National Center for Scientific Research (CNRS), the University of Lorraine, Microsoft Research, the University of Pennsylvania, Johns Hopkins University and the University of Michigan, the paper reveals two significant problems with most implementations of the Diffie-Hellman system.
Diffie-Hellman key exchange is widely used to establish session keys in Internet protocols. It is the main key exchange mechanism in SSH and IPsec and a popular option in TLS. See Wolfram MathWorld for a full explanation of how it works, but for our purposes here it’s sufficient to say that the two parties agree on a very large prime number (and a related generator value), each keeps a secret number of its own, and each publishes a value mathematically derived from that secret. The theory is that while it’s possible in principle to work back from the public values to the secrets – and hence to the shared session key – in practice this would take an inordinate amount of time. The Imperfect Forward Secrecy paper found that, in practice, this theory doesn’t hold.
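For the code-minded, here is a toy sketch of that exchange in Python. The prime and generator below are toy-sized by cryptographic standards and chosen purely for readability, but the arithmetic is exactly the exchange described above.

```python
import secrets

# Toy public parameters (illustration only; real groups use primes of
# 1024 bits or more, agreed on by both parties in advance).
p = 0xFFFFFFFB  # a small prime modulus
g = 5           # a small generator

# Each side picks a private exponent and publishes g^secret mod p.
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1

alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other side's public value;
# both arrive at the same shared key without ever sending the secrets.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared
print("shared session key material:", hex(alice_shared))
```

Only the two public values ever cross the wire; the security of the scheme rests entirely on how hard it is to work backwards from those values to the secret exponents.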
At the start of the key exchange, both sides must agree on a common public parameter: a (very) large prime number. Since that prime is public anyway, and since it is computationally expensive to generate fresh ones, many implementations reuse a handful of well-known primes to save effort. In fact, the researchers note, a single prime is used by two-thirds of all VPNs and a quarter of SSH servers globally – two major security protocols relied on by businesses everywhere. A second prime is used by “nearly 20% of the top million HTTPS websites”.
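If you want a rough sense of how much your own environment leans on shared parameters, a quick audit is possible. The sketch below is only an illustration: it assumes a recent version of the third-party cryptography package, and the PEM file names are hypothetical placeholders for wherever your servers keep their Diffie-Hellman parameter files.

```python
from collections import Counter
from pathlib import Path

from cryptography.hazmat.primitives.serialization import load_pem_parameters

# Hypothetical paths to DH parameter files collected from your own servers.
PARAM_FILES = [
    Path("vpn1-dhparams.pem"),
    Path("vpn2-dhparams.pem"),
    Path("web-dhparams.pem"),
]

moduli = Counter()
for path in PARAM_FILES:
    params = load_pem_parameters(path.read_bytes())
    numbers = params.parameter_numbers()
    moduli[numbers.p] += 1
    print(f"{path}: {numbers.p.bit_length()}-bit modulus")

# Any prime that shows up more than once is being shared across systems.
for prime, count in moduli.items():
    if count > 1:
        print(f"warning: {count} configurations share the same "
              f"{prime.bit_length()}-bit prime")
```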
And this is where the problem begins: while there’s no need to keep the chosen prime secret, once a large proportion of conversations use it as the basis of their key exchange, it becomes an appealing target. It turns out that, with enough money and time, an attacker can invest in a massive one-time computation against one of these commonly used primes; once that work is done, each individual connection that relies on the prime becomes far cheaper to break. The more communications that depend on that one prime, the better the attacker’s investment pays off. Many implementations of Diffie-Hellman make matters worse by still supporting weak 512-bit “export-grade” parameters, and the researchers note that a standard man-in-the-middle attack can force the endpoints to downgrade from more robust 1024-bit parameters to the easily cracked 512-bit ones.
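The economics are easiest to see at toy scale. The sketch below (illustration only, with a deliberately tiny prime) precomputes a discrete-log lookup table for one shared modulus, then uses it to recover the keys behind any number of “sessions” built on that modulus. Real attacks on 512-bit groups use far more sophisticated number-field-sieve precomputation, but the amortization principle is the same: pay once for the prime, then break every conversation that uses it.

```python
import secrets

# Toy shared parameters, small enough that a full lookup table fits in memory.
p = 2039   # a small prime "reused" by every session below
g = 7

# One-time precomputation against the shared prime: map g^x mod p back to x.
# For a 512-bit prime the real-world analogue is weeks of number field sieve
# work; for 2039 it is a dict comprehension.
dlog_table = {pow(g, x, p): x for x in range(p - 1)}

def eavesdrop(public_a, public_b):
    """Recover the shared session key from the two public values alone."""
    secret_a = dlog_table[public_a]      # reuse the precomputation
    return pow(public_b, secret_a, p)    # the same key both parties derived

# Many independent "sessions", all built on the same shared prime p.
for _ in range(3):
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A, B = pow(g, a, p), pow(g, b, p)
    genuine_key = pow(B, a, p)
    assert eavesdrop(A, B) == genuine_key
    print("session key recovered by eavesdropper:", eavesdrop(A, B))
```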
Correcting this problem isn’t easy. The paper notes that many programs ship with the same hard-coded primes, and most can still be easily downgraded to 512-bit parameters.
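Where you do control the configuration, the practical countermeasure is to stop leaning on the shared defaults: generate your own group and insist on a larger modulus. Here is a minimal sketch, again assuming a recent version of the third-party cryptography package; the 2048-bit size and the output filename are illustrative choices, not requirements.

```python
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.serialization import Encoding, ParameterFormat

# Generate a fresh 2048-bit Diffie-Hellman group instead of reusing one of the
# well-known default primes. This is slow (minutes rather than seconds), which
# is exactly why so many products hard-code a shared prime instead.
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Persist the group so this server reuses its *own* parameters,
# not ones shared with a large slice of the internet.
pem = parameters.parameter_bytes(Encoding.PEM, ParameterFormat.PKCS3)
with open("my-dhparams.pem", "wb") as f:   # illustrative filename
    f.write(pem)

print(f"generated a {parameters.parameter_numbers().p.bit_length()}-bit group")
```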
It’s now widely believed in the security community that the NSA discovered these flaws some years ago, spent a great deal of time and money breaking the code, and put itself (and its allies) in a position to decrypt most encrypted communication. Attackers with either a lot of money (usually state-sponsored agencies) or a lot of time (typical of lone hackers) may also have been able to do this.
Does this mean you shouldn’t bother encrypting your communications? Of course not – any more than you should stop using passwords because they are massively flawed. And, just as with passwords, do whatever you can to strengthen the encryption process: abstract it away from individual applications, and insist on strong parameters (2048-bit groups, rather than the 1024-bit ones the paper suggests may already be within reach of nation-state attackers) to limit the potential for decryption. Security is a never-ending exercise in keeping ahead of those who wish to intercept our communications and compromise our data. Stay vigilant.
Dave Kearns is Senior Analyst at KuppingerCole and focuses especially on the future trends around authentication and authorization and therein on risk-/context-based authentication and authorization. He attended Carnegie Institute of Technology (now Carnegie Mellon University), leaving to help found the first on-line banking system in the US, with Pittsburgh’s Dollar Savings Bank. Dave Kearns gave up computers to spend 15 years in the wine & spirits trade, only to come back to technology with the advent of local area networks in the mid-’80s. He spent 10 years as a network manager, ending up as Information Services Manager for the former Thomas-Conrad Corporation (now part of Compaq). In 1987, he was a founding SysOp of Novell’s Novell Support Connection service on CompuServe and served as the first president of the Association of NetWire SysOps. Dave Kearns was formerly Technical Editor of Networking Solutions magazine. He’s written, edited and contributed to a number of books on networking and is a frequent speaker before both trade and business groups.