Twelve months have passed since a bug was publicly revealed that would shape headlines, research and presentations for the next year.
A flaw in the OpenSSL cryptographic software library, Heartbleed allowed anyone on the internet to read the memory of systems running vulnerable versions of the software, capturing information that SSL/TLS encryption was meant to protect.
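At its core, the bug was a missing bounds check in OpenSSL’s TLS heartbeat handler: the code trusted the attacker-supplied payload length and echoed back that many bytes from memory. The following is a simplified sketch of the pattern, not the actual OpenSSL source – the structure and function names are hypothetical:

/* Simplified sketch of the Heartbleed pattern (hypothetical names,
 * not the actual OpenSSL source). */
#include <stdlib.h>
#include <string.h>

struct heartbeat_msg {
    unsigned short claimed_len; /* payload length taken from the wire */
    unsigned char *payload;     /* the bytes that actually arrived */
    size_t actual_len;          /* how many payload bytes arrived */
};

unsigned char *build_response(const struct heartbeat_msg *msg)
{
    unsigned char *resp = malloc(msg->claimed_len);
    if (resp == NULL)
        return NULL;

    /* VULNERABLE: copies claimed_len bytes even when fewer arrived,
     * so up to 64KB of adjacent heap memory (keys, cookies, passwords)
     * can be read past the buffer and echoed back to the attacker. */
    memcpy(resp, msg->payload, msg->claimed_len);
    return resp;
}

The fix in OpenSSL 1.0.1g amounted to checking the claimed length against the bytes actually received and silently discarding malformed requests.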
A fix was released on April 7th 2014, and many applied it immediately due to the scale of publicity that Heartbleed caused. However, the flaw had apparently been present since December 31st 2011, and shipped in the release of OpenSSL version 1.0.1 on March 14th, 2012.
Was it really that serious? Bruce Schneier said that “catastrophic” is the right word: “on the scale of 1 to 10, this is an 11”. Since then we have had the Bash bug/Shellshock in September, while POODLE bit SSL again in October. The announcement of “high severity” fixes for OpenSSL last month sent the industry into a spin, before it was revealed that the fixes were not as critical as expected.
A year on, there were two questions that I wanted the industry to answer, and I am sure I am not alone in asking them: is it time that more responsibility was placed upon fixing open source code so that this doesn’t happen again, and if so, whose responsibility is it?
Mike Janke, chairman of Silent Circle, agreed that there should be more responsibility: if you build a product and put the code out as open source, it is your responsibility as the author. “However, if your code ends up being used by organisations, corporations and thousands of others in their product – it’s everyone’s responsibility,” he said. “It’s very tough or impossible to police this and even enforce it. One method would be more funding for open source review boards and not-for-profits.”
Likewise, Wim Remes, manager of strategic services EMEA at Rapid7, said that Heartbleed was a wake-up call, not just for the security community but for the internet community at large. “It helped us realise again that the network we all rely on is built largely on the efforts of volunteers,” he said. “While initiatives like the Internet Bug Bounty are laudable (having paid out more than $200,000 for fixed bugs, which is awesome), it’s not the first time that code review for critical open source code has been offered as a solution to the problem.
“Unfortunately, though, the Internet Bug Bounty Initiative has only two sponsors to date, and it’s impossible to believe those are the only two organisations with a vested interest in making the internet a safer place.”
He admitted that there is no immediate answer to the question of who has the responsibility to ensure that the code supporting our internet is secure, but said it is a shared one. “The resources with access to the right skills to actually do it are limited,” he said. “We all share the responsibility to be vigilant, but the internet is not going to fix itself overnight.”
Luis Corrons, technical director of PandaLabs, said that Heartbleed and subsequent bugs are not the fault of open source code, as all software has security holes, and new vulnerabilities will be found in the future. “One of the main problems with open source code is that it is used in a wide variety of devices, and it is the responsibility of the manufacturers of those devices to provide an update as soon as the fix is published,” he said.
For a time, the headlines of this website were dominated by Heartbleed, although the number of reported victims was not especially high. So is it the case that the worst is still to come from this bug and, 12 months on from the revelation, how many systems remain unpatched?
Research released last week found that 86 per cent of US citizens had not even heard of the bug. Gavin Millard, technical director of Tenable Network Security, said: “One of the major lessons learnt in 2014 from the emergence of the superbugs is that organisations that thought they had a good grasp of security quickly realised they had no idea what was deployed where, and that the processes designed to fix major issues weren’t able to handle rapid response effectively.
“Another interesting impact of Heartbleed is how we now view disclosures of vulnerabilities that could become the next Heartbleed or Shellshock. A good example is the recent denial-of-service OpenSSL vulnerability that was embargoed for a few days. With no word coming from the contributors to the project, the information vacuum was filled with rumours of massive remote code execution issues. As we now know, this particular bug was nothing more than a storm in an SSL/TLS teacup, but it serves as a reminder that we shouldn’t let hype overrule logic.”
I asked Robert Hansen, VP of WhiteHat Labs, whether a lesson learned is that we need to do things better. He said that there is now much greater interest in looking at technologies that we have all assumed were safe – the TrueCrypt audit was a great example of that – but there are countless libraries that we rely on that have not been carefully scrutinised.
“So while the intent is good, there are a lot of mountains ahead of us that we are yet to climb,” he said. “Getting things done right in the first place is a great goal, but I don’t see it getting much traction. I don’t think that we’ll see a lot more attention to doing things right in the first place than we have before, but rather a quickening pace of rapid fixes.”
Asked who he felt responsibility for fixing open source lay with, he countered: if we did place responsibility on someone, what would that responsibility entail? Lawsuits? Public shaming? Your internet licence being taken away? “Unfortunately, there are just more questions than answers when you talk about placing responsibility,” he said.
Research released yesterday by Venafi found that 84 per cent of Forbes Global 2000 organisations’ external servers remain vulnerable because certificates have not been renewed, while 67 per cent of the most profitable UK companies in the Forbes Global 2000 are still vulnerable to the flaw and risk a massive security breach.
Mark Nunnikhoven, senior research scientist at OpenDNS, said that if an attacker has stolen a certificate’s private key (and can therefore read all of the encrypted traffic), any organisation that hasn’t rotated its certificates remains at risk of a man-in-the-middle attack.
“Rotating a certificate isn’t a particularly hard task, nor will it have an impact on an organisation’s users,” he said. “The reason most organisations don’t change their certificates is that they believe the crisis is over and some other task requires IT’s attention. But by not doing so, they are putting their users needlessly at risk. Most certificate providers offered free rotation in light of the vulnerability. There’s no excuse not to make the change.”
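One way to spot-check whether a server re-issued its certificate after the disclosure is to compare the certificate’s notBefore date with April 7th 2014. A rough sketch using the OpenSSL 1.0.x client API of the era – the hostname is a placeholder:

/* Rough sketch: fetch a server's certificate and check whether it was
 * issued after the Heartbleed disclosure. OpenSSL 1.0.x-era API;
 * build with: cc check_cert.c -lssl -lcrypto */
#include <stdio.h>
#include <time.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>

int main(void)
{
    time_t disclosure = 1396828800; /* 2014-04-07 00:00 UTC */

    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "www.example.com:443"); /* placeholder */

    if (BIO_do_connect(bio) <= 0) {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    SSL *ssl = NULL;
    BIO_get_ssl(bio, &ssl);
    X509 *cert = SSL_get_peer_certificate(ssl);
    if (cert == NULL) {
        fprintf(stderr, "no certificate presented\n");
        return 1;
    }

    /* X509_cmp_time() returns > 0 if notBefore is later than the
     * supplied time, i.e. the certificate was issued post-disclosure. */
    if (X509_cmp_time(X509_get_notBefore(cert), &disclosure) > 0)
        printf("certificate issued after disclosure - likely rotated\n");
    else
        printf("certificate predates disclosure - rotate it\n");

    X509_free(cert);
    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}

Note that a re-issued certificate only helps if the underlying private key was regenerated as well, which a check like this cannot see.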
So a year on from the public revelation, we can only assume that the majority took some action, yet some servers remain unpatched. Mark James, security specialist at ESET, said that taking “some action” is not enough to protect internal servers: either you patch or you don’t.
“You have to check all your hardware to see if you are using an affected version of OpenSSL, and if you are, it must be patched, even if it’s deemed an unimportant server,” he said. “Bad IT practices are what cause these types of attack to be successful years after they have been found and patched. Would-be attackers will use any and every means to exploit your systems, so don’t make it any easier for them.”
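As a first pass at that check, the version an application is actually linked against can be queried at runtime. A minimal sketch, assuming the 1.0.x-era API (the affected releases were 1.0.1 through 1.0.1f; 1.0.1g carried the fix):

/* Minimal sketch: report the OpenSSL version this binary is linked
 * against and flag the Heartbleed-affected range (1.0.1 to 1.0.1f).
 * 1.0.x-era API; build with: cc check_version.c -lcrypto */
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/opensslv.h>

int main(void)
{
    /* SSLeay() returns the runtime version as 0xMNNFFPPS: 1.0.1 is
     * 0x1000100fL and 1.0.1g (the fixed release) is 0x1000107fL. */
    long v = SSLeay();

    printf("linked against: %s\n", SSLeay_version(SSLEAY_VERSION));

    if (v >= 0x1000100fL && v < 0x1000107fL)
        printf("WARNING: version is in the Heartbleed-affected range\n");
    else
        printf("version is outside the affected 1.0.1-1.0.1f range\n");
    return 0;
}

Bear in mind that some distributions backported the fix without changing the version number, so a version check like this is only a first pass, not proof of exposure.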
I suspect that the major bugs of 2014 will not be the end of the story, and something is likely lurking in the wires now, waiting to surprise us. The problem will not be the flaw itself but the response to it, and a proactive industry will mitigate the issue better.