These might be boom times for anyone involved in application containerisation but lurking just under the surface are a host of security issues that development teams are only now starting to get to grips with.
It’s an old story for the security industry that has played out numerous times in the past: a new technology arrives on the back of big promises and for a while security gets pushed to the side in the excitement. Eventually, a new set of risks must be factored in at which point organisations look around for a lifebelt.
What might a lifebelt for application containers look like, and where can organisations find one? It’s a subject that Gavin Millard, EMEA technical director of vulnerability management company Tenable Network Security, would probably be happy to talk about all day.
Founded in 2002 on the back of the famous Nessus vulnerability scanner, Tenable’s expertise is in helping companies with the increasingly complex processes that have grown up around spotting and remediating software flaws, both on premises and in the cloud.
Every ethical hacker will be able to name a favourite pen-testing or vulnerability scanning tool, but very few will end up working for the company that makes it. Millard is one of those rare exceptions.
“I’d known Tenable for many years – Nessus was one of the first tools I used. It was unbelievable the type of information you could get from this free tool,” says Millard.
After ten years at cybersecurity outfit Tripwire, three years ago he made the jump to Tenable. What grabbed his attention was a talent mix that included founder and then CEO Ron Gula (now succeeded by Amit Yoran, who joined from RSA), Security B-Sides co-founder Jack Daniel, noted firewall engineer Marcus J. Ranum and, of course, Nessus inventor Renaud Deraison himself.
In 2014, Millard joined a company of 300 people – it is now three times that size and still expanding. That growth has happened on the back of large organisations’ desire to see what is going on inside their software deployments, and to be able to do something about it.
Containers might be new but they are turning into one of Millard’s biggest challenges. He frets about the way DevOps teams understand their benefits without necessarily being cognizant of the risks they bring.
“The reason people have started using containers is because it gives them a transportability and predictability to their code,” he agrees. The key moment was the arrival of Docker in 2013, which has taken the old idea of Linux containers and “done a Red Hat” on them, he says.
There are also advantages. “It’s a bit like the next generation of virtualisation. You have ephemeral assets that spin up and drop down as required.
“What that means is that companies save a significant amount of money on the hosting of those assets. They are increasing and decreasing their compute power as required. You can’t do that with a virtual machine – it takes minutes for a new version of Linux to spin up.”
Understanding the beast
For web and microservices developers, this application-centric approach has been a shot of adrenalin. Previously, anyone running an application would need to spin up a separate virtual machine (VM) on a hypervisor, a design that scales inefficiently across large numbers of possibly small applications. Containers avoid this by allowing multiple applications to be virtualised inside a single OS.
As well as allowing more, smaller applications to share the same underlying hardware, everything needed to make each application function sits within its container, which overcomes the drag of having to create different versions for different OSes. It’s a model that suits the world of small, portable microservices applications built, deployed and taken down in short order – but it does have security drawbacks.
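That self-contained packaging is easiest to see in a minimal Dockerfile. The application and base image below are hypothetical – a sketch of the idea, not a production recipe:

```dockerfile
# Hypothetical microservice image: the OS layer, the runtime,
# the dependencies and the application code all travel together.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are baked into the image at build time...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...along with the application itself, so the same image runs
# unchanged on a laptop, a test rig or a production cluster.
COPY app.py .

CMD ["python", "app.py"]
```

Because everything is pinned inside the image, the container behaves identically wherever it is deployed – which is exactly the transportability and predictability Millard describes.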
Now comes the ‘but’. Millard points out that containers lack the isolation of VMs and are therefore, on paper, not as secure. A single vulnerability at kernel level can compromise every application container running on that host. Similarly, unless Docker user namespacing has been configured, an attacker who compromises a container running with root privileges may be able to break out onto the host. Scanning for and managing vulnerabilities on containers, many of which aren’t running for very long, can quickly become a major management headache.
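One mitigation for the root-breakout risk Millard describes is Docker’s user-namespace remapping, which maps root inside a container to an unprivileged user on the host. A minimal daemon configuration sketch (the value `default` tells Docker to create and use its own `dockremap` subordinate user):

```json
{
  "userns-remap": "default"
}
```

With this in `/etc/docker/daemon.json`, a process that is UID 0 inside a container runs as a high, unprivileged UID on the host, so an escape lands with far fewer privileges. Running containers as a non-root user in the first place – the `USER` directive in a Dockerfile, or `docker run --user` – narrows the exposure further.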
“The big issue with containers is the lack of visibility into the container itself,” agrees Millard. “We’ve been aware of containers being a blind spot for a while now.”
This is compounded by the fact that containers are the responsibility of development rather than operations teams. A vulnerability inside a software library could be embedded in a single Docker image that ends up being spun up dozens or hundreds of times, causing the same flaw to pop into existence at scale, as if from nowhere.
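The multiplication effect is simple to sketch: one vulnerable library version baked into an image, times every container launched from that image. The package names, version numbers and fleet size below are purely illustrative – a toy model, not a real scanner:

```python
# Toy model: one vulnerable library in an image, multiplied across a fleet.
# All package names, versions and counts here are hypothetical.

def parse_version(v: str) -> tuple:
    """Turn '1.0.2' into (1, 0, 2) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

# Libraries baked into a single (hypothetical) Docker image at build time.
image_manifest = {
    "openssl": "1.0.2",
    "zlib": "1.2.13",
}

# Minimum versions with known flaws fixed (illustrative figures).
minimum_safe = {
    "openssl": "1.1.1",
    "zlib": "1.2.12",
}

def vulnerable_packages(manifest: dict, baseline: dict) -> list:
    """Return packages in the image older than the safe baseline."""
    return [
        name
        for name, version in manifest.items()
        if name in baseline
        and parse_version(version) < parse_version(baseline[name])
    ]

flagged = vulnerable_packages(image_manifest, minimum_safe)
running_containers = 200  # containers spun up from this one image

print(flagged)                            # ['openssl']
print(len(flagged) * running_containers)  # 200 vulnerable instances
```

The point of checking the image, rather than the running containers, is that one scan at build time covers every instance that will ever be launched from it – which is why image scanning sits naturally inside the development pipeline rather than with operations.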
“As a CISO I have to trust that the development team are keeping all the libraries up to date. Security isn’t generally part of that process. The risk is these containers are being created without the oversight of what’s being done.
“You can tell that the container is there but you can’t connect into it to see what code it’s running or whether there are any issues with that.”
The good news is that it is easier to address a vulnerability in a container than on a virtual machine. After the doom and gloom, Millard brightens.
“Done properly, containers are an awesome way to do security,” he says, before mentioning Tenable’s October 2016 acquisition of a company called FlawCheck as a case in point.
With containerisation growing fast, FlawCheck offered the capability to scan Docker images for both vulnerabilities and malware in a way that would work across different software registries. At the time of our interview, Tenable was hard at work integrating FlawCheck’s technology into its established vulnerability management platform.
What remains harder to solve is the classic problem of awareness: Docker containers are still growing faster than people’s security understanding.
“We have mass adoption with low security awareness. It’s like BYOD a few years ago. Security teams said ‘no’ and they were circumvented. This is the same with DevOps and containers. The best approach is always to put the appropriate controls in place,” he says.
“As security professionals, we must embrace the new wave of deployment technology but do it in a secure way.”