Everyone has had to become accustomed to the phrase “social distancing” — the practice of using distance from others to minimise health threats. Inevitably, people have started using the analogy “digital distancing” to talk about similar ideas in information security. It’s not exactly new to use disease metaphors in infosec — we already talk about ransomware “infections,” for example. But with public health understandably at the front of people’s minds, let’s discuss “digital distancing” as one part of network defences.
Microsegmentation is an increasingly popular approach to enable digital distancing. As with social distancing, the basic concept behind microsegmentation is to eliminate as much unnecessary contact as possible. Most computers need to communicate with only a small number of other computers; beyond that, they can and should keep their “digital distance” from the rest of the network.
Microsegmentation functions like an allowlist for network traffic. Systems on the network can communicate only with the other systems they need to, and only in an expected manner. These network segments have a parallel in the disease-management concept of “social bubbles” — limiting contacts to a small group of necessary interactions. The digital version works by controlling the network traffic into and out of each workload’s network connection.
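To make the allowlist model concrete, here is a minimal sketch in Python. The segment names, rule fields, and default-deny structure are illustrative assumptions, not any particular product’s policy format:

```python
# Minimal sketch of an allowlist-style microsegmentation check.
# The rule shape (source segment, destination segment, port, protocol)
# is an illustrative assumption, not any vendor's schema.

ALLOW_RULES = {
    ("web-tier", "app-tier", 8443, "tcp"),
    ("app-tier", "db-tier", 5432, "tcp"),
    ("web-tier", "dns", 53, "udp"),
}

def is_allowed(src: str, dst: str, port: int, proto: str) -> bool:
    """Default-deny: traffic passes only if an explicit rule permits it."""
    return (src, dst, port, proto) in ALLOW_RULES

assert is_allowed("web-tier", "app-tier", 8443, "tcp")      # expected flow
assert not is_allowed("web-tier", "db-tier", 5432, "tcp")   # no rule: dropped
```

Everything not explicitly listed is dropped, which is the defining property of an allowlist.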
Microsegmentation is among the best protections currently available to IT professionals against lateral — or east-west — spread of compromise when defending an organisation’s overall data estate. By limiting each system’s ability to communicate with others on the network, the chance of a digital infection spreading is minimised. Compromise can be further limited by selective use of quarantine: locking down compromised network segments completely, thus preventing spread.
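Continuing the hypothetical rule format from the sketch above, quarantine under a default-deny model can be as simple as revoking every allow rule that touches the compromised segment:

```python
# Sketch of segment quarantine under the same default-deny model.
# Rule tuples are (source segment, destination segment, port, protocol);
# the format is an illustrative assumption.

def quarantine(rules: set, segment: str) -> set:
    """Remove every allow rule into or out of a compromised segment.

    Under default-deny, revoking its allow rules isolates it completely.
    """
    return {r for r in rules if segment not in (r[0], r[1])}

rules = {
    ("web-tier", "app-tier", 8443, "tcp"),
    ("app-tier", "db-tier", 5432, "tcp"),
}
# Quarantining the compromised app tier severs both of its flows,
# leaving no reachable neighbours at all.
print(quarantine(rules, "app-tier"))  # -> set()
```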
This is in contrast to the “eggshell computing” model in which defences are only placed at the network perimeter, leaving everything behind that perimeter effectively unprotected. Eggshell computing does not protect from lateral compromise. This is unfortunate because many ransomware attacks – such as the recently discovered Conti malware – deliberately make use of lateral spread.
Microsegmentation can be done in several ways. Some deployments are little more than “a firewall run by a different team”. More advanced implementations add network overlays. With a combination of overlays and Access Control Lists (ACLs), it is possible to control all traffic in and out of a specific system. Ideally, traffic can only be “seen” by the systems that are supposed to receive it.
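A rough sketch of how the two mechanisms layer together, with made-up VNI (virtual network identifier) values and ACL entries; a real overlay such as VXLAN does this at the packet level, not in Python:

```python
# Sketch of layering per-workload ACLs on a network overlay. The VNI
# values and ACL entries are invented for illustration.

OVERLAY = {"web-01": 1001, "web-02": 1001, "db-01": 1003}  # workload -> VNI
ACL = {("web-01", "web-02", 8443)}                         # explicit allows

def can_talk(src: str, dst: str, port: int) -> bool:
    # First gate: workloads in different overlay segments never even
    # "see" each other's traffic; there is nothing to probe.
    if OVERLAY[src] != OVERLAY[dst]:
        return False
    # Second gate: within a segment, traffic must still match an ACL.
    return (src, dst, port) in ACL

assert can_talk("web-01", "web-02", 8443)     # same segment, ACL match
assert not can_talk("web-01", "web-02", 22)   # same segment, no ACL match
assert not can_talk("web-01", "db-01", 5432)  # different segment: invisible
```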
Unfortunately, it’s not practical to isolate most systems so that they only communicate with other systems inside their segment. At a minimum, they must be able to get regular security updates from elsewhere. This can create something of a problem, especially from a regulatory standpoint. Many regulatory regimes call for the isolation of various systems, but how can one isolate a system that needs to communicate across segments?
One solution is to put virtual or containerised firewalls at the edge of each microsegment, so that any traffic into or out of the segment can be filtered and inspected. Only systems that absolutely need to communicate with each other are included in a given network segment, and the firewall handles routing beyond that segment.
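The sketch below illustrates this pattern under assumed segment names and an assumed policy, in which the only permitted cross-segment flow is pulling security updates over HTTPS (addressing the update problem described above):

```python
# Sketch of a firewall at a microsegment edge: intra-segment traffic
# is governed by workload ACLs, while anything crossing the boundary
# is inspected at the segment edge. All names and the example policy
# (updates over HTTPS only) are assumptions for illustration.

SEGMENT_OF = {"app-01": "app-tier", "app-02": "app-tier",
              "repo-01": "updates"}

def edge_permits(src_seg: str, dst_seg: str, port: int) -> bool:
    # Example policy: the only cross-segment flow allowed is pulling
    # security updates over HTTPS from the update segment.
    return dst_seg == "updates" and port == 443

def route(src: str, dst: str, port: int) -> str:
    if SEGMENT_OF[src] == SEGMENT_OF[dst]:
        return "intra-segment: workload ACLs apply"
    if edge_permits(SEGMENT_OF[src], SEGMENT_OF[dst], port):
        return "cross-segment: inspected and forwarded by segment firewall"
    return "dropped at segment edge"

print(route("app-01", "app-02", 8080))  # intra-segment
print(route("app-01", "repo-01", 443))  # permitted update traffic
print(route("app-01", "repo-01", 22))   # dropped at segment edge
```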
This approach adds an increasingly necessary layer of defence on top of ACLs and network overlays. The ACLs restrict communication to and from an individual workload. The firewall and associated advanced security services at the segment edge analyse and restrict the traffic entering and leaving the segment. Any data flows leaving the data centre can then be examined further by the data centre edge defences.
This creates a layered approach to security that defends workloads from threats originating outside the data centre (data centre edge defences), from within the data centre but outside the workload’s own network segment (segment edge defences), and from within the network segment itself (ACLs). If security controls are too tight, these multiple layers can make it difficult to figure out which layer is preventing an application from working, but they provide a significant improvement over the “hard shell with soft insides” approach of traditional eggshell computing, which relies exclusively on data centre edge defences.
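As a sketch of how these layers compose, and of the troubleshooting point above, the following evaluates a flow against each layer in order and reports which one blocked it. The lambda predicates are placeholders, not real firewall or ACL policies:

```python
# Sketch of layered flow admission. Reporting which layer rejected a
# flow mirrors the troubleshooting problem above: with three layers,
# debugging means finding which one said no.

def admit(flow: dict, layers: list) -> str:
    for name, permits in layers:
        if not permits(flow):
            return f"blocked by {name}"
    return "admitted"

layers = [
    ("data centre edge", lambda f: f["port"] in (443, 53)),
    ("segment edge", lambda f: f["dst_segment"] == "app-tier"),
    ("workload ACL", lambda f: (f["src"], f["dst"], f["port"])
                               in {("lb-01", "app-01", 443)}),
]

flow = {"src": "lb-01", "dst": "app-01", "port": 443,
        "dst_segment": "app-tier"}
print(admit(flow, layers))                  # -> admitted
print(admit({**flow, "port": 22}, layers))  # -> blocked by data centre edge
```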
Combining ACLs with network overlays also allows for workload placement agility. A workload can exist anywhere on the network that the overlay can reach. If all switch ports are capable of allowing a workload to participate in the overlay network, then that workload can physically exist anywhere the network extends, making it easier to add workloads where needed, when needed, and without significant planning.
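A minimal illustration of that decoupling, with hypothetical host and VNI names: the workload’s overlay identity, and therefore its policy, follows it to any physical location that participates in the overlay:

```python
# Minimal illustration of placement agility: a workload's overlay
# identity (its VNI) travels with it, so policy keyed on the VNI is
# unchanged wherever it runs. Hosts and VNI are hypothetical.

workload = {"name": "app-01", "vni": 1002}

def place(wl: dict, host: str) -> str:
    # Any switch port participating in the overlay can host the
    # workload; its segment membership does not depend on location.
    return f"{wl['name']} on {host}, still in VNI {wl['vni']}"

print(place(workload, "rack-3/host-17"))
print(place(workload, "rack-9/host-02"))  # moved; same segment, same rules
```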
While this agility is useful to both application designers and infrastructure administrators, it can also magnify the impact of failures. Distributed applications are broken into microservices, and each application can tolerate only so many microservices of a given type going offline before the application as a whole is compromised.
The ability to distribute these microservices throughout an entire network also makes it possible for the critical components of multiple applications to cluster in one area of an organisation’s infrastructure, where a single outage could take down several applications by knocking only a small number of microservices offline. Physical network resiliency therefore increases in importance as network agility capabilities are used.
This kind of workload placement agility places pressure on network design. Instead of a strictly hierarchical network designed for north-south interactions, east-west traffic becomes more important, and mesh-like networks become more popular. While the transition between these two design philosophies often necessitates a period of adjustment, mesh network designs can reduce costs by allowing capacity to be added more organically, without needing core switches capable of handling virtually all east-west traffic on their own.
Well-planned implementations architected by experienced professionals can not only be successful, but can also significantly increase an organisation’s ability to respond to unexpected change, ultimately proving to be of financial benefit.
An increasingly important consideration when examining the benefits of microsegmentation is regulatory compliance. Not only is microsegmentation an important tool for achieving compliance – as a general rule, the more isolated and secure you can make a workload, the happier regulators are – but it also functionally requires a centralised management platform.
Having one’s network security orchestrated by a centralised management platform means all the rules governing that security are in one place. They can thus be reported on quickly and easily, making audits far less daunting. A single report can prove out one’s security design, especially in the case of microsegmentation, as the exact list of network traffic restrictions can be examined, from the individual workload through to the data centre edge, out to edge computing workloads, and throughout every cloud in between.
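As a hedged sketch of that audit story, assuming a simple central rule store with an invented schema: once every enforcement layer’s rules live in one place, producing the report is a query rather than a fleet-wide crawl:

```python
# Sketch of the audit payoff of centralised rule management: one store,
# one query, one report covering every enforcement layer. The schema
# (scope/src/dst/port) is invented for illustration.

RULES = [
    {"scope": "workload", "src": "web-01", "dst": "app-01", "port": 8443},
    {"scope": "segment", "src": "app-tier", "dst": "db-tier", "port": 5432},
    {"scope": "dc-edge", "src": "internet", "dst": "web-tier", "port": 443},
]

def audit_report(rules: list) -> str:
    """Render every traffic restriction, from workload to data centre edge."""
    return "\n".join(
        f"{r['scope']:>8}: {r['src']} -> {r['dst']} port {r['port']}"
        for r in rules
    )

print(audit_report(RULES))
```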
Microsegmentation has become a must-have capability. It is a key enabler of both network agility and information security. Implementation will almost certainly require addressing decades of technical debt, but the bill on that was going to come due sooner or later. Achieving security excellence requires digital distancing to minimise the risk of compromise spread, and microsegmentation is the most obvious tool available to accomplish the job.
Contributed by Trevor Pott, technical security lead, Juniper Networks