Being struck by ransomware has been compared to having a heart attack. It’s a risk that stalks everyone in theory, yet when it happens the shock of the experience still takes victims by surprise. For the first seconds, minutes – and sometimes hours – organisations are on their own.
It’s a moment of unexpected trauma that many organisations find paralysing, and attackers plan for exactly that, which makes the attack’s effects even worse. Eventually a growing number call for help, valuing the experience of a service provider that has seen others go through the same mill many times before.
One company on the end of some of those calls is AT&T, through its Managed Security Services business unit. Director Bindu Sundaresan has first-hand experience of helping victims through that difficult first day. What advice would she give to anyone worried about this threat?
1. You tested the incident response plan, right?
“When a customer engages with us during a ransomware attack, it’s always a chaotic situation where the client’s ability to conduct business has completely stopped. This is typically the first time they’ve ever suffered an outage of such magnitude,” she says.
The first hit is to the IT team itself as a functioning unit. “Many times, the IT team feels it’s at fault for having had this happen to them, and that fear propagates across the team.” In her experience, the most important oversight is not that there is no incident response plan, but that it hasn’t been properly stress tested, starting with the communication and decision-making chain of command. You need to test your cybersecurity incident response plan regularly, along with the humans and technology that will carry it out. Testing that consists only of conversations in a meeting room, with no pressure bearing down as your organisation goes dark, gives a false sense of security.
“Who is the ultimate decision maker for this incident? Often you see a bunch of people raise their hands, which is not ideal. My advice is that you can only have one person in charge of decision making.”
Just getting together some people from a third-party MSSP doesn’t cut it. That company can’t make decisions for you – a company official must be in charge, ideally someone who has seen a ransomware attack from the inside. Communication isn’t just about the internal chain of command, but also about who talks to external providers, partners, and law enforcement.
2. Thirty days of logging isn’t enough
The first question every victim wants answered when an attack happens is whether the attackers are still on the network and, if so, where they’ve hidden themselves. The first thing the IT team will reach for is the logs, which hopefully betray fragments of the attackers’ movement and their tactics, techniques, and procedures (TTPs).
The flaw in this is that default logging settings don’t always capture enough data; an Active Directory (AD) controller, for example, might retain only the last 30 days. Sundaresan’s advice is to go beyond what basic compliance requires and extend logging to at least several months on important servers. Only then will it be possible to discover the root of a compromise, which is essential to avoid a repeat incident.
“Attackers can be on your network for 230 days in some cases and the company’s logs only go back 30 days. That doesn’t work anymore.”
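The gap between attacker dwell time and log retention can be made concrete. As a minimal sketch (the 180-day policy and the function name are illustrative assumptions, not anything AT&T prescribes), a script can compare the oldest retained log entry against a required retention window:

```python
from datetime import datetime, timedelta

# Hypothetical policy: keep security logs for at least 180 days.
REQUIRED_RETENTION = timedelta(days=180)

def retention_gap(oldest_entry: datetime, now: datetime) -> timedelta:
    """Return how far short of the required window the logs fall.
    Zero or negative means the policy is met."""
    covered = now - oldest_entry
    return REQUIRED_RETENTION - covered

# Logs reaching back only 30 days leave a 150-day blind spot,
# while a 230-day intrusion would predate them entirely.
now = datetime(2023, 6, 30)
gap = retention_gap(now - timedelta(days=30), now)
print(gap.days)  # 150
```

Running a check like this per log source, rather than assuming the defaults are adequate, surfaces the blind spot before an investigation needs the data.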
3. Where are the assets?
The next remediation task is patching, which turns out to be harder than it sounds. “More often than not, people don’t have an accurate asset inventory. If you don’t know what’s on your network there’s only so much we can do in terms of stopping propagation,” she says. “The moment you ask them whether it’s up to date, there’s often a silence.”
For attack recovery, the only meaningful asset inventory is one that functions in real time, adding an asset every time one is seen. Organisations can’t secure what they can’t see or don’t know about, which includes not only physical devices but cloud repositories, storage, applications, and every kind of server.
Real-time asset discovery has been possible for years; online asset inventory engines offered as a service are just one example of how this doesn’t have to be an onerous undertaking. These can sync with the organisation’s ServiceNow configuration management database (CMDB).
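The “record every sighting” idea behind real-time inventory is simple in principle. In the sketch below (the class and method names are hypothetical; a real deployment would feed it from network scans and sync it to a CMDB), each observation either registers a new asset or refreshes its last-seen timestamp:

```python
from datetime import datetime, timezone

class AssetInventory:
    """Minimal real-time inventory: every sighting registers a new asset
    or refreshes an existing asset's last-seen timestamp."""

    def __init__(self):
        # identifier -> {"first_seen": datetime, "last_seen": datetime}
        self._assets = {}

    def observe(self, identifier, when=None):
        when = when or datetime.now(timezone.utc)
        record = self._assets.setdefault(identifier, {"first_seen": when})
        record["last_seen"] = when

    def stale(self, cutoff):
        """Assets not seen since `cutoff`: candidates for investigation
        before they become unpatched blind spots."""
        return sorted(a for a, r in self._assets.items()
                      if r["last_seen"] < cutoff)
```

Because every sighting updates the record, an inventory built this way never drifts out of date the way a quarterly spreadsheet does.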
4. Backup is great – if it’s been tested
Every organisation mandates backups, but not all backups prove useful should ransomware strike. According to Sundaresan, the first problem is that organisations don’t always test them. Testing properly means making pessimistic assumptions about the state of the network itself.
“Backup is a no-brainer, but you have to test it from the point of view of being able to bring the systems back up without access to certain resources.”
The traditional core of backup is the 3-2-1 format: three copies of the data on two different types of media, with one copy off-site and ideally offline. But if one or more of those copies is in some way disrupted – a connectivity issue caused by the attack, say – that strategy starts to show its frailty.
“Time and again organisations think they have tested the backup, but they haven’t tested it often enough under realistic conditions. Additionally, practicing recovery leads to discoveries of other weaknesses in your preparations. Usually these are discovered in the quality of the backups, which in turn will lead to better backups for when you really need them.” The simplest way to embed more thorough testing is, Sundaresan says, to make it someone’s responsibility.
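One concrete piece of a realistic restore drill is confirming that restored files actually match the originals. As a sketch (the function names and directory layout are assumptions for illustration, not a specific product’s API), a checksum comparison after a trial restore catches silent corruption or missing files:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list:
    """Return relative paths that are missing or differ after a test restore."""
    failures = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or file_digest(src) != file_digest(restored):
            failures.append(str(rel))
    return sorted(failures)
```

A drill that runs this against a restore performed with critical systems deliberately unavailable is closer to the “realistic conditions” Sundaresan describes than a tabletop review.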
5. Paying up isn’t an easy way out
Whether to pay a ransom has been contentious since the earliest attacks a decade ago, and the question seems no nearer being settled. Could paying a ransom simply invite future trouble?
“Our recommendation is not to pay, because your chances of getting your data back are partial at best. More importantly, you’re giving them more ammunition to go after you.” Sundaresan’s other concern is that making payment part of the cybersecurity strategy risks undermining the sort of controls that might make paying unnecessary in the first place.
“You might as well take that money and invest it in cybersecurity and reduce your risk exposure.”
6. DIY defence is obsolete
A major block for many smaller organisations has been marshalling the necessary investment and skills to defend themselves. But the DIY approach isn’t necessary in an era of MSSP services, argues Sundaresan. “When you need surgery, you go to a surgeon. You are doing yourself harm by thinking you have to do it all by yourself.”
Equally, she concedes, choosing an MSSP isn’t easy in a crowded market. Her advice is to look for a partner that can not only tell you what the problem is but fix it, too. And because the threat landscape changes rapidly as new attacks appear, that demands a provider able to show it can invest and innovate over time.