Wednesday, 13 December 2017
AI technology doesn’t make any assumptions about what ‘bad’ looks like

Discussions around AI cyber defense have traditionally focused on the ability of advanced machine learning to detect the earliest signs of an unfolding attack, including sophisticated, never-seen-before threats. This real-time threat detection overcomes the shortcomings of legacy tools and cuts through the noise in live, complex networks to accurately identify threatening anomalies, including ‘unknown unknowns’.


But while the capability to identify the entire spectrum of threats in their nascent stages before a problem becomes a crisis is incredibly powerful in its own right, it also serves as a fundamental enabler for autonomous response measures, which truly deliver on the promise of artificial intelligence in cyber defense.


Before the advent of AI cyber defense, the principal obstacle to achieving autonomous response was determining the exact action needed to stop an infection from spreading while keeping the business operational. By their very nature and definition, traditional approaches to cyber security cannot make the jump from detection to response. While legacy rules- and signature-based technology can offer the most basic protection by correctly identifying commonplace attacks, it cannot contain them. If your rule or signature correctly identifies that an attack is in progress, say by matching on a known bad IP address used by a malware family, what do you do in response? There is nothing in the rule or signature that contains the remedy.
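To make this concrete: a signature is essentially a predicate over observed traffic. It can say "this matches a known-bad indicator", but the match itself carries no information about how to contain the wider attack. A minimal sketch in Python (the function name and indicator list are hypothetical, for illustration only):

```python
# Hypothetical example: a signature reduced to its essence, a pure
# yes/no check against a list of known-bad indicators.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # illustrative indicators

def signature_matches(connection: dict) -> bool:
    """Return True if the connection's destination hits a known-bad IP.

    Note what is absent: nothing here describes how to respond.
    The signature detects; it does not remediate.
    """
    return connection["dst_ip"] in KNOWN_BAD_IPS

conn = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "port": 443}
print(signature_matches(conn))  # prints: True
```

The boolean result is the entire output of the signature: any response logic has to be bolted on separately, which is exactly where the two imperfect options below come in.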


In the past, security teams could choose from two imperfect options. On the one hand, if a rule or signature for a ‘known bad’ matched, you could automatically block exactly the behavior that matched the rule, e.g. block connections to the bad IP address. The problem with this approach is that it is far too brittle and simplistic: the attack might involve far more than connections to that IP. It might involve connections to other IPs, or internal lateral movement. The connection to the bad IP is not the full extent of the malware’s threatening behavior, but just one indicator.


At the other extreme, the autonomous response could be pre-programmed to completely isolate or deactivate a compromised device at the earliest signs of an unfolding attack. While this action would probably halt the attack, it would also disrupt business activity, potentially even grinding operations to a halt: imagine if the affected device were the CEO’s laptop.


This is where artificial intelligence can augment humans, with autonomous response acting as a force multiplier for security teams. The AI algorithms learn the normal ‘pattern of life’ for every user and device on the network, and use that understanding to detect compromises and threats by their deviation from ‘normal’. The machine learning technology can then make the natural jump from detection to response by generating highly targeted remedial action, mitigating threats without overreacting.
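The contrast with both imperfect options can be sketched in a toy example (this is an illustration of the general idea, not any vendor's actual algorithm, and all names are hypothetical): a system that learns each device's normal behavior can respond to a deviation by blocking only the anomalous activity, leaving the device itself online.

```python
from collections import defaultdict

class PatternOfLife:
    """Toy 'pattern of life' model: per-device baseline of normal ports."""

    def __init__(self):
        self.normal_ports = defaultdict(set)  # device -> ports seen as normal

    def observe(self, device: str, port: int) -> None:
        """Learning phase: record observed behavior as part of the baseline."""
        self.normal_ports[device].add(port)

    def respond(self, device: str, port: int) -> str:
        """Detection phase: act only on the behavior that deviates."""
        if port in self.normal_ports[device]:
            return "allow"
        # Targeted response: block just the anomalous connection.
        # The device is neither isolated nor deactivated.
        return f"block {device} -> port {port}"

pol = PatternOfLife()
for p in (443, 80, 53):          # the laptop normally talks HTTPS, HTTP, DNS
    pol.observe("ceo-laptop", p)

print(pol.respond("ceo-laptop", 443))   # prints: allow
print(pol.respond("ceo-laptop", 4444))  # prints: block ceo-laptop -> port 4444
```

The point of the sketch is the shape of the response: instead of either blocking one static indicator or quarantining the whole machine, the action is scoped to the deviation from the learned baseline.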


Unlike traditional methods, which rely on the false premise that chasing after yesterday’s attacks will help us defend against those of tomorrow, this new class of AI technology doesn’t make any assumptions about what ‘bad’ looks like. It doesn’t attempt to predict or anticipate future threats, and it doesn’t classify threats in black and white, instead allowing for the shades of grey that exist in messy, live networks. The AI algorithms learn ‘on the fly’ about the normal ‘pattern of life’ in a network and can detect and remediate the entire spectrum of threats, from sophisticated ‘low and slow’ threats and lateral movement to brute-force, automated attacks such as ransomware.


If a human security team is tasked with investigating the circumstances around an unfolding attack with a view to identifying the most appropriate action to take, they can devise a response that accurately targets the problem while minimizing any negative impact on the bottom line. But devising and executing such targeted action takes time and effort, and requires contextual understanding of the threat that human security teams often do not have.


Autonomous response is the future of AI cyber defense. It will take humans out of the weeds of the initial response to threats, enabling them to spend their time and effort on higher-level issues that need human input.

About Japonica Jackson

Japonica is head of editorial at IT Security Guru. If you'd like to get in touch with Japonica, please email editor@itsecurityguru.org.