FireEye’s “M-Trends 2014: Beyond the Breach” report contained some sobering numbers regarding the current state of incident response and breach response around the world.
Security incidents go undetected for an average of 229 days. Once detected, responding to and fully containing an incident takes an additional 32 days on average. Further, more than two of every three incidents (67 per cent) are detected by third parties rather than by the victim organisation itself. Worse yet, these numbers seldom improve year over year. Despite the more than $30 billion spent on security each year, it would seem that we as a community are not progressing.
Attackers can help themselves to an overwhelming amount of data during the eight to nine months on average that they persist inside the organisation following intrusion. Whether leveraging stolen credentials or intruding into additional systems, with months to explore, attackers can gain knowledge and intelligence on the target organisation, amass a war chest of the information they desire, and send that information to wherever they would like to warehouse it.
Unfortunately, in many organisations, this activity most often goes completely unnoticed. How did we get into this situation? Why do detection and response take so long? How can we address the issue?
Although there may be several potential explanations for why we find ourselves in the current situation, I suspect that an inadequate signal-to-noise ratio may be the biggest culprit. Boiled down to its essence, the efficiency and effectiveness of a security operations program are directly correlated with the quality of its work queue.
In most organisations, that work queue is populated by alerts. There is nothing wrong with alerts per se, provided they are approached strategically, are of high fidelity, are reasonable in volume, and are contextually aware. Unfortunately, for a number of reasons, most alerts arise without proper strategic alignment, are quite noisy (low fidelity), are excessive in volume, and are not enriched with the proper context. This quickly deluges the organisation's work queue with noise, making it extremely difficult, if not impossible, to identify the signal.
This inadequate signal-to-noise ratio can have disastrous consequences, as we are reminded each time a breach hits the news. A low signal-to-noise ratio essentially creates two critical issues. First, the volume of noise buries and obscures the signal, preventing organisations from paying proper attention to the most important alerts. This is essentially what impedes detection. If I am looking at an enormous queue of alerts, how do I know which alerts I can safely ignore, versus which alerts will land me in the press in six months?
Second, the lack of context around each alert creates the need to manually enrich the alert with adequate context to reach a conclusion as to its nature. Unfortunately, this often necessitates first tracking down the data required to add the proper context. In many cases, this can be a time-consuming process, and can sometimes result in the realisation that the required data was never properly recorded and retained. This obviously impedes response, resulting in the difficult situation we find ourselves in today.
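To make the enrichment idea concrete, the sketch below shows the kind of automated context-joining that removes this manual step. The alert fields and the context lookup tables are hypothetical illustrations, not any particular product's schema; in practice the lookups would query a live asset inventory and identity store.

```python
# Hypothetical context sources, pre-fetched here as dictionaries for the
# sake of a self-contained example. In a real deployment these would be
# live queries against an asset inventory and an identity store.
ASSET_CONTEXT = {
    "10.0.4.17": {"owner": "finance", "criticality": "high"},
}
USER_CONTEXT = {
    "jdoe": {"department": "finance", "privileged": False},
}

def enrich_alert(alert: dict) -> dict:
    """Attach asset and user context to a raw alert automatically, so an
    analyst does not have to track the data down by hand."""
    enriched = dict(alert)
    enriched["asset"] = ASSET_CONTEXT.get(alert.get("src_ip"), {})
    enriched["user"] = USER_CONTEXT.get(alert.get("username"), {})
    return enriched

raw = {"rule": "suspicious_login", "src_ip": "10.0.4.17", "username": "jdoe"}
print(enrich_alert(raw)["asset"]["criticality"])  # high
```

An alert that arrives in the queue already carrying asset criticality and user context can be triaged in seconds rather than hours, which is precisely the response-time lever the enrichment argument points at.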
Fortunately, there are steps organisations can take to improve their signal-to-noise ratio and reduce their detection and response times. Although the length of this piece does not permit an in-depth exploration of these steps, I will mention two important points here.
First, organisations can take a more strategic approach to developing the content and logic that drive alerting. Risks to the organisation should be enumerated on a continual basis and broken down further into attainable goals and priorities. Precise, targeted, incisive logic that identifies activity or behaviour matching those priorities should be used to produce high-fidelity, contextually aware alerts for the work queue.
Second, gap analysis should be performed continually to identify gaps in visibility. It is important to consider that this involves not only collection, but also analysis. The most efficient, least voluminous set of data that ensures the required visibility should be collected. Less is more – both in maintaining adequate retention windows and in query performance.
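A visibility gap analysis can start as simply as a set comparison between the data sources the detection goals require and the sources actually being collected. The source names below are hypothetical examples, not a prescribed list.

```python
# Hypothetical inventories for the sake of illustration.
REQUIRED_SOURCES = {"dns_queries", "proxy_logs", "auth_events",
                    "netflow", "endpoint_process"}
COLLECTED_SOURCES = {"proxy_logs", "auth_events", "netflow"}

def visibility_gaps(required, collected):
    """Return (missing, surplus): sources needed for detection but not
    collected, and sources collected that no detection goal requires.
    The surplus list is where 'less is more' savings come from."""
    return sorted(required - collected), sorted(collected - required)

missing, surplus = visibility_gaps(REQUIRED_SOURCES, COLLECTED_SOURCES)
print(missing)   # ['dns_queries', 'endpoint_process']
print(surplus)   # []
```

The missing list drives collection work, while the surplus list identifies data that can be retired to free up retention capacity and speed up queries.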
The state of security operations today is not great. A deluge of noise and a lack of context overwhelm most organisations, so much so that they wind up with unacceptably long detection and response times.
On the bright side, approaching security operations more strategically can help raise the signal-to-noise ratio and dramatically improve the efficiency and efficacy of an organisation’s security program.
Josh Goldfarb is chief security strategist at FireEye