By Mike Banic, VP of Marketing, Vectra Networks
Conventional wisdom about malware infection paints a picture of hapless users clicking on something they shouldn’t, which in turn takes their Web browsers to a drive-by-download website. The site then exploits a vulnerability to install a botnet agent that eventually steals all their personal data and uploads it to cybercriminals in another country.
That conventional wisdom isn’t completely wrong, but it needs some serious updating. Today’s malware infections are more typically multi-stage events, wherein a user visits a favourite website with a banner advertisement supplied by a third-party ad network that was supplied by an affiliate ad network.
One of those affiliates is operated by a criminal entity serving up malicious ad content. The code in the ad stealthily downloads dropper code from an external website and executes that code to build a dropper agent, which runs unnoticed in the background.
That agent then disables the operating system’s security update features as well as the user’s anti-virus capabilities. The agent works through a preset list of URLs, downloading whichever malware packages are still being hosted.
The dropper agent next erases any incriminating log items that could give it away and erases itself. The downloaded malware inventories the computer, uploads the results to a botnet command-and-control centre, and downloads a malware update, including new C&C locations. The malware then retrieves any cached commands from the botnet operator and begins to execute them.
These malware-infection scenarios reflect the escalation that has unfolded for decades between malware creators and the security community, but they also point to another important shift: The focus of the battle has transitioned from host protection to network-based malware detection.
Host-based defences may provide more advanced protection and cleanup capabilities, but they are also more difficult to manage and easier to circumvent. Network-based defences – everything from dynamic analysis sandboxing to network traffic analysis – cast a wider, more economical net for detecting malware payloads and communications.
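To make the idea of network traffic analysis concrete, here is a minimal, hypothetical sketch of one behaviour such systems look for: botnet agents that "phone home" on a regular timer produce outbound connections with suspiciously low timing jitter. The function, its name and the scoring threshold are illustrative assumptions, not any vendor's actual analytics.

```python
from statistics import mean, stdev

def beaconing_score(timestamps):
    """Score how periodic a host's outbound connections are (0.0 to 1.0).

    A low coefficient of variation in the gaps between connections
    suggests machine-generated beaconing rather than human browsing.
    All names and thresholds here are illustrative, not a real product's.
    """
    if len(timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    if m == 0:
        return 0.0
    cv = stdev(gaps) / m  # coefficient of variation of inter-arrival times
    # Near 1.0 = highly periodic (beacon-like); near 0.0 = irregular.
    return max(0.0, 1.0 - cv)

# A bot checking in roughly every 60 seconds scores far higher
# than bursty, human-driven browsing traffic.
bot = [0, 60, 121, 180, 241, 300]
human = [0, 2, 3, 90, 95, 400]
print(beaconing_score(bot) > beaconing_score(human))  # True
```

Real systems combine many such features; the point is that this signal comes entirely from traffic timing, with no access to the infected host required.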
But only so much can be accomplished at the network layer. To build new defences, whether they rely on signatures, attack-behaviour analytics or machine learning, security vendors and researchers must have access to families of malware samples.
And that’s where it starts to get complicated, given the countless nuances of malware construction, agent deployment and the supporting ecosystem, all of which limit what can be derived from captured samples and converted into actionable intelligence.
That, in turn, means organisations must find ways to mitigate malware threats using network-based detection systems. They also need to abandon the practice of attributing attacks and naming the type or family of threat.
Variations in the infection lifecycle, the malware analysis timeline, and the constant shifting of domain names and IP addresses combine to make that sort of correlation exceedingly difficult. In fact, it’s behaviour, rather than a name, that gives attackers away.
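The constant churn of domain names is itself a detectable behaviour. Malware that generates throwaway domains algorithmically tends to produce random-looking labels, which a simple statistical test can separate from dictionary-word domains. The following is an illustrative sketch; the function name and the entropy heuristic are assumptions for the example, not a production detector.

```python
import math
from collections import Counter

def domain_entropy(domain):
    """Shannon entropy (bits per character) of a domain's first label.

    Algorithmically generated throwaway domains tend to look random
    and score higher than human-chosen, dictionary-word domains.
    This heuristic is illustrative only.
    """
    label = domain.split(".")[0].lower()
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A random-looking, machine-generated label scores higher than a real word.
print(domain_entropy("xk3qz9vbt2lw.com") > domain_entropy("example.com"))  # True
```

Chasing the specific domains an attacker registers is futile; flagging the statistical signature of how they are generated is not.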
Network-based threat detection focuses on identifying and classifying threats by the network actions they perform. In turn, behind-the-scenes analysis of malware samples, artifact extraction, feature classification and threat labelling increases the efficacy of the signatures and systems that do the detecting. But they’re not really useful as tools for retroactive attribution of an in-progress attack.
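One way to picture classification by network actions is as a mapping from observed behaviours to coarse threat labels. The sketch below is hypothetical: the feature names and rules are invented for illustration and are not any vendor's actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class HostObservations:
    """Network behaviours observed for one internal host (hypothetical features)."""
    periodic_beaconing: bool        # regular outbound check-ins
    contacted_young_domains: bool   # traffic to recently registered domains
    large_internal_scans: bool      # sweeps of internal IP ranges
    bulk_outbound_upload: bool      # unusually large data transfers out

def classify(obs):
    """Map observed behaviours to coarse threat labels.

    Labels describe what the host is *doing* on the network,
    not which malware family is responsible.
    """
    labels = []
    if obs.periodic_beaconing and obs.contacted_young_domains:
        labels.append("command-and-control")
    if obs.large_internal_scans:
        labels.append("reconnaissance")
    if obs.bulk_outbound_upload:
        labels.append("exfiltration")
    return labels or ["benign"]

print(classify(HostObservations(True, True, False, True)))
# ['command-and-control', 'exfiltration']
```

Note that nothing in the output names a malware family: the labels stay at the level of behaviour, which is exactly what makes them actionable mid-attack.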
So here is the new conventional security wisdom: Today’s most efficient and precise detection methods operate at the network layer; attribution should be relegated to a statistical curiosity, assuming there is some historical evidence for correlation.
As advances in network-based detection increase the fidelity and coverage of malware and threats, the possibility of specific attribution will continue to recede. The malware ecosystem continues to evolve swiftly, and security researchers and professionals need to adapt accordingly.