According to Nuix’s recent Black Report, 75% of organisations perform only limited remediation after a penetration test. On the positive side, it is good that organisations are paying attention to critical vulnerabilities. However, the report also shows that 64% of penetration testers say their biggest frustration is that organisations do not fix the things they know are broken.
Product and system owners face a few options when they learn about a vulnerability and the risks it poses. They can accept the risk, usually when the value of the asset is less than the cost of protecting it. A second option is mitigation: implementing controls external to the product, or relying on internal mechanisms, to make a vulnerability significantly more difficult to exploit.
In most cases, a third possibility – remediation – is the preferred course of action. However, product or system owners often choose not to remediate vulnerabilities, as remediation can be costly and complex. In these cases, their justifications tend to be misguided. Here are some of the most common reasons organisations choose not to remediate a vulnerability after they find out about it.
Root or user accounts are required to access data, and therefore the data is protected: Relying on this measure carries an inherent assumption that the organisation has effective controls around who can access the servers, both physically and digitally. The main reason this fails the security test is that insiders can still access the data, which may or may not be encrypted. Considering it can take an attacker less than 12 hours to compromise a system, organisations cannot afford to rely solely on system-level access controls to protect application data. And if the asset includes database credentials with significant privileges on the database, the product owner has just provided another avenue for attack.
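To make the point concrete, here is a minimal sketch – assuming Python’s third-party cryptography package, which the article does not mention – of application-level encryption, which keeps data unreadable even to an insider who has legitimate system or database access:

```python
# Minimal sketch: encrypt sensitive fields at the application layer so
# that raw file or database access alone does not reveal them.
# Assumes the third-party 'cryptography' package (pip install cryptography);
# names and storage layout here are illustrative, not prescriptive.
from cryptography.fernet import Fernet

# In practice the key would live in a secrets manager or HSM,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"sensitive application data"   # field to protect
ciphertext = fernet.encrypt(record)      # this is what gets stored

# Only code holding the key can recover the plaintext; an insider with
# raw database or disk access sees only ciphertext.
assert fernet.decrypt(ciphertext) == record
```

Encrypting at the application layer is a mitigation, not a substitute for remediation, but it removes the assumption that system-level access controls are the only barrier between an insider and the data.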
The framework provides protection: Frameworks are a very important part of developing secure applications, and the longer a framework has been around, the greater the chance that most of the low-hanging security issues have been resolved. However, even a well-tested framework is no guarantee of security. First, not all framework owners respond effectively to security issues or offer effective long-term support: they may be slow to communicate issues to the community using the framework, to fix problems in a timely manner, to determine the source of an issue, or to patch the system and deliver updates to customers. Second, most organisations do not use trusted local copies of the framework. If an organisation always pulls down a copy of the framework from a shared repository, how does it verify that the download is legitimate and not compromised? Last, but not least, malicious code can be injected into open source frameworks, and once that has happened, there might not be a way to recover.
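One answer to that verification question, sketched below under the assumption that the maintainers publish a SHA-256 digest for each release (the archive name and digest here are placeholders), is to pin and check the digest of every framework download before installing it – pip’s --require-hashes mode applies the same idea to Python dependencies:

```python
# Sketch: verify a downloaded framework archive against a pinned SHA-256
# digest before installing it. EXPECTED_SHA256 and the archive name are
# placeholders; a real project would pin the digest published by the
# framework's maintainers.
import hashlib

EXPECTED_SHA256 = "replace-with-the-published-digest"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("framework-1.2.3.tar.gz") != EXPECTED_SHA256:
    raise SystemExit("Download does not match the pinned digest; do not install.")
```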
Browser controls are in place, and are sufficient: Browser controls provide a basic level of defence meant to act as a gatekeeper, not as an overall solution, given the lack of context browsers have about applications. Not all browsers support these controls, and there is no guarantee that current browsers will continue to support them. For example, the HTTP Strict Transport Security (HSTS) header tells the browser to force every request from the page over HTTPS, but on its own it can provide a false sense of security: if cookies are not marked HttpOnly and Secure, any cross-site scripting vulnerability can still lead to theft of cookies or HTML5 local storage. X-Frame-Options is a very important header to set, but be sure to set it correctly so an attacker cannot build a site that frames the victim site; it helps prevent attacks known as clickjacking, though it does not stop cross-site scripting attacks that manipulate the document object model. Organisations can also use Access-Control-Allow-Origin to control which origins may read responses from their application. Just remember, it does not prevent a user from executing an attack using JavaScript that was dropped onto the web server via another attack vector.
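For illustration, here is a minimal sketch of setting those headers and cookie flags from a small Flask application – Flask is an assumption here, since the article names no stack, and the origin and values shown should be tuned per application:

```python
# Sketch: apply the browser-control headers discussed above from Flask.
# The trusted origin, cookie name, and values are illustrative assumptions.
from flask import Flask, make_response

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # HSTS: force HTTPS for a year, including subdomains.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    # Refuse to be framed by other sites (clickjacking defence).
    response.headers["X-Frame-Options"] = "DENY"
    # Allow only this trusted origin to read cross-origin responses.
    response.headers["Access-Control-Allow-Origin"] = "https://trusted.example.com"
    return response

@app.route("/login", methods=["POST"])
def login():
    response = make_response("ok")
    # HttpOnly keeps scripts from reading the cookie; Secure keeps it off
    # plain HTTP. Note that neither flag protects HTML5 local storage.
    response.set_cookie("session", "opaque-token-value",
                        httponly=True, secure=True, samesite="Lax")
    return response
```

Even with all of these set, the headers remain a gatekeeper: they narrow the browser’s behaviour but do nothing about vulnerabilities in the application itself.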
Employees won’t misuse internal apps: Organisations tend to think that no employee will take advantage of an internal tool. They assume that employees never make mistakes and that external attackers will never gain access to legitimate user accounts. The key is to remember that not all insider threats are malicious, and that nothing differentiates a hacker with stolen credentials from a legitimate user.
Network controls are effective: Some product owners use firewalls and network access controls as a justification for not remediating a vulnerability, believing the network controls are effective enough to make fixing the application unnecessary. There are two problems with this. The first is that it assumes there are no vulnerabilities in the network firewall or web application firewall, and that both are patched promptly when patches become available. The second is that in large, complex network architectures it can be difficult or impossible to fully understand the flow of network traffic; if a firewall protecting the application is misconfigured, the risk of accidental exposure increases.
Whichever path product and system owners choose – acceptance, mitigation, or remediation – each carries its own risks and consequences. Understanding the issues that lie beneath each option is the key to success.
By Evan Oslick, Software Security Developer, Nuix