As pen testers, we often test the same system every year.
In an ideal world, the security of a system would improve every year as the issues we find are fixed and lessons are learnt; in theory, you should end up with an ever-smaller list of issues.
Unfortunately this isn’t usually the case, and it is a particular problem for large companies. There we typically find an issue, report it, the customer fixes it, we confirm it is gone, and then when we test again the next year it has come back. From a security and ROI point of view this is obviously worrying: all we have achieved is getting rid of the vulnerability for a relatively short period of time.
To give you an example of how common the problem is, we have got to the point of writing scripts for some customers because the same vulnerabilities come up every time.
Organisations genuinely want to fix issues and solve their root causes, but the problem is that modern pen tests find too much. Ten years ago, when the bulk of testing was manual, you would get a relatively small report. Even for an organisation with relatively large ranges, the report would include a small number of critical issues, some medium and some low. It was relatively easy to sort out the vulnerabilities identified: the number of systems was low, and the people responsible for looking after them were known.
Yet now organisations have vast IP address ranges, a huge volume of incredibly complex services exposed to the internet, and a distributed IT structure with different people responsible for different parts of different systems in different countries.
When you add to this the improvement in testing, the rise in skill level across the industry and the fact that automated tools find a lot more, we end up with exponential growth in the number and type of issues we find. Organisations now have a huge volume of data to crunch: it’s no longer a 50-page PDF, it can be thousands of pages.
Having worked with lots of large clients like this, it becomes obvious that they fix some issues, worry a great deal about others and miss some entirely. Mostly, however, they fix the issues but don’t establish the root cause.
A common issue we find on lots of tests is weak SSL ciphers. They are dead easy to fix, and most people do, but unfortunately they fail to fix the server build guide or image they use. So in a year’s time the organisation will have built another 50 servers, rebuilt a bunch of the ones they had fixed, and the issue is back. They are treating the symptom, not the disease.
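To make the symptom concrete: spotting weak cipher suites in a scan result takes only a few lines of script, which is why the finding is so cheap to fix host by host. The marker list and suite names below are illustrative only, a sketch rather than our actual tooling:

```python
# Illustrative check: flag cipher suites containing well-known weak
# algorithms. The markers and example suites are made up for this sketch.
WEAK_MARKERS = ("RC4", "DES", "3DES", "MD5", "NULL", "EXPORT")

def weak_ciphers(offered):
    """Return the subset of offered cipher suites matching a weak marker."""
    return [c for c in offered if any(m in c.upper() for m in WEAK_MARKERS)]

scan = [
    "TLS_RSA_WITH_RC4_128_SHA",        # weak: RC4
    "TLS_AES_256_GCM_SHA384",          # fine
    "TLS_RSA_WITH_3DES_EDE_CBC_SHA",   # weak: 3DES
]
print(weak_ciphers(scan))
```

Scripting the detection is trivial; the hard part, as below, is making sure the fix reaches the build guide or image so the finding stays gone.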
The solution is to fix the servers we identify as weak, and then work out how they got into that state. This will probably involve tracking down the build documentation or gold image, updating it and testing it. However, given the volume of issues we find and the fact that IT is spread all over the world, companies are simply drowning in the data. We need a better way to help organisations make sense of it. What I am really talking about is the difference between data and information.
We need to present the results of pen testing in such a way that an organisation can find patterns and commonalities, and therefore efficient ways to solve the problem. For example, instead of fixing 50 servers with weak SSL configuration one by one, the data might show it is a common and recurring theme that could be resolved by investing a few days’ work in fixing the gold image. This would bring a huge improvement in security for relatively little effort.
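Turning data into information here can be as simple as collapsing a flat list of per-host findings into themes. A minimal sketch (the host addresses and issue names are invented for the example):

```python
from collections import Counter

# Illustrative: a flat list of (host, issue) findings, as a report might
# list them, with the same issue repeated across many hosts.
findings = [("10.0.0.%d" % i, "Weak SSL ciphers") for i in range(1, 51)]
findings += [("10.0.1.5", "Default credentials")]

# Counting by issue surfaces the recurring theme immediately.
themes = Counter(issue for _host, issue in findings)
print(themes.most_common())
```

Fifty individual findings become one line of information: a single recurring theme that points straight at the build image, rather than at 50 separate servers.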
Equally, we find customers struggling to prioritise critical issues. Even if you can afford to fix them all, which do you fix first? Customers have this problem because the pen tester’s opinion of what is critical doesn’t always reflect the business owner’s view: different types of issues have different impacts on different organisations.
A standard pen test report could present two critical issues with the same rating but no information about their relative importance to the business. Again, it’s about information versus data.
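One way to picture this is a business weighting applied on top of the tester’s severity score. The assets, scores and weights below are hypothetical, a sketch of the idea rather than any real scoring model:

```python
# Illustrative only: rank findings by tester severity multiplied by a
# business-assigned asset weight. All names and numbers are invented.
findings = [
    {"issue": "SQL injection",    "severity": 9.0, "asset": "marketing site"},
    {"issue": "Weak SSL ciphers", "severity": 5.0, "asset": "payment gateway"},
]
asset_weight = {"marketing site": 0.3, "payment gateway": 1.0}

def business_risk(f):
    """Combine raw severity with how much the business values the asset."""
    return f["severity"] * asset_weight[f["asset"]]

ranked = sorted(findings, key=business_risk, reverse=True)
for f in ranked:
    print(f["issue"], round(business_risk(f), 1))
```

In this made-up example, a lower-severity issue on the payment gateway outranks a higher-severity one on the marketing site, which is exactly the kind of reordering a tester-only rating cannot express.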
So what’s being done to address this problem? At CNS Hut3, we are trying to solve it with our Next Generation Penetration Testing, specifically our new portal. This allows large companies to manipulate the results of a pen test, adding weightings that reflect the drivers of their business and specifying what is important, both in the targets we test and in the types of things we find.
Using this and our data modelling, large organisations will be able to find the patterns, work out what to fix and how to fix it, and track the improvements to show ROI. It is our attempt to use pen testing to generate information, not just piles of data.
Edd Hardy is head of operations at CNS Hut 3