Earlier this week, it was reported that 198 million US voter records were exposed on a publicly accessible Amazon S3 storage server belonging to Deep Root Analytics, a Republican data analytics firm. It is reportedly the biggest leak of its kind in history.
Various databases were found on the server containing personal information of American citizens, including names, dates of birth, home addresses, phone numbers, and voter registration details that reveal their voting preferences. Deep Root Analytics uses these kinds of data sets to help its political partners target potential voters through big data analysis, and its services were used in the 2016 presidential campaign.
The firm has since taken responsibility for the exposure, whilst stating that it has no reason to believe its security systems were compromised.
Terry Ray, chief product strategist at Imperva, gave some insight into the breach:
“This was less a leak and more an identified exposed server. From the information provided, the data is not necessarily known to have been stolen. It sounds to me like this is another case of incorrectly secured cloud-based systems. Certainly, the security of private data (especially my data, as I am a voter) should be of paramount concern to companies who offer to collect such data, but that concern should ratchet up a few marks when data storage transitions to the cloud, where a poorly secured data repository may not have the secondary controls of an in-house, non-cloud data centre.
With more data being collected by companies than ever before, securing it is no small task. There are many factors that need to be taken into consideration. Are the environment and the data vulnerable to cyber threats? Who has access to the data? And there’s also the issue of compliance. Big data deployments are subject to the same compliance mandates and require the same protection against breaches as traditional databases and their associated applications and infrastructure.”
He added:
“Much of the challenge of securing big data is the nature of the data itself. Enormous volumes of data require security solutions built to handle them. This means incredibly scalable solutions that are, at a minimum, an order of magnitude beyond that for traditional data environments. Additionally, these security solutions must be able to keep up with big data speeds. The multiplicity of big data environments is what makes big data difficult to secure, not necessarily the associated infrastructure and technology. There is no single logical point of entry or resource to guard, but many different ones, each with an independent lifecycle.”
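The misconfiguration Ray describes, a storage bucket left readable by anyone on the internet, is the kind of thing that can be audited and locked down programmatically. Below is a minimal sketch using Python and boto3; the bucket name is hypothetical, and the block-public-access settings shown are a general AWS hardening step, not anything Deep Root Analytics is known to have used.

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "example-voter-data-bucket"  # hypothetical bucket name

s3 = boto3.client("s3")

def bucket_allows_public_access(bucket: str) -> bool:
    """Check the bucket ACL for grants to the 'AllUsers' (anyone on the internet) group."""
    acl = s3.get_bucket_acl(Bucket=bucket)
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("URI", "").endswith("/global/AllUsers"):
            return True
    return False

def block_public_access(bucket: str) -> None:
    """Turn on all four S3 Block Public Access settings for the bucket."""
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

if __name__ == "__main__":
    try:
        if bucket_allows_public_access(BUCKET):
            print(f"{BUCKET} is publicly readable -- locking it down")
            block_public_access(BUCKET)
        else:
            print(f"{BUCKET} has no public ACL grants")
    except ClientError as err:
        print(f"Could not audit {BUCKET}: {err}")
```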
Andrew Clarke, EMEA director at One Identity, gave some pointers on how best to avoid this type of data breach in the future:
- “Always ensure that only the right people can access data
- Empower the owners of the data to easily put the proper access controls in place
- Don’t assume that just because it is password-protected it is safe (use multifactor and role-based access controls)
- Slow down and make sure that governance is in place. For data stored in the cloud especially, this means: the owners of the data decide what is right (not IT); making it easy for someone who is entitled to the data to get to it; and running periodic attestations to validate that all of the people with permission to access the data actually should have that permission”
He added: “Once a ‘security first’ and ‘identity is the new perimeter’ attitude is adopted, incidents will be dramatically reduced.”
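Clarke’s pointers, role-based access control, multifactor checks and periodic attestation, can be illustrated with a small sketch. The roles, users and permissions below are entirely hypothetical; the point is simply that access is decided by role and MFA status, and that a periodic review flags grants no longer justified by anyone’s role.

```python
from dataclasses import dataclass, field

# Hypothetical role-to-permission mapping: which roles may touch which data sets.
ROLE_PERMISSIONS = {
    "analyst":    {"read:voter_models"},
    "data_owner": {"read:voter_models", "grant:voter_models"},
}

@dataclass
class User:
    name: str
    role: str
    mfa_enrolled: bool
    extra_grants: set = field(default_factory=set)  # ad-hoc permissions granted over time

def can_access(user: User, permission: str) -> bool:
    """Allow access only if the user's role grants the permission AND MFA is enrolled."""
    if not user.mfa_enrolled:
        return False  # a password alone is not enough
    return permission in ROLE_PERMISSIONS.get(user.role, set()) | user.extra_grants

def attestation_report(users: list) -> list:
    """Periodic review: flag ad-hoc grants that the user's role does not justify."""
    findings = []
    for user in users:
        for grant in user.extra_grants - ROLE_PERMISSIONS.get(user.role, set()):
            findings.append(f"{user.name}: review grant '{grant}' (not part of role '{user.role}')")
    return findings

if __name__ == "__main__":
    users = [
        User("alice", "analyst", mfa_enrolled=True),
        User("bob", "analyst", mfa_enrolled=False),
        User("carol", "data_owner", mfa_enrolled=True,
             extra_grants={"export:raw_voter_file"}),  # granted long ago, never reviewed
    ]
    print(can_access(users[0], "read:voter_models"))   # True
    print(can_access(users[1], "read:voter_models"))   # False: no MFA
    for finding in attestation_report(users):
        print(finding)
```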