US (and some European) newspapers recently carried headlines such as this one from the New York Daily News – “Facebook, Twitter and Instagram allowed over 500 law enforcement agencies to monitor users at protests nationwide”. Like many a mainstream newspaper headline, this one is rather misleading. The social networking sites didn’t “allow” any law enforcement agencies to do anything. Rather, either through normal reading of tweetstreams and the like, or through the use of APIs available to any developer, a company called Geofeedia monitored social media streams in real time and sold the results of its big data analytics to clients, which included over 500 law enforcement agencies among others (such as marketing organizations). Geofeedia did nothing illegal, or even unethical, although it may have brushed up against the “terms of use” policy of Facebook, at least.
One point that companies like Geofeedia make is that they aren’t doing anything that a human with pencil and paper (and an infinite amount of time) couldn’t do on their own. The aim is to arrive at a concept we at KuppingerCole have dubbed “Cognitive Security”. A cognitive security solution would be able to use natural language processing and machine learning methods to analyze both structured and unstructured security information the way humans do.
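To make that a little more concrete, here is a minimal sketch of what applying NLP and machine learning to unstructured security text might look like. It uses Python and scikit-learn, and the sample messages and labels are invented for illustration only – this is a toy, not anyone’s production system:

```python
# Minimal sketch: classifying unstructured security text with NLP + ML.
# The alert descriptions and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus of free-text alert descriptions.
messages = [
    "multiple failed ssh logins from unfamiliar IP range",
    "scheduled nightly backup completed successfully",
    "powershell spawned by office document, outbound connection to rare domain",
    "user changed desktop wallpaper",
    "large data transfer to external file-sharing site outside business hours",
    "printer driver updated by IT administrator",
]
labels = ["suspicious", "benign", "suspicious", "benign", "suspicious", "benign"]

# TF-IDF turns the unstructured text into numeric features; a simple
# classifier then learns which wording tends to accompany suspicious activity.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Classify a previously unseen description.
print(model.predict(["repeated login failures followed by privilege escalation"]))
```

A real cognitive security service would of course work at vastly larger scale and with far richer models, but the basic move – turning human-readable security information into something a machine can learn from – is the same.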
It’s long been known, at least among security analysts and forward-thinking CISOs, that traditional static security perimeters – while still considered necessary – are no longer sufficient to protect our digital resources.
Equally true is that there now exists a “skill gap” in Information Security – we just don’t have enough skilled people to handle the analysis of the reams and reams of data being collected by our security systems. Improved collection techniques, part of what is called “Real-Time Security Intelligence” (RTSI), aren’t really intelligent enough yet. The systems need to be trained and revised almost continuously. Still, it’s better than having a team of humans trying to analyze data 24/7. It’s a step in the right direction.
In a true “Cognitive Security” system, the service would continually learn from its actions, the reactions to them and the grading of those activities. That is, machine learning of the kind used in computer systems that play games (such as chess), or in the algorithms IBM uses with Watson, which made it a game show champion (Jeopardy), a financial whiz and a skilled medical diagnostician. Such systems do more than simply search through large amounts of data. Try this – go to Google (or Bing or another search engine) and do an image search for “anything that’s not an elephant”. What do you get? You get hundreds of pictures of elephants! The ability to recognize the concept of “not an elephant” is the cognitive part, and that’s missing from almost all of the security systems we use today.
Now, we don’t expect that Cognitive Security will emerge overnight. It takes time to train a computer system just as it takes time to train a security analyst. But we are starting. Today’s RTSI systems are becoming really good at identifying patterns and correlations. They can identify some of the “needles” in the burgeoning security data “haystacks” and flag their findings to humans, who can interpret the activity as malignant or benign. Cognitive Security systems would then use that judgement to modify their own behavior and better identify the threats being presented. Further, they could identify new types of threats based on an ability to understand concepts rather than just data points.
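As a purely illustrative sketch of that feedback loop – with hypothetical feature values and analyst verdicts, and scikit-learn’s incremental SGDClassifier standing in for whatever model an RTSI product actually uses – the analyst’s malignant-or-benign grading is folded back into the model so that the next round of flagging reflects it:

```python
# Minimal sketch of the human-in-the-loop feedback idea: a detector flags
# events, an analyst grades them, and each verdict is fed back to update
# the model incrementally. All numbers and labels are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])  # 0 = benign, 1 = malicious
model = SGDClassifier(random_state=0)

# Hypothetical numeric features for flagged events
# (e.g. login failures per hour, GB transferred, rarity of destination).
flagged_events = np.array([[40.0, 0.1, 0.9],
                           [ 2.0, 0.0, 0.1],
                           [55.0, 3.5, 0.8]])
analyst_verdicts = np.array([1, 0, 1])  # graded by a human analyst

# Fold each verdict back into the model, one event at a time.
for event, verdict in zip(flagged_events, analyst_verdicts):
    model.partial_fit(event.reshape(1, -1), [verdict], classes=classes)

# The updated model scores the next flagged event.
print(model.predict([[60.0, 4.0, 0.95]]))
```

The cognitive part – generalizing from those judgements to concepts, rather than just to similar data points – is what today’s systems still largely lack.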
When Robert Goddard launched the first liquid-fuel rocket in 1926, he may not have foreseen that we would send rockets – and contemplate sending people – to Mars 90 years later. But we are. We are at the very beginning of the Cognitive Security era. But we can foresee the day when it may simply be too expensive to create malware that can evade detection by Cognitive Security systems.