A group of MIT researchers has sketched out a way to close a gap in cybersecurity between human and machine. Human-made rules, meant to alert the system to an attack, work only when an attack exactly matches one of those rules. Machine-learning approaches, meanwhile, typically rely on anomaly detection, which produces enough false alarms that human analysts begin to distrust the system.
ORIGINAL SOURCE: Bostinno