While AI holds the potential to be a powerful tool for positive change, it can also be weaponised by bad actors, amplifying risks we have never encountered before. One of the most alarming threats is the rise of deepfakes: videos or audio created using AI to imitate real people.
While deepfakes have been circulating online since 2017, their impact has surged considerably over the last year, as they have become more sophisticated and as cybercriminals have found new and creative ways to exploit them. Initially used to impersonate celebrities and public figures, they are now being used in more personalised attacks specifically aimed at senior executives within organisations. A notable example saw a finance employee duped into transferring $25 million to fraudsters who used a deepfake to impersonate the company’s chief financial officer.
It’s likely this won’t be the last serious attack of its kind, with our own research finding that only 52% of organisations are highly confident in their ability to detect a deepfake of their CEO. Fuelling this is a lack of public awareness of the dangers of deepfakes. According to Ofcom, less than half of UK residents understand what deepfakes are, which significantly increases the likelihood of a successful attack. Equally concerning is that, according to KPMG, 80% of business executives believe deepfakes represent a major threat to operations, yet only 29% have implemented steps to defend against them.
Raising awareness and implementing pre-emptive strategies are the first steps organisations must take toward tackling the deepfake threat. But where should they begin? Let’s delve deeper, looking at three solutions that organisations should consider to prevent being caught out.
Dual-layered identity protection
Organisations need a layered approach to identity verification to improve security. Biometric methods, such as fingerprints or facial recognition, are strong but not foolproof. Combining these with additional steps – such as security questions or push notifications – allows organisations to better defend themselves without negatively impacting the user experience.
Passive identity threat detection – a security method that monitors for unusual activity in the background – is particularly useful here. Working silently alongside active methods of authentication, it can trigger alternative methods of verification, such as a push notification alerting the user to unusual login activity, without disrupting what the user is doing. This approach ensures threats can be identified and neutralised before they cause significant harm, all while keeping the process user-friendly.
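As an illustration only – not a description of any particular vendor’s product – the pattern above can be sketched in a few lines of Python: a passive risk score is computed silently from login signals, and a step-up check is triggered only when that score is high. Every name, signal and threshold here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user_id: str
    device_id: str
    country: str

# Hypothetical per-user baselines of "normal" behaviour, learned passively.
KNOWN_DEVICES = {"alice": {"laptop-01"}}
USUAL_COUNTRIES = {"alice": {"GB"}}

def risk_score(event: LoginEvent) -> int:
    """Score a login in the background without interrupting the user."""
    score = 0
    if event.device_id not in KNOWN_DEVICES.get(event.user_id, set()):
        score += 2  # unfamiliar device
    if event.country not in USUAL_COUNTRIES.get(event.user_id, set()):
        score += 2  # unusual location
    return score

def authenticate(event: LoginEvent) -> str:
    """Allow low-risk logins silently; step up verification otherwise."""
    if risk_score(event) >= 2:
        return "step-up: push notification sent"
    return "allowed"
```

The point of the sketch is that the user on a familiar device in a familiar location never sees the extra check; only the anomalous login pays the friction cost.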
Transitioning from implicit trust to explicit verification
The habit of automatically trusting what we see and hear must end now that deepfakes make it so easy to fake an identity. Appearances can be deceiving, so employees and consumers must verify everything. This means applying extra security steps to high-risk interactions such as transferring money or clicking unfamiliar links. While checks aren’t necessary for every interaction, they are essential for sensitive tasks, where the person or request must be confirmed as genuine before any action is taken.
These security steps are also vital to protecting against deepfake attacks. Deepfakes often leverage social engineering tactics to deceive employees, which presents a serious risk because these interactions frequently bypass traditional security measures. For instance, a Zoom call seemingly from a CEO could prompt an employee to approve a financial transaction or share sensitive credentials. The consequences are not just financial loss but also jeopardised security and the erosion of the trust essential to efficient business operations. Organisations must recognise that trust can no longer be assumed; it must be actively verified.
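One minimal sketch of such an explicit-verification policy, assuming hypothetical action names and a made-up monetary threshold, is a simple gate that decides whether a request may proceed on trust or must first be verified out of band (for example, by calling the supposed requester back on a known number):

```python
HIGH_RISK_THRESHOLD = 10_000  # hypothetical policy limit for transfers

def requires_explicit_verification(action: str, amount: float = 0.0) -> bool:
    """Return True when a request must be verified before acting on it.

    Routine interactions pass through; sensitive ones - moving money,
    sharing credentials, changing payment details - demand verification,
    regardless of how convincing the requester looks or sounds.
    """
    high_risk_actions = {"wire_transfer", "share_credentials", "change_payee"}
    if action not in high_risk_actions:
        return False
    # Small transfers below the policy limit are treated as routine here.
    if action == "wire_transfer" and amount < HIGH_RISK_THRESHOLD:
        return False
    return True
```

Under a policy like this, the $25 million transfer described above would have been blocked pending an out-of-band check, however realistic the deepfaked CFO appeared on the call.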
Harnessing AI for defence
We are at a pivotal point where AI is both the source of and solution to the deepfake challenge. As deepfake technology advances, it has become critical for businesses to adopt the latest tools to detect and neutralise risks. AI-driven solutions, like image insertion detection which identifies manipulated content, and audio detection tools designed to spot synthetically generated audio, are essential defences in the fight against this growing threat.
As deepfake technology continues to evolve, organisations must stay one step ahead. Truly safeguarding operations takes proactivity rather than reactivity in the face of this new frontier of digital impersonation. By adopting a multifaceted approach to identity verification and remaining aware of the tactics employed by cybercriminals, organisations can turn the tables, using the very same technological innovations that attackers rely on to cause harm to protect themselves.
Alex Laurie, SVP at Ping Identity