Since the rise of generative AI in late 2022, much has been written about the impact of these tools on cybersecurity. Over $1.1 billion was paid out in ransomware payments in 2023 alone – and the harsh reality is that cybercriminals, now fuelled by AI technologies, are becoming increasingly prolific, persistent and sophisticated in their attacks.
Research shows that 45% of business leaders have voiced concerns about the threat landscape worsening due to AI. The UK’s National Cyber Security Centre (NCSC) shares this concern, stating that AI is democratising cybercrime and enabling novice criminals to carry out sophisticated attacks once reserved for seasoned attackers.
The top three reasons AI-powered attacks are on the rise
In the age of AI, phishing remains the number one attack vector for criminals seeking to establish a foothold in an organisation. To build a cybersecurity strategy that confronts every factor, organisations first need to understand how cybercriminals are using AI.
Firstly, AI enhances the efficiency and precision of attacks. Cyber attackers use Large Language Models (LLMs) to automate and streamline their operations, and machine learning (ML) algorithms to rapidly analyse vast amounts of data, identify vulnerabilities, and execute attacks with minimal human intervention. The result is more effective phishing campaigns, malware distribution, and exploitation of security flaws.
Secondly, AI gives attacks unprecedented scale and reach. Though most LLMs have safeguards in place to stop these sorts of malicious uses, they can often be bypassed – and attackers also use them in conjunction with malicious LLM variants such as WormGPT and DarkBERT to write increasingly believable phishing emails. There are also regions of the world where highly targeted phishing attacks have been less common until now; with LLMs able to translate emails into near-perfect prose, attackers who aren’t fluent in the local language are likely to drive a surge in attacks there. This scalability means that attackers can reach a broader range of victims with less effort.
Finally, AI enables cyber attackers to develop more sophisticated methods of evading detection and adapting to defensive measures. LLMs can be used to create polymorphic malware that constantly changes its code to slip past traditional, signature-based security systems. This not only makes individual attacks more effective but also allows attackers to scale their operations and launch large-scale campaigns with unprecedented speed.
How companies can combat these threats effectively
You cannot build a truly cyber-resilient organisation without involving every single person who works there. Everyone needs a basic understanding of the current threat landscape, especially as potential threats are becoming harder to identify. With that context, they’ll be better able to spot when something is unusual or out of place, and suspicious enough to ask a question or two before clicking the link, wiring funds, or approving an MFA prompt.
To keep employees engaged, training programmes should be ongoing and consist of bite-sized, interesting, immediately applicable and fun modules. They should also include simulated phishing attacks to test users and give them an opportunity to apply what the modules teach. If a user clicks on a simulated phishing email, they should receive additional training at that very moment, to cement the learning. Over time, the system should automatically identify users who rarely fall for such attacks, reducing how much training they receive while making the simulations they do still receive more difficult. Conversely, giving persistent offenders additional bite-sized training and simulations can improve security outcomes over time, as the sketch below illustrates.
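To make that adaptive logic concrete, here is a minimal Python sketch of the kind of policy described above. The thresholds, field names and the `next_simulation_plan` function are illustrative assumptions, not any vendor’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the adaptive policy described above: users who
# rarely fall for simulated phishes get fewer but harder simulations,
# while repeat clickers get more frequent, remedial ones. All thresholds
# are illustrative assumptions, not values from any real product.

@dataclass
class UserRecord:
    simulations_sent: int = 0
    clicks: int = 0  # times the user clicked a simulated phish

    @property
    def click_rate(self) -> float:
        return self.clicks / self.simulations_sent if self.simulations_sent else 0.0

def next_simulation_plan(user: UserRecord) -> dict:
    """Decide the cadence and difficulty of the next simulated phish."""
    if user.simulations_sent < 5:
        # Not enough history yet: default cadence and difficulty.
        return {"days_until_next": 30, "difficulty": "medium", "extra_training": False}
    if user.click_rate < 0.05:
        # Rarely fooled: reduce volume but raise difficulty.
        return {"days_until_next": 90, "difficulty": "hard", "extra_training": False}
    if user.click_rate > 0.25:
        # Persistent offender: more frequent simulations plus
        # immediate bite-sized training after each click.
        return {"days_until_next": 14, "difficulty": "easy", "extra_training": True}
    return {"days_until_next": 30, "difficulty": "medium", "extra_training": False}

# Example: a user who clicked 3 of 10 simulations gets the remedial plan.
user = UserRecord(simulations_sent=10, clicks=3)
print(next_simulation_plan(user))
```

In a real deployment this decision would of course draw on richer signals (reporting behaviour, role, recent campaign themes), but the core idea is the same: let each user’s track record drive how often and how hard they are tested.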
The other reason for ongoing training is that the risk landscape is continuously changing and attack methods keep developing. For example, malicious emails containing QR (Quick Response) codes to scan were once the exception; now they’re a very familiar sight. We cannot assume that all staff have the same level of understanding, so it is key to supply them with the tools and skills to recognise potential threats.
Ultimately, businesses need to recognise that cybersecurity threats are constantly evolving, especially in the age of AI. Threat actors are leveraging AI tools to create sophisticated phishing attacks that can lead employees to click on malicious links or disclose sensitive information. Implementing security solutions is crucial, but it isn’t enough on its own. Addressing cybersecurity threats in the age of AI requires a multifaceted approach: robust, next-generation technical solutions as the foundation, an understanding of the risks and the involvement of everyone in the business to build a cyber-resilient culture, and phishing simulations with ongoing training to truly improve an organisation’s security posture.
Irvin Shillingford, Regional Manager, Northern Europe, Hornetsecurity