ChatGPT has just celebrated its second birthday (30th November)! Alongside its steep rise to prominence, ChatGPT has been revolutionising the way we interact with technology. Known for generating human-quality text and information (sometimes worryingly so), it has become a useful and versatile tool for many. From writing emails and essays to translating languages and providing summaries, ChatGPT can assist users with a wide range of needs, often making it a valuable accessibility tool.
Its ability to understand and respond to complex prompts demonstrates the power of artificial intelligence in natural language processing. As technology continues to advance, ChatGPT and similar models hold the potential to reshape industries and redefine human-computer interaction.
ChatGPT, while a powerful tool, also presents significant security risks. Sharing sensitive information with the model, for example, raises serious privacy concerns. Malicious users can also manipulate prompts to elicit harmful responses, spread misinformation, or generate malicious code. Additionally, weak security practices can lead to unauthorised access and account takeovers. It’s essential to use ChatGPT responsibly and to be aware of these potential threats.
So, what do cybersecurity experts think?
Dr Andrew Bolster, Senior Research and Development Manager (Data Science) at Black Duck:
“After two years of ‘ChatGPT’ taking over the public consciousness around ‘AI’, the sheen is coming off and the practical realities of building, productionising and supporting software that is ‘touched by AI’ are coming into stark focus.
Tense discussions (and legal cases) around code authorship, licensing, data residency and intellectual property are taking the place of breathless celebrations of the ‘Transformative Power of AI’.
The reality is that ‘code is code is code’, be it generated by a large language model or an intern; for security and software leaders to maintain any confidence in their products, that code still needs to be assessed for security risks and vulnerabilities.
Over the past two years, one can see the conversations maturing around how ‘AI’ can be a participant in the software engineering process. First it was ‘AI will swallow the world and write everything itself’, then came ‘AI can write code but it will still need verification and attestation’, then we passed through ‘AI in the IDE can be an intelligent assistant and help with boilerplate’, and now we’re pacing through a haze of ‘AI Agents can assist with different parts of codebases and help troubleshoot’. One way to think about this is how software startups mature from the crazed ‘I can build it in a weekend’ of a wizardly technical founder towards a rigorous, collaborative software development lifecycle with quality guardrails, operational stability, and global scale. To drive AI-empowered organisations to reach the quality, stability and scale of modern software development, we must mature our ecosystem of tools and processes _around_ such clever AI ‘agents’; not blindly trusting our businesses to magical black boxes.”
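Bolster’s point that ‘code is code is code’ is easy to see in practice: the same review and scanning discipline applies whether a snippet came from an LLM assistant or from a colleague. As a purely illustrative sketch (not from Bolster or Black Duck), the hypothetical Python example below shows the kind of injection flaw a security assessment would flag regardless of who, or what, wrote it:

```python
# Illustrative sketch only: the same security review applies whether this
# function was written by an LLM assistant or by a human engineer.
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: string interpolation lets crafted input rewrite the query.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the input is bound as data, not as executable SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    payload = "' OR '1'='1"  # classic injection input
    print(find_user_unsafe(conn, payload))  # returns every row: [(1, 'alice')]
    print(find_user_safe(conn, payload))    # returns nothing: []
```

Whether the unsafe version came from a code assistant or a human matters far less than whether it is reviewed, scanned and fixed before it ships.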
Chris Hauk, Consumer Privacy Advocate at Pixel Privacy:
“While ChatGPT has made advances in the last year, it has yet to reinvent life as we know it, as was promised. However, the possibilities for the technology, both good and bad, are becoming clearer. ChatGPT and similar tools help us with routine daily tasks, help us to code better and more easily, have accelerated scientific advancements, and more.
For many users, the artificial intelligence space is restricted to novelty and curiosity, especially when it comes to Siri and other virtual assistants. I would venture to say that a majority of users are afraid of AI and are hesitant to dip their toes into the AI waters, even though they are likely using AI-powered apps and devices without realizing they are doing so.
While educational institutions are wary of students using ChatGPT to cheat on papers and tests, the AI tools used to detect such cheating have not proven to be infallible, with original content sometimes being flagged as AI-generated.
AI is increasingly used to create deepfakes. It now takes under 3 seconds of audio for bad actors to use generative AI to mimic someone’s voice. These voices can be used to convince targeted users that family members or friends are hurt or otherwise in trouble, or to trick financial institution workers into transferring money out of a victim’s account. Deepfakes are also used in phishing attempts, and they are becoming increasingly tough to detect, whether audio, video, or still images.
Generative AI tools have become easier to use than ever, and these low-cost tools, along with the plethora of personal information available on the internet, result in an ever-expanding threat surface. AI automation tools make it easy to scale attacks, increasing the volume, and possibly the success rate, of AI scams.”
Javvad Malik, Lead Security Awareness Advocate at KnowBe4, echoes these thoughts on deepfakes:
“From a social engineering perspective in particular, trying to identify when an attack is AI-generated may be the wrong way to look at the challenge. Rather, one should look at where the attack is originating from, what it is requesting, and what kind of urgency is being emphasised. In doing so, people are more likely to be able to spot and defend against attacks regardless of whether they are AI-generated or not.
Likewise, in the broader context, fundamental cyber hygiene remains crucial. Employee training, strong access controls, patching, and incident response planning, amongst other practices, remain vital to building a secure organisation.”
Lucy Finlay, Director of Secure Behaviour and Analytics at ThinkCyber Security, noted how ChatGPT has been utilised by cybercriminals:
“ChatGPT has lowered the barriers to entry for cybercriminals by eliminating common signs of phishing, such as poor grammar and punctuation, making their scams more convincing. It also enables the creation of tailored phishing scenarios at speed, allowing scammers to pivot quickly between victim types. Additionally, ChatGPT is being exploited to develop new forms of social engineering, such as deepfake video meetings. In these cases, criminals use a deepfake “mask” to impersonate someone’s face, producing highly convincing video calls that are nearly indistinguishable from genuine interactions.”
Lucy continues, discussing the risks of using AI tools like ChatGPT without proper data safeguarding in place:
“One major risk is that reliance on AI can undermine critical thinking, a skill essential in the current landscape of misinformation, disinformation, and mal-information. Many people trust AI-generated content because it appears “clever”, yet it merely compiles open-source information into digestible chunks. Users may inadvertently trust and act on inaccurate or misleading information without verifying the sources. Another significant risk lies in data privacy. Organisations could unintentionally input sensitive information into a public language model, exposing proprietary data to other users. Conversely, they might receive incorrect or misleading content, including fabricated or harmful mal-information scraped from unreliable sources, as seen in widely circulated yet absurd examples like the “stop your cheese sliding off your pizza by gluing it to the slice” claim.”
“Companies should educate employees about these risks. Typically, two approaches are considered. The first involves allowing access to AI tools while urging caution: implementing clear policies on safe AI usage and turning on nudges in key generative AI platforms to remind employees to be careful about what they submit. The second approach is more restrictive, blocking access to AI tools and granting it only when a well-founded business case is made and approved. While the best strategy may vary by organisation, fostering critical thinking is essential in any case, and regular reminders to employees could play a crucial role in mitigating risks.”
Etay Maor, Chief Security Strategist at Cato Networks & Founding Member of Cato CTRL, had this to say about ChatGPT:
“ChatGPT’s second birthday is a significant milestone, showcasing how far conversational AI has come in a short time. Its capabilities have expanded greatly, now offering more accurate and nuanced responses, aiding professionals across various fields, including cybersecurity. Its ability to assist with complex queries, streamline tasks, and generate content has made it an indispensable tool for many organisations.
However, ChatGPT’s rapid development also raises concerns, particularly regarding its misuse by cybercriminals. Threat actors can exploit such tools to generate convincing phishing emails, draft malicious code, or manipulate social engineering tactics with alarming ease. This dual-use nature of AI highlights the importance of balancing accessibility with safeguards to prevent abuse. For example, OpenAI’s efforts to build ethical guidelines and detection mechanisms are steps in the right direction, but vigilance remains critical.
On the flip side, organisations can harness ChatGPT for positive purposes. In cybersecurity, it can assist in identifying vulnerabilities, generating awareness content, and even simulating attack scenarios for training purposes. By using ChatGPT responsibly, businesses can enhance their security postures and support their teams in combating increasingly sophisticated cyber threats.
We have to remember that ChatGPT is not autonomous, nor is it likely to become so in the foreseeable future. It relies on human input and direction, making it a powerful assistant rather than an independent actor. This distinction is vital, as it underscores the need for human oversight in deploying AI, particularly in high-stakes areas like cybersecurity.
It’s clear that ChatGPT has the potential to shape the future positively, provided its use is guided by ethical considerations. Whether it serves as a tool for innovation or a weapon for harm depends entirely on how humans choose to wield it.”