The EU’s flagship data protection law, the General Data Protection Regulation (GDPR), celebrated its sixth anniversary on 25 May 2024. Since coming into effect in 2018, its stringent requirements for enhanced security controls and data privacy have consistently raised awareness about the issues surrounding the storage and processing of personal data. This regulation has set a global benchmark, becoming a model for regulators worldwide.
The GDPR was designed to protect individuals’ fundamental rights and freedoms, especially their right to personal data protection. As internet usage became more widespread, the EU Parliament recognised the need for updated guidelines to adapt to a more connected world where data is the common currency. Consequently, the GDPR was created to replace the 1995 Data Protection Directive used across various European countries.
In the past six years, €4.5 billion has been paid in GDPR violation fines, according to research by NordLayer. Spain, Italy, and Germany have imposed the largest fines. Since the GDPR came into effect, individual data protection authorities (DPAs) have issued 2,072 violation decisions. Spain holds the worst record, with 842 fines totalling €80 million (£68.16 million) since 2018.
Compliance has been an uphill struggle for many organisations, but its impact in helping individuals manage their data better and holding organisations accountable for data mishandling cannot be overstated. The GDPR has reshaped how we manage data, enforcing a much-needed prioritisation of privacy rights. But what does the industry think about its impact, and what does the future hold?
We asked industry experts for their opinions, and here is what they said:
Benjamin Martin, Managing Consultant at Adarma: “On May 25, the European Union’s General Data Protection Regulation (GDPR) will celebrate its sixth anniversary. As we reflect on this milestone, we must consider the new challenges and heightened interest in how recent AI regulations, such as the new EU AI Act, will interact with GDPR. There are areas of intersection between both regulations that will require our attention in the coming months. For example, the EU AI Act mandates human oversight for certain AI systems, and implementing this oversight while handling sensitive data poses a significant challenge. Additionally, companies could face cumulative fines for the same incident under these regulations. We also need to explore how the requirement for risk assessments in the EU AI Act can be effectively integrated with existing GDPR assessment processes. When non-deterministic systems handle personally identifiable information (PII), implementing guardrails becomes crucial to ensure that a user’s PII is not inadvertently shared with others. These are exciting times for those interested in risk mitigation in this new era of AI deployments.”
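Martin’s point about guardrails can be made concrete. The sketch below is purely illustrative and an assumption on our part, not any vendor’s actual implementation: it redacts obvious PII patterns from an LLM’s output before it reaches another user. Production guardrails would use dedicated PII-detection services rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only -- real guardrails use dedicated
# PII-detection tooling, not a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]*\w"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# A guardrail would call this on model output before display.
print(redact_pii("Contact alice@example.com or 07700 900123."))
# → Contact [REDACTED EMAIL] or [REDACTED UK_PHONE].
```

The key design point is that the filter sits between the non-deterministic model and the user, so a leak in the model’s output never leaves the system unredacted.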
Philip Brining, Co-Founder & Managing Director of Data Protection People: “There is much controversy about GDPR’s effectiveness, so much so that the government is continuing its moves to overhaul the regime through the Data Protection and Digital Information (DPDI) Bill. But what’s wrong with the GDPR, and how well-equipped is it to deal with emerging technologies such as AI?
In my view, the GDPR is good but has some omissions. I don’t think it goes far enough when I look at data protection legislation in places such as Jamaica. I would like to see a requirement to report contraventions, custodial sentences, and an “annual return” or annual audit added to the regime.
People criticise the administrative burden that the GDPR imposes, in particular the compilation of records of processing activities, or RoPAs. My view is that RoPAs are very difficult to put together and maintain not because they are a bad idea, but because organisations have allowed their data processing activities to balloon with very little control. Documenting those activities is therefore burdensome – but it is the ballooning processing, not the requirement to bring it under control, that should come under pressure and be reformed. Organisations should get their houses in order.
Another popular target for GDPR bashers is its ability to cater for emerging technologies, including AI. The criticism, depending on which camp one is in, is either that the GDPR is too restrictive to allow AI to flourish, or that it is too weak to regulate AI properly. My view is that it is actually pretty well placed. Considering RoPAs again, is it a fault of the RoPA process that entities find it impossible to document their “black box” AI? Surely it is beneficial to understand how AI is working and arriving at decisions?
Equally, the process of undertaking a data protection impact assessment on an AI project is often hampered by organisations not being able to adequately get under the skin of their AI tools. But surely they need to in order to understand what data protection and privacy risks they may pose. If it is difficult to impose privacy by design and by default on AI projects, is it privacy by design that is the problematic issue, or the inability to ensure that privacy is baked into AI?
The upcoming sixth birthday of the GDPR is overshadowed by the DPDI Bill and by criticism that the regulation is ill-equipped to support the development of AI. The GDPR requires organisations to have a good understanding of their data processes – a requirement that is at odds with AI projects only if that understanding is not there.”
Mayur Upadhyaya, CEO at APIContext: “The upcoming sixth anniversary of GDPR is a timely reminder to assess its effectiveness in the age of AI. While the regulation has significantly improved data protection practices, new challenges are emerging. Obtaining informed consent becomes complex when AI models like Large Language Models (LLMs) rely on vast datasets, potentially containing personal information, collected by third parties. Technology providers, including LLM model makers and API wrappers, must improve self-service functionalities for users to control their data and ensure transparency throughout the AI data processing pipeline. Furthermore, ensuring user control over data extends to children and the right to erasure. LLMs and the APIs that provide access to them present unique challenges for GDPR compliance. The complex nature of these models makes it difficult to track how data is used within the system, potentially hindering users’ ability to exercise their rights under GDPR. Effectively managing data subject rights requests like access, rectification, or erasure becomes particularly complex for both LLM makers and LLM API wrappers. Given the extensive data processed by LLMs, it is crucial to have robust mechanisms that can address these rights comprehensively, ensuring compliance with GDPR standards. Do these entities currently have controls that are fit for purpose? This lack of clarity can lead to confusion and frustration for users trying to exercise their data privacy rights.”
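Upadhyaya’s question about whether controls for data subject rights are fit for purpose can be sketched in code. The following is a hypothetical, simplified illustration – the `ErasureRegistry` class and `subject_id` field are our assumptions, not any real product’s API – of how an erasure request might propagate into a data pipeline so that an erased subject’s records are excluded from downstream processing:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a registry recording erasure requests so that
# downstream pipelines (e.g. training-data preparation) can filter
# those subjects out. Real systems also need deletion from storage,
# backups, and derived datasets.
@dataclass
class ErasureRegistry:
    erased_subjects: set = field(default_factory=set)

    def request_erasure(self, subject_id: str) -> None:
        """Record a data subject's Article 17 erasure request."""
        self.erased_subjects.add(subject_id)

    def filter_records(self, records: list) -> list:
        """Drop any record belonging to a subject who requested erasure."""
        return [r for r in records if r["subject_id"] not in self.erased_subjects]

registry = ErasureRegistry()
registry.request_erasure("user-42")
records = [
    {"subject_id": "user-42", "text": "..."},
    {"subject_id": "user-7", "text": "..."},
]
print(registry.filter_records(records))
# → [{'subject_id': 'user-7', 'text': '...'}]
```

The hard part Upadhyaya identifies is precisely what this sketch glosses over: once personal data has influenced an LLM’s weights, there is no simple record to filter out, which is why rights requests are so much harder for model makers than for conventional data controllers.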
Anna Collard, SVP of Content Strategy & Security Awareness Advocate at KnowBe4, comments: “Since its enactment in 2018, the GDPR has set a global benchmark for data protection, enhancing privacy and data management practices well beyond European borders. However, the rise of AI presents new challenges, which require a more tailored regulatory approach. GDPR’s provisions on automated decision-making and profiling were pioneering in 2016, but AI’s capabilities have since advanced rapidly. Key GDPR principles like transparency and data minimisation are more complex with AI, which requires large datasets and involves algorithms prone to biases. The European AI Act complements the GDPR by addressing AI-specific risks, such as algorithmic transparency and discrimination, aiming to ensure robust protection of data rights while fostering AI innovation within the EU. Compliance with both regulations is essential for AI systems handling personal data.”