“My Government is committed to making work pay and will legislate to introduce a new deal for working people to ban exploitative practices and enhance employment rights. It will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” That’s what the King said yesterday when he delivered his speech at the State Opening of Parliament, setting out the new government’s agenda. What does this mean for the cybersecurity industry? The community reacts.
Lane Williams, VP of Global Sales Engineering at Salt Security, said:
“Since generative AI is a bit of a Pandora’s box, many governments are working to ensure the responsible and ethical development of AI while mitigating its potential risks and negative consequences.
To that point, the government is working to identify which AI models are considered the most advanced and potentially impactful. It will also establish requirements around three main areas: transparency and explainability, which make the workings of a model understandable so it is clear how a decision was made; fairness and non-discrimination, which ensure AI models do not perpetuate biases or discriminate against specific groups; and accountability and liability, which establish who is responsible if an AI model causes harm or damage.
At the end of the process, the government will need to implement and enforce laws that ensure developers comply with the requirements and standards set for AI models.
Regardless of the AI models and controls implemented to mitigate risks, ensuring compliance requires robust tools and processes. Posture Governance emerges as a solution, establishing a framework of compliance rules that actively monitor AI model outputs and proactively generate alerts for any instances of non-compliance. This real-time monitoring and alerting mechanism empowers governments and companies to maintain continuous oversight, ensuring AI applications adhere to established standards and regulations.”
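Salt Security has not published the internals of its Posture Governance capability, but the underlying idea, encoding compliance rules that are continuously evaluated against model outputs and raising alerts on violations, can be sketched briefly. The Python below is a hypothetical illustration, not Salt Security’s API: `ComplianceRule`, `ModelOutput`, and `audit` are invented names, and the simple substring checks stand in for the bias, PII, or policy classifiers a real system would use.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of posture-governance-style compliance monitoring.
# None of these names come from Salt Security's product.

@dataclass
class ModelOutput:
    model_id: str
    prompt: str
    response: str

@dataclass
class ComplianceRule:
    name: str
    description: str
    check: Callable[[ModelOutput], bool]  # returns True when compliant

def audit(output: ModelOutput, rules: list[ComplianceRule]) -> list[str]:
    """Evaluate one model output against every rule; return alert messages."""
    alerts = []
    for rule in rules:
        if not rule.check(output):
            alerts.append(
                f"ALERT [{rule.name}] model={output.model_id}: {rule.description}"
            )
    return alerts

# Example rules -- real checks would invoke trained classifiers,
# not substring tests.
rules = [
    ComplianceRule(
        name="no-pii-leak",
        description="Response must not echo personal data markers.",
        check=lambda o: "national insurance" not in o.response.lower(),
    ),
    ComplianceRule(
        name="explainability-tag",
        description="Response must carry a rationale for auditability.",
        check=lambda o: "rationale:" in o.response.lower(),
    ),
]

for msg in audit(ModelOutput("model-x", "query", "Here is the answer."), rules):
    print(msg)
```

In practice the checks would be backed by dedicated classifiers, and alerts would feed a SIEM or compliance dashboard rather than stdout; the point of the sketch is that regulatory requirements become executable policy that can run continuously against live model traffic.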
Ruminating on whether the news means CISOs and practitioners must bolster their understanding of AI, Adam Pilton, Senior Cyber Security Consultant at CyberSmart and a former Detective Sergeant who investigated cybercrime, notes:
“The CISO is a senior leader who crafts and oversees the organisation’s information security strategy, ensuring its data and technology are protected.
The CISO is not expected to know everything but should have sufficient knowledge to be able to identify possible issues and develop strategies and tactics to prevent them.
The responsibility of the CISO is to ensure they have suitably trained people on their team who can advise on and implement the required defences, as well as on measures that can be used to enhance current or future processes and activities.
Just like the CEO of a car manufacturer does not need to be a mechanic, a CISO does not need to know the inner workings of AI. They must, however, be confident in their understanding of the fundamentals of artificial intelligence and stay up to date on how AI can enhance their daily operations, both now and in the future.”
Curtis Wilson, staff data scientist at the Synopsys Software Integrity Group, said:
“The previous government, in its whitepaper on AI regulation, highlighted the importance of interoperability with EU and US AI regulation. This is something I hope the Labour government commits to as well. With companies operating in global markets, the burden of complying with multiple inconsistent regulatory frameworks would be onerous. This is especially true for smaller companies and start-ups, which might lack the resources to comply. The EU AI Act took this into account, and I would hope to see a UK act containing similar provisions.
It’s easy for companies to get too concerned with all these new rules, but the greatest problem facing AI developers is not regulation but a lack of trust in AI. For an AI system to reach its full potential, it needs to be trusted by the people who use it. I see regulatory frameworks as an essential component of building that trust. Strict rules will deter careless developers and help customers be more confident in trusting and using AI systems.
However, it’s important to remember that AI is a complex subject at a stage of rapid development. When even many developers don’t fully understand the technology and its implications, what chance do policymakers have? I hope the Labour government relies on industry experts when creating legislation, so that it can remain high-level and be overseen by competent regulatory bodies, which will be able to react more quickly to a changing technological landscape.”