The UK government’s Frontier AI Taskforce is establishing an AI safety research team to identify and evaluate risks associated with advanced AI development.
As set out in its first progress report, published on 7th September this year, the taskforce is working with several leading technical organisations, including RAND, ARC Evals, and Trail of Bits. Since then, it has partnered with three more: Advai, Gryphon Scientific and Faculty AI.
These new partnerships will tackle important questions about how AI can enhance human capabilities in specialised fields and about the risks posed to current safeguards. Their findings will feed into roundtable discussions with civil society groups, government representatives, leading AI companies and research experts at the AI Safety Summit next month.
John Kirk, Deputy CEO at ITG, said: “Seeing experts collaborate to tackle the cautions and fears surrounding AI is key to building confidence in its widespread adoption. AI has the potential to accelerate business operations in all areas, and by establishing such a team the UK better positions itself for tech superpower status.
All sectors will benefit from its safe development, and with that confidence the creative industries will be able to enhance campaigns on a global scale, working hand in hand with such innovative tech.”
Today’s announcement builds on last month’s progress report, in which the Frontier AI Taskforce announced the establishment of its expert advisory panel, the appointment of two research directors, and several partnerships with leading organisations.
Advai is a UK company focused on enabling Simple, Safe, Secure AI adoption. Its technology and research focus on identifying vulnerabilities and limitations in AI systems in order to improve and defend them.
An applied AI company providing software, consulting, and services, Faculty has worked with the UK government for nearly a decade. Its other public sector work includes partnering with the NHS to build the COVID-19 early warning system, and with the Home Office to detect ISIS propaganda online.
The Frontier AI Taskforce is working with Faculty AI to assess the degree to which LLMs can uplift a novice bad actor’s capabilities, and how future systems may increase this risk.
A physical and life sciences research and consulting company with technical expertise in public health, biodefense, and homeland security, Gryphon Scientific has experience in working at the forefront of scientific advancement alongside governments, including the US, and nations in the Middle East and North Africa.
Gryphon Scientific is working with the Frontier AI Taskforce to identify the potential for LLMs as a tool to drive rapid progress in the life sciences.