
Balancing Innovation and Responsibility: Examining the UK’s Light-Touch Approach to AI Regulation

Since generative artificial intelligence models like ChatGPT and Bard hit the mainstream at the beginning of this year, governments around the world have been grappling with the complex question of how to regulate AI.

Developments are coming thick and fast. This week, MEPs working on the EU’s landmark Artificial Intelligence Act published an open letter promising to introduce new laws to curtail the powers of ‘very powerful AI’ and ensure instead that it develops in a ‘human-centric, safe, and trustworthy direction’. Last week, the US Commerce Department launched a public consultation seeking to understand which policies will help businesses, government and the public trust that AI systems ‘work as claimed, and without causing harm’. Alan Davidson, Assistant Commerce Secretary, said that while his government believed in the promise of AI, there were concerns that it was not being rolled out safely. In China, new laws will require companies using generative AI to submit their technology to government regulators for a security assessment.

A different direction: Assessing the UK’s approach to AI regulation

The proposals from the US, Europe and China throw the approach taken by the UK government into sharp relief. It has now been almost a month since the long-awaited launch of the UK’s AI white paper, which set out that the government has no plans to introduce new legislation, nor to create a new regulator to oversee AI in its entirety. Instead, existing regulators, including Ofcom, the ICO and the CMA, will be encouraged to ensure AI companies follow five core principles: safety and security, transparency, fairness, accountability, and contestability. The government stresses that this will ‘help create the right environment for artificial intelligence to flourish safely in the UK’ whilst avoiding ‘heavy-handed legislation which could stifle innovation’.

However, with other countries putting more robust measures in place, will this light-touch approach hold up?

AI experts have raised concerns that the UK’s approach carries ‘significant gaps’. The Ada Lovelace Institute criticised the white paper for setting no new legal obligations on regulators, developers, or users of AI systems, with only a minimal duty on regulators expected in future. It noted that the approach raised ‘more questions than it answers on cutting-edge, general-purpose AI systems like GPT-4 and Bard, and how AI will be applied in contexts like recruitment, education and employment, which are not comprehensively regulated… The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software.’

Looking to the future

With a general election looming in 2024, how would a Labour government approach AI regulation? Upon the release of the white paper, Labour’s Lucy Powell criticised the government for ‘reinforcing gaps in [the] existing regulatory system and making the system hugely complex for businesses and citizens to navigate’. She has also, however, described herself as a ‘tech optimist’ and called regulation a means to ‘enable good practice’.

Labour is currently developing its approach to tech and the wider digital economy, which will be set out in a paper due for publication in May. Until then, we cannot be certain about the approach Labour will take. But one thing we can safely assume is that the AI landscape, and how we interact with it, will have evolved significantly by the time the next election rolls around.

It has, after all, taken just a few months from its launch in November 2022 for ChatGPT to become thoroughly embedded in our online lives, used by millions every day for everything from writing recipes and workout plans to drafting complex pieces of code. It is continually advancing and will soon be integrated into Microsoft Office tools. The most recent model, GPT-4, is even faster and smarter than its predecessor, and capable of more advanced tasks such as suggesting recipes from a picture of an open fridge and turning a rough sketch in a notepad into working code for a website.

OpenAI (just one of many companies working in the AI space) is already reported to be working on GPT-5. Early speculation has suggested that this new model could reach artificial general intelligence (AGI), the point at which a system is, on average, smarter than humans. While OpenAI co-founder Sam Altman has said that AGI will ‘benefit all of humanity’, others are less sure.

This includes Altman’s former colleague and fellow OpenAI co-founder Elon Musk, who recently signed an open letter, along with more than 1,000 other AI experts and researchers, calling for a moratorium on the development of large language models like ChatGPT and Google’s Bard. ‘Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable’, the letter says, adding that if researchers do not pause their work on AI models more powerful than GPT-4, then ‘governments should step in’.

While we shouldn’t expect an immediate shift in the government’s position on AI regulation, organisations working in and with AI should expect growing tension between the government’s desire for a light-touch regulatory approach and the need to prevent consumer harm, alongside ongoing debate across the political and media sphere about AI ethics and the appropriate regulatory response.

Recent debates around online safety have shown there are lessons to be learned from a failure to act proactively, with companies that did not sufficiently safeguard children from online harms suffering reputational damage as a result. Organisations working in AI should therefore consider taking steps to self-regulate, both to protect their own reputation and to prepare for potential future regulation.

If you’d like to understand more about the political and regulatory landscape in relation to AI and the wider technology sector, please contact Vic Wilkinson at victoria.wilkinson@grayling.com.