The UK government today outlined its proposed approach to regulating artificial intelligence (AI). Unlike the EU, which is developing a new AI law, the UK proposes to ask existing regulators to apply the principles of AI governance within their respective areas of focus.
A ‘collaborative’ system of AI regulation is the preferred approach among UK regulators, according to a recent study by the Alan Turing Institute, but could be complex to deliver and may overburden their resources, experts told Tech Monitor.
In a policy paper published today, the Department for Digital, Culture, Media and Sport outlined an approach to AI regulation that it describes as ‘context-specific’, ‘pro-innovation and risk-based’, ‘coherent’ and ‘proportionate and adaptable’.
Under the proposals, existing agencies such as Ofcom and the Competition and Markets Authority would be responsible for ensuring that any AI used by industry, academia or the public sector within their areas of interest is technically secure, functions as designed, is “explainable”, and considers fairness.
Rather than each individual use of AI being regulated and controlled, regulators would follow a set of core principles, the policy paper says. They would apply these principles to their respective sectors and build on them with specific guidelines and regulations. Some sectors, such as healthcare and finance, will have stricter rules, while others will take a more relaxed and voluntary approach.
These cross-sector principles include regulating AI based on its use and the impact it has on individuals, groups and businesses. Regulation must also be pro-innovation and risk-based, focusing on issues where there is clear evidence of genuine risk or missed opportunities. And it should be tailored to the distinct characteristics of AI, so that the overall rules are easy to understand and follow.
AI regulation in the UK: a collaborative approach
The government’s proposed approach stands in contrast to that of the EU, whose AI Act seeks to establish a new law governing the use of AI across the bloc. “The EU is adopting a risk-based approach,” says Adam Leon Smith, CTO of AI agency Dragonfly and the UK representative in the EU’s AI standards group. “It is specifically prohibiting certain types of AI, and requiring high-risk use cases to be subject to independent conformity assessment.
“The UK is also following a context-specific and risk-based approach, but is not trying to define that approach in primary legislation; instead, it is leaving that to individual regulators.”
A more collaborative approach, in which regulators work together to define principles but apply them separately in their areas of focus, is the preferred approach among regulators, according to a recent study by AI think tank the Alan Turing Institute.
Regulators consulted in the study rejected the prospect of a single AI regulator, said Dr Cosmina Dorobantu, co-director of the public policy programme at the Alan Turing Institute. “Everybody shot that down because it would affect the independence of the regulators,” she explained.
The prospect of a purely voluntary system of AI regulation was also rejected. “AI is a very broad technology,” said Professor Helen Margetts, programme director for public policy at the institute. “Regulation has to be a collaborative effort.”
Nevertheless, the government’s proposed approach is likely to be a complex undertaking, given the number of regulatory agencies in the UK. “One of the more surprising things we learned during the study is that there is no list of regulators,” said Dr Dorobantu. “Nobody keeps a central database. There are over 100, ranging from some with thousands of employees to others with just one person.”
All these regulators will need to develop AI expertise under the proposed approach, the pair explained, and it will need to be clarified how they should coordinate their activity where regulations overlap.
The government’s proposed approach could also prove burdensome for the regulators, argues Leon Smith. “It is unclear if the ICO and Ofcom will be able to handle the increased workload,” he says. “This workload is particularly important given the frequency of change that AI systems undergo, but also the expected impact of the Online Safety Bill on Ofcom.”
The UK’s proposed approach includes a provision that would require all high-risk AI applications to be “explainable”, particularly with respect to bias and potential inaccuracies. This goes further than the EU’s AI Act, Leon Smith observes.
“The policy paper states that regulators may deem that high-risk decisions that cannot be explained should be prohibited entirely. The EU has not gone so far, merely indicating that information about the operation of the systems should be available to users.”
The government has invited interested parties to provide feedback on the policy paper. It will set out more details of the proposed regulatory framework in a forthcoming white paper, it said.
Read more: MEPs are preparing to debate Europe’s AI Act. These are the most contentious issues.