New UK AI regulation white paper leaves ‘unanswered questions’ on ChatGPT

2023-03-30

A new white paper outlining how the UK government plans to regulate artificial intelligence has been published. It takes a “pro-innovation” approach that aims to build public trust while also making it easier for businesses to innovate around the technology. However, experts warn it still leaves the question of how to regulate tools like ChatGPT unanswered.

Experts warn that the new AI regulation white paper ignores tools like ChatGPT and has no legislative backbone. (Photo: pathdoc/Shutterstock)

The “light touch” approach will put the emphasis on individual existing regulators rather than see an overarching body created. Each regulator, from health to energy, will be tasked with creating “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”

There are five key principles introduced in the white paper from the Department for Science, Innovation and Technology (DSIT): transparency, robustness, explainability, fairness and accountability. There will also need to be a pathway for redress if someone is the victim of a harmful AI decision, the government said.

The AI industry employs some 50,000 people and contributed £3.7bn to the economy last year alone, DSIT said, with twice as many companies providing AI products as any other EU country.

The argument for a “pro-innovation” approach, beyond “growing the economy”, is the potential benefit AI can bring to so many parts of society: helping doctors identify disease and aiding farmers in making more sustainable and efficient use of their land. The government hopes to see the technology put into more widespread use.

The government says it needs to balance this potential against the real risks posed by AI, particularly around privacy, bias and safety. For example, an AI system could use a mismatched dataset when making a decision over a loan, or, in education, incorrectly mark a child as failing if it has been trained on misleading data.

Hundreds of millions of pounds will be invested directly by the government to improve the environment in the UK for AI to flourish safely, but organisations are reticent to go “all in” due to the patchwork of legal regimes that increases the risks associated with failures or mistakes. To combat this, the government says it will avoid heavy-handed legislation that could stifle innovation, focusing instead on core principles for safety that will apply across the board.

UK AI regulation adapts to changing technology

This, it says, will ensure UK rules can more quickly adapt to a fast-changing technology and ensure the public is protected without placing an undue burden on companies. It will “empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”

AI could make the UK a “smarter, healthier and happier place,” said Michelle Donelan, the Science, Innovation and Technology Secretary. But with such a staggering pace of development, rules are needed to make sure that happens safely. “Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow,” Donelan said.

Regulators will issue practical guidance over the next 12 months to organisations developing or deploying artificial intelligence solutions, as well as providing risk assessment templates. There are currently no plans for legislation, but DSIT says that could happen to “ensure regulators consider the principles consistently”.

The government has already revealed plans for a taskforce to explore and build up the UK’s capabilities in foundation models, such as the large language models and image generation tools behind apps like ChatGPT and Stable Diffusion. It has also announced a new £2m regulatory sandbox to test the boundaries of these solutions.

Michael Birtwistle, associate director (data and AI law and policy) at the Ada Lovelace Institute, an AI research organisation, said effective regulation is essential to realising the UK’s AI ambitions, including providing legal clarity and certainty. It is also important, he said, to ensure the public has confidence in AI and that the regulations safeguard our fundamental rights. “Regulation isn’t a barrier to responsible AI innovation, it’s a prerequisite,” he declared.

Questions left unanswered about generative AI

While broadly welcoming the regulation and its approach, Birtwistle expressed concern over obvious gaps that could leave certain harms unaddressed, and said that overall the regulations are “underpowered relative to the urgency and scale of the challenge”.

“The UK approach raises more questions than it answers on cutting-edge, general-purpose AI systems like GPT-4 and Bard, and how AI will be applied in contexts like recruitment, education and employment, which are not comprehensively regulated,” he said. “The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software. We’d like to see more urgent action on these gaps.”

Microsoft, Google, Salesforce and others have all recently announced plans to fully integrate large language model-based AI tools into high-profile software such as Microsoft 365 and browsers. Apps are also increasingly using AI to deliver content or provide support.

“Initially, the proposals in the White Paper will lack any statutory footing,” said Birtwistle. “This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future.”

There are also concerns around funding, particularly in higher-risk areas such as health and law, with the Ada Lovelace Institute saying that without substantial investment in existing regulators, AI use cannot be regulated effectively. “The problems we have identified are serious but they are not insurmountable,” said Birtwistle. “Our previous research sets out a range of evidence and recommendations to ensure the UK’s regulatory framework for AI works for people and society. We will continue to use this to inform and work in dialogue with the Government and other groups to achieve this.”

Read more: OpenAI fixes ChatGPT bug that may have breached GDPR

Topics in this article: AI
