UK government creates AI taskforce to look at foundation models

2023-03-16

Artificial intelligence has the potential to change the world, and in the space of a few months foundation models, such as those behind ChatGPT, have come to dominate the landscape. But the technology is evolving so quickly that governments, regulators and some companies are struggling to keep up. In response, the UK government has announced a new “foundation model” taskforce, though some analysts say the UK is already behind much of the world, particularly on regulating this potentially game-changing technology.

The taskforce will explore the benefits and impact of large language models. (Photo: Zapp2Photo/Shutterstock)

Reporting to the prime minister and the secretary of state for science, innovation and technology, the taskforce will be chaired by Matt Clifford of the UK Advanced Research and Invention Agency (ARIA), working with experts in the technology from across industry and academia. The group has been challenged to report on ways foundation models, including large language models and chat tools, can be used to grow the economy, create jobs and benefit society.

Foundation models, including those used for generative AI, drug discovery and chat tools such as ChatGPT or Bing, came to the forefront late last year when OpenAI released ChatGPT, but most of the largest models are created and held by a small number of large companies. Recent developments include OpenAI’s release of GPT-4, which can also process image inputs, and Google Cloud confirming it would open its 540-billion-parameter PaLM model to developers.

There have been growing calls for the UK to develop its own “national level” large language model to take on the likes of OpenAI and ensure that the country’s start-ups, scale-ups and enterprise companies can compete with US and Chinese rivals on AI and data.

Speaking to a group of MPs last month, BT’s chief data and AI officer Adrian Joseph said the UK was in an “AI arms race” and that without investment and government direction, the country would be left behind. Joseph, who also sits on the UK AI Council, said: “I strongly suggest the UK should have its own national investment in large language models. There is a risk that we in the UK will lose out to the large tech companies and possibly China. There is a real risk that unless we leverage and invest, and encourage our start-up companies, we in the UK will be left behind with risks in cybersecurity, healthcare and everywhere else.”

A need for UK Large Language Models

The new taskforce is part of the wider integrated review and will bring together leading experts to boost UK expertise in foundation models, seen as an essential component of AI. Its first priority will be to present a “clear mission focused on advancing the UK’s AI capability”.

It isn’t clear what format this will take or what it is expected to produce, but analysts hope it will include a call for a sovereign large language model. It could also add to calls for more targeted investment in compute power: the government recently published a report recommending improvements to UK compute infrastructure, including exascale computing and AI capabilities.

Clifford and his team will have to explore ways large language models can be used in healthcare, government services and economic security among other areas, including ways to support the wider government technology framework published recently.

Science, Innovation and Technology Secretary Michelle Donelan said in a statement that foundation models are “the key to unlocking the full potential of data and revolutionising our use of AI”. Citing the success of OpenAI’s ChatGPT, she said it would provide “unparalleled insights into complex data sets, enabling smarter, data-driven decision making”.


She continued: “With opportunity comes responsibility, so establishing a taskforce that brings together the very best in the sector will allow us to create a gold-standard global framework for the use of AI and drive the adoption of foundation models in a way that benefits our society and economy.”

Mike Wooldridge, director of foundation AI research at the Alan Turing Institute, welcomed the move and said it was the first step towards the UK creating its own sovereign AI capability, something the Turing Institute has been advocating for over the past year.

“There has been a rapid growth in demand for AI and data science resources over the past decade. The technology is evolving at such a rapid rate that the UK hasn’t been able to keep up,” he explained. “This taskforce will be crucial to ensuring emerging technologies are developed for public good. The Alan Turing Institute leads the UK on this issue, and we look forward to working with the taskforce to help make a sovereign AI a reality.”

A need for regulation

What isn’t clear from the announcement is how this will fit into the regulation of AI. The UK has previously announced it would take a sector-by-sector, risk-driven approach to regulating AI, but, like the EU with its AI Act, it isn’t clear how it would approach foundation models, which are general purpose rather than tied to a particular sector’s use.

Natalie Cramp, CEO of data consultancy Profusion, welcomed the launch of a new taskforce to look at the impact of foundation models but said the “government is very much playing catch-up with other countries,” adding that the EU will soon finalise wide-ranging rules governing developments in AI via the EU AI Act.

“Without clear direction and clarity on how the government will legislate on AI, businesses face a lot of uncertainty which, at best, curtails innovation and, at worst, can leave undesirable applications of AI unchecked,” Cramp says, arguing that regulation and guidelines need to form part of the investigation into foundation AI.

AI, particularly generative AI, has developed rapidly over the past 12 months, quickly going from a novelty or fringe use to a fixture of future business plans and pitch documents. Salesforce has announced Einstein GPT, Microsoft is deploying the technology across its entire product range and Google recently announced plans to bring generative AI to its Workspace platform, including Docs and Gmail.

This rapid development has left regulators and governments scrambling to catch up, warned Cramp. “We need to, as a society, think very carefully about how we want AI to shape how we all live. There is a huge capacity for misuse – both intentional and accidental,” she added.

This includes ensuring data isn’t biased, inaccurate or incomplete, as AI can amplify any existing issues in that area. “We need to look very carefully at how LLMs are created and how the results are applied,” Cramp said. “We will soon be at a stage where generative AI will be able to perfectly mimic a human via audio and visuals. You do not have to think too hard to imagine how this could be misused.

“Ultimately, I believe that a new rulebook for AI is not going to completely solve these problems. AI is developing too quickly for legislation to anticipate every innovation and application. What we need is an ethical framework that organisations can abide by which provides guardrails that shape how we use data and AI. If the taskforce can focus on the ethical implications of AI and how standards can be created that govern its development, it will be a very worthwhile endeavour.”

James Gill, partner and co-head of Lewis Silkin’s Digital, Commerce and Creative team, said: “With the launch of the even more powerful Chat GPT 4 this week, all eyes remain on AI, so the announcement is timely. When the UK Government called for evidence about the regulation of AI last year, its plan suggested it might be more laissez-faire than the rather more strict EU AI Regulation, and follow the OECD six principles. So, the reference to the EU legislation is interesting and may indicate a possible change of approach.”

Gill believes the government “may have recognised that a divergent UK approach may not be feasible, as with much of Brexit, in relation to organisations developing or deploying AI either across borders, or with users in both geographical areas, as those organisations will, in any event, need to comply with the AI Act in respect of EU operations”.

And he warned: “The government will also need to tread carefully to ensure it protects individuals’ rights, assuming it wishes to maintain a UK data ‘adequacy’ decision from the EU. The development also comes against the backdrop of the House of Commons’ Science and Technology Select Committee inquiry on governance of AI in the UK, which is yet to report.”

Read more: GPT-4 released: new OpenAI ‘multi-modal’ model can read images

Topics in this article: AI
