‘Unintended harms’ of generative AI pose national security risk to UK, report warns

2023-12-19

Unintended consequences of generative AI use could cause significant harm to the UK’s national security, a new report has warned.

Generative AI could lead to increasingly sophisticated deepfake content being produced, a new report warns. (Photo by Tero Vesalainen/Shutterstock)

The paper from the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute highlights key areas of concern that need to be addressed to protect the nation from threats posed by these powerful technologies.

The unintended security risks of generative AI

In the report, titled Generative AI and National Security: Risk Accelerator or Innovation Enabler?, the authors point out that conversations about threats have focused primarily on understanding the risks from groups or individuals who set out to inflict harm using generative AI, such as through cyberattacks or by generating child sex abuse material. Generative AI is expected to amplify the speed and scale of these activities, and Tech Monitor reported this week that security professionals have highlighted the increased risk posed by AI-powered phishing attacks, which enable cybercriminals to generate more authentic-looking communications to lure in victims.

But the report also urges policymakers to plan for the unintentional risks posed by improper use of, and experimentation with, generative AI tools, and by excessive risk-taking as a result of over-trusting AI outputs. These risks could stem from the adoption of AI in critical national infrastructure or its supply chains, and from the use of AI in public services.

Private sector experimentation with AI could also lead to problems, with the fear of missing out on AI advances potentially clouding judgments about higher-risk use cases, the authors argue.

Generative AI might offer opportunities for the national security community, says Ardi Janjeva, research associate at CETaS at The Alan Turing Institute. But he believes it is “currently too unreliable and susceptible to errors to be trusted in the highest stakes contexts”.

Janjeva said: “Policymakers must change the way they think and operate to make sure that they are prepared for the full range of unintended harms that could arise from improper use of generative AI, as well as malicious uses.”

The research team consulted more than 50 experts across government, academia, civil society and leading private sector companies, most of whom said that unintended harms are not receiving adequate attention compared with the adversarial threats national security agencies are accustomed to facing.


The report analyses political disinformation and electoral interference, and raises particular concerns about the cumulative effect of different types of generative AI technology working in combination to spread misinformation at scale by creating realistic deepfake videos. Debunking a false AI-generated narrative in the hours or days preceding an election would be particularly challenging, the report warns.


It cites the example of an AI-generated video of a politician delivering a speech at a venue they never attended, which may be seen as more plausible if presented with an accompanying selection of audio and imagery, such as the politician taking questions from reporters and text-based journalistic articles covering the content of the supposed speech.

How to combat AI’s unintended consequences

The Alan Turing Institute says the CETaS report has been released to build on the momentum created by the UK’s AI Safety Summit, which saw tech and political leaders come together to discuss how artificial intelligence can be implemented without causing societal harm.

It makes policy recommendations for the new AI Safety Institute, announced prior to the summit, and other government departments and agencies which could help address both malicious and unintentional risks.

This includes guidance about evaluating AI systems, as well as the appropriate use of generative AI for intelligence analysis. The report also highlights that autonomous AI agents, a popular early use case for the technology, could accelerate both opportunities and risks in the security environment, and offers recommendations to ensure their safe and responsible use.

Professor Mark Girolami, chief scientist at the Alan Turing Institute, said: “Generative AI is developing and improving rapidly and while we are excited about the many benefits associated with the technology, we must exercise sensible caution about the risks it could pose, particularly where national security is concerned.

“With elections in the US and the UK on the horizon, it is vital that every effort is made to ensure this technology is not misused, whether intentionally or not.”

Read more: The UK is building a £225m AI supercomputer
