How AI will extend the scale and sophistication of cybercrime

2022-07-28

Artificial intelligence has been described as a ‘general purpose technology’. This means that, like electricity, computers and the internet before it, AI is expected to have applications in every corner of society. Unfortunately for organisations seeking to keep their IT secure, this includes cybercrime.

In 2020, a study by European police agency Europol and security provider Trend Micro identified how cybercriminals are already using AI to make their attacks more effective, as well as the many ways AI will power cybercrime in future.

“Cybercriminals have always been early adopters of the latest technology and AI is no different,” said Martin Roesler, head of forward-looking threat research at Trend Micro, when the report was published. “It is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works.”

Just as tech leaders need to understand how AI can help their organisations achieve their own aims, it is crucial to understand how AI will bolster the sophistication and scale of criminal cyberattacks, so they can begin to prepare against them.

AI offers cybercriminals a number of ways to make their social engineering attacks more effective. (Image by Urupong / iStock)

How AI is used for cybercrime today

AI is already being used by cybercriminals to improve the effectiveness of traditional cyberattacks. Many applications focus on bypassing the automated defences that secure IT systems.

One example, identified in the Europol report, is the use of AI to craft malicious emails that can bypass spam filters. In 2015, researchers discovered a system that used ‘generative grammar’ to create a large dataset of email texts. “These texts are then used to fuzz the antispam system and adapt to different filters in order to identify content that would no longer be detected by spam filters,” the report warns.
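The fuzzing idea described in the report can be illustrated with a toy example. The sketch below (hypothetical rules, not the actual system the researchers found) shows how a small generative grammar with interchangeable phrasings expands into many surface variants of the same message, each of which an attacker could test against a spam filter to find wording that slips through.

```python
import itertools

# A toy 'generative grammar': each slot holds interchangeable phrasings.
# These slots and phrases are invented for illustration.
GRAMMAR = {
    "greeting": ["Dear customer", "Hello", "Hi there"],
    "claim":    ["your account was locked", "we detected unusual activity"],
    "action":   ["verify your details", "confirm your identity"],
}

def generate_variants(grammar):
    """Expand every combination of slot phrasings into a full message."""
    slots = list(grammar.values())
    for combo in itertools.product(*slots):
        yield "{}, {} - please {}.".format(*combo)

variants = list(generate_variants(GRAMMAR))
print(len(variants))  # 3 greetings x 2 claims x 2 actions = 12 variants
```

Even this three-slot grammar yields a dozen distinct texts; a realistic grammar with more slots and synonyms produces thousands, which is what makes exhaustive filter-probing feasible.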

Researchers have also demonstrated malware that applies the same approach to antivirus software, employing an AI agent to find weak spots in the software’s malware detection algorithm.

AI can be used to support other hacking techniques, such as guessing passwords. Some tools use AI to analyse a large dataset of passwords recovered from public leaks and hacks on major websites and services. This reveals how people modify their passwords over time – such as adding numbers on the end or replacing ‘a’ with ‘@’.
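The mutation patterns mentioned above can be sketched in a few lines. Real tools learn these rules statistically from leaked datasets; the hand-picked substitutions and suffixes below are illustrative only, but they show how a single base password fans out into a candidate list for guessing.

```python
# Illustrative sketch of common password-mutation rules: 'leetspeak'
# substitutions and appended digits. Rules are hand-picked examples,
# not learned from data as real AI-assisted tools would do.
def mutate(password):
    candidates = {password}
    # Character substitutions such as 'a' -> '@'
    for old, new in [("a", "@"), ("o", "0"), ("e", "3"), ("s", "$")]:
        candidates |= {c.replace(old, new) for c in list(candidates)}
    # Appended digits and symbols
    candidates |= {c + suffix for c in list(candidates)
                   for suffix in ("1", "123", "2022", "!")}
    return candidates

guesses = mutate("password")
print("p@$$w0rd123" in guesses)  # True: a typical 'modified' password
```

Chaining just a handful of such rules multiplies each base word into dozens of plausible guesses, which is why password reuse with small modifications offers little real protection.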

Work is also underway to use machine learning to break CAPTCHAs, the challenges found on most websites to verify that a user is human; Europol discovered evidence of active development on criminal forums in 2020. It is not clear how advanced this work is but, given enough computing power, AI will eventually be able to break CAPTCHAs, Europol predicts.

AI and social engineering

Other uses of AI for cybercrime focus on social engineering, deceiving human users into clicking malicious links or sharing sensitive information.

First, cybercriminals are using AI to gather information on their targets. This includes identifying all the social media profiles of a given person, including by matching their user photos across platforms.

Once they have identified a target, cybercriminals are using AI to trick them more effectively. This includes creating fake images, audio and even video to make their targets think they are interacting with someone they trust.

One tool, identified by Europol, performs real-time voice cloning. With a five second voice recording, hackers can clone anyone’s voice and use it to gain access to services or deceive other people. In 2019, the chief executive of a UK-based energy company was tricked into paying £200,000 by scammers using an audio deep fake.

Even more brazen, cybercriminals are using video deep fakes – which make another person’s face appear over their own – in remote IT job interviews in order to get access to sensitive IT systems, the FBI warned last month.

In addition to these individual methods, cybercriminals are using AI to help automate and optimise their operations, says Bill Conner, CEO of cybersecurity provider SonicWall. Modern cybercriminal campaigns involve a cocktail of malware, ransomware-as-a-service delivered from the cloud, and AI-powered targeting.

These complex attacks require AI for testing, automation and quality assurance, Conner explains. “Without the AI it wouldn’t be possible at that scale.”

The future of AI-powered cybercrime

The use of AI by cybercriminals is expected to increase as the technology becomes more widely available. Experts predict that this will allow them to launch cyberattacks at far greater scale than is currently possible. For example, criminals will be able to use AI to analyse more information to identify targets and vulnerabilities, and attack more victims at once, Europol predicts.

They will also be able to generate more content with which to deceive people. Large language models, such as OpenAI’s GPT-3, which can be used to generate realistic text and other outputs, may have a number of cybercriminal applications. These could include mimicking an individual’s writing style or creating chatbots that victims mistake for real people.

AI-powered software development, which businesses are beginning to use, could also be employed by hackers. Europol warns that AI-based ‘no code’ tools, which convert natural language into code, could lead to a new generation of ‘script kiddies’ with low technical knowledge but the ideas and motivation for cybercrime.

Malware itself will become more intelligent as AI is embedded within it, Europol warns. Future malware could search documents on a machine and look for specific pieces of information, such as employee data or protected intellectual property.
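The document-scanning behaviour Europol describes can be understood from the defender’s side. The sketch below (illustrative patterns, written from a detection perspective) shows the kind of keyword and pattern scan such malware might run, and equally the kind of matching that data-loss-prevention tools need to anticipate.

```python
import re

# Illustrative patterns for 'sensitive' content; a real scanner, malicious
# or defensive, would use far richer pattern sets or a trained classifier.
PATTERNS = {
    "email":  re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "salary": re.compile(r"\bsalary\b", re.IGNORECASE),
}

def scan_text(text):
    """Return the names of sensitive patterns found in a document's text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan_text("Employee salary data: jane.doe@example.com"))
# ['email', 'salary']
```

Embedding even simple matching like this lets malware triage thousands of files locally and exfiltrate only the valuable ones, reducing the network traffic that defenders might otherwise notice.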

Ransomware attacks, too, are predicted to be enhanced with AI. Not only will AI help ransomware groups find new vulnerabilities and victims, but it will also help them avoid detection for longer, by ‘listening’ for the measures companies use to detect intrusions to their IT systems.

As the ability of AI to mimic human behaviour evolves, so too will its ability to break certain biometric security systems, such as those which identify a user based on the way they type. It could also spoof realistic user behaviour – such as being active during specific hours – so that stolen accounts aren’t flagged by behavioural security systems.
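A minimal sketch makes clear what such spoofing has to defeat. The rule below (hypothetical user and hours, chosen for illustration) is the sort of simple behavioural check that flags logins outside a user’s usual activity window; an attacker whose AI mimics realistic timing sidesteps exactly this kind of rule.

```python
from datetime import time

# Hypothetical per-user activity window; a real behavioural system would
# learn this profile from historical login data rather than hard-code it.
USUAL_HOURS = {"alice": (time(8, 0), time(18, 0))}

def is_suspicious(user, login_time):
    """Flag activity that falls outside the user's usual hours."""
    start, end = USUAL_HOURS[user]
    return not (start <= login_time <= end)

print(is_suspicious("alice", time(3, 30)))   # True: 3:30am is off-hours
print(is_suspicious("alice", time(10, 0)))   # False: within usual hours
```

Defenders therefore cannot rely on any single behavioural signal; combining timing with typing cadence, device fingerprints and location raises the bar for AI-driven mimicry.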

Lastly, AI will enable cybercriminals to make better use of compromised IoT devices, predicts Todd Wade, an interim CISO and author of BCS’ book on cybercrime. Already employed to power botnets, these devices will be all the more dangerous when coordinated by AI.

How to prepare for AI cybercrime

Protecting against AI-powered cybercrime will require responses at the individual, organisational and society-wide levels.

Employees will need to be trained to identify new threats such as deep fakes, says Wade. “People are used to attacks coming in a certain way,” he says. “They are not used to the one-off, maybe something that randomly appears on a Zoom call or WhatsApp message, and so are not prepared when it happens.”

In addition to the usual cybersecurity best practices, organisations will need to employ AI tools themselves to match the scale and sophistication of future threats. “You are going to need AI tools just to keep up with the attacks and if you don’t use these tools to combat this there is no way you’ll keep up,” says Wade.

But the way in which AI is developed and commercialised will also need to be managed to ensure it cannot be hijacked by cybercriminals. In its report, Europol called on governments to ensure that AI systems adhere to ‘security-by-design’ principles, and develop specific data protection frameworks for AI.

Today, many of the AI capabilities discussed above are too expensive or technically complex for the typical cybercriminal. But that will change as the technology develops. The time to prepare for widespread AI-powered cybercrime is now.

