ChatGPT-built infostealer and other hacking tools found in the wild

2023-01-12

OpenAI’s natural language chatbot ChatGPT is capable of writing code, producing a report on a niche topic and even crafting lyrics for a song. Its success at essay writing has prompted schools to ban its use, and Microsoft is reportedly incorporating it into Bing, but security researchers warn it is being put to much more nefarious uses, and the problem is likely to get worse.

ChatGPT was launched in November 2022. Criminals are starting to deploy it, security researchers say. (Photo by Ascannio/Shutterstock)

Experts from Check Point Research found multiple instances of cybercriminals celebrating their use of ChatGPT in the development of malicious tools, warning that it is allowing hackers to scale existing projects and new criminals to learn the skills more quickly than previously possible.

“I assume that with time, more sophisticated (and conservative) threat actors will also start trying and using ChatGPT to improve their tools and modus operandi, or even just to reduce the required monetary investment,” Sergey Shykevich, threat intelligence group manager at Check Point told Tech Monitor.

ChatGPT was launched at the end of November 2022 and in less than two months has become an essential part of the workflow for software developers, researchers and other professionals. In its first week it went from zero to millions of regular users.


Like all new technology, given enough time and incentive someone will find a way to exploit it, and Check Point Research says that is exactly what it is seeing. In underground hacking forums, criminals are using the chatbot to create infostealers and encryption tools, and to facilitate fraud.

The researchers found three recent cases: one recreating known infostealer malware strains, another building a multi-layer encryption tool, and a third writing dark web marketplace scripts for trading illegal goods – all with code written by ChatGPT.

Watermarking and moderation

Last month researchers from the security company put ChatGPT to the test to see if it would produce code that could be put to malicious use, finding it would write executable code and macros to run in Excel. This new report highlights “in the wild” uses of ChatGPT-derived malicious activity.

Tech Monitor asked OpenAI to comment on the findings and how it is working to address malicious use cases, but there was no response at the time of publication. On its page promoting ChatGPT, OpenAI writes: “While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now.”


Shykevich says OpenAI and other developers of large language model AI systems need to improve their engines to identify potentially malicious requests and implement authentication and authorisation tools for anyone wanting to use the OpenAI engine. “Even something similar to what online financial institutions and payment systems currently use,” he says.


OpenAI is already working on a watermarking tool that would make it easier for security professionals, authorities and professors to identify whether text was written by ChatGPT, although it isn’t clear whether that would work for code.

ChatGPT: infostealer and ‘training’

Check Point says it analysed several major underground hacking communities for instances referencing ChatGPT or other forms of artificial intelligence-generated coding tools, finding multiple instances of cybercriminals using the OpenAI tool. “As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all.”

While the tools being built today are “pretty basic”, it is only a matter of time before more sophisticated hackers turn to AI-based tools to scale up their operations, including by creating niche and specific attack vectors that would be impractical to code manually.

One example of these ‘simple tools’ is an infostealer that appeared on a thread titled “ChatGPT – Benefits of Malware” on a popular hacking forum. In the post, the author revealed they had used ChatGPT to recreate malware strains described in other publications by feeding the AI tool the descriptions and write-ups. They then shared Python-based stealer code that searches for common file types, copies them to a random folder and uploads them to a hardcoded FTP server.

“This is indeed a basic stealer which searches for 12 common file types (such as Microsoft Office documents, PDFs, and images) across the system. If any files of interest are found, the malware copies the files to a temporary directory, zips them, and sends them over the web. It is worth noting that the actor didn’t bother encrypting or sending the files securely, so the files might end up in the hands of 3rd parties as well,” the researchers wrote.
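The researchers’ point is how little code this pattern actually requires. The following is an illustrative, benign sketch of the file-collection steps they describe – search for common file types, copy matches to a temporary directory, zip them – using only Python’s standard library. The extension list and function names are hypothetical, and the exfiltration step (the unencrypted upload to a hardcoded FTP server the researchers noted) is deliberately omitted.

```python
# Illustrative sketch of the file-collection pattern described in the report:
# find files of common types, stage copies in a temp directory and zip them.
# Names and the extension list are hypothetical; the upload step is omitted.
import shutil
import tempfile
import zipfile
from pathlib import Path

# A subset of the "12 common file types" mentioned by the researchers
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png"}

def collect_and_zip(root: str) -> str:
    """Copy matching files under `root` into a temp dir; return the zip path."""
    staging = Path(tempfile.mkdtemp())
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
            shutil.copy2(path, staging / path.name)
    archive = staging / "collected.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in staging.iterdir():
            if f != archive:
                zf.write(f, arcname=f.name)
    return str(archive)
```

That the whole routine fits in a handful of standard-library calls is precisely why Check Point classes the shared stealer as “basic”.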

The same hacker shared other ChatGPT projects, including a Java snippet that downloads a common SSH client and runs it using PowerShell. Check Point experts say the individual is likely technically oriented and was showing less capable cybercriminals how to use ChatGPT for their own immediate gain.

Hackers with limited technical skills flock to ChatGPT

Another post, found shortly before Christmas, included a Python script that its creator said was the first they had ever written. The cybercriminal admitted it was made with the help of OpenAI to broaden the scope of the attack. The script performs cryptographic operations, made up of a “hodgepodge of different signing, encryption and decryption functions”.

Researchers say the script seems benign, but it implements a range of functions, including generating a cryptographic key and encrypting files on the system, and could be adapted to “encrypt someone’s machine completely without any user interaction” for ransomware purposes.
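The actual functions in the post are not public, but two of the primitives such a “hodgepodge” script would combine – key generation and message signing – can be sketched with Python’s standard library alone. This is illustrative only; all names here are assumptions, and the file-encryption piece (the part researchers warn could be turned into ransomware) is omitted, since it would require a cipher from a third-party library.

```python
# Stdlib-only sketch of key generation and HMAC signing, two primitives a
# script like the one described might include. Illustrative; names are
# hypothetical and no file-encryption routine is provided.
import hashlib
import hmac
import secrets

def generate_key(length: int = 32) -> bytes:
    """Generate a random key suitable for HMAC use."""
    return secrets.token_bytes(length)

def sign(key: bytes, data: bytes) -> str:
    """Return a hex-encoded HMAC-SHA256 signature over `data`."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, signature: str) -> bool:
    """Constant-time check of a signature produced by `sign`."""
    return hmac.compare_digest(sign(key, data), signature)
```

Primitives like these are benign on their own; the researchers’ concern is how easily they can be recombined – with AI assistance and no development skill – into something destructive.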

“While it seems that [the user] is not a developer and has limited technical skills, he is a very active and reputable member of the underground community. [The user] is engaged in a variety of illicit activities that include selling access to compromised companies and stolen databases. A notable stolen database [the user] shared recently was allegedly the leaked InfraGard database.”

The number of these types of posts seems to be growing, researchers discovered, with hackers also discussing other ways to use AI-based tools to make money quickly, including generating random art with DALL-E 2 and selling it on Etsy, or generating an e-book with ChatGPT and selling it online.

“Cybercriminals are finding ChatGPT attractive,” said Shykevich. “In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.”

Read more: OpenAI’s ChatGPT explains how it can help CIOs do their jobs

Topics in this article: AI, ChatGPT, Cybersecurity
