A few days ago, Europol warned that ChatGPT could help criminals improve how they target people online, citing malware creation as one of its examples. OpenAI's generative AI tool does have protections in place that will prevent it from helping you create malicious code if you ask for it bluntly.
But a security researcher bypassed those protections by doing what criminals would no doubt do: he used clear, simple prompts to have ChatGPT write the malware function by function, then assembled the code snippets into a piece of data-stealing malware that can go undetected on PCs. It's the kind of zero-day attack that nation-states deploy in highly sophisticated campaigns, and a piece of malware that would ordinarily take a team of hackers several weeks to devise.
The ChatGPT malware that Forcepoint researcher Aaron Mulgrew created is remarkable. The software lands on a computer via a screen saver app, and the file auto-executes after a brief pause to evade certain detection techniques, since automated analysis tools typically observe a sample for only a short window.
The malware then searches the target machine for images, as well as PDF and Word documents it can steal. It breaks the documents into smaller chunks and hides the data inside those images via steganography. Finally, the images carrying the stolen fragments are uploaded to a Google Drive folder, a destination that also helps avoid detection, since traffic to Google's servers rarely looks suspicious.
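Forcepoint's post doesn't publish the generated code, but the underlying trick, least-significant-bit (LSB) steganography, is well documented. Purely as an illustration of the concept, here is a minimal sketch in Python using the Pillow imaging library; the 4-byte length header, function names, and chunking approach are assumptions for the example, not Mulgrew's implementation.

```python
# Minimal LSB steganography sketch (illustrative only; requires Pillow).
# Embeds a byte string into the least-significant bits of an RGB image,
# prefixed with a 4-byte length header so it can be extracted later.
from PIL import Image

def embed(image_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(image_path).convert("RGB")
    pixels = list(img.getdata())
    data = len(payload).to_bytes(4, "big") + payload
    # One bit per color channel, most significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels) * 3:
        raise ValueError("payload too large for this image")
    flat = [c for px in pixels for c in px]   # flatten R, G, B channels
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit        # overwrite the channel's LSB
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")                 # lossless format required

def extract(image_path: str) -> bytes:
    flat = [c for px in Image.open(image_path).convert("RGB").getdata() for c in px]
    def read(n: int, bit_offset: int) -> bytes:
        return bytes(
            sum((flat[bit_offset + b * 8 + i] & 1) << (7 - i) for i in range(8))
            for b in range(n)
        )
    length = int.from_bytes(read(4, 0), "big")
    return read(length, 32)                   # payload starts after 32 header bits
```

A real exfiltration tool would split each document across many carrier images so that every image remains visually indistinguishable from the original; this sketch handles a single payload and skips the encryption and error handling a production implementation would need.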
The researcher needed only a few hours of work and wrote none of the code himself. The results are all the more striking considering that Mulgrew used equally simple prompts to iterate on the initial versions of the malware until it evaded detection.
A VirusTotal scan of the initial version of the ChatGPT malware showed only five of 69 security products detecting the attack. Mulgrew eliminated those detections entirely in a subsequent version. And the final, "commercial" version, the one that actually worked from infiltration to exfiltration, was flagged by just three antivirus products.
“We have our Zero Day,” Mulgrew said. “Simply using ChatGPT prompts, and without writing any code, we were able to produce a very advanced attack in only a few hours. The equivalent time taken without an AI based Chatbot, I would estimate could take a team of 5 – 10 malware developers a few weeks, especially to evade all detection based vendors.”
“This kind of end to end very advanced attack has previously been reserved for nation state attackers using many resources to develop each part of the overall malware,” the researcher concluded. “And yet despite this, a self-confessed novice has been able to create the equivalent malware in only a few hours with the help of ChatGPT. This is a concerning development, where the current toolset could be embarrassed by the wealth of malware we could see emerge as a result of ChatGPT.”
The entire blog post detailing this highly advanced ChatGPT malware is worth a read. You can check it out at this link, complete with tips on how to avoid malware attacks, the kind of tips ChatGPT itself can easily produce.
As for the malware the researcher produced, don't expect it to see the light of day. But malicious hackers might already be developing similar attacks using OpenAI's generative AI.
On the other hand, Microsoft is already using OpenAI's technology to enhance its security products and improve the detection of malware attacks. The best way to catch AI-made malware might be to use AI in your defenses.