Click to Skip Ad
Closing in...

This AI malware worm is capable of turning ChatGPT against you

Published Mar 8th, 2024 5:34PM EST
AI malware is already a reality.
Image: Moor Studio/Getty Images

If you buy through a BGR link, we may earn an affiliate commission, helping support our expert product labs.

As worrisome as it might be that generative AI models such as ChatGPT and Gemini might one day become sentient or take our jobs, there are far more pressing concerns. For instance, three security researchers from the US and Israel recently created a malware worm which specifically targets generative AI services in order to perform malicious activities such as extracting private data, spreading propaganda, or performing phishing attacks.

The good news is that the researchers developed this worm — which they called Morris II after the 1988 Morris worm — “as a whistleblower to the possibility of creating GenAI worms in order to prevent their appearance.” In other words, you’re not in danger of being attacked by Morris II. The goal here is to warn tech companies of potential threats.

That said, the AI malware this team developed is still rather terrifying.

You can read more about the study in this paper published by the researchers, but the gist here is that an attacker can use a similar computer worm to target generative AI services by inserting adversarial self-replicating prompts into inputs that the models process and replicate as output, at which point they can be used to engage in malicious activity.

In the study, the researchers demonstrated the application of their malware by targeting AI-powered email assistants. In one case, they were able to weaponize an image attachment in an email to spam end users. In another, they used a text in an email to “poison” the database of an email app client, jailbreak ChatGPT and Gemini, and exfiltrate sensitive data.

“This work is not intended to argue against the development, deployment, and integration of GenAI capabilities in the wild. Nor is it intended to create needed panic regarding a threat that will doubt the adoption of GenAI,” the researchers explain in their study. “The objective of this paper is to present a threat that should be taken into account when designing GenAI ecosystems and its risk should be assessed concerning the specific deployment of a GenAI ecosystem (the usecase, the outcomes, the practicality, etc.).”

If you want to learn more about the AI malware worm, watch the video below:

Jacob Siegal
Jacob Siegal Associate Editor

Jacob Siegal is Associate Editor at BGR, having joined the news team in 2013. He has over a decade of professional writing and editing experience, and helps to lead our technology and entertainment product launch and movie release coverage.