ChatGPT first launched in late 2022, becoming a sensation overnight. A few months later, it received a massive GPT-4 upgrade, and it’ll soon be able to browse the live web. We’re witnessing the dawn of artificial intelligence (AI), like in the movies. ChatGPT isn’t at the level of artificial general intelligence (AGI) yet, but it sure looks like we’re on our way. And as is the case with any new technology that can be abused for criminal purposes, ChatGPT can and will be used by criminals to take advantage of unsuspecting internet users.
Europol has already come out with a report on the matter, highlighting three big ways criminals can use ChatGPT against you.
ChatGPT and similar generative AI products can create all sorts of content at speed when prompted. It might be a cover letter, an essay, or code for an app. ChatGPT can churn out all that and more according to your specifications.
The better your instruction prompts are, the better the result will be. And if you’re not happy with ChatGPT’s drafts, you can always provide feedback so the chatbot can deliver better versions.
Rinse and repeat, and ChatGPT can help out with various chores. It’s no wonder Google and Microsoft are putting so much effort into building AI features into productivity apps. Or that people who can hardly code use ChatGPT to assist them.
That’s where Europol’s report comes in. The European Union’s police force warned us that malicious individuals can use ChatGPT to assist with various criminal activities.
Per Reuters, Europol highlighted three possible criminal uses of ChatGPT.
“As the capabilities of LLMs (large language models) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook,” Europol said. “ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes.”
Most people have encountered phishing content at some point. And many people can tell when they’re looking at a phishing email. The almost official-looking email you get from your banking institution or even Netflix has visible inconsistencies. It might be the way the email reads. Or the punctuation errors that give it away long before you check the sender’s identity.
ChatGPT can fix all that by churning out copy that sounds just like a regular message from one of those companies. Complete with correct punctuation. And since GPT-4 comes with strong multilingual support, hackers don’t have to speak English to target English-speaking users with malicious content.
Another malicious ChatGPT use concerns the spread of propaganda and disinformation. ChatGPT “allows users to generate and spread messages reflecting a specific narrative with relatively little effort,” Europol said. All one would have to do is feed a set of instructions to ChatGPT and have it create text that would serve such malicious purposes.
Those looking to create manipulative content wouldn’t have much trouble working around the safeguards built into ChatGPT. With the proper instructions, the AI would comply.
Finally, Europol highlights another way criminals can benefit from ChatGPT’s extensive abilities. They could ask the AI to produce malicious code. Again, the bot would probably deliver the expected results. Even though OpenAI has created safeguards to prevent such abuse, one would only have to get creative with the set of instructions to get ChatGPT to create a malicious app.
The good news in all of this is that OpenAI is aware there’s scope for ChatGPT abuse. That’s why ChatGPT isn’t fully connected to the internet. And the plugins that will serve as its eyes and ears will have to be vetted.
As for criminals looking to take advantage of ChatGPT’s powers, there’s no doubt they’ll try to use AI for nefarious activities. Then again, Europol and other law enforcement agencies can devise their own uses for ChatGPT-like bots to help them catch those criminals.