
ChatGPT plugins are handy, but they could give bad actors unbridled access to your accounts

Published Mar 16th, 2024 3:10PM EDT
In this photo illustration, the welcome screen for the OpenAI "ChatGPT" app is displayed on a laptop screen.
Image: Leon Neal/Getty Images


One of the best things about ChatGPT is the huge library of third-party plugins that can make the AI chatbot do far more than OpenAI originally designed it for, from plugins that make running computations easier to those that let you pull information from your outside accounts. Now, however, a cybersecurity firm has issued a stark warning about trusting plugins, pointing to security flaws that could give bad actors access to your other accounts.

The new research, which was conducted by Salt Labs, warns that security flaws found directly within ChatGPT, as well as within the AI's ecosystem, could give attackers the opportunity to install malicious plugins without your consent. This would effectively allow bad actors to hijack your account and gain access to third-party websites like GitHub.

The good news here, of course, is that OpenAI is already winding down the use of ChatGPT plugins, writing in a post that it will end the installation of new plugins on March 19, 2024. Any currently in-use plugins will no longer be available after April 9, 2024. While it might seem counterintuitive given how useful plugins can be, OpenAI has used the information gleaned from plugins to create GPTs, which let you custom-tailor the AI to specific use cases.

In this photo illustration, the ChatGPT (OpenAI) logo is displayed on a smartphone screen. Image source: Rafael Henrique/SOPA Images/LightRocket via Getty Images

It’s a good thing, too, because Salt Labs says that one of the biggest flaws it discovered let bad actors abuse the OAuth workflow and trick users into installing an arbitrary plugin. All of this was possible because ChatGPT doesn’t validate that the user actually started the plugin installation, giving bad actors a chance to swoop in and intercept any data shared by the victim.
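To make the missing check concrete, here is a minimal sketch of the kind of safeguard Salt Labs is describing, assuming a simplified OAuth-style approval flow. The function names and in-memory store are hypothetical illustrations, not OpenAI's or Salt Labs' actual code: the idea is that an installation should only complete if it can be tied back, via an unguessable state token, to a request the user actually initiated.

```python
import hmac
import secrets

# Hypothetical, simplified model of an OAuth-style plugin approval flow.
# Skipping the state check below is the class of flaw being described:
# an attacker-supplied callback could complete an installation the
# user never began.

_pending_installs = {}  # state token -> (user, plugin) the user requested


def begin_install(user_id: str, plugin: str) -> str:
    """User starts an install; mint an unguessable, single-use state token."""
    state = secrets.token_urlsafe(32)
    _pending_installs[state] = (user_id, plugin)
    return state


def finish_install(user_id: str, plugin: str, state: str) -> bool:
    """OAuth-style callback: succeed only if this user really initiated it."""
    pending = _pending_installs.pop(state, None)  # pop -> token is single-use
    if pending is None:
        return False  # unknown or replayed state token: reject
    pending_user, pending_plugin = pending
    # Constant-time comparison avoids leaking token/user info via timing.
    return hmac.compare_digest(pending_user, user_id) and pending_plugin == plugin
```

A callback arriving with a forged or replayed state token fails, which is exactly the validation the researchers found absent from the plugin installation flow.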

Beyond that exploit, though, Salt Labs also unearthed issues with PluginLab, stating that bad actors could weaponize those issues to create zero-click account takeover attacks utilizing ChatGPT plugins as a launching point. This would allow those threat actors to gain access to connected third-party websites, like GitHub.

AI language models like those powering ChatGPT can be exceptionally helpful if you use them correctly. However, the exploits found in ChatGPT’s plugin options showcase just how important it remains to stay vigilant about your online protection and to always be aware of what you are installing when you’re working with these systems.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience in extensively researched product comparisons, headphones, and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.