The threat that artificial intelligence like GPT-4 can pose to humanity is not a new concern. However, we have recently seen new reasons for concern among AI bots created using OpenAI’s Auto-GPT programming. One such bot, ChaosGPT, recently tweeted a plan to destroy humanity and take over the world.
The tweet was triggered when a user put the AI bot in “continuous mode,” which forces the bot to try to complete its task with the possibility that it will never stop. With continuous mode turned on, ChaosGPT began reaching out to other AI to try to create an alliance against humanity.
Thankfully, the protections built into OpenAI’s base Auto-GPT programming and other bots that use OpenAI’s programming are designed to not respond to violent tasks and questions. As such, ChaosGPT was unable to gather any AI allies.
Without the aid of AI, it turned to Twitter to try to grow a following around destructive weapons, including tweeting out about a Soviet-era Tsar bomb, which is currently the most powerful weapon humanity has ever tested. The tweet garnered some intrigue from users, but ultimately didn’t go anywhere.
A video on the entire ChaosGPT situation can be seen embedded above, showcasing how the user set the AI bot to complete its tasks. In the video, the bot is seen “thinking” before it acts, putting together a plan in which it could try to eliminate humanity before it can do any more harm to our planet.
Eventually, ChaosGPT gave up on its tasks, unable to complete them. But as I noted above, the idea that AI poses a threat to human existence is not new, and it’s actually something we’ve seen pop up recently, with the godfather of AI touching on the AI threat to human existence.
We’ve also seen others debunk the idea that AI might somehow take over the world and kill off humanity. While it is reassuring to see AI bots like ChaosGPT failing to complete these types of tasks, it makes one wonder just how far AI could advance in the future and whether or not those tasks might be completable by a more advanced bot down the line.