
AI could pose a risk on the same scale as nuclear war, UK prime minister warns

Published Nov 2nd, 2023 9:51PM EDT
Image: phonlamaiphoto / Adobe


If you’ve paid any attention to the news over the past several months, it’s likely you’ve heard about the dangers that AI poses to humanity. Some believe that AI could overthrow humanity if given the chance, while others believe that’s all just a bunch of hogwash. Now, during a global AI safety summit, the UK prime minister has called on national leaders to step up their game and protect humanity from the dangers of AI.

The prime minister, Rishi Sunak, says that the risk AI poses is on the same scale as that of a pandemic or nuclear war, The Guardian reports. Sunak says that he is concerned about the risk that AI poses to the public as more and more advanced AI models are released. We’ve seen similar warnings from the godfather of AI that AI could take over if humans aren’t careful.

No matter where you fall on that spectrum, world leaders are scurrying to find some way to get their hands on the steering wheel of AI advancements. And that’s where this new global AI safety summit comes in. It’s at this summit that Sunak hopes to convince leaders from the United States and other countries to work together on making sure public AI tools are safe before they are released, thereby limiting the danger AI poses to the world.

spot robot
National leaders are calling for some kind of safety system to ensure AI risks are kept in check. Image source: Boston Dynamics

What exactly that entails is unclear. However, it is impossible to ignore the many problems that AI models like ChatGPT present. Not only do they tend to make things up – something AI researchers call “hallucinating” – but they have also shown an exceptional ability to get around barriers like paywalls in the past.

Sure, OpenAI and others have taken time to fix those issues, but they still exist in some form or another across the board. So, how do world leaders plan to ensure AI is safe and not plotting to overthrow humanity? That’s the curveball. Nobody has a significant game plan in place just yet.

Sunak will also speak alongside Elon Musk sometime this week about the ongoing risks and dangers of AI, which is interesting, considering Musk’s plans to merge AI and humanity into one entity by drilling holes in our skulls and installing brain implants. Whether the other nations will follow Sunak and come up with some safety measures is still unclear. But at least someone is trying, I guess.

Josh Hawkins has been writing for over a decade, covering science, gaming, and tech culture. He is also a top-rated product reviewer with experience in extensively researched product comparisons, headphones, and gaming devices.

Whenever he isn’t busy writing about tech or gadgets, he can usually be found enjoying a new world in a video game, or tinkering with something on his computer.
