Elon Musk and experts sign letter to pause development of more powerful AI

Published Mar 29th, 2023 12:13PM EDT
[Image: OpenAI announced GPT-4 on March 14. Credit: OpenAI]

A growing number of high-profile technology entrepreneurs, CEOs, and AI experts aren’t too keen on the acceleration and large-scale deployment of artificial intelligence tools that have been going on lately.

In an open letter titled "Pause Giant AI Experiments," signed by AI experts and other high-profile names including Elon Musk and Apple co-founder Steve Wozniak, the group warns that AI poses "profound risks to society and humanity" and that, because of that risk, the development and deployment of AI "should be planned for and managed with commensurate care and resources."

The group argues that this is not happening and that, instead, labs are racing to beat each other to the next generation of AI without properly considering the consequences. The letter therefore asks all AI labs to "pause for at least 6 months the training of AI systems more powerful than GPT-4." The group says governments should step in if necessary to enforce the moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

The letter also says that governments around the world should work together in order to establish governance systems around AI, including new regulatory authorities, technology to properly distinguish AI from human creations, and more.

These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

The letter comes in response to the recent surge of AI technology, such as OpenAI's ChatGPT and GPT-4, which Microsoft has deployed across Bing and its 365 productivity suite. There are also unconfirmed rumors that OpenAI is nearing the end of testing GPT-5, which some claim could reach artificial general intelligence (AGI).

Joe Wituschek is a Tech News Contributor for BGR.

With more than 10 years of experience in tech, Joe covers the technology industry's breaking news, opinion pieces, and reviews.
