
AI regulation might prompt OpenAI to remove ChatGPT from Europe

Published May 25th, 2023 12:17PM EDT
ChatGPT photo illustration
Image: Rafael Henrique/SOPA Images/LightRocket via Getty Images


OpenAI CEO Sam Altman recently warned that he has no qualms about pulling ChatGPT out of Europe if legislation designed to regulate AI becomes law. The legislation in question is the EU's AI Act, which includes several provisions Altman argues are overly broad and overreaching.

“The current draft of the EU AI Act would be over-regulating,” Altman said in remarks picked up by Reuters. “But we have heard it’s going to get pulled back,” he added.

AI has been around for a long time, but now that powerful user-facing AI apps are all the rage — from ChatGPT to Midjourney — lawmakers believe regulatory safeguards are necessary. Notably, many revered figures in the tech industry have also expressed concern about AI's potential to cause mayhem.

Just recently, for example, former Google CEO Eric Schmidt said that unfettered access to powerful AI poses an “existential risk.”

“There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology,” Schmidt said.

Meanwhile, there have already been a few examples that illustrate the havoc AI can cause. You might recall the viral AI-generated photo of the Pope wearing a Balenciaga jacket from a few months ago. And just this week, an AI-generated image of a fire at the Pentagon went viral.

As for the safeguards the EU wants to implement, there would be an array of “design, information and environmental requirements” OpenAI would have to adhere to.

A press release on the matter reads in part:

Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.

Other provisions would require OpenAI to disclose the training methods it uses to make ChatGPT as powerful as it is. OpenAI, of course, isn’t happy with any of this.

“Either we’ll be able to solve those requirements or not,” Altman told TIME. “If we can comply, we will, and if we can’t, we’ll cease operating. We will try. But there are technical limits to what’s possible.”

Yoni Heisler Contributing Writer

Yoni Heisler has been writing about Apple and the tech industry at large for over 15 years. A lifelong Mac user and Apple expert, his writing has appeared in Edible Apple, Network World, MacLife, Macworld UK, and TUAW.

When not analyzing the latest happenings with Apple, Yoni enjoys catching Improv shows in Chicago, playing soccer, and cultivating new TV show addictions.
