I’m not afraid of a future in which AI leads us to a post-apocalyptic world where humans don’t survive as a species. But I do recognize the dangers of AI getting out of control. Anyone with the right resources could intentionally or accidentally develop the next-gen version of ChatGPT. Artificial general intelligence (AGI) would rival humanity and could find itself at odds with our species.
Before we get there, we have other reasons to worry about AI. First of all, there’s the privacy argument: currently, most people’s conversations with ChatGPT and its rivals are used to train those language models. Then there are worries about copyright. ChatGPT and the like have been trained on large amounts of data that might have questionable provenance.
Also, let’s not forget that AI can make mistakes and be manipulated. This can lead to distortions of truth during critical times, like next year’s presidential elections.
With all that in mind, it’s never been clearer that, out of all the tech products in the world, AI needs the strongest possible regulation. That’s already starting to happen, as we’ve just witnessed the first accord on AI development between more than a dozen countries. However, the European Union (EU) might deliver the first set of landmark rules for AI. Negotiations between the member states will start on Wednesday.
The EU has already delivered several meaningful laws that govern the tech sector. The General Data Protection Regulation (GDPR) brought stronger privacy rules to the bloc. Then, there’s the brand new Digital Markets Act (DMA). It will force the world’s biggest tech giants to open up their various platforms. The DMA should improve competition in the region and benefit consumers.
Let’s also remember that the EU compelled Apple to bring USB-C to the iPhone this year. The European bloc chose USB-C as the universal charging port for most battery-powered devices.
But those rules might have been much simpler to devise than regulations for AI products like ChatGPT. The EU is meeting on December 6th to discuss biometric surveillance and generative AI products like ChatGPT. However, Reuters points out that a deal might not be reached immediately.
Talks on the EU’s AI Act should start in the afternoon, local time. They’re likely to run into the early hours of Thursday. The report says the most likely outcome is “a provisional deal on principles but not crucial details.” After that, a final deal would have to be reached. Only then could the EU come up with legislation to govern AI in the region.
The pressure is on the EU to reach a deal before the end of the year so the AI legislation can move forward before next June’s EU parliamentary elections.
“The world is watching us: citizens, stakeholders, NGOs, and the private sector want us to agree on a meaningful piece of legislation regarding AI, including GPAI [general purpose AI],” Dutch minister for digitalization Alexandra van Huffelen told Reuters.
The report says that without a deal, the AI Act is likely to be shelved. The EU could lose the chance to set AI rules before other countries or regions of the world develop similar regulations.
There is a risk that a deal won’t be reached. The various EU member states have conflicting demands on AI regulation. EU ambassadors and lawmakers already met last week, but differences remain to be resolved.
Reuters says the two biggest ones concern the use of AI in biometric surveillance and foundation models for generative AI programs like ChatGPT.
EU lawmakers want to ban the use of AI in biometric surveillance. However, European governments want exceptions for national security, defense, and military purposes.
Separately, France, Germany, and Italy made a late proposal on products like ChatGPT. They want developers of generative AI models, like OpenAI, to self-regulate.
From the sound of it, Wednesday’s meeting will be just a start. Hopefully, the EU can agree at least on the general principles of the AI Act so it can move forward.