OpenAI released ChatGPT last November without worrying too much about user privacy, copyright, or accuracy implications. But we’re starting to see more users and regulators realize that generative AI tech needs oversight on all these matters. The latest move comes from a large group of content creators from Germany who are worried about ChatGPT’s potential copyright infringement.
More than 140,000 authors and performers urged the European Union on Wednesday to beef up draft artificial intelligence (AI) rules and include stronger copyright protections.
“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs,” said the letter Reuters saw. “Generative AI needs to be at the centre of any meaningful AI market regulation.”
The report says letter signatories include the creative-sector trade unions Verdi and DGB, as well as associations for photographers, designers, journalists, and illustrators.
The European Commission proposed AI rules last year and is expected to finalize the details in the coming months. But the German group wants the EU to strengthen the regulations to cover generative AI across the entire product cycle.
The group also wants providers of technology like ChatGPT to be liable for the content the chatbots deliver. That includes content that might infringe on personal rights and copyrights. It also covers generative content that might lead to misinformation and discrimination.
Finally, the letter asks for regulations that would prevent companies that provide ChatGPT-like platforms, such as Microsoft, Google, Amazon, and Meta, from operating platforms that distribute digital content.
The German letter isn’t the first to raise issues with OpenAI’s ChatGPT. Italy has banned ChatGPT over privacy matters, and Canada is conducting a similar privacy-based investigation. Furthermore, a mayor in Australia has considered a defamation suit against OpenAI. Separately, the CEO of News Corp Australia asked for creators of ChatGPT-like platforms to pay for the news content they use to train their chatbots.
It looks like it’s only a matter of time before companies like OpenAI and Google have to deal with AI regulations covering user privacy, copyright, and misinformation. That might hinder the training of smarter AI models, at least initially. And AI access might become more expensive for the end user. But it’s abundantly clear that we can’t have AI without regulation.