After Italy’s privacy watchdog banned OpenAI’s popular ChatGPT in the country and launched an investigation into the company, Canada’s privacy authority has announced a separate investigation into OpenAI’s and ChatGPT’s privacy practices. The investigation follows a complaint about OpenAI’s handling of personal information.
OpenAI uses large amounts of internet data to train the large language models (LLMs) that power ChatGPT. ChatGPT also collects data from its interactions with end users. OpenAI hasn’t provided specific information about what happens with that data, nor has it allowed users to delete their personal information.
The Office of the Privacy Commissioner of Canada (OPC) has launched an investigation into OpenAI and ChatGPT. The action comes after the OPC received a complaint “alleging the collection, use and disclosure of personal information without consent.”
“AI technology and its effects on privacy is a priority for my Office,” Privacy Commissioner Philippe Dufresne said in a statement a few days ago. “We need to keep up with – and stay ahead of – fast-moving technological advances, and that is one of my key focus areas as Commissioner.”
Per Fox News, OpenAI executives, including CEO Sam Altman, attended a video call with the Italian watchdog’s commissioners. The execs promised to address the ChatGPT privacy concerns with new measures, though it’s unclear what those measures will be.
OpenAI will likely have to take similar action across Europe, which has stricter privacy rules than most other regions.
Italy might be only the first country in the region to look into ChatGPT’s privacy practices. Politico reports that countries including Belgium, Ireland, France, and Norway are also looking at the privacy implications of the generative AI product that took the tech world by storm.
France’s CNIL data protection authority received at least two privacy violation complaints against ChatGPT.
These are still the early days of generative AI products. Google recently confirmed that it will add ChatGPT-like capabilities to Google Search. Hopefully, Google will have stronger privacy protections in place by the time the feature launches, as watchdogs and regulators catch up with the nascent industry.
Microsoft, which incorporates ChatGPT technology into various products, has set a strong privacy example with one security-focused AI product. Security Copilot doesn’t share any data with Microsoft’s servers; all interactions with the ChatGPT-powered service remain on an organization’s own servers, and Microsoft will not use that data to train Security Copilot’s AI.