Two days after an open letter called for a moratorium on the development of more powerful AI models like ChatGPT so that regulators could catch up, Italy's data protection authority has just published a reminder that some countries already have laws that apply to AI: it has ordered OpenAI to stop processing Italians' data with immediate effect.

The Italian DPA said that it is concerned that the maker of ChatGPT is in violation of the European Union's General Data Protection Regulation (GDPR).
Specifically, it said it has issued the order to block ChatGPT over concerns that OpenAI processed people's data unlawfully and that there is no system in place to prevent minors from accessing the technology.
The San Francisco-based company has 20 days to respond to the order or face penalties. Fines for breaches of the EU's data protection regime can be up to 4% of annual global turnover or €20 million, whichever is greater.
It is worth noting that because OpenAI does not have a legal entity established in the EU, any data protection authority in the bloc is empowered to intervene under the GDPR if it sees risks to local users. So now that Italy has acted, other countries may follow.
GDPR applies whenever EU users' personal data is processed. And it's clear that OpenAI's large language model handles this kind of information, since it can, for example, generate biographies of named people on demand. Although OpenAI declined to provide details of the training data used for its latest model, GPT-4, it has revealed that previous models were trained on data scraped from the Internet, including forums such as Reddit. So if you're active somewhere online, chances are the bot knows your name.
