Italy: ChatGPT violates European privacy rules
The Italian data protection authority has informed OpenAI that its AI chatbot ChatGPT violates European data protection rules, following a multi-month investigation. In a new statement, the authority set out preliminary findings that EU law has been breached, as it moves forward with the probe it opened last year.
The body, known as Garante, is one of the EU’s proactive bodies in assessing AI platform compliance with the bloc’s data privacy regime.
Last year, the authority blocked ChatGPT over alleged violations of EU privacy rules.
The authority allowed the chatbot back online after OpenAI addressed issues related to users’ right to refuse consent to the use of their personal data for training its algorithms.
The regulator noted at the time that it was continuing its investigations; it has since concluded that there are elements pointing to potential data privacy violations, without providing further details.
Microsoft-backed OpenAI has 30 days to present its defense arguments, Garante said, adding that its investigation takes into account work conducted by a European task force that includes national privacy watchdogs.
- Italy was the first country in Western Europe to limit ChatGPT after its rapid development caught the attention of lawmakers and regulators.
- Under the European Union’s General Data Protection Regulation, any company that violates the rules faces fines of up to 20 million euros or 4 percent of its global annual turnover, whichever is higher.
- Data protection authorities can issue orders requiring changes to how data is processed in order to put an end to confirmed breaches.
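The fine ceiling described above is a simple comparison: the cap is the greater of the flat 20-million-euro figure and 4 percent of turnover. A minimal sketch (the function name and the example turnover figure are illustrative, not drawn from the article):

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine (Article 83(5)):
    the greater of EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# For a hypothetical company with EUR 2 billion in annual turnover,
# the 4% figure (EUR 80 million) exceeds the flat cap:
print(max_gdpr_fine(2_000_000_000))  # 80000000.0

# For a smaller company (EUR 100 million turnover), the flat
# EUR 20 million cap applies instead:
print(max_gdpr_fine(100_000_000))  # 20000000.0
```

For large companies such as OpenAI's backers, the turnover-based figure is the binding one, which is why regulators cite the 4 percent threshold.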
As a result, OpenAI may be forced to change the way it operates or withdraw its service from European Union member states, as privacy authorities seek to impose changes that the company does not want.
In December, EU lawmakers and governments reached a provisional agreement on rules for AI systems such as ChatGPT, moving one step closer to setting the regulations governing the technology.