The Italian data protection authority on Friday issued an immediate order for OpenAI to halt local data processing. The order stems from mounting concerns that the company is breaching the European Union’s General Data Protection Regulation (GDPR) through the way it handles data and a lack of controls in place for minors.

Though Italy may be the first country to pull the trigger on free usage of ChatGPT, it’s clear that concerns around AI have been mounting rapidly. Italy’s ban comes days after thousands of AI experts called for an immediate pause on such technology until policymakers have time to bring regulation in line with innovation.

Yet the ban highlights that such AI regulations do already exist, at least in some parts of the world. Italy’s lawmakers already had the legal authority to place such a ban on OpenAI thanks to GDPR, which protects citizens’ personal data. But what does this ban mean for the future of AI?

How legal is AI under GDPR?

Europe’s GDPR is probably among the most stringent data protection regimes worldwide, placing personal data protection ahead of commercial gain. Prior to the announcement of Italy’s ChatGPT ban, European lawmakers were already trying to unpack the many sticky questions arising alongside the growth of advanced AI technology.

For one, the global nature of digital personal data makes it increasingly hard to track and manage.

For example, OpenAI previously disclosed that ChatGPT’s algorithms were trained on data scraped from the internet, including open forums like Reddit. The chatbot is also known to produce false information about directly named individuals. Aside from the obvious concerns around misinformation, this makes companies like OpenAI vulnerable to a host of problems related to Europeans’ personal data rights.

It also opens up a can of worms for AI technology more broadly under GDPR guidelines. The ban on OpenAI suggests that machine learning algorithms in general may fall foul of GDPR by default, since the technology relies on immense amounts of data that are likely to include personal information at some point.

Not just an EU concern 

The open letter calling for a moratorium on the development of advanced AI included high-profile names who have been prominent advocates for technical progress and innovation. It coincided with TikTok’s appearance before the US Congress, which stemmed from similar data usage concerns.

Europe and its GDPR legislation may currently be ahead of other regions on data privacy regulation, but the events of recent weeks suggest that the progress of digital technologies and their reliance on our data require closer inspection.

Though how data is used by technology, commercially or otherwise, is a sticky question that can’t be answered overnight, Politico did throw this hot potato over to ChatGPT itself with regard to GDPR specifically.

The response leaves us with plenty of food for thought: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content. The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms.”