Meta has confirmed that it will pause plans to start training its AI systems using data from its users in the European Union (EU) and U.K.
The move follows pushback from the Irish Data Protection Commission (DPC), Meta’s lead regulator in the EU, which is acting on behalf of around a dozen data protection authorities (DPAs) across the bloc. The U.K.’s Information Commissioner’s Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised.
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said in a statement today. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe's stringent GDPR regulations have created obstacles for Meta, and for other companies, looking to improve their AI systems with user-generated training material.
However, the company started notifying users of an upcoming change to its privacy policy last month, one that it said would give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do this to reflect "the diverse languages, geography and cultural references of the people in Europe."
These changes were due to come into effect on June 26, 2024 — 12 days from now. But the plans spurred not-for-profit privacy activist organization NOYB ("none of your business") to file 11 complaints with constituent EU countries, arguing that Meta is contravening various facets of GDPR. One of those relates to the issue of opt-in versus opt-out: where personal data processing does take place, users should be asked for their permission first, rather than being required to take action to refuse.
Meta, for its part, was relying on a GDPR provision called "legitimate interest" to contend that its actions are compliant with the regulations. This isn't the first time Meta has used this legal basis in its defence, having previously done so to justify processing European users' data for targeted advertising.
It always seemed likely that regulators would at least put a stay of execution on Meta's planned changes, particularly given how difficult the company had made it for users to "opt out" of having their data used. The company says that it has sent out more than 2 billion notifications informing users of the upcoming changes, but unlike other important public messages that are plastered to the top of users' feeds, such as prompts to go out and vote, these notifications appear alongside users' standard notifications: friends' birthdays, photo tag alerts, group announcements, and more. So if someone didn't regularly check their notifications, it was all too easy to miss this one.
And those who did see the notification wouldn't automatically know that there was a way to object or opt out; it simply invited users to click through to find out how Meta would use their information, with nothing to suggest that an option to object existed.
In an updated blog post today, Meta's global engagement director for privacy policy, Stefano Fratta, said the company was "disappointed" by the request it received from the DPC.
“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Fratta wrote. “We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.”