LinkedIn has stopped grabbing U.K. users’ data for AI


The U.K.’s data protection watchdog has confirmed that Microsoft-owned LinkedIn has stopped processing user data for AI model training for now.

Steven Almond, executive director of regulatory risk for the Information Commissioner’s Office, wrote in a statement on Friday: “We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”

Eagle-eyed privacy experts had already spotted a quiet edit LinkedIn made to its privacy policy after a backlash over grabbing people’s info to train AIs: it added the U.K. to the list of European regions where it does not offer an opt-out setting, because, it says, it is not processing local users’ data for this purpose.

“At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice,” LinkedIn general counsel Blake Lawit wrote in an updated company blog post originally published on September 18.

The professional social network had previously specified it was not processing information of users located in the European Union, EEA or Switzerland, where the bloc’s General Data Protection Regulation (GDPR) applies. However, U.K. data protection law is still based on the EU framework, so when it emerged that LinkedIn was not extending the same courtesy to U.K. users, privacy experts were quick to cry foul.

U.K. digital rights non-profit the Open Rights Group (ORG) channelled its outrage at LinkedIn’s action into a fresh complaint to the ICO about consentless data processing for AI. But it was also critical of the regulator for failing to stop yet another AI data heist.

In recent weeks, Meta, the owner of Facebook and Instagram, lifted an earlier pause on processing local users’ data for training its AIs and resumed harvesting U.K. users’ info by default. That means users with accounts linked to the U.K. must once again actively opt out if they don’t want Meta using their personal data to enrich its algorithms.

Despite the ICO previously raising concerns about Meta’s practices, the regulator has so far stood by and watched the ad tech giant resume this data harvesting.

In a statement put out on Wednesday, ORG’s legal and policy officer, Mariano delli Santi, warned about the imbalance of letting powerful platforms get away with doing what they like with people’s information so long as they bury an opt-out somewhere in settings. Instead, he argued, they should be required to obtain affirmative consent up front.

“The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI,” he wrote. “Opt-in consent isn’t only legally mandated, but a common-sense requirement.”

We’ve reached out to the ICO and Microsoft with questions and will update this report if we get a response.
