TikTok to open in-app Election Centers for EU users to tackle disinformation risks


TikTok will launch localized election resources in its app to reach users in each of the European Union’s 27 Member States next month and direct them towards “trusted information”, as part of its preparations to tackle disinformation risks related to regional elections this year.

“Next month, we will launch a local language Election Centre in-app for each of the 27 individual EU Member States to ensure people can easily separate fact from fiction. Working with local electoral commissions and civil society organisations, these Election Centres will be a place where our community can find trusted and authoritative information,” TikTok wrote today.

“Videos related to the European elections will be labelled to direct people to the relevant Election Centre. As part of our broader election integrity efforts, we will also add reminders to hashtags to encourage people to follow our rules, verify facts, and report content they believe violates our Community Guidelines,” it added in a blog post discussing its preparations for 2024 European elections.

The blog post also discusses what it’s doing in relation to targeted risks that take the form of influence operations seeking to use its tools to covertly deceive and manipulate opinions in a bid to skew elections — such as by setting up networks of fake accounts and using them to spread and boost inauthentic content. Here it has committed to introduce “dedicated covert influence operations reports” — which it claims will “further increase transparency, accountability, and cross-industry sharing” vis-a-vis covert influence ops.

The new covert influence ops reports will launch “in the coming months”, per TikTok — presumably hosted within its existing Transparency Center.

TikTok is also announcing the upcoming launch of nine more media literacy campaigns in the region (after launching 18 last year, making a total of 27 — so it looks to be plugging the gaps to ensure it has run campaigns across all EU Member States).

It also says it’s looking to expand its local fact-checking partners network — currently it says it works with nine organizations, which cover 18 languages. (NB: The EU has 24 “official” languages, and a further 16 “recognized” languages — not counting immigrant languages spoken.)

Notably, though, the video sharing giant isn’t announcing any new measures related to election security risks linked to AI generated deepfakes.

In recent months, the EU has been dialling up its attention on generative AI and political deepfakes and calling for platforms to put in place safeguards against this type of disinformation.

TikTok’s blog post — which is attributed to Kevin Morgan, TikTok’s head of safety & integrity for EMEA — does warn that generative AI tech brings “new challenges around misinformation”. It also specifies the platform does not allow “manipulated content that could be misleading” — including AI-generated content of public figures “if it depicts them endorsing a political view”. However, Morgan offers no detail on how successful (or otherwise) the platform currently is at detecting (and removing) political deepfakes when users choose to ignore the ban and upload politically misleading AI-generated content anyway.

Instead he writes that TikTok requires creators to label any realistic AI-generated content — and flags the recent launch of a tool to help users apply manual labels to deepfakes. But the post offers no details about TikTok’s enforcement of this deepfake labelling rule, nor any further detail on how it’s tackling deepfake risks more generally, including in relation to election threats.

“As the technology evolves, we will continue to strengthen our efforts, including by working with industry through content provenance partnerships,” is the only other tidbit TikTok has to offer here.

We’ve reached out to the company with a series of questions seeking more detail about the steps it’s taking to prepare for European elections, including asking where in the EU its efforts are being focused and any ongoing gaps (such as in language, fact-checking and media literacy coverage), and we’ll update this post with any response.

New EU requirement to act on disinformation

Elections for a new European Parliament are due to take place in early June and the bloc has been cranking up the pressure on social media platforms, especially, to prepare. Since last August, the EU has had new legal tools to compel action from the roughly two dozen larger platforms that have been designated as subject to the strictest requirements of its rebooted online governance rulebook.

Until now, the bloc has relied on self-regulation — the Code of Practice Against Disinformation — to try to drive industry action against disinformation. But the EU has also been complaining, for years, that signatories of this voluntary initiative, which include TikTok and most other major social media firms (but not X/Twitter, which removed itself from the list last year), are not doing enough to tackle rising information threats, including to regional elections.

The EU Disinformation Code launched back in 2018, as a limited set of voluntary standards with a handful of signatories pledging some broad-brush responses to disinformation risks. It was then beefed up in 2022, with more (and “more granular”) commitments and measures — plus a longer list of signatories, including a broader range of players whose tech tools or services may have a role in the disinformation ecosystem.

While the strengthened Code remains non-legally binding, the EU’s executive and online rulebook enforcer for larger digital platforms, the Commission, has said it will factor in adherence to the Code when it comes to assessing compliance with relevant elements of the (legally binding) Digital Services Act (DSA) — which requires major platforms, including TikTok, to take steps to identify and mitigate systemic risks arising from use of their tech tools, such as election interference.

The Commission’s regular reviews of Code signatories’ performance typically involve long, public lectures by commissioners warning that platforms need to ramp up their efforts to deliver more consistent moderation and investment in fact-checking, especially in smaller EU Member States and languages. Platforms’ go-to response to the EU’s negative PR is to make fresh claims to be taking action/doing more. And then the same pantomime typically plays out six months or a year later.

This ‘disinformation must do better’ loop might be set to change, though, as the bloc finally has a law in place to force action in this area — in the form of the DSA, which began applying to larger platforms last August. Hence the Commission is currently consulting on detailed guidance for election security. The guidelines will be aimed at the nearly two dozen firms designated as very large online platforms (VLOPs) or very large online search engines (VLOSEs) under the regulation, which thus have a legal duty to mitigate disinformation risks.

The risk for in-scope platforms, if they fail to move the needle on disinformation threats, is being found in breach of the DSA — where penalties for violators can scale up to 6% of global annual turnover. The EU will be hoping the regulation will finally concentrate tech giants’ minds on robustly addressing a societally corrosive problem — one which adtech platforms, with their commercial incentives to grow usage and engagement, have generally opted to dally over and dance around for years.

The Commission itself is responsible for enforcing the DSA on VLOPs/VLOSEs — and will, ultimately, be the judge of whether TikTok (and the other in-scope platforms) have done enough to tackle disinformation risks or not.

In light of today’s announcements, TikTok looks to be stepping up its approach to regional information-based and election security risks to try to make it more comprehensive — which may address one common Commission complaint — although the continued lack of fact-checking resources covering all the EU’s official languages is notable. (Though the company is reliant on finding partners to provide those resources.)

The incoming Election Centers — which TikTok says will be localized to the official language of every one of the 27 EU Member States — could end up being significant in battling election interference risks, assuming they prove effective at nudging users to respond more critically to questionable political content they’re exposed to by the app, such as by encouraging them to verify veracity by following the links to authoritative sources of information. But a lot will depend on how these interventions are presented and designed.

The expansion of media literacy campaigns to cover all EU Member States is also notable — hitting another frequent Commission complaint. But it’s not clear whether all these campaigns will run before the June European elections (we’ve asked).

Elsewhere, TikTok’s actions look to be closer to treading water. For instance, the platform’s last Disinformation Code report to the Commission, last fall, flagged how it had expanded its synthetic media policy to cover AI generated or AI-modified content. But it also said then that it wanted to further strengthen its enforcement of its synthetic media policy over the next six months. Yet there’s no fresh detail on its enforcement capabilities in today’s announcement.

Its earlier report to the Commission also noted that it wanted to explore “new products and initiatives to help enhance our detection and enforcement capabilities” around synthetic media, including in the area of user education. Again, it’s not clear whether TikTok has made much of a foray here — although the wider issue is the lack of robust methods (technologies or techniques) for detecting deepfakes, even as platforms like TikTok make it super easy for users to spread AI-generated fakes far and wide.

That asymmetry may ultimately demand other types of policy interventions to effectively deal with AI related risks.

As regards TikTok’s claimed focus on user education, it hasn’t specified whether the additional regional media literacy campaigns it will run over 2024 will aim to help users identify risks from AI-generated content. Again, we’ve asked for more detail there.

The platform originally signed up to the EU’s Disinformation Code back in June 2020. But as security concerns related to its China-based parent company have stepped up, it’s found itself facing rising mistrust and scrutiny in the region. On top of that, with the DSA coming into application last summer and a huge election year looming for the EU, TikTok — and others — look set to be squarely in the Commission’s crosshairs over disinformation risks for the foreseeable future.

For now, though, it’s Elon Musk-owned X that has the dubious honor of being the first platform formally investigated over the DSA’s risk management requirements — along with a raft of other obligations the Commission is concerned it may be breaching.
