Meta will start labeling AI generated images on Instagram and Facebook

When it comes to Artificial Intelligence, it feels like we’re really crossing the Rubicon here. We’ve progressed from Alexa refusing to set a timer and Waze telling you to drive into a lake to a world of dangerous, realistic-looking deepfakes. We already have serious issues with graphic deepfakes targeting the likes of high-school students, underage actresses, and Taylor Swift, with little recourse to stop them once they’ve spread all over social media. There is currently no federal legislation regarding non-consensual, sexually explicit deepfakes, and the only bill addressing them, introduced in the House of Representatives last year, has stalled.

Deepfakes involve more than just explicit photos, though. There’s also a very real danger of nefarious persons using AI to sow misinformation among susceptible audiences. And once any of these images hit social media, it’s nearly impossible to contain them. In an effort to combat confusion (and worse), Meta has announced that in “the coming months,” it’s going to start labeling AI-created images uploaded to Facebook, Instagram, and Threads. Until then, it’s taking the YouTube/TikTok approach of relying on users to self-identify their artificial creations.

When an AI-generated image of the pope in a puffy white coat went viral last year, internet users debated whether the pontiff was really that stylish. Fake images of former President Donald Trump being arrested caused similar confusion, even though the person who generated the images said they were made with artificial intelligence. Soon, similar images posted on Instagram, Facebook or Threads may carry a label disclosing they were the product of sophisticated AI tools, which can generate highly plausible images, videos, audio and text from simple prompts.

Meta, which owns all three platforms, said on Tuesday that it will start labeling images created with leading artificial intelligence tools in the coming months. The move comes as tech companies — both those that build AI software and those that host its outputs — are coming under growing pressure to address the potential for the cutting-edge technology to mislead people.

Those concerns are particularly acute as millions of people vote in high-profile elections around the world this year. Experts and regulators have warned that deepfakes — digitally manipulated media — could be used to exacerbate efforts to mislead, discourage and manipulate voters.

Meta and others in the industry have been working to develop invisible markers, including watermarks and metadata, indicating that a piece of content has been created by AI. Meta said it will begin using those markers to apply labels in multiple languages on its apps, so users of its platforms will know whether what they’re seeing is real or fake.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Nick Clegg, Meta’s president of global affairs, wrote in a company blog post. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”

The labels will apply to images from Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock — but only once those companies start including watermarks and other technical metadata in images created by their software. Images created with Meta’s own AI tools are already labeled “Imagined with AI.”

That still leaves gaps. Other image generators, including open-source models, may never incorporate these kinds of markers. Meta said it’s working on tools to automatically detect AI content, even if that content doesn’t have watermarks or metadata. What’s more, Meta’s labels apply only to static images. The company said it can’t yet label AI-generated audio or video this way because the industry has not started including that data in audio and video tools.

For now, Meta is relying on users to fill the void. On Tuesday, the company said that it will start requiring users to disclose when they post “a photorealistic video or realistic-sounding audio that was digitally created or altered” and that it may penalize accounts that fail to do so.

“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” Clegg said.

That expands on Meta’s requirement, introduced in November, that political ads include a disclosure if they digitally generated or altered images, video or audio. TikTok and YouTube also require users to disclose when they post realistic AI-generated content. Last fall, TikTok said it would start testing automatically applying labels to content that it detects was created or edited with AI.

[From NPR]

Umm…it’s a start? Or is Meta just doing a little CYA? The “gaps” mentioned in the article seem pretty wide to me. Ugh. I’m not gonna lie: I’m terrified of the shenanigans and malarkey that will absolutely go down to mess up election processes worldwide. More than 60 countries will hold national elections in 2024, representing about half of the global population. The stakes are high, and as we’ve seen over the last decade, it does not take much for misinformation to run rampant and be accepted as truth. I can absolutely see a scenario in which certain bad-faith political actors use AI-generated propaganda featuring their opponents, and when it’s labeled as such, they start crying “fake news” and claim they’re being targeted unfairly. Buckle up, friends. It’s going to be another stressful ride.


I applaud Posta for using AI tech, but it's important to also use this correctly. A few noticeable errors: 1st lady has 3 hands, of which 1 of the fingers looks misshapen. 2nd lady has no computer. There's a ghost hand typing on the same keyboard the 3rd lady is using. https://t.co/wJNKZsrCmb

— Andriana Wanjiru Dominguez-CDMP 🇪🇨🇬🇷🇮🇹🇰🇪 (@ANdaihera) January 31, 2024

