Clashing approaches to combat AI’s ‘perpetual bulls**t machine’


The AI stage at TechCrunch Disrupt 2024 got off to a fiery but constructive start on a panel about combating disinformation. But in a spirited exchange of views tempered by expressions of respect and agreement, all three panelists had harsh words for social media and generative AI.

None was harsher, though, than Imran Ahmed, CEO of the Center for Countering Digital Hate.

“We’ve always had BS in politics, and a lot of politicians use lying as an art, a tool of doing politics. What we have now is quantitatively different, and to such a scale that it’s like comparing the conventional arms race of BS in politics to the nuclear race,” he said.

“It’s the economics that have changed so radically: The marginal cost of the production of a piece of disinformation has been reduced to zero by generative AI, and the marginal cost of the distribution of disinformation [is also zero],” he continued. “So what you have, theoretically, is a perfect loop system in which generative AI is producing, it’s distributing, and then it’s assessing the performance — A/B testing and improving. You’ve got a perpetual bulls–t machine. That’s quite worrying!”

Brandie Nonnecke, director of UC Berkeley’s CITRIS Policy Lab, argued that self-regulation, in the form of voluntary limits and transparency reports, is wholly insufficient.

“I don’t think that these transparency reports really do anything, in part because in these transparency reports, they’ll say, look at what a great job we’re doing: We removed tens of thousands of pieces of harmful content. Well, what didn’t you remove? What’s still floating around that you didn’t catch? It gives a false sense that they’re actually doing due diligence, when I think underneath that all is a big mess of them trying to figure out how to deal with all of this content,” she said.

Pamela San Martin, co-chair of the Facebook Oversight Board, agreed in principle but warned not to throw the baby out with the bath water. “I think that it would be completely untrue to say that any social media platform is doing everything they have to do — especially I would not say that about Meta,” she said.

“I agree with what you said, but we thought that this year, which had 80 elections, would be the year of AI and elections — that all the elections throughout the world would be flooded with AI deepfakes, that that would be what controlled the narrative,” she continued. “We have seen a rise in it, but we have not seen elections being completely flooded with AI-generated content. Why do I say that? Not because I disagree — it is very concerning — but I also think that we have to keep in mind that if we start taking measures out of fear, we will lose the good part of AI.”

Devin Coldewey is a Seattle-based writer and photographer. He first wrote for TechCrunch in 2007. His personal website is coldewey.cc.
