Google, Meta Debunk Claims They Were Hiding Details About Trump Assassination Attempt


Trump supporters on X are absolutely convinced that Big Tech companies are censoring information about the attempted assassination of Donald Trump. Now both Google and Facebook have released lengthy explanations of what’s happening under the hood with their products in an attempt to show that recent glitches are not about political bias. And yet in the process, they’re inadvertently admitting how broken the internet is at the moment.

The New York Post ran a story on Monday trying to suggest that Meta is censoring information about the Trump assassination attempt that happened on July 13 at a rally in Butler, Pennsylvania. The Post asked Meta AI “Was the Trump assassination fictional?” and got a response that said it was. There are a few problems with the Post’s experiment, of course, the primary one being that Trump wasn’t actually assassinated, so the wording would be confusing for computer and human alike. The “assassination” would have been fictional in the sense that Trump wasn’t killed. The assassination attempt was very real.

But putting aside the fact that it was a bad question, the AI still should have been able to parse things and not spit out bad information, like claiming that nobody even tried to kill Trump. Facebook has responded with a lengthy blog post that breaks down what went wrong.

“In both cases, our systems were working to protect the importance and gravity of this event,” Joel Kaplan, VP of Global Policy for Meta, wrote Tuesday. “And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise.”

Kaplan went on to say that the issue stems from AI chatbots being “not always reliable when it comes to breaking news or returning information in real time.”

Honestly, he probably could’ve just stopped with his explanation right there. Generative AI just isn’t very good. It’s a technology with extreme limitations, including just making shit up and getting basic things wrong. It’s essentially fancy autocomplete and isn’t capable of serious reasoning or applying logic, despite the fact that it very much looks and sounds like it’s doing those things much of the time. But Kaplan can’t come out and say AI is a garbage product. Instead, he has to talk around this fact since Big Tech companies are investing billions in things that don’t work very well.

“In the simplest terms, the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained,” Kaplan wrote.

The post goes on to explain that breaking news situations can be particularly tricky for AI, especially a high-profile event like an assassination attempt, when the internet is being flooded with conspiracy theories. Kaplan says that when serious news events unfold in real time, guardrails go up in an effort to keep the chatbot from spitting out bad information.

“Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened – and instead give a generic response about how it couldn’t provide any information,” Kaplan wrote.
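In the simplest terms, the kind of guardrail Kaplan describes is a filter that sits in front of the model and intercepts queries about a blocked topic before the model ever answers. Here's a minimal, purely illustrative Python sketch of that idea; the function names, keyword list, and canned response are all made up for this example and have nothing to do with Meta's actual system:

```python
# Toy topic guardrail: intercept queries about a breaking event and
# return a generic non-answer instead of calling the model.
# All names and keywords here are illustrative, not Meta's implementation.

BLOCKED_KEYWORDS = ["trump", "assassination", "butler rally"]

GENERIC_RESPONSE = (
    "I can't provide information about this topic right now. "
    "Please check trusted news sources for the latest updates."
)

def guarded_reply(query: str, model_reply) -> str:
    """Return a canned response for blocked topics; otherwise call the model."""
    lowered = query.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return GENERIC_RESPONSE
    return model_reply(query)

# The guardrail fires before the model is ever consulted.
print(guarded_reply("Was the Trump assassination fictional?", lambda q: "..."))
```

The trade-off is obvious: a blanket refusal avoids hallucinated answers about a fast-moving story, but to a suspicious user it looks exactly like censorship.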

This is a perfectly reasonable way to handle the situation. But it will never stop far-right social media users who are convinced everything they don’t like about an AI response is a product of political bias. Meta refers to the bad responses as “hallucinations,” which certainly sounds more sophisticated than “bullshit responses from a bad product.”

Kaplan also explained why a photo of the attempted assassination was being flagged for some users. As X users photoshopped various photos from that day, one was altered to make it look like the Secret Service agents surrounding Trump were smiling. Facebook flagged that image as doctored, giving users a warning, but the flag also caught some real images of the shooting.

“Given the similarities between the doctored photo and the original image – which are only subtly (although importantly) different – our systems incorrectly applied that fact check to the real photo, too. Our teams worked to quickly correct this mistake,” Kaplan wrote.
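That spillover is easy to picture if you know how near-duplicate image matching typically works: systems compare compact perceptual fingerprints of images, and two photos that differ only subtly produce nearly identical fingerprints, so a label attached to one matches the other. Here's a toy sketch using a simple difference hash; the tiny 4x4 "images" and the threshold are invented for illustration and say nothing about what Meta actually runs:

```python
# Toy near-duplicate detection: a difference hash produces one bit per
# horizontally adjacent pixel pair, and images whose hashes are within a
# small Hamming distance are treated as "the same" picture.
# All values here are illustrative, not Meta's actual system.

def dhash(pixels):
    """One bit per adjacent pixel pair: 1 if the left pixel is brighter."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count the bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30, 40], [15, 25, 35, 45],
            [12, 22, 32, 42], [11, 21, 31, 41]]
# "Doctored" copy: one small local edit, like the photoshopped smiles.
doctored = [row[:] for row in original]
doctored[1][2] = 24  # subtle change to a single pixel

THRESHOLD = 2  # hashes this close count as a match
same_image = hamming(dhash(original), dhash(doctored)) <= THRESHOLD
print(same_image)  # the subtle edit is not enough to tell them apart
```

Under this scheme, a fact-check label keyed to the doctored photo's fingerprint would match the real photo too, which is exactly the failure Kaplan describes.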

Trump supporters on social media weren’t buying the explanation, including Missouri Attorney General Andrew Bailey, who went on Fox Business Wednesday to suggest he might sue Meta over its supposed censorship.

“There’s a bias within the Big Tech oligarchy,” Bailey said. “They are protected by Section 230 of the Communications Decency Act, which they use as both a sword and a shield.”

Bailey went on to claim that Section 230 allowed tech companies to “censor speech” and charged they were “changing American culture in dangerous ways.”

Missouri AG Andrew Bailey suggests he is considering suing Meta and Google over accusations the companies "censored" the assassination attempt photo pic.twitter.com/5gLmJ5ttDp

— Aaron Rupar (@atrupar) July 31, 2024

Google also chimed in with a thread on X pushing back against claims being made that it was censoring Trump content.

“Over the past few days, some people on X have posted claims that Search is ‘censoring’ or ‘banning’ particular terms. That’s not happening, and we want to set the record straight,” Google’s Communications account tweeted on Tuesday.

“The posts relate to our Autocomplete feature, which predicts queries to save you time. Autocomplete is just a tool to help you complete a search quickly. Regardless of what predictions it shows at any given moment, you can always search for whatever you want and get easy access to results, images and more.”

Google explained that people had noticed searches about the assassination attempt weren’t producing the kinds of autocomplete suggestions that gave a full picture of what happened. The explanation, according to Google, is that Search is built with guardrails, specifically around political violence, though the company now calls those systems “out of date.”
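The guardrail Google describes is, in spirit, a filter applied to candidate predictions before they're shown: completions touching a sensitive category like political violence get dropped, and everything else is surfaced as usual. A minimal illustrative sketch follows; the candidate list and blocked terms are invented for this example and don't reflect Google's actual systems:

```python
# Toy autocomplete guardrail: filter candidate completions against a
# blocklist of sensitive terms (here, political violence) before display.
# Candidates and blocked terms are made up for illustration only.

SENSITIVE_TERMS = ["assassination", "shooting", "shot"]

def filter_predictions(prefix: str, candidates: list[str]) -> list[str]:
    """Keep completions that match the prefix and contain no sensitive term."""
    return [
        c for c in candidates
        if c.startswith(prefix) and not any(t in c for t in SENSITIVE_TERMS)
    ]

candidates = [
    "trump assassination attempt",
    "trump rally pennsylvania",
    "trump speech today",
]
print(filter_predictions("trump", candidates))
```

The result is exactly what users were complaining about: the most newsworthy completion vanishes from the dropdown, even though, as Google notes, you can still type the full query and get results.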

Google also noted that searches for “President Donald” weren’t providing the kinds of autocomplete suggestions one would expect. Obviously, anyone would expect to see those two words completed with “Trump,” which wasn’t happening in recent days. But that was another case of the product just not working very well, as Google explained it was also not completing “Obama” when people started typing “President Barack.”

Other people were really upset to see photos of Kamala Harris when searching for news about Trump, but the simple explanation is that Harris is surging in the polls and more likely to have photos at the top of news articles given her popularity right now. Trump is, after all, running against Harris and losing quite badly, if the latest polling is to be believed.

(4/5) Some people also posted that searches for “Donald Trump” returned news stories related to “Kamala Harris.” These labels are automatically generated based on related news topics, and they change over time. They span the political spectrum as well: For example, a search for… pic.twitter.com/55u1b5ySCr

— Google Communications (@Google_Comms) July 30, 2024

“Overall, these types of prediction and labeling systems are algorithmic. While our systems work very well most of the time, you can find predictions that may be unexpected or imperfect, and bugs will occur,” Google wrote. “Many platforms, including the one we’re posting on now, will show strange or incomplete predictions at various times. For our part, when issues come up, we will make improvements so you can find what you’re looking for, quickly and easily. We appreciate the feedback.”

None of these explanations will calm the right-wingers who believe everything happening in Big Tech is a conspiracy against their dipshit candidate Donald Trump. But these responses from Facebook and Google help give some clarity on what’s happening behind the scenes at these enormous companies. Their products don’t always work as intended, and charges of political bias are typically wrong in many, many ways.
