Meta says that a flood of violent and graphic Reels content recommended to Instagram users was the result of an error that has now been fixed. Reels is Instagram’s take on short-form video, which Meta hopes will become a viable competitor to TikTok as that app remains under threat of a U.S. ban.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” Meta said in a statement to the Wall Street Journal.
The apology comes a day after users on social networks including X started reporting that their Reels feeds were suddenly showing a large number of violent videos. The videos typically display a full-screen content warning and require the user to consent to viewing them before they can be played (counterintuitively, that warning probably makes people more curious to see what is behind the curtain). The Journal in its report described some of the recommended content:
A Wall Street Journal reporter’s account featured scores of videos of people being shot, mangled by machinery, and ejected from theme park rides, often back to back. The videos originated on pages that the reporter didn’t follow with names such as “BlackPeopleBeingHurt,” “ShockingTragedies” and “PeopleDyingHub.”
So many people were seeing the influx that the phenomenon quickly became something of a meme:
I've seen reels on Instagram you people wouldn't believe, cartel emptying full magazines on some man, a person blindfolded being beaten to death, woman giving birth in a pool, a woman sat on fire on street, all those reels will be lost in time, like tears in rain. pic.twitter.com/dWABCc3yqA
— Rashed (@Lipsofashes) February 26, 2025
Meta CEO Mark Zuckerberg announced wide-scale changes to the company’s moderation policies shortly before President Trump took office, in a move widely seen as an attempt to appease the new administration. Those changes included, among other things, replacing fact-checking with a system reminiscent of X’s Community Notes; removing only the most high-risk content, like direct threats of violence; and dropping automated scanning for most prohibited content in favor of letting users report potential violations manually, because Zuckerberg said the automated systems made too many mistakes.
Conservatives have long alleged unfair censorship by social media platforms, even as conservative-leaning content often ranks among the most popular on Facebook in particular.
While Zuckerberg has received much criticism over the new moderation policies—especially as many noted the new policy allows users to call homosexuality a mental illness—the crucial thing to note here is that Meta did not drop moderation entirely. Under its new policies, the company says it will still remove content that is particularly violent, such as “videos depicting dismemberment, visible innards or charred bodies,” as well as “sadistic remarks towards imagery depicting the suffering of humans and animals.” It does allow some violent content to remain if it is deemed educational or informative.
Meta, in its new statement, did not explain why the violent content suddenly appeared in so many users’ feeds. But the company is constantly tweaking its algorithms, particularly to maximize engagement, and it is well documented that people are drawn to death, true crime, and other visceral content. Morbid curiosity leads people to want to understand death and how they might avoid ending up in dangerous situations themselves.
It is possible that Meta’s recent loosening of its moderation policies inadvertently allowed the violent imagery to be promoted (it is unclear whether Meta’s automated flagging still applies to violent content). The Journal noted, “View counts on some of the promoted videos suggested that Instagram’s recommendations had massively boosted their viewership, with view counts on them often exceeding that of the accounts’ other posts by millions of views.” Signals that social networks traditionally use to identify engaging content include how long people watch a video before skipping to the next one and how many times it is shared with friends. Shocking, outrageous content has long performed well online for this very reason (some may remember the shock site Bestgore.com): it garners engagement.
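To see why those signals tend to reward shocking clips, here is a minimal, purely hypothetical sketch of how watch time and shares might be folded into a single engagement score. The names, weights, and numbers below are invented for illustration; this is not Meta’s actual ranking system, which weighs far more signals and layers moderation checks on top.

```python
from dataclasses import dataclass

@dataclass
class VideoStats:
    """Hypothetical per-video engagement counters (illustrative only)."""
    watch_time_seconds: float   # total time viewers spent watching
    impressions: int            # times the video was shown in feeds
    duration_seconds: float     # length of the video
    shares: int                 # times viewers shared it with friends

def engagement_score(v: VideoStats) -> float:
    """Toy heuristic: reward high completion rate and frequent sharing.

    The 0.7/0.3 weights are arbitrary; a real ranking system would also
    apply integrity and moderation penalties before recommending anything.
    """
    if v.impressions == 0 or v.duration_seconds == 0:
        return 0.0
    avg_completion = (v.watch_time_seconds / v.impressions) / v.duration_seconds
    share_rate = v.shares / v.impressions
    return 0.7 * min(avg_completion, 1.0) + 0.3 * share_rate

# A shocking clip that viewers watch to the end and forward widely
# scores higher than a calmer clip most viewers skip partway through.
shocking = VideoStats(watch_time_seconds=9_000, impressions=1_000,
                      duration_seconds=10, shares=120)
calm = VideoStats(watch_time_seconds=2_500, impressions=1_000,
                  duration_seconds=10, shares=5)
print(engagement_score(shocking) > engagement_score(calm))  # True
```

Under a heuristic like this, any content that keeps people watching and sharing gets boosted, regardless of why they can’t look away.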
Another recent story illustrates how people are drawn to dark content, even if it leaves them feeling bad or anxious. 404 Media documented how an anonymous YouTuber took advantage of AI image and text generation to pump their channel full of fake “true crime” stories, exploiting people’s interest in such lurid content to make some easy cash. One of the videos from the channel, which was taken down by YouTube, was titled “Coach Gives Cheerleader HIV after Secret Affair, Leading to Pregnancy.” 404 reported that many of the millions of viewers who watched the channel believed the videos were real.
Social networks demote violent content for many reasons: it can be illegal, it can be used to intimidate and frighten, and, of course, it is not advertiser-friendly. Meta’s “free speech” pivot has its limits, as demonstrated here. The company is still turning the dials in the ways it sees fit. Perhaps that is why President Trump’s new administration is still showing signs of skepticism toward the industry.