Bluesky on Friday published its moderation report for the past year, noting the sizable growth the social network experienced in 2024 and how that affected its Trust & Safety team's workload. It also noted that the largest number of reports came from users flagging accounts or posts for harassment, trolling, or intolerance, an issue that has plagued Bluesky as it has grown and has at times led to widespread protests over individual moderation decisions.
The company's report did not explain why it did or did not take action against individual users, including those on the most-blocked list.
The company added over 23 million users in 2024 as Bluesky became a new destination for former Twitter/X users. Throughout the year, the social network benefited from several changes at X, including its decisions to change how blocking works and to train AI on user data. Other users left X after the results of the U.S. presidential election, as owner Elon Musk's politics increasingly dominated the platform. The app also saw a surge in users while X was temporarily banned in Brazil in September.
To meet the demands caused by this growth, Bluesky increased its moderation team to roughly 100 moderators, it said, and is continuing to hire. The company also began offering team members psychological counseling to help them with the difficult job of being constantly exposed to graphic content. (An area we hope AI will one day address, as humans are not built to handle this type of work.)
In total, users submitted 6.48 million reports to Bluesky's moderation service, a 17x increase from 2023, when there were only 358,000 reports.
This year, Bluesky will begin accepting moderation reports directly in its app, which, similar to X, will let users more easily track actions and updates. It will later support in-app appeals, too.
When Brazilian users flooded into Bluesky in August, the company saw as many as 50,000 reports per day at the peak. This led to a backlog of moderation reports and required Bluesky to hire more Portuguese-speaking staff, including through a contract vendor.
In addition, Bluesky began automating more categories of reports beyond spam to help it address the influx, though this sometimes led to false positives. Still, automation dropped processing time to just "seconds" for "high-certainty" accounts, down from the roughly 40 minutes it took to handle most reports before automation. Human moderators remain in the loop to address false positives and appeals, even if they no longer always make the initial decision.
Bluesky says that 4.57% of its active users (1.19 million) made at least one moderation report in 2024, down from 5.6% in 2023. Most of these — 3.5 million reports — were for individual posts. Account profiles were reported 47,000 times, often for a profile picture or banner photo. Lists were reported 45,000 times; DMs were reported 17,700 times, with feeds and Starter Packs receiving 5,300 and 1,900 reports, respectively.
Most reports were over anti-social behavior like trolling and harassment, a signal from Bluesky users that they want a less toxic social network than X.
Other reports were for the following categories, Bluesky said:
- Misleading content (impersonation, misinformation, or false claims about identity or affiliations): 1.20 million
- Spam (excessive mentions, replies, or repetitive content): 1.40 million
- Unwanted sexual content (nudity or adult content not properly labeled): 630,000
- Illegal or urgent issues (clear violations of the law or Bluesky's terms of service): 933,000
- Other (issues that don't fit into the above categories): 726,000

The company also offered an update on its labeling service, which involves labels added to posts and accounts. Human labelers added 55,422 "sexual figure" labels, followed by 22,412 "rude" labels, 13,201 "spam" labels, 11,341 "intolerant" labels, and 3,046 "threat" labels.
In 2024, 93,076 users submitted a total of 205,000 appeals over Bluesky's moderation decisions.
There were also 66,308 account takedowns from moderators and 35,842 automated account takedowns. Bluesky additionally fielded 238 requests from law enforcement, governments, and legal firms. The company responded to 182 of these and complied with 146. Most came from law enforcement in Germany, the U.S., Brazil, and Japan, it said.
Bluesky's full report also delves into other types of issues, including trademark and copyright claims and child safety/CSAM reports. The company noted it submitted 1,154 confirmed CSAM reports to the National Center for Missing & Exploited Children (NCMEC).