Yoel Roth, formerly the head of trust and safety at Twitter, is sharing his concerns about the future of the open social web and its ability to combat misinformation, spam, and illegal content like child sexual abuse material (CSAM). In a recent interview, Roth worried about the lack of moderation tools available to the fediverse, the open social web that includes apps like Mastodon, Threads, and Pixelfed, as well as other open platforms like Bluesky.
He also recalled key trust and safety moments at Twitter, like its decision to ban President Trump from the platform, the spread of misinformation by Russian bot farms, and how Twitter's own users, including CEO Jack Dorsey, fell prey to those bots.
On the revolution.social with @Rabble podcast, Roth pointed out that the efforts to build online communities more democratically across the open social web are also the ones with the fewest resources when it comes to moderation tools.
“Looking at Mastodon, looking at other services based on ActivityPub [protocol], looking at Bluesky in its earliest days, and then looking at Threads as Meta started developing it, what we saw was that a lot of the services leaning hardest into community-based control gave their communities the fewest technical tools to administer their policies,” Roth said.
He also sees “a rather big backslide” on the open social web in terms of the transparency and legitimacy of decision-making that Twitter once had. Many people may have disagreed with Twitter's decision to ban Trump, but the company at least explained its rationale for doing so. Now, social media providers are so concerned with preventing bad actors from gaming their systems that they rarely explain themselves at all.
Meanwhile, on many open social platforms, users don't even receive a notification when their posts are taken down; the posts simply disappear, with no indication they ever existed.
“I don't blame startups for being startups, or new software for lacking all the bells and whistles, but if the whole point of the project was to increase the democratic legitimacy of governance, what we've done is take a step back on governance,” Roth wondered.
The economics of moderation
He also raised questions about the economics of moderation, and how federated approaches to it are not yet financially sustainable.
For instance, an organization called IFTAS (Independent Federated Trust & Safety) had been working to build moderation tools for the fediverse, including giving the fediverse access to tools to combat CSAM, but it had to shut down many of its projects in early 2025.
“I saw it coming two years ago. IFTAS saw it coming. Everyone working in this space is largely volunteering their time and effort, and that only goes so far, because at some point people have families and bills to pay, and compute costs stack up when you need to run ML models to detect certain types of bad content,” he explained. “It all gets expensive, and the economics of this federated approach to trust and safety never quite added up. And, in my opinion, they still don't.”
Bluesky, meanwhile, has chosen to employ its own moderators and trust and safety staff, but it limits that moderation to its own app. It also provides tools that let people customize their own moderation preferences.
“They're doing this work at a massive scale. There's obviously room for improvement. I'd like to see them be a little more transparent. But fundamentally, they're doing the right things,” Roth said. However, as the service becomes more decentralized, he notes, Bluesky will face questions about when it is its responsibility to protect the individual rather than defer to the needs of the community.
With doxxing, for example, someone might not even know their personal information is being spread online because of how they've configured their moderation tools. But even if that user isn't on the main Bluesky app, it should still be someone's responsibility to enforce those protections.
Where to draw the line on privacy
Another issue facing the fediverse is that its decision to favor privacy can thwart moderation efforts. While Twitter tried not to store personal data it didn't need, it still collected things like users' IP addresses, when they accessed the service, device identifiers, and more. That data helped the company when it needed to do forensic analysis of something like a Russian troll farm.
Fediverse administrators, by contrast, may not collect the necessary logs, or may decline to view them, believing that doing so would violate user privacy.
But the reality is that without such data, it is much harder to determine who is actually a bot.
Roth offered several examples of this from his Twitter days, noting how users had gotten into the habit of replying “bot” to anyone they disagreed with. He says he initially set up alerts and manually reviewed all of these posts, examining hundreds of “bot” accusations, and not one of them was ever correct. Even Twitter co-founder and former CEO Jack Dorsey fell victim, retweeting posts from a Russian actor who claimed to be Crystal Johnson, a Black woman from New York.
“The CEO of the company liked this content and amplified it, and had no way of knowing as a user that Crystal Johnson was in fact a Russian troll,” Roth said.
The role of AI
A timely topic of discussion was how AI is changing the landscape. Roth referenced recent research from Stanford, which found that, in political contexts, large language models (LLMs) can be even more persuasive than humans when appropriately tuned.
That means a solution that relies on content analysis alone is not sufficient.
Instead, companies need to track other behavioral signals, he suggested, such as whether an entity operates multiple accounts, uses automation to post, or posts at strange times of day that correspond to different time zones.
“These are potential behavioral signals that exist even for truly persuasive content, and I think that's where you have to start,” Roth said. “If you're starting with the content, you're playing against the leading AI models, and you're already losing.”
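To make the idea concrete, here is a minimal, hypothetical sketch of the kind of behavioral heuristics Roth describes: flagging accounts that lean heavily on automation, post at hours inconsistent with their apparent time zone, or post on a suspiciously regular cadence. The Post structure, thresholds, and flag names are invented for illustration; they are not drawn from Twitter, Bluesky, or any real moderation system.

```python
# Hypothetical illustration only: toy behavioral-signal heuristics of the sort
# Roth describes, not any platform's actual detection pipeline.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    account_id: str
    created_at: datetime   # assumed to be in the account's declared local time
    via_api: bool           # True if posted through an automation/API client

def behavior_flags(posts: list[Post]) -> list[str]:
    """Return coarse behavioral flags for one account's posting history."""
    flags = []
    if not posts:
        return flags

    # Signal 1: heavy reliance on automation clients.
    api_share = sum(p.via_api for p in posts) / len(posts)
    if api_share > 0.9:
        flags.append("mostly-automated posting")

    # Signal 2: activity concentrated in overnight hours, which can suggest
    # the operator actually lives in a different time zone than claimed.
    hours = Counter(p.created_at.hour for p in posts)
    night_posts = sum(hours[h] for h in range(0, 6))
    if night_posts / len(posts) > 0.5:
        flags.append("activity inconsistent with declared time zone")

    # Signal 3: metronomic cadence, with posts spaced almost identically apart.
    times = sorted(p.created_at for p in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if len(gaps) >= 5 and max(gaps) - min(gaps) < 5:
        flags.append("metronomic posting cadence")

    return flags
```

In practice, signals like these would feed into a broader review process rather than trigger enforcement on their own, since each one has benign explanations (scheduled posting tools, night-shift workers, and so on).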