Next week, Meta will begin one of the most significant overhauls in the company's history of how information is checked on its platforms.
On March 18th, Meta will begin rolling out its version of Community Notes to Facebook, Instagram, and Threads users in the US. The program is modeled on the crowdsourced fact-checking system Twitter announced in 2021, which became the platform's main way of correcting misleading information after Elon Musk rebranded it as X.
Meta executives say they are focused on getting Community Notes right in the US before rolling the feature out to other countries. The US is Meta's most lucrative market, which makes it a high-stakes place to test a major new feature, but Meta may also be hesitant to deploy Community Notes in regions such as the European Union, where the Commission is currently investigating X over the effectiveness of its Community Notes feature.
The move may also reflect Meta CEO Mark Zuckerberg's eagerness to appease the Trump administration.
US Facebook users will soon see Community Notes. Image credit: Meta
Click on a note to see more information. Image credit: Meta
Zuckerberg first announced these changes in January as part of a broader effort to give more perspectives room to breathe on his platforms. Since 2016, Meta has relied on third-party fact-checkers to verify information on its platforms, but Meta's vice president of public policy, Neil Potts, told reporters Wednesday that the system was too biased, not scalable enough, and made too many mistakes.
For example, Potts said Meta wrongly applied fact-check labels to climate change opinion pieces that appeared on Fox News and in The Wall Street Journal. In another case, Zuckerberg said recently on Joe Rogan's podcast that Meta should not have dismissed concerns about the Covid-19 vaccine as misinformation.
Meta hopes Community Notes will act as a more scalable fact-checking system, one that addresses public perceptions of bias, makes fewer mistakes, and ultimately handles more misinformation. Community Notes are not, however, a replacement for Meta's Community Standards, the company's rules that determine whether a post counts as hate speech, fraud, or other prohibited content.
Meta's fact-checking overhaul comes as many tech companies try to address perceptions of historical bias against conservatives. X has led the industry's efforts, with Elon Musk claiming to center his social platform on “free speech.” OpenAI recently announced that it is changing how it trains its AI models to embrace “intellectual freedom,” saying it is working to avoid censoring certain perspectives.
Rachel Lambert, Meta's director of product management, said in a briefing Wednesday that Meta is basing its new fact-checking system on X's open source Community Notes algorithm.
Meta began accepting applications from contributors to its Community Notes network in February. Contributors can propose a fact-checking note directly on a Facebook, Instagram, or Threads post. Other contributors then rate the note as helpful or unhelpful, which in part determines whether the Community Note will be visible to others.
Contributors can assess the usefulness of the note. Image credit: Meta
Like X's system, Meta's Community Notes system assesses which contributors tend to disagree with one another. Using that information, Meta will only display a note when contributors from opposing sides agree that it is helpful.
This means that even if a majority of Meta's contributors believe a Community Note is warranted, it won't necessarily be displayed. Meta also says that when a Community Note does appear on a post, it will not downrank the post or the account in its algorithms.
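To make the mechanics concrete, here is a minimal, hypothetical sketch of that bridging rule in Python. It is not Meta's or X's actual implementation (X's open source algorithm scores notes with matrix factorization over the full rating history); the contributor names, the two clusters, and the `visible_notes` helper are invented for illustration, and the simple per-cluster threshold stands in for the real scoring model.

```python
# Toy sketch of the "bridging" idea behind Community Notes visibility.
# Assumption: each contributor has already been assigned to a viewpoint
# cluster ("A" or "B") based on past disagreement with other raters.
from collections import defaultdict

CLUSTER = {"alice": "A", "bob": "A", "carol": "B", "dave": "B"}

# Hypothetical ratings: (contributor, note_id, rated_helpful)
RATINGS = [
    ("alice", "note1", True),
    ("bob", "note1", True),
    ("carol", "note1", True),   # both clusters back note1 -> shown
    ("alice", "note2", True),
    ("bob", "note2", True),
    ("carol", "note2", False),  # only cluster A backs note2 -> not shown
]

def visible_notes(ratings, cluster, min_per_side=1):
    """Return note IDs rated helpful by at least `min_per_side`
    contributors in *each* opposing cluster."""
    helpful_by_cluster = defaultdict(lambda: defaultdict(int))
    for contributor, note, helpful in ratings:
        if helpful:
            helpful_by_cluster[note][cluster[contributor]] += 1
    return [
        note for note, counts in helpful_by_cluster.items()
        if all(counts.get(side, 0) >= min_per_side for side in ("A", "B"))
    ]

print(visible_notes(RATINGS, CLUSTER))  # -> ['note1']
```

In this toy version, note2 is supported by a majority of raters but never appears, because all of its support comes from one side, which mirrors the behavior described above.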
For years, crowdsourcing systems like Community Notes have been seen as promising solutions for dealing with misinformation on social media, but they have their downsides.
Research published in the journal Science Advances found that, on the positive side, people tend to view community notes as more reliable than flags from third-party fact-checkers.
In another large study of X's fact-checking system, researchers at the University of Luxembourg found that Community Notes reduced the spread of misleading posts by 61% on average.
However, many posts never get a note attached, or get one too late. Because X, and soon Meta, require Community Notes to reach consensus among contributors with conflicting perspectives, fact-checks are often added only after a post has already reached thousands or even millions of people.
The same University of Luxembourg study also found that Community Notes may arrive too late to intervene in the early, most viral stage of a post's lifespan.
Recent research from the Center for Countering Digital Hate highlights this challenge. Researchers sampled posts containing election misinformation on X and found that contributors proposed accurate and relevant notes for those posts 81% of the time.
However, only 9% of the posts with proposed notes saw those notes reach consensus among contributors, meaning most of these posts never displayed a fact check.