Meta announced on Monday that it will take additional steps to crack down on accounts that share "unoriginal" content on Facebook. This year alone, the company said, it has already removed around 10 million profiles that were impersonating large content creators.
Additionally, Meta took action against 500,000 accounts engaged in spammy behavior and fake engagement. Those actions included demoting the accounts' comments and reducing the distribution of their content to prevent them from monetizing.
Meta's update comes just days after YouTube clarified its own policy on unoriginal content, including mass-produced and repetitive videos.
Like YouTube, Meta says that users who engage with other people's content, by creating reaction videos, participating in trends, or adding their own take, will not be penalized. Instead, Meta's focus is on accounts that repost other people's content wholesale, whether they are spam accounts or people passing themselves off as the original creator.
Accounts that abuse the system by repeatedly reposting other people's content will lose access to Facebook's monetization programs for a period of time and see reduced distribution of their posts, the company said. When Facebook detects duplicate videos, it will also reduce the distribution of the copies to ensure the original creator receives the views and credit.
Additionally, the company said it is testing a system that adds links to duplicate videos that point viewers to original content.
Image credits: Meta
The update arrives as Meta weathers criticism from users across its platforms, including Instagram, over the false over-enforcement of its policies through automated means. A petition with nearly 30,000 signatures asks Meta to fix the problem of wrongly disabled accounts and the lack of human support. Meta has yet to publicly address the issue despite attention from the press and other well-known creators.
Meta's latest crackdown focuses on accounts that steal other people's content for profit, but problems with unoriginal content are broader, and growing.
With the rise of AI technology, platforms are being flooded with "AI slop," a term for low-quality media content made with generative AI. On YouTube, for example, it is easy to find AI-generated voiceovers layered over photos, video clips, or other reused content, thanks to text-to-video and other AI tools.
Meta's update appears to focus solely on reused content, but its post suggests that AI slop may also be taken into account. In a section offering "tips" for creating original content, Meta says that creators who draw on content from other sources should focus on "real storytelling" rather than just "stitching clips together" or adding a watermark.
Without saying so directly, these kinds of unoriginal videos are also exactly what AI tools have made easier to create: low-quality videos often consist of a series of images or clips, real or AI-generated, with AI narration added on top.
In the post, Meta also warns creators not to reuse content from other apps or sources, a long-standing rule, and notes that video captions should be of high quality. That could mean cutting back on automated AI captions that the creator has not edited.
Image credits: Meta
According to Meta, the changes will roll out gradually over the coming months, giving Facebook creators time to adjust. Creators who think their content isn't being distributed can check new post-level insights in Facebook's professional dashboard to find out why.
Creators can also visit the Support home screen, from their page or the main menu of their professional profile, to see whether they are at risk of content recommendation or monetization penalties.
Meta typically shares information about content takedowns in its quarterly transparency reports. Last quarter, the company said that around 3% of Facebook's global monthly active users were fake accounts, and that it took action against 1 billion fake accounts from January to March 2025.
More recently, Meta, like X, has moved away from fact-checking content itself in favor of Community Notes in the U.S., which lets users and contributors determine whether a post is accurate and follows Meta's community standards.