Meta deletes over 10 million fake accounts in major crackdown

Meta Platforms Inc., the parent company of Facebook, has intensified efforts to combat inauthentic activity across its platforms, revealing that it removed more than 10 million fake profiles and 500,000 spam accounts within the first half of 2025.

The company said the move is part of a broader crackdown on impersonation, artificial engagement, and content duplication, a strategy aimed at rewarding originality and ensuring greater visibility for genuine creators.

In a blog post on Monday, Meta disclosed that accounts found to be recycling or reposting unoriginal content without substantial modifications will face penalties, including limited reach and potential loss of monetisation privileges.

“We’re making progress. In the first half of 2025, we took action on around 500,000 accounts engaged in spammy behaviour or fake engagement. We also removed about 10 million profiles impersonating large content producers,” the company stated.

According to Meta, repeated sharing of unoriginal or duplicated material, including videos, photos, or written content, undermines the quality of the platform by diluting authentic voices and stifling opportunities for emerging creators.

To better support originality, Meta is launching new content attribution tools designed to trace shared media back to its original source. The company said this will help ensure that credit is given to rightful content creators and support the broader ecosystem of authentic storytelling.

The company emphasised that superficial edits, such as simply stitching video clips together or adding watermarks, will not count as meaningful modifications. “Pages and profiles that consistently publish original content tend to reach broader audiences on Facebook,” the statement noted.

Meta also warned that content bearing watermarks from third-party platforms could face consequences such as reduced reach or loss of monetisation tools.

Meta has also rolled out post-level performance insights via the Professional Dashboard, enabling creators to track the reach and engagement of individual posts. A dedicated “Support Home” feature will alert users to any content or monetisation restrictions affecting their accounts.

YouTube, owned by Google, has similarly adjusted its monetisation policies, stating that mass-produced or repetitive content may no longer qualify for ad revenue. The announcement initially sparked confusion among content creators, some of whom interpreted it as a crackdown on AI-generated content.

However, YouTube clarified that the update was not a ban on AI use in content creation.

“We welcome creators using AI tools to enhance their storytelling, and channels that use AI in their content remain eligible to monetise,” YouTube stated.

Both Meta and YouTube emphasised that the updates aim to enhance the quality of content across their platforms, foster a fairer digital environment, and offer stronger protections for original creators in an increasingly competitive ecosystem.