Technology correspondent
Meta says it is fixing a problem that has caused Facebook Groups to be wrongly suspended – but denies there is a wider issue across its platforms.
In online forums, group administrators say they have received automated messages incorrectly stating they had violated policies, and that their groups would be removed.
Some Instagram users have complained of similar problems with their own accounts, with some blaming Meta's artificial intelligence (AI) systems.
Meta has acknowledged a "technical error" with Facebook Groups, but says it has not seen evidence of a significant increase in incorrect enforcement of its rules on its platforms.
One Facebook group, where users share memes about bees, was told it did not follow standards on "dangerous organisations or individuals", according to a post by its founder.
The group, which has more than 680,000 members, was removed, but has now been restored.
Another administrator, who runs an AI group with 3.5 million members, posted on Reddit that his group and his own account had been suspended for several hours. Meta later told him: "Our technology made a mistake suspending your group."
Thousands of signatures
It comes as Meta faces questions from thousands of people over the mass banning or suspension of accounts on Facebook and Instagram.
A petition entitled "Meta wrongfully disabling accounts with no human customer support" had gathered around 22,000 signatures on Change.org at the time of writing.
Meanwhile, a Reddit thread dedicated to the issue has seen many people share their stories of being banned in recent months.
Some have posted about losing access to pages with significant sentimental value, while others say they lost accounts linked to their businesses.
There are even claims that users were banned after being wrongly accused by Meta of breaching its policies on child sexual exploitation.
Users have blamed Meta's AI moderation tools, and say it is almost impossible to speak to a person about their accounts once they have been suspended or banned.
BBC News has not verified those claims independently.
In a statement, Meta said: "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake."
It said it used a combination of people and technology to find and remove accounts that broke its rules, and that it had not seen a spike in erroneous account suspensions.
Instagram states on its website that AI is "central to our content review process". It says AI can detect and remove content that goes against its community standards before anyone reports it, while on some occasions content is sent to human reviewers.
Meta adds that accounts can be disabled after one serious violation, such as posting child sexual abuse material.
The social media giant also told the BBC that it uses a combination of technology and people to find and remove content that breaks its rules, and that it shares data about what action it takes in its Community Standards Enforcement Report.
In its latest edition, covering January to March this year, Meta said it took action on 4.6 million pieces of content under its child sexual exploitation policy – the lowest figure since early 2021. The next edition of the transparency report is due to be published in a few months.
Meta says its child sexual exploitation policy covers real imagery as well as "non-real depictions with a human likeness", such as art, AI-generated content or fictional characters.
Meta also told the BBC it uses technology to identify potentially suspicious behaviour, such as adult accounts being repeatedly reported by teen accounts.
Those accounts may then be prevented from contacting young people in future, or removed altogether.