Meta's decision to replace professional US fact-checkers with crowd-sourced moderation has raised fears that Facebook and Instagram could become magnets for misinformation, similar to the Elon Musk-owned X, researchers say.
Meta's chief executive Mark Zuckerberg announced Tuesday the tech giant was ending its third-party fact-checking program in the United States and turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.
The decision comes after years of criticism from supporters of President-elect Donald Trump, among others, that conservative voices were being censored or stifled under the guise of fighting misinformation, a claim professional fact-checkers vehemently reject.
The announcement, which outlined plans for slashing content moderation and "restoring free expression" on its platforms, also included an acknowledgement from Zuckerberg that the policy shift meant "we're going to catch less bad stuff."
"Abandoning formal fact-checking for crowdsourcing tools like Community Notes has failed platforms in the past," Nora Benavidez, senior counsel at the nonprofit watchdog Free Press, told AFP.
"Twitter tried it and can't withstand the volume of misinformation and other violent, violative content," she added.
After his 2022 purchase of Twitter, rebranded as X, Musk gutted trust and safety teams and expanded Community Notes, a crowd-sourced moderation tool that the platform has promoted as the way for users to add context to posts.
Researchers say the lowering of guardrails on X, and the reinstatement of once-banned accounts belonging to known peddlers of misinformation, have turned the platform into a haven for falsehoods.
- 'Mistaken beliefs' -
Studies have shown Community Notes can work to dispel some falsehoods such as vaccine misinformation, but researchers caution that it works best for topics where there is broad consensus.
"Although research supports the idea that crowdsourcing fact-checking can be effective when done correctly, it is important to understand that this is intended to supplement fact-checking from professionals -- not to replace it," said Gordon Pennycook, from Cornell University.
"In an information ecosystem where misinformation is having a large influence, crowdsourced fact-checking will simply reflect the mistaken beliefs of the majority," he added.
Meta's new approach ignores research that shows "Community Notes users are very much motivated by partisan motives and tend to over-target their political opponents," added Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech.
By comparison, a study published last September in the journal Nature Human Behaviour showed that warning labels from professional fact-checkers reduced belief in -- and the sharing of -- misinformation, even among those "most distrusting of fact-checkers."
Ending the fact-checking program opens the floodgates for harmful misinformation, researchers say.
"By axing his factcheckers, Zuckerberg has ripped out yet another of his companies' safety measures on platforms such as Facebook and Instagram," said Rosa Curling, co-executive director of UK-based legal activist firm Foxglove, which has backed a lawsuit against Meta in Kenya.
"If he's all-in on the Musk playbook, the next step will be slashing yet more of his content moderator numbers," including those that take down violent content and hate speech.
- 'Unsafe' -
As part of the overhaul, Meta has said it will relocate its trust and safety teams from liberal California to the more conservative state of Texas.
AFP currently works with Facebook's fact-checking program in 26 languages, including in the United States and the European Union.
Meta has also announced major updates to its moderation policies, in a move that advocacy groups said weakens safeguards against hate speech and harassment of minorities.
The latest version of Meta's community guidelines said its platforms allow users to accuse people of "mental illness or abnormality" based on their gender or sexual orientation.
Abandoning industry-standard hate speech policies makes Meta's platforms "unsafe places," said Sarah Kate Ellis, president of the advocacy group GLAAD.
Without these policies, "Meta is giving the green light for people to target LGBTQ people, women, immigrants, and other marginalized groups with violence, vitriol, and dehumanizing narratives," Ellis added.