Meta is leaving its users to wade through hate and disinformation


Experts warn that Meta’s decision to end its third-party fact-checking program could allow disinformation and hate to fester online and permeate the real world.

The company announced today that it’s phasing out a program, launched in 2016, under which it partnered with independent fact-checkers around the world to identify and review misinformation across its social media platforms. Meta is replacing it with a crowdsourced approach to content moderation similar to X’s Community Notes.

Meta is essentially shifting responsibility to users to weed out lies on Facebook, Instagram, Threads, and WhatsApp, raising fears that it’ll be easier to spread misleading information about climate change, clean energy, public health risks, and communities often targeted with violence.

“It’s going to hurt Meta’s users first because the program worked well at reducing the virality of hoax content and conspiracy theories,” says Angie Drobnic Holan, director of the International Fact-Checking Network (IFCN) at Poynter.

“A lot of people think Community Notes-style moderation doesn’t work at all and it’s merely window dressing so that platforms can say they’re doing something … most people do not want to have to wade through a bunch of misinformation on social media, fact checking everything for themselves,” Holan adds. “The losers here are people who want to be able to go on social media and not be overwhelmed with false information.”

In a video, Meta CEO Mark Zuckerberg framed the decision as a matter of promoting free speech while also calling fact-checkers “too politically biased.” Meta also said the program was too sensitive, estimating that 1 to 2 out of every 10 pieces of content it took down in December were mistakes, meaning they might not have actually violated company policies.

Holan says the video was “incredibly unfair” to fact-checkers who have worked with Meta as partners for nearly a decade. Meta worked specifically with IFCN-certified fact-checkers, who had to follow the network’s Code of Principles as well as Meta’s own policies. Fact-checkers reviewed content and rated its accuracy, but it was Meta, not the fact-checkers, that made the call on removing content or limiting its reach.

Poynter owns PolitiFact, which is one of the fact-checking partners Meta works with in the US. Holan was the editor-in-chief of PolitiFact before stepping into her role at IFCN. What makes the fact-checking program effective is that it serves as a “speed bump in the way of false information,” Holan says. Content that’s flagged typically has a screen placed over it that lets users know fact-checkers found the claim questionable and asks whether they still want to see it.

That process covers a broad range of topics, from false information about celebrities dying to claims about miracle cures, Holan notes. Meta launched the program in 2016 amid growing public concern about the potential for social media to amplify unverified rumors online, like false stories that year claiming the pope had endorsed Donald Trump for president.

Meta’s decision looks more like an effort to curry favor with President-elect Trump. In his video, Zuckerberg described recent elections as “a cultural tipping point” toward free speech. The company recently named Republican lobbyist Joel Kaplan as its new chief global affairs officer and added UFC CEO and president Dana White, a close friend of Trump, to its board. Trump also said today that the changes at Meta were “probably” in response to his threats.

“Zuck’s announcement is a full bending of the knee to Trump and an attempt to catch up to [Elon] Musk in his race to the bottom. The implications are going to be widespread,” Nina Jankowicz, CEO of the nonprofit American Sunlight Project and an adjunct professor at Syracuse University who researches disinformation, said in a post on Bluesky.

Twitter launched its community moderation program, called Birdwatch at the time, in 2021, before Musk took over. Musk, who helped bankroll Trump’s campaign and is now set to lead the incoming administration’s new “Department of Government Efficiency,” leaned into Community Notes after slashing the teams responsible for content moderation at Twitter. Hate speech — including slurs against Black and transgender people — increased on the platform after Musk bought the company, according to research by the Center for Countering Digital Hate. (Musk then sued the center, but a federal judge dismissed the case last year.)

Advocates are now worried that harmful content might spread unhindered on Meta’s platforms. “Meta is now saying it’s up to you to spot the lies on its platforms, and that it’s not their problem if you can’t tell the difference, even if those lies, hate, or scams end up hurting you,” Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, said in an email. Ahmed describes it as a “huge step back for online safety, transparency, and accountability” and says “it could have terrible offline consequences in the form of real-world harm.” 

“By abandoning fact-checking, Meta is opening the door to unchecked hateful disinformation about already targeted communities like Black, brown, immigrant and trans people, which too often leads to offline violence,” Nicole Sugerman, campaign manager at the nonprofit Kairos, which works to counter race- and gender-based hate online, said in an emailed statement to The Verge today.

Meta’s announcement today specifically says that it’s “getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.”

Scientists and environmental groups are wary of the changes at Meta, too. “Mark Zuckerberg’s decision to abandon efforts to check facts and correct misinformation and disinformation means that anti-scientific content will continue to proliferate on Meta platforms,” Kate Cell, senior climate campaign manager at the Union of Concerned Scientists, said in an emailed statement.

“I think this is a terrible decision … disinformation’s effects on our policies have become more and more obvious,” says Michael Khoo, a climate disinformation program director at Friends of the Earth. As an example, he points to attacks on wind power that have affected renewable energy projects.

Khoo also likens the Community Notes approach to the fossil fuel industry’s marketing of recycling as a solution to plastic waste. In reality, recycling has done little to stem the tide of plastic pollution flooding into the environment, since the material is difficult to reprocess and many plastic products are not actually recyclable. The strategy also puts the onus on consumers to deal with a company’s waste. “[Tech] companies need to own the problem of disinformation that their own algorithms are creating,” Khoo tells The Verge.


