Meet the Next Fact-Checker, Debunker and Moderator: You
Meta is joining X and YouTube in shifting moderation to users. Are you ready?
By Stuart A. Thompson and Kate Conger
Stuart Thompson covers the spread of false and misleading information online from New York. Kate Conger covers X from San Francisco.
Jan. 7, 2025
https://www.nytimes.com/2025/01/07/technology/meta-facebook-content-moderation.html
Meta would
like to introduce its next fact-checker — the one who will spot falsehoods, pen
convincing corrections and warn others about misleading content.
It’s you.
Mark
Zuckerberg, Meta’s chief executive, announced Tuesday that he was ending much
of the company’s moderation efforts, like third-party fact-checking and content
restrictions. Instead, he said, the company will turn over fact-checking duties
to everyday users under a model called Community Notes, which was popularized
by X and lets users leave a fact-check or correction on a social media post.
The
announcement signals the end of an era in content moderation and an embrace of
looser guidelines that even Mr. Zuckerberg acknowledged would increase the
amount of false and misleading content on the world’s largest social network.
What is a Community Note?
On X, the notes are appended beneath posts to add missing details or corrections. This one, highlighted in red, appeared on a post by Elon Musk.
“I think
it’s going to be a spectacular failure,” said Alex Mahadevan, the director of a
media literacy program at the Poynter Institute called MediaWise, who has
studied Community Notes on X. “The platform now has no responsibility for
really anything that’s said. They can offload responsibility onto the users
themselves.”
Such a turn
would have been unimaginable after the presidential elections in 2016 or even
2020, when social media companies saw themselves as reluctant warriors on the
front lines of a misinformation war. Widespread falsehoods during the 2016
presidential election triggered public backlash and internal debate at social
media companies over their role in spreading so-called fake news.
The
companies responded by pouring millions into content moderation efforts, paying
third-party fact-checkers, creating complex algorithms to restrict toxic
content and releasing a flurry of warning labels to slow the spread of
falsehoods — moves seen as necessary to restore public trust.
The efforts
worked, to a point — fact-checker labels were effective at reducing belief in
falsehoods, researchers found, though they were less effective on conservative
Americans. But the efforts also made the platforms — and Mr. Zuckerberg in
particular — political targets of President-elect Donald J. Trump and his
allies, who said content moderation was nothing short of censorship.
Now, the
political environment has changed. With Mr. Trump set to take control of the
White House and regulatory bodies that oversee Meta, Mr. Zuckerberg has pivoted
to repairing his relationship with Mr. Trump, dining at Mar-a-Lago, adding a
Trump ally to Meta’s board of directors and donating $1 million to Mr. Trump’s
inauguration fund.
“The recent
elections also feel like a cultural tipping point towards once again
prioritizing speech,” Mr. Zuckerberg said in a video announcing the moderation
changes.
Mr.
Zuckerberg’s bet on using Community Notes to replace professional fact-checkers
was inspired by a similar experiment at X that allowed Elon Musk, its
billionaire owner, to outsource the company’s fact-checking to users.
X now asks
everyday users to spot falsehoods and write corrections or add extra
information to social media posts. The exact details of Meta’s program are not
known, but on X, the notes are at first visible only to users who register for
the Community Notes program. Once a note receives enough votes deeming it
valuable, it is appended to the social media post for everyone to see.
“A social
media platform’s dream is completely automated moderation that they, one, don’t
have to take responsibility for and, two, don’t have to pay anyone for,” said
Mr. Mahadevan, the director of MediaWise. “So Community Notes is the absolute
dream of these people — they’ve basically tried to engineer a system that would
automate fact-checking.”
Mr. Musk, another Trump ally, was an early champion of Community Notes. He quickly
elevated the program after firing most of the company’s trust and safety team.
Studies have shown that Community Notes works at dispelling some viral falsehoods. The approach
works best for topics on which there is broad consensus, researchers have
found, such as misinformation about Covid vaccines.
In that
case, the notes “emerged as an innovative solution, pushing back with accurate
and credible health information,” said John W. Ayers, the vice chief of
innovation in the division of infectious disease and global public health at
the University of California, San Diego, School of Medicine, who wrote a report
in April on the topic.
But users
with differing political viewpoints have to agree on a fact-check before it is
publicly appended to a post, which means that misleading posts about
politically divisive subjects often go unchecked. MediaWise found that fewer
than 10 percent of Community Notes drafted by users ended up being published on
offending posts. The numbers are even lower for sensitive topics like
immigration and abortion.
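
X has published the code it uses to rank Community Notes, and its core rule matches what researchers describe: a drafted note goes public only when enough contributors rate it helpful, including contributors who usually disagree with one another. The sketch below, in Python, is a simplified illustration of that rule, not X's production algorithm (which infers rater leanings from past rating behavior using matrix factorization rather than taking labels as given); the thresholds, function names and the explicit "viewpoint" labels here are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Rating:
        helpful: bool    # did this contributor rate the note helpful?
        viewpoint: str   # "left" or "right": a stand-in for an inferred leaning

    def note_goes_public(ratings: list[Rating],
                         min_ratings: int = 5,
                         min_helpful_share: float = 0.8) -> bool:
        """Return True if a drafted note should be shown to all users.

        Toy rule: enough ratings, a large helpful majority, and helpful
        raters drawn from both sides of the divide.
        """
        if len(ratings) < min_ratings:
            return False  # new notes stay visible only to enrolled contributors
        helpful = [r for r in ratings if r.helpful]
        if len(helpful) / len(ratings) < min_helpful_share:
            return False
        # The bridging requirement: agreement must cross viewpoints, which is
        # why notes on divisive topics often never reach the public threshold.
        return {"left", "right"} <= {r.viewpoint for r in helpful}

    # Broad consensus (e.g., on vaccine misinformation) clears the bar:
    print(note_goes_public(
        [Rating(True, "left")] * 3 + [Rating(True, "right")] * 2))  # True
    # One-sided agreement on a divisive topic does not:
    print(note_goes_public([Rating(True, "left")] * 5))             # False

The same gate that makes published notes broadly credible is what keeps publication rates low on divisive subjects.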
Researchers
found that the majority of posts on X receive most of their traffic within the
first few hours, but it can take days for a Community Note to be approved so
that everyone can see it.
Since its
debut in 2021, the program has sparked interest from other platforms. YouTube
announced last year that it was starting a pilot project allowing users to
submit notes to appear below misleading videos. The helpfulness of those
fact-checks is still assessed by third-party evaluators, YouTube said in a blog
post.
Meta’s
existing content moderation tools have seemed overwhelmed by the deluge of
falsehoods and misleading content, but the interventions were seen by
researchers as fairly effective. A study published last year in the journal
Nature Human Behaviour showed that warning labels, like those used by Facebook
to caution users about false information, reduced belief in falsehoods by 28
percent and reduced how often the content was shared by 25 percent. Researchers
found that right-wing users were far more distrustful of fact-checks, but that
the interventions were still effective at reducing their belief in false
content.
“All of the
research shows that the more speed bumps, essentially, the more friction there
is on a platform, the less spreading you have of low-quality information,” said
Claire Wardle, an associate professor of communication at Cornell University.
Researchers
believe that community fact-checking is effective when paired with in-house
content moderation efforts. But Meta’s hands-off approach could prove risky.
“The
community-based approach is one piece of the puzzle,” said Valerie
Wirtschafter, a fellow at the Brookings Institution who has studied Community
Notes. “But it can’t be the only thing, and it certainly can’t be just rolled
out as like an untailored, whole-cloth solution.”
Stuart A. Thompson writes about how false and misleading information spreads online and how it affects people around the world. He focuses on misinformation, disinformation and other misleading content.
Kate Conger is a technology reporter based in San Francisco. She can be reached at kate.conger@nytimes.com.