How hate speech campaigners found Facebook’s weak spot
The social network’s crisis has been a long time in the making and shows no sign of going away
Mark Zuckerberg was forced to act quickly when advertisers pulled business from Facebook.
Alex Hern
@alexhern
Published on Mon 29 Jun 2020 18.08 BST
It took less than two hours for Facebook to react, and it did so for good reason.
At 5pm on Friday, Unilever, one of the world’s largest advertisers, with a portfolio of products that ranges from Marmite to Vaseline, suddenly announced it was pulling all adverts from Facebook, Instagram and Twitter in the US.
Given the “polarised atmosphere in the US”, the company said, and the significant work left to be done “in the areas of divisiveness and hate speech … continuing to advertise on these platforms at this time would not add value to people and society”.
At 6.47pm, Facebook scrambled.
Mark Zuckerberg, it said, would be “going live on his Facebook page” to discuss the company’s racial justice work. Thirteen minutes after that, the most powerful chief executive in the world appeared on screens.
Humbled, he announced a series of new policies, including a ban on hateful content that targets immigrants, and further restrictions on posts making false claims about voting.
Asad Moghal, a senior digital and content manager at Byfield Consultancy, said Unilever’s action was always going to force Zuckerberg to respond. “When such an international giant decides that inaction is no longer an option to tackle racist and discriminatory language, then the social media businesses need to listen up.
“By taking financial action, a company the size of Unilever can effect change and force the hand of Twitter and Facebook; the business has decided it needs to protect its brand reputation and can no longer be associated with platforms that deliver hate speech and divisive content. But what will really effect change is if this move creates a domino effect and other big-name corporations remove investment from the platforms.”
The swathe of announcements marked the first concessions from Facebook towards the aims of a coalition, Stop Hate for Profit, that was formed in the wake of the killing of George Floyd in May.
But the group’s leaders say the tweaks do not go far enough, and are reiterating their calls for a month-long global advertiser boycott starting on Wednesday.
The real danger for Facebook is if other brands decide they can do without the platform too.
This crisis has been a long time in the making – and shows no sign of going away.
Facebook has historically taken a softer line on hate speech than it has on other controversial content, such as that containing nudity, in part out of a belief in the inherent ambiguity of offensive speech, and in part due to the difficulty of automating such work.
Identifying hate speech relies on knowledge of context, custom and culture, which can be hard to teach human moderators, let alone machines.
In recent years, Facebook has made strides in that area. In the third quarter of 2017, according to its community standards report, Facebook found just under a quarter of hate speech by itself; the other three-quarters was removed only after the site’s users manually flagged it to moderators, who then took action.
By this spring, the proportions had reversed: 88% of hate speech removed from the site was found by Facebook’s own tools, allowing it to remove or restrict almost four times as much hate speech as it had two years earlier.
But working against Facebook’s technical expertise was another factor: the US president.
As far back as 2015, according to reporting by the Washington Post, the social network has struggled with how to deal with a man who, first as a candidate and then as president, pushed the limits of what was allowed to be posted.
Instead, Facebook has steadily tweaked its own rules to avoid angering the president: introducing in 2015 an exception for “political discourse” to allow a video calling for a ban on Muslims entering the US to stay up, for instance, or limiting efforts to tackle “false news” out of a fear that doing so would disproportionately hit right-leaning pages and posters.
During the protests prompted by Floyd’s death, Trump again tested the boundaries, posting on Facebook and Twitter a message that “when the looting starts, the shooting starts”.
Twitter, noting the racist history of the phrase, and interpreting it as a potential call for violence, enforced a policy it had enacted last summer for just such an occurrence: the company restricted the tweet, preventing it from being replied to or liked, and hid it behind a warning declaring that the tweet broke its rules. But it left it up, citing the inherent newsworthiness of a statement by an elected official with millions of followers.
On Facebook, however, the post was untouched. In a post on his personal page, Zuckerberg wrote that he interpreted the statement not as incitement to violence but as “a warning about state action”. “Unlike Twitter,” he wrote, “we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician.”
The decision became a flashpoint for lingering unease about Facebook’s wider problems with tackling hate on its platform – as did Zuckerberg’s decision, a week earlier, to appear on Fox News to defend a different Trump post, on mail-in voting, saying he did not think his company should become the “arbiter of truth”.
Facebook staff began to speak out on social media, holding a virtual walkout to emphasise that “doing nothing is not acceptable”.
The company’s precariously employed moderators joined in, risking their contracted-out jobs to decry the “white exceptionality and further legitimisation of state brutality”.
Even scientists funded by Zuckerberg’s personal charity, the Chan Zuckerberg Initiative, spoke out, calling Trump’s post “a clear statement of inciting violence”.
With some fanfare, Zuckerberg in May appointed an oversight board – a roster of experts that will have the power to overrule Facebook’s moderation decisions.
It includes Helle Thorning-Schmidt, a former prime minister of Denmark; the Nobel peace laureate Tawakkol Karman; and Alan Rusbridger, a former Guardian editor-in-chief.
But the difficulty of setting up a new organisation in the age of Covid-19 meant that the board was unable to take the heat off Zuckerberg.
“Zuckerberg’s strategy of dealing with Trump is an incoherent blend of two leadership approaches,” said Chris Moos, a leadership expert and teaching fellow at Oxford University’s Saïd business school.
Where some attempted to find “practical approaches for dealing with those tensions” they encountered at work, and others appealed “to higher-order principles”, Zuckerberg tried both and succeeded at neither. “On the one hand, he has engaged a wide set of stakeholders into the debate, throwing money at initiatives to build racial justice and voter engagement. On the other, the Facebook CEO has tried to rise above the controversy by making it clear that his company will be erring on the side of free expression, ‘even when it’s speech we strongly and viscerally disagree with’.”
Zuckerberg can never be removed from his position. While he owns only 14% of the company, the special class of shares he holds means he controls 57% of the shareholder voting rights. But employee pressure can hurt him, professionally and personally: if Facebook no longer seems like a pleasant, enjoyable and rewarding workplace, the company will struggle to hire and retain the highly skilled staff it relies upon to compete in Silicon Valley.
In June, the Stop Hate for Profit campaign found another weak point for the site: advertisers. While Facebook takes some revenue directly from users, for products such as its Portal videophone or its Oculus VR headsets, the vast majority of the company’s $70.7bn (£57.5bn) annual revenue comes from advertising.
On 17 June, Color of Change, along with the NAACP, ADL, Sleeping Giants, Free Press and Common Sense Media, launched a public request for “all advertisers to stand in solidarity with Black Facebook users and send the message to Facebook that they must change their practices by pausing all advertising on Facebook-owned platforms for the month of July 2020”.
Many of those advertisers were already uncomfortable about their spending on Facebook before the latest campaign. As with all programmatic advertising, the site can pose “brand safety” issues when companies find their messages next to extreme or hateful content. At a macro level, meanwhile, marketers are all too aware of the risks of helping consolidate the “duopoly” of Facebook and Google, which between them have secured the majority of the advertising industry’s growth.
But even if the Stop Hate for Profit campaign was pushing at an open door, its success has been surprising. By the end of the first week, Patagonia, North Face and the freelancing platform Upwork had signed on. And Unilever’s decision to pause advertising until November – albeit only within the US, and without directly citing the campaign – opened the floodgates. Over the weekend, it was joined by other megabrands, including Coca-Cola and the alcohol conglomerate Beam Suntory.
“Let’s be honest,” said Moghal, “these tech platforms have generated income and interest from this divisive content; they won’t change their practices until they begin to see a significant cut to their revenue.”
With the boycott officially starting on Wednesday, the campaigners are not easing off the pressure. In fact, success has only driven higher ambitions.
“The next frontier is global pressure,” Jim Steyer, the chief executive of Common Sense Media, told Reuters on Monday. While some, including North Face and Patagonia, have expanded their boycotts globally, others are currently content to withhold spending only in the US. If even that is enough to get Zuckerberg in front of a camera in less than two hours, the campaigners hope the power of worldwide action could motivate lasting change.