Silicon Valley is losing the battle against election misinformation
 

More groups are pushing false information into voters’ social media feeds in the run-up to November, and the deceptions are savvier than in 2016. It may be too late to fix.

 

By MARK SCOTT and STEVEN OVERLY

08/04/2020 04:30 AM EDT

Updated: 08/04/2020 01:31 PM EDT

https://www.politico.com/news/2020/08/04/silicon-valley-election-misinformation-383092

 

Videos peddling false claims about voter fraud and Covid-19 cures draw millions of views on YouTube. Partisan activist groups pretending to be online news sites set up shop on Facebook. Foreign trolls masquerade as U.S. activists on Instagram to sow divisions around the Black Lives Matter protests.

 

Four years after an election in which Russia and some far-right groups unleashed a wave of false, misleading and divisive online messages, Silicon Valley is losing the battle to eliminate online misinformation that could sway the vote in November.

 

Social media companies are struggling with an onslaught of deceptive and divisive messaging from political parties, foreign governments and hate groups as the months tick down to this year’s presidential election, according to more than two dozen national security policymakers, misinformation experts, hate speech researchers, fact-checking groups and tech executives, as well as a review of thousands of social media posts by POLITICO.

 

The tactics, many aimed at deepening divisions among Americans already traumatized by a deadly pandemic and record job losses, echo the Russian government’s years-long efforts to stoke confusion before the 2016 U.S. presidential election, according to experts who study the spread of harmful content. But the attacks this time around are far more insidious and sophisticated — with harder-to-detect fakes, more countries pushing covert agendas and a flood of American groups copying their methods.

 


And some of the deceptive messages have been amplified by mainstream news outlets and major U.S. political figures — including President Donald Trump. In one instance from last week, he used his large social media following to say, without evidence, that mail-in votes would create “the most inaccurate and fraudulent election in history.”

 

Silicon Valley’s efforts to contain the new forms of fakery have so far fallen short, researchers and some lawmakers say. And the challenges are only increasing.

 

“November is going to be like the Super Bowl of misinformation tactics,” said Graham Brookie, director of the D.C.-based Atlantic Council’s Digital Forensics Lab, which tracks online falsehoods. “You name it, the U.S. election is going to have it.”

 

Anger at the social media giants’ inability to win the game of Whac-A-Mole against false information was a recurring theme at last week’s congressional hearing with big tech CEOs, where Facebook boss Mark Zuckerberg attempted to bat down complaints that his company is profiting from disinformation about the coronavirus pandemic. A prime example, House antitrust chair David Cicilline (D-R.I.) said, was the five hours it took for Facebook to remove a Breitbart video falsely calling hydroxychloroquine a cure for Covid-19.

 

The post was viewed 20 million times and received more than 100,000 comments before it was taken down, Cicilline noted.

 

“Doesn’t that suggest, Mr. Zuckerberg, that your platform is so big that even with the right policies in place, you can’t contain deadly content?” Cicilline asked.

 

The companies deny accusations that they have failed to tackle misinformation, highlighting their efforts to take down and prevent false content, including posts about Covid-19 — a public health crisis that has become political.

 

Since the 2016 election, Facebook, Twitter and Google have collectively spent tens of millions of dollars on new technology and personnel to track online falsehoods and stop them from spreading. They’ve issued policies against political ads that masquerade as regular content, updated internal rules on hate speech and removed millions of extremist and false posts so far this year. In July, Twitter banned thousands of accounts linked to the fringe QAnon conspiracy theory in the most sweeping action yet to stem its spread.

 

Google announced yet another effort Friday, saying that starting Sept. 1 it will penalize websites that distribute hacked materials and advertisers who take part in coordinated misinformation campaigns. Had those policies been in place in 2016, advertisers wouldn’t have been able to post screenshots of the stolen emails that Russian hackers had swiped from Hillary Clinton’s campaign.

 

But despite being some of the world’s wealthiest companies, the internet giants still cannot monitor everything that is posted on their global networks. The companies also disagree on the scope of the problem and how to fix it, giving the peddlers of misinformation an opportunity to poke for weaknesses in each platform’s safeguards.

 

[Image caption: All images are from Instagram (September 2019). The posts and identified accounts were later taken down by the company for links to the Internet Research Agency. The identities of non-IRA parties, including domestic political groups’ logos, the faces of ordinary citizens, and comments by non-IRA users, are redacted.]

 

National flashpoints like the Covid-19 health crisis and Black Lives Matter movement have also given the disinformation artists more targets for sowing divisions.

 

The difficulties are substantial: foreign interference campaigns have evolved, domestic groups are copycatting those techniques and political campaigns have adapted their strategies.

 

At the same time, social media companies are being squeezed by partisan scrutiny in Washington that makes their judgment calls about what to leave up or remove even more politically fraught: Trump and other Republicans accuse the companies of systematically censoring conservatives, while Democrats lambast them for allowing too many falsehoods to circulate.

 

Researchers say it’s impossible to know how comprehensive the companies have been in removing bogus content because the platforms often put conditions on access to their data. Academics have had to sign non-disclosure agreements promising not to criticize the companies to gain access to that information, according to people who signed the documents and others who refused to do so.

 

Experts and policymakers warn the tactics will likely become even more advanced over the next few months, including the possible use of so-called deepfakes, false videos created through artificial intelligence that yield realistic-looking footage undermining the opposing side.

 

“As more data is accumulated, people are going to get better at manipulating communication to voters,” said Robby Mook, campaign manager for Hillary Clinton’s 2016 presidential bid and now a fellow at the Harvard Kennedy School.

 

Foreign interference campaigns evolve

Researcher Young Mie Kim was scrolling through Instagram in September when she came across a strangely familiar pattern of partisan posts across dozens of social media accounts.

 

Kim, a professor at the University of Wisconsin-Madison specializing in political communication on social media, noticed a number of the seemingly unrelated accounts using tactics favored by the Russia-linked Internet Research Agency, a group that U.S. national security agencies say carried out a multiyear misinformation effort aimed at disrupting the 2016 election — in part by stoking existing partisan hatred.

 

The new accounts, for example, pretended to be local activists or politicians and targeted their highly partisan messages at battleground states. One account, called “iowa.patriot,” attacked Elizabeth Warren. Another, “bernie.2020_,” accused Trump supporters of treason.

 

“It stood out immediately,” said Kim, who tracks covert Russian social media activity targeted at the U.S. “It was very prevalent.” Despite Facebook’s efforts, it appeared the IRA was still active on the platform. Her hunch was later confirmed by Graphika, a social media analytics firm that provides independent analysis for Facebook.

 

The social networking giant has taken action on at least some of these covert campaigns. A few weeks after Kim found the posts, Facebook removed 50 IRA-run Instagram accounts with a total of nearly 250,000 online followers — including many of those she had spotted, according to Graphika.

 

“We’re seeing a ramp up in enforcement,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told POLITICO, noting that the company removed about 50 networks of falsified accounts last year, compared with just one in 2017.

 

Since October, Facebook, Twitter and YouTube have removed at least 10 campaigns promoting false information involving accounts linked to authoritarian countries like Russia, Iran and China that had targeted people in the U.S., Europe and elsewhere, according to company statements.

 

But Kim said that Russia’s tactics in the U.S. are evolving more quickly than social media sites can identify and take down accounts. Facebook alone has 2.6 billion users — a gigantic universe for bad actors to hide in.

 


 

In 2016, the IRA’s tactics were often unsophisticated, like buying Facebook ads in Russian rubles or producing crude, easily identifiable fakes of campaign logos.

 

This time, Kim said, the group’s accounts are operating at a higher level: they have become better at impersonating both candidates and parties; they’ve moved from creating fake advocacy groups to impersonating actual organizations; and they’re using more seemingly nonpolitical and commercial accounts to broaden their appeal online without raising red flags to the platforms.

 

The Kremlin has already honed these new approaches abroad. In a spate of European votes — most notably last year’s European Parliament election and the 2017 Catalan independence referendum — Russian groups tried out new disinformation tactics that are now being deployed ahead of November, according to three policymakers from the EU and NATO who were involved in those analyses.

 

Kim said one likely reason for foreign governments to impersonate legitimate U.S. groups is that the social media companies are reluctant to police domestic political activism. While foreign interference in elections is illegal under U.S. law, the companies are on shakier ground if they take down posts or accounts put up by Americans.

 

Facebook’s Gleicher said his team of misinformation experts has been cautious about moving against U.S. accounts that post about the upcoming election because they do not want to limit users’ freedom of expression. When Facebook has taken down accounts, he said, it was because they misrepresented themselves, not because of what they posted.

 

Still, most forms of online political speech face only limited restrictions on the networks, according to the POLITICO review of posts. In invite-only groups on Facebook, YouTube channels with hundreds of thousands of views, and Twitter messages that have been shared by tens of thousands of people, partisan — often outright false — messages are shared widely by those interested in the outcome of November’s vote.

 

Russia has also become more brazen in how it uses state-backed media outlets — as has China, whose presence on Western social media has skyrocketed since last year’s Hong Kong protests. Both Russia’s RT and China’s CGTN television operations have made use of their large social media followings to spread false information and divisive messages.

 

Moscow- and Beijing-backed media have piggybacked on hashtags related to the Covid-19 pandemic and recent Black Lives Matter protests to flood Facebook, Twitter and YouTube with content stoking racial and political divisions.

 

Facebook began adding labels to posts created by some state-backed media outlets in June to let users know who is behind the content, though it does not add similar disclaimers when users themselves post links to the same state-backed content.

 

China has been particularly aggressive, with high-profile officials and ambassadorial accounts promoting conspiracy theories, mostly on Twitter, claiming the U.S. created the coronavirus as a secret bioweapon.

 

Twitter eventually placed fact-checking disclaimers on several posts by Lijian Zhao, a spokesperson for the Chinese foreign ministry with more than 725,000 followers, who pushed that falsehood. But by then, the tweets had been shared thousands of times as the outbreak surged this spring.

 

“Russia is doing right now what Russia always does,” said Bret Schafer, a media and digital disinformation fellow at the German Marshall Fund of the United States’ Alliance for Securing Democracy, a Washington think tank. “But it’s the first time we’ve seen China fully engaged in a narrative battle that doesn’t directly affect Chinese interests.”

 

Other countries, including Iran and Saudi Arabia, have similarly upped their misinformation activity aimed at the U.S. over the last six months, according to two national security policymakers and a misinformation analyst, all of whom spoke on the condition of anonymity because of the sensitivity of their work.

 

Domestic extremist groups play copycat

U.S. groups have watched the foreign actors succeed in peddling falsehoods online, and followed suit.

 

Misinformation experts say that since 2016, far-right and white supremacist activists have begun to mimic the Kremlin’s strategies as they stoke division and push political messages to millions of social media users.

 

“By volume and engagement, domestic misinformation is the more widespread phenomenon. It's not close,” said Emerson Brooking, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab.

 

Early this year, for instance, posts from “Western News Today” — a Facebook page portraying itself as a media outlet — began sharing links to racist content from VDARE, a website that the Southern Poverty Law Center has designated as promoting anti-immigration hate speech.

 

Other accounts followed within minutes, posting the same racist content and linking to VDARE and other far-right groups across multiple pages — a coordinated action that Graphika said mimicked the tactics of Russia’s IRA.

 

Previously, many of these hate groups had shared posts directly from their own social media accounts but received little, if any, traction. Now, by impersonating others, they could spread their messages beyond their far-right online bubbles, said Chloe Colliver, head of the digital research unit at the Institute for Strategic Dialogue, a London-based think tank that tracks online hate speech.

 

And by pretending to be different online groups with little if any connection to each other, the groups posting VDARE messages appeared to avoid getting flagged as a coordinated campaign, according to Graphika.

 


 

Eventually, Facebook removed the accounts — along with others associated with the QAnon movement, an online conspiracy theory that portrays Trump as doing battle with elite pedophiles and a liberal “deep state.”

 

The company stressed that the takedowns were directed at misrepresentation, not at right-wing ideology. But Colliver said those distinctions have become more difficult to make: The tactics of far-right groups have become increasingly sophisticated, hampering efforts to tell who is driving these online political campaigns.

 

“The biggest fault line is how to label foreign versus domestic, state versus non-state content,” she said.

 

In addition to targeted takedowns, tech companies have adopted broader policies to combat misinformation. Facebook, Twitter and YouTube have banned what they call manipulated media, for instance, to try to curtail deepfakes. They’ve also taken broad swipes at voting-related misinformation by banning content that deceives people about how and when to vote, and by promoting authoritative sources of information on voting.

 

“Elections are different now and so are we,” said Kevin McAlister, a Facebook spokesperson. “We’ve created new products, partnerships, and policies to make sure this election is secure, but we’re in an ongoing race with foreign and domestic actors who evolve their tactics as we shore up our defenses.”

 

“We will continue to collaborate with law enforcement and industry peers to protect the integrity of our elections,” Google said in a statement.

 

Twitter runs trial scenarios to anticipate what misinformation might crop up in future election cycles, the company says, learning from each election since the 2016 U.S. race and tweaking its platform as a result.

 

“It’s always an election year on Twitter — we are a global service and our decisions reflect that,” said Jessica Herrera-Flanigan, vice president of public policy for the Americas.

 

Critics have said those policies are undermined by uneven enforcement. Political leaders get a pass on misleading posts that would be flagged or removed if posted by ordinary users, they argue, though Twitter in particular has become more aggressive in taking action on such posts.

 

Political campaigns learn and adapt

It’s not just online extremists improving their tactics. U.S. political groups also keep finding ways to get around the sites’ efforts to force transparency in political advertising.

 

Following the 2016 vote, the companies created databases of political ads and who paid for them to make it clear when voters were targeted with partisan messaging. Google and Facebook now require political advertisers around the world to prove their identities before purchasing messages. The search giant also stopped the use of so-called microtargeting, or using demographic data on users to pinpoint ads to specific groups. Twitter has gone the furthest — banning nearly all campaign ads late last year.

 

But American political parties have found a way to dodge those policies — by creating partisan news organizations, following Russia’s 2016 playbook.

 

For voters in Michigan, media outlets like “The Gander” and “Grand Rapids Reporter” may at first appear to be grassroots newsrooms filling the void left by years of layoffs and under-investment in local reporting. Both publish daily updates on social media about life in the swing state, blending political reporting — biased toward either Democratic or Republican causes — with stories about local communities.

 

Yet these outlets are part of nationwide operations with ties to Republican or Democratic operatives, according to a review of online posts, Facebook pages and corporate records. Bloomberg and the Columbia Journalism Review first reported on their ties to national political parties.

 

“The Gander” is one of eight online publications that are part of Courier Newsroom, which itself is owned by ACRONYM, a nonprofit organization with links to the Democratic Party that aims to spend $75 million on digital ads to combat Trump during the election. Similarly, “Grand Rapids Reporter” is just one of hundreds of news sites across the country controlled by people with ties to the Republican Party, including Brian Timpone, head of one of the groups behind these partisan outlets.

 

Both groups have focused on promoting partisan stories on Facebook, Instagram and Twitter. Their pages, collectively, have garnered tens of thousands of likes, comments and other interactions, according to CrowdTangle, a Facebook-owned tool that analyzes people’s engagement on social media.

 

But neither group discloses its political affiliations on its Facebook pages, and the social media giant classifies them as “news and media” operations alongside mainstream outlets like POLITICO and The Washington Post. It’s the same classification Facebook used in 2018 for a partisan site started by then-House Intelligence Chair Devin Nunes (R-Calif.), even though it was funded by his campaign.

 

Steven Brill, co-chief executive at NewsGuard, an analytics firm that tracks misinformation, said his team has seen a steady increase in paid-for messages from these partisan-backed news sites in recent months, and expects more to come before November’s election.

 

“They can avoid the rules that Facebook and Twitter have against political advertising because it looks like a wonderful little independent local news operation,” he said. “You can only imagine what’s going to happen between now and November.”

 

And while the social networks’ policies have made political ads more transparent than they were in 2016, many partisan ads still run without disclaimers, often for weeks.

 

On Facebook, more than half the pages that displayed political ads during a 13-month period through June 2019 concealed the identities of their backers, according to research from New York University.

 

On Google, multiple political party ads that violated the company’s guidelines ran for months before they were removed, according to a company transparency report.

 

And on Twitter, which has nominally banned all political ads, groups circumvent the rules by paying for so-called issues-based ads related to party platforms, for example promoting the Second Amendment or abortion rights.

 

Caught in the political vise

And now — with only a few months to go before the vote — social media platforms are also caught up in a content battle between Republicans and Democrats, with pressure coming from campaigns, politicians and the president himself. It’s a level of microscopic attention that was only beginning to bubble up in 2016.

 

For the left, Russia’s unchecked meddling during the last presidential race, which U.S. national security agencies concluded was partly aimed at aiding Trump, soured Democrats’ view of social media. For the right, the companies’ perceived bias against conservatives’ views has spurred Republican demands they avoid moderating any political speech — as well as a recent Trump executive order threatening legal liability for sites that show bias in allowing or removing content.

 

The companies insist political views do not factor into their decisions, and in fact they have asked the federal government in recent years for guidance on what constitutes permissible online speech. The First Amendment largely prevents the government from making such calls, though, and Congress’ efforts to legislate oversight rules for political social media ads have stalled because of a split between Republicans and Democrats on how to handle the issue.

 

That partisan divide may have even become a pawn in the disinformation war. Kim, the University of Wisconsin-Madison researcher, said she found evidence of foreign actors pretending to be U.S. activists in an apparent effort to amp up the divisions between left and right. They put up incendiary posts, for example attacking the feminist movement or linking Trump supporters with Vladimir Putin, to sow anger on both sides.

 

Republicans and Democrats appear to agree only that social media companies are a big part of the problem. How to fix the issue is the subject of a deep partisan divide that was on full display at a House Energy and Commerce subcommittee hearing on disinformation in June.

 

“Social media companies need to step up and protect our civil rights, our human rights and our human lives, not to sit on the sideline as the nation drowns in a sea of disinformation,” said the subcommittee’s chair, Rep. Mike Doyle (D-Pa.). “Make no mistake, the future of our democracy is at stake and the status quo is unacceptable.”

 

Minutes later, the subcommittee’s top Republican, Rep. Bob Latta (R-Ohio), chimed in. “We should make every effort to ensure that companies are using the sword provided by Section 230 to take down offensive and lewd content,” he said, before adding: “But that they keep their power in check when it comes to censoring political speech.”

 

With Washington split on how to handle the problem — and both foreign and domestic groups gearing up for November’s vote — misinformation experts are left wondering how bad, and widespread, the online trickery will be later this year.

 

“I didn’t see a meaningful drop in misinformation between 2016 and 2018,” said Laura Edelson, a researcher at NYU who has tracked the spread of paid-for political messages across social networks during recent electoral cycles. “The next trial will be the 2020 election, and I’m not optimistic.”

