
The Facebook Papers reveal the limits of regulation. It’s time to think bigger

 



 

Lawmakers across the Western world are thinking too narrowly in how they police social media companies.

 

The documents show that lawmakers are still not thinking big enough. | Josh Edelson/AFP via Getty Images

 

BY MARK SCOTT

October 26, 2021 7:23 pm

https://www.politico.eu/article/facebook-papers-reveal-limits-of-regulation-online-content-lawmakers/

 

If the revelations from the so-called Facebook Papers showed us anything, it's that new rules for policing what's posted online are needed. And fast.

 

The bad news: That's not going to happen — at least not in ways that will make a real difference.

 

Even before the inside look into Facebook's handling of online content, first reported by the Wall Street Journal and then by multiple media outlets, including POLITICO, policymakers in the European Union, the United States and elsewhere were putting together proposals to force global platforms to be more accountable for the content people post online, including potentially hefty fines when things inevitably go wrong.

 

But as lawmakers battle it out over the EU's Digital Services Act, the United Kingdom's Online Safety Bill and the United States' stumbling efforts to revamp Section 230 of the Communications Decency Act, none of this legislation tackles the underlying issues laid bare by the revelations about Facebook's oversight of online content.

 

What the internal research, emails and insights show is a complex web of interconnected levers, which — taken together — have created arguably one of the most profitable businesses the world has ever seen, and built an online behemoth that shapes how almost all of us live.

 

As I waded through these internal documents, what struck me was how all-encompassing Facebook is. Engineers routinely tweaked algorithms to prioritize some content over other material. California-based executives regularly made choices that affected people in far-flung locations in almost real time. What the tech giant knows about people's daily digital habits — based on minute-by-minute monitoring of everyone's online interactions — would make George Orwell's "1984" look like a children's bedtime story.

 

The documents show that lawmakers are still not thinking big enough.

 

So far, the legislative proposals focus too narrowly, taking small bites at a much bigger problem. They would force social media companies to open up their data to outside researchers; limit how some political ads can target would-be supporters online; and require companies to make public their plans for tackling existential risks, such as elected political leaders peddling misinformation and hate speech on their platforms.

 

But it's not enough to legislate on parts of what social media companies do. Until lawmakers take a step back and think more broadly about the rules overseeing online content — and its links to other digital priorities like privacy and competition — the abuses outlined in the internal documents are unlikely to go away.

 

For its part, Facebook has defended its oversight of online content, saying it has spent billions of euros, hired tens of thousands of content moderators and changed its algorithms to tamp down hate speech, misinformation and divisive political material and to protect its 2.4 billion users worldwide.

 

What content to police?

So where have things gone wrong? Let's start with arguably the most far-reaching online content rules — the EU's Digital Services Act.

 

These proposals tick a lot of boxes.

 

Under plans expected to become law sometime next year, Facebook and others will be required to conduct regular risk assessments of potentially problematic hotspots on their platforms, and to hire independent auditors to make sure they're not cheating. Some outside groups will have (limited) access to internal social media data to analyze what's going on, while the bloc's regulators will be able to fine firms up to 6 percent of their annual revenue, amounting to billions of euros, if they don't comply with the rules. So far, so good.

 

But where this legislation falls short is its narrow focus on illegal content. Other material, including suspect social media posts that nudge up to the line of illegality but never cross it, remains out of scope.

 

What internal Facebook documents show is that such “harmful non-violating narratives,” to borrow from the company's own language, have repeatedly played a significant role in fomenting division and, in some cases, offline violence.

 

In the days around the January 6 Capitol Hill riots, for instance, the tech giant's engineers didn't take action against posts that questioned the legitimacy of last year's U.S. presidential election, even as such material was being used to promote political attacks in Washington. Another example: content that verged on anti-vaccine misinformation, now banned on the platform, was also not subject to review, even though it fomented online conspiracy theories around COVID-19.

 

European lawmakers said they had to draw the line somewhere, and that expanding the scope of the bloc's proposed content rules to include harmful, but not illegal, material would have proved unwieldy. But this gap in the legislation — one that will allow reams of problematic content to remain online and off-limits to regulators — is a major blind spot in Europe's push to police social media.

 

Political ads, politicians and the media

In London, British lawmakers want to solve that problem by regulating harmful online content, even if it is legal.

 

Under the U.K.'s Online Safety Bill, which is expected to be voted on before the end of the year, social media companies will have a so-called duty of care to protect users from such problematic content — even if it doesn't break the country's existing hate speech laws. Fines of up to 10 percent of annual revenue could be levied for noncompliance, while social media executives could even be sent to prison if the worst-offending material is ignored.

 

But, again, London's plans are flawed too.

 

Negotiations are still underway. But the proposals currently exempt online political ads, as well as posts from politicians and publishers, from any form of content moderation.

 

 

Facebook's internal documents revealed that paid-for partisan messages played a significant role in promoting divisive organic content across the platform. Elected officials and media organizations (some of which were created by political groups to push their own agendas) also had a hand in fomenting distrust — and the problem went well beyond former U.S. President Donald Trump.

 

Frances Haugen, the Facebook whistleblower, said these limits could undermine the U.K.'s upcoming content rules by failing to confront what was really going on within Facebook.

 

"I am extremely concerned about paid-for advertising being excluded because engagement-based ranking impacts ads as much as it impacts organic content," she told U.K. lawmakers on October 25. "It is cheaper, substantially, to run an angry hateful divisive ad than it is to run a compassionate, empathetic ad."

 

Et tu, Washington?

Across the Atlantic, content moderation legislation continues to move at a glacial pace, even as Haugen meets with U.S. lawmakers, who are also wading through Facebook's internal documents in search of answers.

 

New rules aren't likely anytime soon. But even the proposed bills — most of which would make social media companies more liable for what's posted online and force greater transparency over how decisions are made in Silicon Valley — fail to tackle how these firms' algorithms often push harmful, viral content over more staid material. Other bills specifically looking at monitoring such algorithms face little, if any, chance of becoming law.

 


 

Internal Facebook documents from 2018 and 2019 highlighted how the company's automated systems prioritized negative content because it was more likely to go viral — and therefore keep people on the social network. No law likely to pass in the U.S. would deal with this underlying issue.

 

Until Western policymakers start thinking bigger, these fundamental flaws in the existing online content proposals will not be fixed, leaving people largely unprotected from the dangers that the Facebook Papers have made painfully clear.

 

Mark Scott is chief technology correspondent at POLITICO.
