
Facebook waited too long to stop 10bn pageviews of repeat misinformation spreaders


Facebook could have prevented more than 10bn pageviews of prominent misinformation-spreading accounts in the US if it had acted sooner in the run-up to the 2020 presidential election, a new report has claimed.

The social media giant took a number of eleventh-hour steps to combat misinformation ahead of November’s highly polarised election, such as demoting some misinformation superspreaders and blocking new political advertisements.

However, according to the US-based non-profit activism group Avaaz, if the platform had tweaked its algorithms and moderation policies in March last year, instead of waiting until October, it would have prevented an estimated 10.1bn additional pageviews on the 100 top-performing pages that Avaaz classified as repeat spreaders of misinformation.

The list comprised pages that Avaaz had identified as sharing at least three misinformation claims that were fact-checked between October 2019 and October 2020, with at least two of the posts falling within 90 days of each other.
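To make that classification rule concrete, the sketch below expresses it as a simple check in Python. This is a hypothetical illustration, not Avaaz's actual methodology or tooling; the function name, date window and input format are assumptions for the purpose of the example.

```python
from datetime import date, timedelta

# Hypothetical sketch of the report's stated rule: a page counts as a
# "repeat spreader" if it shared at least three fact-checked
# misinformation claims between October 2019 and October 2020, with at
# least two of those posts falling within 90 days of each other.
# (Assumed date window boundaries; the report gives only the months.)
WINDOW_START = date(2019, 10, 1)
WINDOW_END = date(2020, 10, 31)


def is_repeat_spreader(fact_checked_post_dates: list[date]) -> bool:
    """Return True if a page meets the repeat-spreader criteria.

    `fact_checked_post_dates` holds the dates on which the page shared
    posts that were later fact-checked as misinformation.
    """
    in_window = sorted(
        d for d in fact_checked_post_dates if WINDOW_START <= d <= WINDOW_END
    )
    if len(in_window) < 3:
        return False
    # At least two posts must fall within 90 days of each other; in a
    # sorted list, checking consecutive pairs is sufficient, since the
    # smallest gap is always between neighbouring dates.
    return any(
        later - earlier <= timedelta(days=90)
        for earlier, later in zip(in_window, in_window[1:])
    )


# Example: three fact-checked posts, two of them 30 days apart -> True.
print(is_repeat_spreader([date(2019, 11, 5), date(2020, 3, 1), date(2020, 3, 31)]))
```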

The report said that Facebook’s delay in acting had been critical because it allowed prolific spreaders of misinformation to increase their online footprint dramatically, with some tripling their engagement over the course of the election campaign and even catching up with mainstream US media pages.

It added that even after Facebook acted to block top-performing misinformation pages from October 10, the effect was inconsistent: interactions declined by an average of 28 per cent, but not all major spreaders were affected.

Facebook also failed to add warning labels consistently to misleading articles that had been fact-checked by its third-party partners, Avaaz data showed, while allowing top super PACs to target swing-state voters with false and misleading advertising, which earned more than 10m impressions.

The damning findings come just days before chief executive Mark Zuckerberg is due to testify before Congress on misinformation alongside Twitter chief executive Jack Dorsey. Both are likely to face questions over the measures they have taken to tackle misinformation since it emerged that Russia had deliberately spread misleading narratives ahead of the 2016 US election.

The Silicon Valley executives are also likely to be questioned over whether content on their platforms contributed to the storming of the Capitol in Washington DC on January 6, in which five people were killed. To date, Facebook has sought to downplay its role in the insurrection, with chief operating officer Sheryl Sandberg saying in an interview that it was primarily organised on platforms with less stringent policies.

But Avaaz said: “The violence on Capitol Hill showed us that what happens on Facebook does not stay on Facebook. Viral conspiratorial narratives cost American lives and almost set American democracy aflame.”

The report also suggested that even where Facebook had measures in place against hate speech, their application was inconsistent. Since June 2020, researchers identified 267 pages and groups spreading “violence-glorifying content” to a combined 32m followers, in clear breach of Facebook’s rules. As of February 2021, 118 of them remained on the platform, reaching 27m followers.

Facebook is understood to have reviewed the remaining pages and groups and found that 18 violated its policies; four of those had already been removed, and the other 14 were subsequently taken down.

Avaaz also estimated that 91m users saw content on Facebook promoting mail-in voter fraud claims.

“Allowing the spread of voter fraud content is disturbing and surprising, because Facebook has been fairly stringent about removing this in the past,” said Samantha North, co-founder of North Cyber Research. “This does not bode well for future democratic processes, and it needs tackling before the next US election.”

“This report distorts the serious work we’ve been doing to fight violent extremism and misinformation on our platform,” Facebook said. “Avaaz uses a flawed methodology to make people think that just because a page shares a piece of fact-checked content, all the content on that page is problematic.”

It also pointed to actions such as the removal of tens of thousands of QAnon pages, groups and accounts, as well as millions of pieces of content that violated its policies on coronavirus and vaccine misinformation.
