Deplatforming works, but it’s not enough to fix Facebook and Twitter

The research suggests that banning certain users on Twitter and Reddit does help cut down hateful content – but it raises other concerns
Getty Images / WIRED

It took cheering on an insurrection, but the man who once labelled himself “the Ernest Hemingway of 140 characters” has vanished from every major platform. Even Donald Trump’s beloved Twitter, where he held one of the world’s ten most popular accounts, now lies dormant.

His followers have received similar treatment. On Reddit, the subreddit r/DonaldTrump has been banned, following last year’s removal of r/The_Donald. Twitter claims to have taken down more than 70,000 QAnon accounts. Facebook has begun targeting Stop the Steal content, 69 days after Joe Biden fairly and comprehensively won the US presidential election. Parler, the “free speech network”, was pulled offline by hosting provider Amazon Web Services, which said the site had failed to curb a rise in violent content.

As more platforms take action against extremist content, some have questioned whether such bans actually have the intended effect. On this point, the research is clear: if the goal is to detoxify a platform, bans work. A study of several particularly virulent subreddits banned in 2015 found that “more accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage – by at least 80 per cent.”

A more recent investigation into the consequences of last year’s bans of r/The_Donald and r/Incels found corresponding results: “moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers.”

That mirrors research into other extremist groups, including Isis. Amarnath Amarasingam, a senior research fellow at the Institute for Strategic Dialogue, collated multiple studies showing that suspensions did have an impact on replies, retweets and overall dissemination. Isis was removed from mainstream platforms like Facebook and Twitter in late 2015, he explains, which forced the group onto Telegram, where it continued to attract loyalists and plot attacks. “Then, in November 2019, Europol and Telegram collaborated on a sustained campaign, and it was hugely effective,” he says. “Several disseminators were arrested in real life as well, and the network suffered a major blow.”

The bottom line is that deplatforming reduces reach. It destroys an online community’s network, curbing its ability to gain new followers and victimise groups it dislikes. Tommy Robinson, Milo Yiannopoulos and Alex Jones have all seen their supporter bases plummet after their respective bans; their revenues have declined accordingly. “I think the evidence is pretty clear that that’s a really positive step,” says Joe Mulhall, a senior researcher at Hope not Hate, an anti-extremism advocacy group.

One concern is that bans may radicalise a group’s base, transferring the problem elsewhere – to unmoderated platforms like Telegram or Discord, or to bespoke platforms like Gab, set up specifically for far right discussion. In the same study of r/The_Donald, users who migrated to a new dedicated website “showed increases in signals associated with toxicity and radicalisation, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community.”

It’s easy to overstate the volume of these migrations, however, not least because users can hold multiple accounts simultaneously, deploying them for different purposes. “It’s also important to note that we always see this kind of pattern of coverage when there’s a purge,” says Richard Rogers, a professor of media studies at the University of Amsterdam. “There are a bunch of stories about how it’s benefiting Gab or benefiting Parler or benefiting all the alternatives, but we should keep in mind that Voat, the Reddit alternative, has gone offline as of December 2020. So not all is well in alt-tech land.”

It’s also important to emphasise that bans on mainstream platforms aren’t the end of the story. “No one serious is arguing that if we remove far-right people or hate figures from mainstream platforms like Facebook and Twitter then the problem just disappears,” says Mulhall. “It simply limits their effects, and it moves them into parts of the internet, where they’re going to continue to be active, but it retards their ability to cause harm.”

So, deplatforming works – but there are other concerns. Is it legitimate? “Of course, open calls for violence and hate speech should be restricted. But the definitional boundaries are not always clear,” says Amarasingam. “As I’ve often said, an Isis beheading video is an easy choice. It’s harder when we are talking about criticisms of immigration policy or anti-refugee sentiment, or ‘build the wall’ rhetoric.”

Critics argue that their free speech is being infringed – but these are private companies. If I don’t like what you say in my bar, I can kick you out. There’s an important difference here, though: no bar dominates the world’s communication infrastructure.

In general, free speech arguments rely on a vague understanding of what deplatforming constitutes. The issue here is not just speech, but reach – access to a megaphone. Social media, explains Mulhall, lets extremists reach audiences of a size incomprehensible for most of the post-war period. “If you traditionally wanted to hear and see what someone from the far right in the UK thought, like the National Front or the British National Party, you had to go along to a meeting, or write specifically to the far right and receive a booklet,” says Mulhall. Removing access to platforms brings us closer to the high bar of the post-war period – extremists still have a right to speak within the bounds of legality, just to smaller audiences.

Equally, you could argue that detoxifying a platform actually expands freedom of speech, by removing people who ostracise and harass other people on these platforms. “By removing people that engage in that sort of behaviour from those platforms, we actually have an opportunity to create an online space, which is more pluralistic, that has more voices, especially from minority and marginalised communities, because we’ve removed the far right from them,” says Mulhall.

Perhaps the most troubling aspect of the bans is how they’ve been decided, and by whom. It’s legitimate to ask why private companies, driven chiefly by PR concerns, govern the digital realm’s supposedly public sphere, pulling down world leaders and destroying competitor social media platforms. The timing of the takedowns seems typically reactive. “If Joe Biden didn’t win the presidency, I don’t think we’d be seeing the same kind of reaction to Trump’s tweets and to far right content online,” says Bharath Ganesh, a political geographer at the University of Groningen. “I think we need to be pretty cynical about the way that these tech companies are operating here.”

In the specific instance of Trump’s ban, it derived from the clear and present danger of continued incitements to violence, says Renee DiResta, head of policy for Data for Democracy. This was not Silicon Valley targeting a conservative. “The president did not lose his account because of viewpoint-based censorship, where the platforms decided that they didn’t like the speech of a conservative president of the United States; the president lost his account because of incitement to violence, which is a violation of platform policy,” she says.

The community takedowns are slightly different. Parler was repeatedly warned to remove violent content and didn’t (the fact that many of its users GPS-tagged themselves storming the Capitol was the final straw). The takedown of 70,000 QAnon accounts, on the other hand, represents social media trying to clean up a mess it created – QAnon would not exist without it. “Not only did the platforms allow QAnon to remain on the platform, they in fact, through the recommendation engines, referred people into QAnon communities and pushed QAnon as a suggestion,” says DiResta. “And so, for years, communities were not only given safe harbour on the platforms, they were, in fact, actually inadvertently promoted by the platforms. Now they are trying to figure out what to do about that after two years of allowing them to grow.”

Deplatforming isn’t a silver bullet – deleting X number of accounts after each atrocity cannot be the only policy. It’s an imperfect, short term option, one among a spectrum of interventions that need to be brought to bear on social media.

“I don’t see many of these platforms throwing out their business model any time soon,” says Amarasingam. “So, in the meantime, deplatforming could serve as a solution. I think the broader issue with radicalisation, of course, is also what happens offline.”

Will Bedingfield is a culture writer at WIRED. He tweets from @WillBedingfield

This article was originally published by WIRED UK