Tech companies should act to stop online misinformation

2021

INTRODUCTION

On 8 January 2021, Twitter permanently suspended the account of the former president, Donald J Trump. Although this final decision was provoked by Trump’s tweets in the wake of the riot at the Capitol building two days earlier, he had already been in trouble with social media companies over a variety of messages, particularly those loudly claiming unproven electoral fraud in the presidential election. (Ref: Washington Post)

After Russia invaded Ukraine in February 2022, Meta (the parent company of Facebook) removed the Russian state-backed broadcaster RT from its platforms. (Ref: Independent) Posts from RT.com on Twitter, an account already labelled with the tag ‘Russia state-affiliated media’, were removed and replaced with an ‘Account withheld’ notice. (Ref: Twitter)

These are among the most high-profile examples of social-media companies’ escalating involvement in deciding what content can be shared on their platforms, and they have sparked a debate about what should be done to combat misinformation on the web. Twitter currently has 1,500 employees dedicated to content review, but many are calling for tech companies to act even more stringently and crack down harder on misinformation, while others argue that to do so is an infringement of people’s right to freedom of expression.

This has led to fears of a ‘Reality Crisis’ (Ref: New York Times), the sense that we can no longer tell fact from fiction as a society. This panic follows on from concerns about fake news and whether it influenced the 2016 US Presidential Election (Ref: Buzzfeed News) and the Brexit vote in the UK (Ref: The Independent). In addition, scandals like the one concerning Cambridge Analytica (Ref: The Guardian) have heightened fears that ordinary voters are susceptible to influence by dark forces.

But others worry about free speech and the new power of tech companies. Facebook, Twitter, Google and Amazon Web Services (the world’s biggest host of websites) together hold a near-oligopoly over what can be seen and shared online. Are these still simply private companies, or does their market strength suggest they are, in fact, more like public utilities, whose power needs to be considered in a different way?

IN CONTEXT

With over 5,000 mainstream news articles published daily in the USA alone (Ref: ResearchGate) and around 4.5 million blog posts per day internationally (Ref: TechJury), information is more accessible than ever. This is seen by some as an ‘overabundance of information, both factual and false’ (Ref: BMJ). Because of this, some argue it is harder than ever to distinguish between accurately presented fact and misinformation. As Ruth Marcus puts it: ‘With facts passé, the next inexorable move is to reduce all news to the same level of distrust and disbelief. If nothing is true, then everything can be false.’ (Ref: The Washington Post). To some, this requires the intervention of tech companies to help us navigate an ever more chaotic, ‘post-truth’ (Ref: New Statesman) media landscape or, as former President Barack Obama puts it, the ‘dust cloud of nonsense’ (Ref: Business Insider).

Out of this new world of social media comes the power that social media companies now inevitably hold. Given that they effectively control much of the supply of information, tech companies are said to have turned themselves into a power rivalling governments (Ref: The Guardian). The possibility of affording such organisations even greater power to influence the flow of information worries many.

Perhaps, however, social media has less influence than many assume. Documentarian Adam Curtis contends that we get ‘lost in the hysteria about it all’ and that while we search for the dystopian effects of social media, what we fail to investigate is the question of why ‘people really vote for Brexit and Trump’ (Ref: Idler).

The difficulty in drawing a line

One issue some people have with tech companies’ attempts to stop online misinformation is the potential for political bias to affect what is or is not deemed unsuitable. There is no simple line between genuine comment and supposed misinformation. Instead, there is a range of content that often gets branded as misinformation, from innocent parody through to genuinely manipulative content (Ref: Visual Capitalist). If no simple distinction can be drawn, many worry that political bias will creep in.

However, defenders of intervention point out that both Facebook and Google use independent, third-party fact-checkers to avoid bias (Ref: Facebook). Twitter is going further, keeping its programme in-house but interviewing people from across the political spectrum to build a community-led fact-checking system (Ref: The Verge). But can such ‘experts’ ever really be truly impartial?

In reality, instances of censorship of legitimate articles have plagued social media. For example, an article from the news website UnHerd that was critical of the World Health Organisation was censored by Facebook as misinformation, despite containing none (Ref: UnHerd). Furthermore, Twitter’s attempt to de-politicise its advertising resulted in oil companies being allowed to claim that natural gas was helping to stop global warming, while climate groups were blocked from running advertisements countering those claims (Ref: MSNBC).

Advocates of tech-company intervention argue this is no reason to stop trying to fight misinformation altogether. As time goes on, they argue, the technology will improve and, even if it is never perfect, occasionally blocking the wrong content is a price worth paying to stem the huge levels of misinformation tearing apart our society today (Ref: Mashable).

The right to free speech vs the danger of misinformation

Critics of intervention argue that for tech companies to interfere with the content people post online is to infringe on people’s right to freedom of speech. Social-media platforms like Facebook and Twitter have grown to the point where they are the default forums for public debate. Following on from the writing of JS Mill, it is often argued that this freedom of expression is foundational to democracy and progress (Ref: Alexander Meiklejohn). A faith in the ‘marketplace of ideas’ (Ref: Document Journal) leads opponents of tech-company intervention to believe that truth and goodwill prevail when ideas are left to compete freely in the public realm. Freedom, therefore, is required to allow people to think, explore and argue; restriction hampers social progress.

This is countered by writer George Monbiot, who argues, ‘in a marketplace, you are forbidden from making false claims about your product’ (Ref: The Guardian). Using the context of vaccine misinformation, he goes on to argue that, considering the regulation we place on financial and goods markets, ‘We protect money from lies more carefully than we protect human life.’ For commentators like Monbiot, we need central control of what can and cannot be said to protect us from dangerous lies.

However, advocates of free speech argue that imposing any restriction on speech from above could end up causing greater social damage than allowing misinformation to roam free. Callum Baird warns that censorship is a ‘Pandora’s box’ which could lead us down a road where it very quickly becomes socially acceptable for a ‘small set of powerful groups seeking to protect themselves’ to filter public discourse (Ref: Newsroom). Restrictions on speech carry no inherent guarantee of truth. Baird insists that all restrictions on free speech are slippery slopes: small restrictions lead logically to ever more oppressive and authoritarian ones, with no better chance of arriving at the truth.

A tool for protecting orthodoxy?

In response, however, Emily Bazelon of the New York Times argues that free-speech concerns are a façade for something more sinister. ‘A crude authoritarian censors free speech. A clever one invokes it to play a trick, twisting facts to turn a mob on a subordinated group’ (Ref: New York Times). On this view, those defending misinformation are often reactionaries who want to keep regressive ideas within the realm of public acceptability. Free speech becomes less a marketplace of ideas than a poisoning of the well.

One common argument is that even if one has the right to free speech, this does not mean the right to a platform from which to broadcast one’s thoughts to the world (Ref: New Statesman). Tech companies, it is argued, are merely private organisations and can take whatever decisions they choose to improve the quality of their platforms. However, opponents insist that tech companies act increasingly like oligopolies, able to crush opponents ‘associated with the wrong political ideology’ (Ref: Glenn Greenwald). As one free-speech group puts it: ‘If you are prevented from speaking on the digital public square by a corporation, is it materially different from being prevented from speaking in the physical public square by government?’ (Ref: Free Speech Champions) Treating Facebook or Twitter simply as private companies allows them to act ‘as enforcers of new rules for the public square’, which means we ‘are forfeiting constitutional protections and major aspects of self-governance’ (Ref: The Atlantic).

Who decides where the boundaries lie?

Should the government decide what constitutes misinformation and compel tech companies into action? There are plenty of cases in which the government could be argued to have spread misinformation itself, for example the so-called ‘Dodgy Dossier’ (Ref: The Guardian) produced by the UK government in 2002 to make the case for the invasion of Iraq in 2003.

Should tech companies continue to intervene at their own discretion? Even Twitter’s CEO admits that it sets a ‘dangerous precedent’ (Ref: The Guardian) when tech companies try to control the content on their platforms, and that in doing so they may push themselves further from the ‘healthy conversation’ they pursue.

To work this out, we need to answer several questions. At what point does something become misinformation? How, and at what point, does misinformation cause harm? And who, if anyone, has the right to draw that line and take measures against it?

ESSENTIAL READING

It is crucial for debaters to have read the articles in this section, which provide essential information and arguments for and against the debate motion. Students will be expected to have additional evidence and examples derived from independent research, but they can expect to be criticised if they lack a basic familiarity with the issues raised in the essential reading.

FOR

In 2020, Disinformation Broke The US
Jane Lytvynenko Buzzfeed News 6 December 2020

Covid lies cost lives – we have a duty to clamp down on them
George Monbiot The Guardian 27 January 2021

How the Biden Administration Can Help Solve Our Reality Crisis
Kevin Roose The New York Times 2 February 2021

The Problem of Free Speech in an Age of Disinformation
Emily Bazelon The New York Times 13 October 2020

AGAINST

The year Big Tech became the Ministry of Truth
Fraser Myers Spiked 28 December 2020

Facebook’s incompetent censorship
Douglas Murray UnHerd 12 February 2021

Online extremism: Censorship isn’t the solution
Callum Baird Newsroom 23 February 2021

How Silicon Valley, in a Show of Monopolistic Force, Destroyed Parler
Glenn Greenwald Substack 12 January 2021

IN DEPTH

What does Free Speech Mean?
Free Speech Champions

Should spreading anti-vaccine misinformation be criminalised?
Melinda C Mills & Jonas Sivelä The BMJ 17 February 2021

Adam Curtis: Social media is a scam
Tom Hodgkinson Idler 3 February 2021

Twitter’s Trump ban is more important than you thought
Paul Waldman The Washington Post 18 January 2021

We’re better without Trump on Twitter. And worse off with Twitter in charge.
Zephyr Teachout The Washington Post 14 January 2021

The President “Shouts Fire” and Two Social Media CEOs Respond: What Happens Next?
Denali Sagner Issue Voter 28 July 2020

Facebook Restricts Speech by Popular Demand
Daphne Keller The Atlantic 22 September 2019

Free Speech and Its Relation to Self-Government
Alexander Meiklejohn 1948

BACKGROUNDERS

Useful websites and materials that provide a good starting point for research.

Fact checking on Facebook
Facebook (accessed 1 April 2021)

How Many Blogs Are Published per Day in 2021?
Bobby Chernev TechJury 29 March 2021

The AI-Powered Fact Checker That Investigates QAnon Influencers Shares Its Secret Weapon
Matt Binder Mashable India February 2021

How To Spot Fake News, Visualized in One Infographic
Omri Wallach Visual Capitalist 10 February 2021

Twitter launches Birdwatch, a fact-checking program intended to fight misinformation
Kim Lyons The Verge 25 January 2021

Bias-aware news analysis using matrix-based news aggregation
Felix Hamborg, Norman Meuschke & Bela Gipp ResearchGate June 2020

With fact-checks, Twitter takes on a new kind of task
Elizabeth Culliford & Katie Paul Reuters 30 May 2020

Has the internet broken the marketplace of ideas? Rethinking free speech in the Digital Age
Cody Delistraty Document Journal 5 November 2018

Fake news handed Brexiteers the referendum – and now they have no idea what they’re doing
Andrew Grice The Independent 18 January 2017

Why in the post truth age, the bullshitters are winning
Laurie Penny New Statesman 6 January 2017

When all news is ‘fake,’ whom do we trust?
Ruth Marcus The Washington Post 12 December 2016

This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook
Craig Silverman Buzzfeed News 16 November 2016

Obama: Fake News on Facebook Creates ‘Dust Cloud of Nonsense’
Alex Heath Business Insider 7 November 2016

Iraq dossier drawn up to make case for war
Richard Norton-Taylor The Guardian 12 May 2011

IN THE NEWS

Permanent suspension of @realDonaldTrump
Twitter 8 January 2021

Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach
Carole Cadwalladr & Emma Graham-Harrison The Guardian 17 March 2018

Fake News – Statistics & Facts
Amy Watson Statista 5 May 2020

What Facebook’s Australia news ban could mean for its future in the US
Kari Paul The Guardian 27 February 2021

Twitter thinks ads about climate change are bad. Big Oil’s disinformation is fine, though.
Emily Atkin MSNBC 4 February 2021

Charities condemn Facebook for ‘attack on democracy’ in Australia
Emma Graham-Harrison The Guardian 20 February 2021

AUDIO

Tech Censorship And Independent Media, with Glenn Greenwald and the CEOs of Parler and Substack
The Megyn Kelly Show 13 January 2021