Updated: February 2025
Please note, this Topic Guide should be the starting point of your research. You are encouraged to conduct your own independent research to supplement your argument.
INTRODUCTION
There has long been concern that social-media giants have allowed the spread of ‘misinformation’ and ‘hate speech’ online and that they should do more to combat it. This has led big tech companies like Meta (which owns Facebook and Instagram), X and Alphabet (Google’s parent company) to employ armies of fact-checkers and deploy algorithms to remove misleading or offensive posts. But many worry about the power this gives them, arguing that these companies hold a near-oligopoly over what can be seen and shared online and have the potential to shape the narrative and how we see the world.
However, in a twist to the trend of recent years, it seems that Meta has accepted some of these concerns. In January 2025, Mark Zuckerberg, Meta’s CEO, announced that the company would reduce the use of fact-checkers, who he said were ‘too politically biased’, and would ‘get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse’ [Ref: Guardian]. In place of fact-checkers, Meta will be introducing a system similar to X’s Community Notes, where users can append clarifying or corrective information to a post. On X, this approach has proven both to be trusted across the political spectrum and to have a dampening effect on the spread of misinformation [Ref: BBC].
The capacity of social media to affect events seemed to be illustrated by the public disorder in the UK in the summer of 2024. In July, a lone attacker walked into a children’s dance class in Southport, killing three young girls and injuring ten others. Directly following the attack, rumours circulated online about the killer’s identity and motivation, which then incited protests and riots across the UK. Many of the attacks violently targeted Muslims, mosques, refugees and asylum seekers.
The scale and ferocity of these events led senior politicians and the chief executive of media regulator Ofcom to renew calls for social-media companies to remove violent content and overhaul their algorithms to combat the dangers of misinformation and hate speech [Ref: BBC]. There was also widespread consternation directed towards X owner Elon Musk, who had waded into the row and been overtly critical of the UK government’s handling of the unrest [Ref: Politico].
Musk says he bought Twitter (which he rebranded as X) to protect free speech. But he continued to attract criticism following Donald Trump’s victory in the US presidential election in November 2024. High-profile journalists and publications accused X of ‘Trump boosterism’ [Ref: Politico]. In the UK, the Guardian newspaper announced it would be removing its official editorial accounts from X, stating: ‘The US presidential election campaign served only to underline what we have considered for a long time: that X is a toxic media platform and that its owner, Elon Musk, has been able to use its influence to shape political discourse.’ [Ref: Guardian] The non-governmental organisation (NGO) Reporters Without Borders took X to court in France for ‘letting fake news run wild’ [Ref: Politico].
In addition to this public debate about tech companies, misinformation, hate speech and free speech, the UK’s Online Safety Act, which became law in 2023, takes effect from March 2025. Under this new legislation, online platforms in the UK will be legally required to ‘take proactive measures’ against ‘the most serious and prevalent illegal content and activity’ [Ref: Gov.UK]. For some, this is a welcome step in the battle for online safety in a world where the internet has become ‘the wild west of content’ and is deemed too dangerous for adults, never mind unsupervised children. For others, the legislation represents a worrying threat to free speech. It will give legal weight to terms such as ‘legal but harmful’ (more colloquially known as ‘lawful but awful’) and could result in governments and tech companies silencing criticism and suppressing challenging ideas and topics they deem ‘offensive’. It is, critics assert, ‘an invitation to the powerful to silence the powerless’ [Ref: UnHerd].
Does the proliferation of concerns about misinformation and hate speech represent a threat to free speech, as many fear? Are tech companies simply private companies that should be able to self-regulate their practices? Would government intervention in private business decisions set a dangerous precedent?
Or are tech companies simply not doing enough to root out hate speech and other harmful online content on their platforms – an issue which, many claim, is damaging to democracy? Should government intervene, given that the market strength of these companies means they are, in fact, more like public utilities whose power needs to be considered in a different way?
DEBATE IN CONTEXT
This section provides a summary of the key issues in the debate, set in the context of recent discussions and the competing positions that have been adopted.
With over 5,000 mainstream news articles published daily in the USA alone [Ref: ResearchGate] and around 4.5 million blog posts per day internationally [Ref: TechJury], information is more accessible than ever. This is seen by some as an ‘overabundance of information, both factual and false’ [Ref: BMJ]. Because of this, some argue it is harder than ever to distinguish between accurately presented fact and misinformation. As Ruth Marcus puts it: ‘With facts passé, the next inexorable move is to reduce all news to the same level of distrust and disbelief. If nothing is true, then everything can be false.’ [Ref: Washington Post] To some, this requires the intervention of tech companies to help us navigate an ever more chaotic, ‘post-truth’ [Ref: New Statesman] media landscape or, as former US President Barack Obama once put it, the ‘dust cloud of nonsense’ [Ref: Business Insider].
Born out of this new world of social media is the enormous power that social-media companies now hold. Given that they effectively control much of the supply of information, tech companies are said to be becoming more powerful than nation-states [Ref: Common Dreams]. The possibility of affording such organisations even greater power to influence the flow of information worries many, particularly with the advance of artificial intelligence (AI), which was held by some to be ‘the greatest digital risk to the 2024 elections’ [Ref: LA Times].
Perhaps, however, social media has less influence than many assume. Documentarian Adam Curtis contends that we get ‘lost in the hysteria about it all’ and that, while we focus on identifying the pernicious impacts of social media, we fail to investigate the question of why ‘people really vote for Brexit and Trump’ [Ref: Idler]. For some, social media is curating the way we think; for others, it just reflects it.
Moreover, many commentators point out that, on a variety of issues, it is the mainstream media, not social media, that curates the way we think. There have been numerous incidents of mainstream media spreading inaccurate information – for example, when the BBC, the New York Times and several other outlets reported that the Israeli army had bombed a hospital in Gaza [Ref: The Atlantic]. Does that count as ‘misinformation’?
In addition, some journalists and commentators suggest that there have been many instances of news that is simply not reported in the mainstream. People turn to X and other social-media platforms, they argue, precisely because ‘mainstream’ or ‘legacy’ media can no longer be trusted to report the truth [Ref: Spectator], or because they seem to be engaged in curating a narrative that excludes certain news from being reported [Ref: spiked]. For such commentators, it is the free flow of information on these social platforms that allows the mainstream media to be corrected when they have misled the public.
However, social-media platforms are not immune from the charge of ‘narrative curation’ themselves. In the wake of Donald Trump’s victory in the 2024 US election, Elon Musk has been accused of deliberately manipulating X algorithms in favour of Republican content [Ref: Tech Policy Press]. After Russia invaded Ukraine in February 2022, Meta removed the Russian state-backed broadcaster RT from its platforms [Ref: Independent]. Posts from RT.com on Twitter (now X), which had already been labelled with the tag ‘Russia state-affiliated media’, were removed and replaced with a notice of ‘Account withheld’ [Ref: Twitter].
In some instances, both the mainstream media and social-media platforms have been found to have suppressed news and information in unison. During the coronavirus pandemic, the ‘lab leak theory’ – that the Covid-19 virus originated in the Wuhan Institute of Virology rather than being passed to humans from live animals at a nearby market – was roundly condemned as a ‘conspiracy theory’ [Ref: Vox], with articles in the press, as well as comments and posts on social-media channels, being censored. Yet evidence and opinion have since emerged to suggest that it is in fact a credible theory deserving of proper consideration [Ref: Telegraph].
Concerns over ‘misinformation’, ‘disinformation’, ‘hate speech’ and ‘fake news’ online have prompted a series of laws, from the EU’s Digital Services Act [Ref: European Commission] to the UK’s Online Safety Act [Ref: Gov.UK], as well as a range of investigations, task forces, court rulings, social-media bans and legislation around the world – all deemed to be tackling online misinformation [Ref: Poynter].
In addition, Western governments have begun moving towards regarding social-media platforms as publishers of user-generated content, holding the owners of these platforms responsible for their users’ content. In August 2024, in a world first, Pavel Durov, founder and CEO of the instant-messaging platform Telegram, was arrested at Le Bourget Airport on the outskirts of Paris, France. The charges seemingly related to a lack of moderation on his platform and accusations that Telegram was complicit in cases linked to drug trafficking, support for terrorism and cyberstalking [Ref: Le Monde]. Durov was subsequently charged with 12 crimes in connection with Telegram’s failure to remove certain content at the request of the authorities.
The only comparable case to the Durov arrest came in 2016, when a Brazilian judge ordered the arrest of a senior Facebook executive, who was briefly jailed for refusing to hand over private WhatsApp messages to assist in a drug-trafficking case [Ref: Guardian]. However, he was quickly released, and many commentators branded the decision ‘hurried’, ‘unlawful’ and ‘extreme’ [Ref: spiked].
Who to trust?
These concerns about online debate and information have led to fears that what we are experiencing amounts to a ‘reality crisis’ [Ref: New York Times] – that, as a society, we can no longer tell fact from fiction. There are concerns about who gets to decide what is misinformation and what is not and, in the case of contentious issues, one person’s fact can easily be another’s perspective or politicised spin on the truth. There is no simple line between genuine comment and supposed misinformation. Instead, there is a range of content that often gets branded as misinformation, from innocent parody through to genuinely manipulative content [Ref: Visual Capitalist]. If no simple distinction can be drawn, then many worry that political bias will creep in.
However, defenders of intervention point out that both Facebook and Google have used independent, third-party fact-checkers to avoid bias [Ref: Facebook], although Meta has recently announced that it will end this practice [Ref: Meta]. Despite this, instances of censorship of legitimate articles have plagued social media. For example, an article from the news website UnHerd that was critical of the World Health Organisation was censored by Facebook as misinformation, despite containing none [Ref: UnHerd]. Furthermore, when Twitter (now X) attempted to de-politicise its advertising, the result was that oil companies were allowed to claim natural gas was stopping global warming, while climate groups were stopped from posting advertising that countered these claims [Ref: MSNBC].
Advocates of tech-company intervention argue that this is no reason to stop trying to fight misinformation altogether. As time goes on, they argue, the technology will improve and, even if it is never perfect, it is worth occasionally blocking content in error in the interest of stopping the huge levels of misinformation we see tearing apart our society today [Ref: Mashable].
The right to free speech vs the danger of misinformation
Critics of intervention argue that for tech companies to interfere with the content people post online is to infringe on people’s right to freedom of speech. Social-media platforms like Facebook and X have grown to a point where they are the default forums for public debate. Following on from the writing of JS Mill, it is often argued that this freedom of expression is foundational to democracy and progress [Ref: Alexander Meiklejohn]. A faith in the ‘marketplace of ideas’ [Ref: Document Journal] leads opponents of tech-company intervention to believe that truth and goodwill prevail when ideas are left to themselves to compete in the public realm. Freedom, therefore, is a requirement to allow people to think, explore and argue. Restriction hampers social progress.
This is countered by writer George Monbiot, who argues, ‘in a marketplace, you are forbidden from making false claims about your product’ [Ref: Guardian]. Using the context of vaccine misinformation, he goes on to argue that, considering the regulation we place on financial and goods markets: ‘We protect money from lies more carefully than we protect human life.’ For commentators like Monbiot, we need central control of what can and cannot be said to protect us from dangerous lies.
However, advocates of free speech argue that imposing any restriction on speech from above could end up causing greater social damage than allowing misinformation to roam freely. Callum Baird warns that censorship is a ‘Pandora’s box’ that could lead us down a road where it very quickly becomes socially acceptable for a ‘small set of powerful groups seeking to protect themselves’ to filter public discourse [Ref: Newsroom]. No guarantee of truth is inherent in any restriction of speech. Baird insists that all restrictions on free speech are effectively slippery slopes, with small restrictions logically leading to ever more oppressive and authoritarian ones, with no better chance of arriving at the truth.
Who decides where the boundaries lie?
Should the government decide what constitutes misinformation and hate speech, compelling tech companies into action? There are plenty of cases in which the government could be argued to have spread misinformation itself – for example, the so-called ‘Dodgy Dossier’ [Ref: Guardian] produced by the UK government in 2002 to justify the Iraq War in 2003.
Should tech companies continue to intervene at their own discretion? Even Twitter’s former CEO, Jack Dorsey, has admitted it sets a ‘dangerous precedent’ [Ref: Guardian] when tech companies try to control the content on their platforms, and that the knock-on effects of doing so may well distort public discourse even further from the ‘healthy conversation’ they pursue.
To work this out, we need to answer: at what point does something become misinformation or hate speech? How and at what point does misinformation or hate speech cause harm, or other undesirable outcomes – for instance, unduly influencing an election? And who, if anyone, has the right to draw that line and carry out measures against it?
ESSENTIAL READING
It is crucial for debaters to have read the articles in this section, which provide essential information and arguments for and against the debate motion. Students will be expected to have additional evidence and examples derived from independent research, but they can expect to be criticised if they lack a basic familiarity with the issues raised in the essential reading.
FOR
Opinion: There’s a clear way to regulate Facebook, TikTok and other social media
Anika Collier Navaroli and Ellen K Pao LA Times 20 March 2024
The time has come for strong, external oversight of social media companies
Melanie Dawes Ofcom 21 November 2021
In 2020, Disinformation Broke The US
Jane Lytvynenko BuzzFeed News 6 December 2020
Covid lies cost lives – we have a duty to clamp down on them
George Monbiot Guardian 27 January 2021
AGAINST
The government has withdrawn its misinformation bill: a philosopher explains why regulating speech is an ethical minefield
Hugh Breakey The Conversation 25 November 2024
Labour’s war on free speech is the real threat to public life
Frank Furedi The Times 14 August 2024
Online extremism: Censorship isn’t the solution
Callum Baird Newsroom 23 February 2021
How Silicon Valley, in a Show of Monopolistic Force, Destroyed Parler
Glenn Greenwald Substack 12 January 2021
IN DEPTH
The Future of Free Speech
Jacob Mchangama Letters on Liberty Academy of Ideas
Should spreading anti-vaccine misinformation be criminalised?
Melinda C Mills & Jonas Sivelä BMJ 17 February 2021
Adam Curtis: Social media is a scam
Tom Hodgkinson Idler 3 February 2021
Twitter’s Trump ban is more important than you thought
Paul Waldman Washington Post 18 January 2021
We’re better without Trump on Twitter. And worse off with Twitter in charge.
Zephyr Teachout Washington Post 14 January 2021
The President ‘Shouts Fire’ and Two Social Media CEOs Respond: What Happens Next?
Denali Sagner Issue Voter 28 July 2020
Facebook Restricts Speech by Popular Demand
Daphne Keller The Atlantic 22 September 2019
Free Speech and Its Relation to Self-Government
Alexander Meiklejohn 1948
BACKGROUNDERS
Useful websites and materials that provide a good starting point for research.
New Research Points to Possible Algorithmic Bias on X
Prithvi Iyer Tech Policy Press 15 November 2024
Fact checking on Facebook
Facebook (accessed 1 April 2021)
Ditching of Facebook factcheckers a ‘major step back’ for public discourse, critics say
Robert Booth Guardian 7 January 2025
How Many Blogs Are Published per Day in 2021?
Bobby Chernev TechJury 29 March 2021
The AI-Powered Fact Checker That Investigates QAnon Influencers Shares Its Secret Weapon
Matt Binder Mashable India February 2021
How To Spot Fake News, Visualized in One Infographic
Omri Wallach Visual Capitalist 10 February 2021
Twitter launches Birdwatch, a fact-checking program intended to fight misinformation
Kim Lyons The Verge 25 January 2021
Bias-aware news analysis using matrix-based news aggregation
Felix Hamborg, Norman Meuschke & Bela Gipp ResearchGate June 2020
With fact-checks, Twitter takes on a new kind of task
Elizabeth Culliford & Katie Paul Reuters 30 May 2020
Has the internet broken the marketplace of ideas? Rethinking free speech in the Digital Age
Cody Delistraty Document Journal 5 November 2018
Fake news handed Brexiteers the referendum – and now they have no idea what they’re doing
Andrew Grice The Independent 18 January 2017
Why in the post truth age, the bullshitters are winning
Laurie Penny New Statesman 6 January 2017
When all news is ‘fake,’ whom do we trust?
Ruth Marcus Washington Post 12 December 2016
This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook
Craig Silverman BuzzFeed News 16 November 2016
Obama: Fake News on Facebook Creates ‘Dust Cloud of Nonsense’
Alex Heath Business Insider 7 November 2016
Iraq dossier drawn up to make case for war
Richard Norton-Taylor Guardian 12 May 2011
FURTHER READING
Reeves urges online platforms to remove violent content
Lauren Turner BBC News 26 January 2025
How the Biden Administration Can Help Solve Our Reality Crisis
Kevin Roose New York Times 2 February 2021
The Problem of Free Speech in an Age of Disinformation
Emily Bazelon New York Times 13 October 2020
Permanent suspension of @realDonaldTrump
Twitter 8 January 2021
The year Big Tech became the Ministry of Truth
Fraser Myers spiked 28 December 2020
Facebook’s incompetent censorship
Douglas Murray UnHerd 12 February 2021
Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach
Carole Cadwalladr & Emma Graham-Harrison Guardian 17 March 2018
Fake News – Statistics & Facts
Amy Watson Statista 5 May 2020
What Facebook’s Australia news ban could mean for its future in the US
Kari Paul Guardian 27 February 2021
Twitter thinks ads about climate change are bad. Big Oil’s disinformation is fine, though.
Emily Atkin MSNBC 4 February 2021
Charities condemn Facebook for ‘attack on democracy’ in Australia
Emma Graham-Harrison Guardian 20 February 2021
AUDIO
Tech Censorship and Independent Media, with Glenn Greenwald and the CEOs of Parler and Substack
The Megyn Kelly Show 13 January 2021
Misinformation Wars: who fact-checks the fact-checkers?
Battle of Ideas festival 19 October 2023