Humanity should fear advances in artificial intelligence

updated 2024

INTRODUCTION

The increasing ability of machines in recent years to replicate or even supersede human abilities in complex tasks has been impressive. Already, artificial intelligence (AI) techniques have been used to allow machines to beat the best players in the world at both chess [Ref: Time] and the Chinese board game Go [Ref: Guardian]. IBM’s Watson has beaten the best human players on the long-running US quiz show Jeopardy! [Ref: Techrepublic]. AI has long been built into consumer goods such as Google search, Alexa and Siri, and is being rolled out in the NHS [Ref: Gov.uk]. Things really took off in November 2022, however, with the release of OpenAI’s ChatGPT. Its ability to produce convincingly human-sounding text in response to prompts written in everyday language made it a sensation: it reached one million users within five days [Ref: Exploding Topics]. Since then, AI applications have hit the mainstream, available and easy to use for anyone with an internet connection; alongside text-generation products like ChatGPT, AI-powered tools for generating images, audio, video, code and more have proliferated.

But the implications for society are only just becoming apparent. In January 2024, the managing director of the International Monetary Fund claimed 40 per cent of jobs worldwide will be affected by AI. She warned that it will likely worsen global inequality, but could also enhance some humans’ performance and create new jobs [Ref: Guardian]. Jobs lost could range from call-centre staff replaced by chatbots to highly educated professionals in law and finance, industries now projected to be among the most affected by AI [Ref: Guardian]. Facial recognition systems, combined with ubiquitous CCTV, could erode our privacy. AI-powered autonomous weapons, or bioweapons developed using AI technology, could herald new and deadlier forms of warfare. The world-famous physicist Stephen Hawking even claimed that ‘AI may replace humans altogether’ as a ‘new form of life’ that can rapidly learn and improve, making people obsolete [Ref: Independent]. However, AI is also projected to boost the UK’s GDP by up to 10 per cent by 2030 [Ref: PwC] and has the potential to revolutionise fields as diverse as medicine, education and sustainability. So, should we welcome AI’s potential, or are the risks too great?

IN CONTEXT

This section provides a summary of the key issues in the debate, set in the context of recent discussions and the competing positions that have been adopted.

What is AI?

The term ‘artificial intelligence’ was coined in 1956, but ‘AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage’ [Ref: SAS]. In essence, AI is ‘a collection of technologies that can be used to imitate or even to outperform tasks performed by humans using machines’ [Ref: The Conversation]. AI is used in a wide range of applications, from internet search engines to image-generating chatbots to self-teaching programs that learn from experience, such as DeepMind’s AlphaGo [Ref: Financial Times]. Now, it seems, ‘Machines are rapidly taking on ever more challenging cognitive tasks, encroaching on the fundamental ability that sets humans apart as a species: to make complex decisions, to solve problems – and, most importantly, to learn’ [Ref: Financial Times].

The ethics of AI

AI poses some fundamental ethical questions for society. For example, how should we view the potential for AI to be used in the military arena? Although there is currently a consensus that ‘giving robots the agency to kill humans would trample over a red line that should never be crossed’ [Ref: Financial Times], it should be noted that robots are already present in bomb disposal, mine clearance and anti-missile systems. Some, such as software engineer Ronald Arkin, think that developing ‘ethical robots’ programmed to strict ethical codes could be beneficial in the military, if they are programmed never to break rules of combat that humans might flout [Ref: Nature]. Similarly, the increased autonomy and decision-making that AI embodies opens up a moral vacuum that some suggest needs to be addressed by society, governments and legislators [Ref: The Times], while others argue that a code of ethics for robotics is urgently needed [Ref: The Times]. After all, who would be responsible for a decision badly made by a machine? The programmer, the engineer, the owner or the robot itself?

Furthermore, critics say that driverless cars may be involved in situations where there is a split-second decision either to swerve, possibly killing the passengers, or not to swerve, possibly killing another road user. How should a machine decide? To what extent should we even allow machines to decide? [Ref: Aeon] Others argue that technology is fundamentally ‘morally neutral’. ‘The same technology that launched deadly missiles in WWII brought Neil Armstrong and Buzz Aldrin to the surface of the moon. The harnessing of nuclear power laid waste to Hiroshima and Nagasaki, but it also provides power to billions without burning fossil fuels.’ In this sense: ‘AI is another tool and we can use it to make the world a better place, if we wish.’ [Ref: Gadgette]

A threat to humanity?

For some critics, advances in AI pose serious existential problems for humanity. Oxford professor Nick Bostrom has voiced concerns about what might happen if machines’ ability to learn for themselves accelerates very rapidly – what he calls an ‘intelligence explosion’. Bostrom believes ‘at some point we will create machines that are superintelligent, and that the first machine to attain superintelligence may become extremely powerful to the point of being able to shape the future according to its preferences.’ [Ref: Vox]

In May 2023, hundreds of prominent AI researchers and public figures from around the world, including the CEOs of leading companies such as DeepMind and OpenAI, released a Statement on AI Risk comparing ‘the risk of extinction from AI’ to that from ‘pandemics and nuclear war’ [Ref: Center for AI Safety]. Autonomy is a key issue that some critics are especially concerned about, with AI researcher Thomas Dietterich warning that despite proposals to have driverless cars, autonomous weapons and automated surgical assistants, AI systems should never be fully autonomous, because: ‘By definition a fully autonomous system is one that we have no control over, and I don’t think we ever want to be in that situation.’ [Ref: Business Insider] Researcher Tamlyn Hunt describes concerns about self-teaching AI that can outperform humans in every cognitive task, and thus always thwart attempts by humanity to control it [Ref: Scientific American].

Additionally, critics are keen to explore practical issues, such as the future of work, with many suggesting that advances in automation will result in certain jobs becoming obsolete. Commentator Claire Foges draws parallels with the Luddites, who attempted to resist the increasing automation of their jobs during the Industrial Revolution 200 years ago [Ref: History.com]. She notes recent forecasts that up to five million people could lose their jobs to automation [Ref: The Times]: ‘Two hundred years on, a braver newer world is arriving at astonishing speed, and threatens to make Luddites out of us all. The robots are coming, they are here; creeping stealthily into factory, office and shop.’ [Ref: The Times]. Some see this as a positive development, though: AI is already taking over repetitive work such as data entry, freeing people to work on more interesting, complex jobs better suited to human aptitudes.

A brave new world?

For advocates, AI promises to change the world in countless positive ways, while they see warnings about its risks as overblown. As Adam Jezard observes: ‘Such concerns are not new…From the weaving machines of the industrial revolution to the bicycle, mechanisation has prompted concerns that technology will make people redundant or alter society in unsettling ways.’ [Ref: Financial Times] Computer scientist Yann LeCun, one of the ‘godfathers’ of modern AI, cautions against over-emphasising its dangers: ‘AI will bring a lot of benefits to the world. But we’re running the risk of scaring people away from it.’ He anticipates a future where humans benefit from AI assistance which will be ‘like working with a staff of super smart people’ [Ref: Wired].

Advocates point to the benefits AI has already brought us when envisaging how it will continue to change the way we live our lives. In the field of medicine alone, AI is being used to improve prenatal screening for foetal abnormalities [Ref: iFIND], detect the early signs of heart failure [Ref: MRC Laboratory of Medical Sciences] and track changes in brain tumours from a person’s movement [Ref: Computational Oncology]. Others criticise predictions that advances in AI signal the end of humanity as misguided: ‘After so much talking about the risks of super intelligent machines, it’s time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges.’ [Ref: Aeon]

Perhaps more profoundly, others question why we are so quick to underestimate our abilities as humans, and fear AI. Author Nicholas Carr observes: ‘Every day we are reminded of the superiority of computers…What we forget is that our machines are built by our own hands.’ He argues that ‘if computers had the ability to be amazed, they’d be amazed by us’ [Ref: New York Times].

Fundamental to many pro-AI arguments is the belief that technological progress is a good thing in and of itself. Futurist Dominic Basulto speaks of ‘existential reward’, arguing that ‘humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential’ [Ref: Washington Post]. Throughout history, we have gradually made our lives easier and safer through innovation, automation and technology. For instance, the introduction of driverless vehicles is predicted to reduce road accidents drastically, just as ‘Machines known as automobiles long ago made horses redundant in the developed world – except riding for a pure leisure pursuit or in sport’ [Ref: The Times].

So, with all of the arguments in mind, are critics right to mistrust the proliferation of AI in our lives, and the ethical and practical problems that it may present humanity in the future? Or should we embrace the technological progress that AI represents, and the potential it has to improve our lives?

ESSENTIALS

FOR

AI will affect 40% of jobs and probably worsen inequality, says IMF head
Dan Milmo Guardian 15 January 2024

Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’
Simon Hattenstone Guardian 23 March 2023

AI causes real harm. Let’s focus on that over the End-of-Humanity hype
Alex Hanna & Emily M. Bender Scientific American 12 August 2023

‘Very scary’: Mark Zuckerberg’s pledge to build advanced AI alarms experts
Dan Milmo Guardian 19 January 2024

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not
Tamlyn Hunt Scientific American 25 May 2023

Why Conscious AI Is a Bad, Bad Idea
Anil Seth Nautilus 8 May 2023

AGAINST

AI can help shape society for the better – but humans and machines must work together
D Fox Harrell Guardian 18 August 2023

Could AI transform life in developing countries?
Briefing, The Economist 25 January 2024

The future of jobs in the age of AI, sustainability and deglobalization
Saadia Zahidi World Economic Forum 3 May 2023

10 Ways AI Was Used for Good This Year
Sophie Bushwick Scientific American 15 December 2022

Ukraine Is Using AI to Help Clear Millions of Russian Landmines
Vera Bergengruen Time 2 November 2023

AI breakthrough in detecting leading cause of childhood blindness
UCL News 27 April 2023

IN DEPTH

Google Deepmind AI makes breakthrough in one of hardest tests for artificial intelligence
Andrew Griffin The Independent 17 January 2024

Preventing an AI-related catastrophe
Benjamin Hilton 80000 Hours March 2023

Davos 2024: Can – and should – leaders aim to regulate AI directly?
Amanda Ruggeri BBC Worklife 19 January 2024

Choosing AI’s Impact on the Future of Work
Daron Acemoglu and Simon Johnson Stanford Social Innovation Review 25 October 2023

Artificial intelligence and ethics: 10 areas of interest
Brian Patrick Green Markkula Center for Applied Ethics 21 November 2017

Top nine ethical issues in artificial intelligence
Julia Bossman World Economic Forum 21 October 2016

The doomsday invention
Raffi Khatchadourian New Yorker 25 November 2015

‘Omens’
Ross Andersen Aeon 25 February 2013

BACKGROUNDERS

Useful websites and materials that provide a good starting point for research.

Don’t fear the robots: why the rise of the machines is nothing to be scared of
Kevin McCullagh Icon 26 January 2018

The 10 most important breakthroughs in artificial intelligence
James O’Malley TechRadar 10 January 2018

The real danger of artificial intelligence: it’s not what you think
João Duarte Hackernoon 13 November 2017

How AI can free up professionals to add more value
Christopher Fitzgerald and Fernando Florez ACCA 1 May 2017

Our fear of artificial intelligence
Paul Ford MIT Technology Review 11 February 2015

ORGANISATIONS

Links to organisations, campaign groups and official bodies who are referenced within the Topic Guide or which will be of use in providing additional research information.

Campaign to Stop Killer Robots

IN THE NEWS

Relevant recent news stories from a variety of sources, which ensure students have an up to date awareness of the state of the debate.

The AI race is generating a dual reality
John Thornhill FT 18 April 2024

Cryptographers Solve Decades-Old Privacy Problem
Madison Goldberg Nautilus 17 December 2023

You won’t be able to ignore AI in 2024
Will Dunn New Statesman 4 January 2024

AI is the buzz, the big opportunity and the risk to watch among the Davos glitterati
Kelvin Chan and Jamey Keaten AP News 18 January 2024

AUDIO VISUAL

The robots are coming: friends or foes?
Battle of Ideas 18 October 2014