This sci-fi ‘B movie’ is still shaping how we view the threat of AI

October 26, 2024 marks the 40th anniversary of director James Cameron’s science fiction classic, The Terminator – a film that popularized society’s fear of machines that cannot be reasoned with and that “absolutely won’t stop … until you’re dead,” as one character memorably puts it.

The plot concerns a super-intelligent AI system called Skynet, which has taken over the world by starting a nuclear war. Amidst the resulting destruction, human survivors stage a successful fightback led by the charismatic John Connor.

In response, Skynet sends a cyborg assassin (played by Arnold Schwarzenegger) back in time to 1984 – before Connor’s birth – to kill his future mother, Sarah. Such is John Connor’s importance to the war that Skynet is betting on erasing him from history in order to preserve its existence.

Today, public interest in artificial intelligence has probably never been greater. The companies that develop AI typically promise that their technologies will perform tasks faster and more accurately than people. They claim that AI can spot patterns in data that aren’t obvious, improving human decision-making. There is a widespread perception that AI is poised to transform everything from warfare to the economy.

Immediate risks include the introduction of bias into job-application screening algorithms, and the threat that generative AI will displace people from certain types of work, such as software programming.

But it is the existential danger that often dominates public discussion – and the six Terminator films have had an outsized influence on how these arguments are framed. Indeed, according to some, the film’s portrayal of the threat posed by AI-controlled machines distracts from the significant benefits offered by the technology.

Official trailer for The Terminator (1984)

The Terminator wasn’t the first film to tackle the potential dangers of AI. There are parallels between Skynet and the HAL 9000 supercomputer in Stanley Kubrick’s 1968 film, 2001: A Space Odyssey.

It also draws from Mary Shelley’s 1818 novel, Frankenstein, and Karel Čapek’s 1921 play, R.U.R. Both stories concern inventors losing control of their creations.

On its release, it was described in a New York Times review as a “B movie with flair”. In the intervening years, it has come to be recognized as one of the greatest science fiction films of all time. At the box office, it made more than 12 times its modest budget of US$6.4 million (£4.9 million at today’s exchange rate).

Arguably, what was most novel about The Terminator was how it reframed a long-standing fear of a machine uprising through the cultural prism of 1980s America. Much like the 1983 movie WarGames, in which a teenager nearly triggers World War 3 by hacking into a military supercomputer, Skynet channels Cold War fears of nuclear annihilation combined with anxiety about rapid technological change.

Forty years later, Elon Musk is among the technology leaders who have helped keep the focus on the supposed existential risk of AI to humanity. The owner of X (formerly Twitter) has repeatedly referenced The Terminator franchise while also expressing concern over the hypothetical development of superintelligent AI.

But such comparisons often irritate the technology’s advocates. As the former UK technology minister Paul Scully said at a London conference in 2023: “If you’re only talking about the end of humanity because of some nonsensical Terminator-style scenario, you’ll miss out on all the good that AI [can do].”

That’s not to say there aren’t real concerns about military uses of artificial intelligence — ones that might even seem to parallel the film series.

AI-controlled weapon systems

To the relief of many, US officials have said that AI will never make a decision on the deployment of nuclear weapons. But combining AI with autonomous weapon systems is a possibility.

These weapons have been around for decades and don’t necessarily require artificial intelligence. Once activated, they can select and attack targets without being directly operated by a human. In 2016, US Air Force General Paul Selva coined the term “Terminator conundrum” to describe the ethical and legal challenges posed by these weapons.

The Terminator’s director James Cameron says that “the weaponization of AI is the biggest danger”.

Stuart Russell, a leading British computer scientist, has argued for a ban on all lethal, fully autonomous weapons, including those with AI. The biggest risk, he argues, is not from a sentient Skynet-style system going rogue, but from how well autonomous weapons can follow our instructions, killing with superhuman accuracy.

Russell envisions a scenario where tiny quadcopters equipped with AI and explosive charges could be mass-produced. These “slaughterbots” could then be deployed in swarms as “cheap, selective weapons of mass destruction”.

Countries including the United States specify the need for human operators to “exercise appropriate levels of human judgment over the use of force” when operating autonomous weapons systems. In some cases, operators can visually verify targets before authorizing attacks and can “wave off” attacks if situations change.

AI is already used to support military targeting. According to some, this is even a responsible use of the technology, since it could reduce collateral damage. The idea evokes Schwarzenegger’s role reversal as the benevolent “machine guardian” in the original film’s sequel, Terminator 2: Judgment Day.

However, AI can also undermine the role human drone operators play in challenging recommendations from machines. Some researchers believe that humans tend to trust what computers say.

‘Loitering munitions’

Militaries involved in conflicts are increasingly using small, inexpensive aerial drones that can detect and crash into targets. These “loitering munitions” (so called because they are designed to loiter over a battlefield) have varying degrees of autonomy.

As I have argued in research in collaboration with security researcher Ingvild Bode, the dynamics of the Ukraine war and other recent conflicts where these munitions have been widely used raise concerns about the quality of control exercised by human operators.

Ground-based military robots armed with weapons and designed for use on the battlefield may resemble the merciless Terminators, and armed aerial drones may eventually resemble the franchise’s airborne “hunter-killers”. But these technologies don’t hate us like Skynet does, nor are they “super-intelligent”.

However, it is very important that human operators continue to exercise agency and meaningful control over machine systems.

Arguably, The Terminator’s greatest legacy has been to warp how we collectively think and talk about AI. This matters more than ever because of how central these technologies have become to the strategic competition for global power and influence between the US, China and Russia.

The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate – and to manage the ethical and legal challenges posed by military uses of AI in this age of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator – even if we don’t see time-travelling cyborgs any time soon.