That’s right, fuck artificial intelligence too.
(Mum, if I go missing start a revolution and come free me from the Computer Overlord’s human battery pack. Pull out the giant plug. No, the giant one. There’s only one plug, mum. Oh, forget it).
In taking the view that artificial intelligence won’t be the end of humankind anytime soon, it seems I’m in the near-zero number of sane people willing to argue against Stephen Hawking, Bill Gates and Elon Musk. Yes, I know I’m an idiot.
The only thing to my advantage is that if I’m wrong, we’re all fucked and you’ll have bigger things to worry about.
Hawking, Gates and Musk, joined by 100 or so of their closest brainiac friends, warned in an open letter to the UN that uncontrolled weaponisation of artificial intelligence will lead to faster, bigger and more lethal wars. They’re talking about actual killer robot warfare. Their warning was clear: control the technology now or risk a dire future.
Killer Robot Train to Doomsville
It’s also the point where I jump off the Killer Robot Train to Doomsville. It’s where people make the seemingly logical leap that artificial intelligence is inherently bad. That it will inevitably lead to Terminator IV for realsies. But even Hawking and Co agree that artificial intelligence is not all bad. It has the potential to solve some of humanity’s greatest challenges, like poverty and inequality.
Opinions vary on whether artificial intelligence’s pros will outweigh its cons. On this, the Hawking-Musk-Gates alliance fractures, with Gates believing the potential artificial intelligence gains in areas like healthcare outweigh the risks.
Controlling Artificial Intelligence
There’s no doubt that artificial intelligence will need to be controlled. Many worry that governments are not up for the task.
But it’s also true that humankind has a remarkable ability to bumble through.
The UN is acting – in its glacial-paced, lowest-common-denominator kind of way – to build consensus to manage artificial intelligence technologies. The outcome won’t be perfect. We can expect the process to be like that of managing nuclear weapons: slow to develop, patchy in take-up and constantly under threat of unravelling, but so far effective. Just.
Pants Pooing and Flying Uteruses
The latest wave of collective pants-pooing is no doubt helped along by our inherent fear of new technologies. We’ve always feared new technologies. You can imagine the furrowing of monobrows the day Ugg rolled the very first wheel into the cave.
In the Middle Ages, people feared the printing press would be the end of the civilised world. In the Victorian era, trains were thought to cause everything from insanity to uteruses flying out of your body.
Our fear of technology is amplified by our fear of the future. These fears can cloud our judgement and lead us to over-emphasise a threat. The problem with the ‘threat’ of artificial intelligence is that it’s only clear that there’s likely to be half a problem.
Natural Born Ice-cream Thieves
A threat is made up of capability and intent. You have to have both.
If a three-year-old boy approached you on the street and said, “Buy me an ice cream or I’ll ninja-kick you in the ovaries”, you wouldn’t consider him a threat even though he clearly intended harm – the capability to hurt you just ain’t there.
But if the kid’s dad, a convicted natural born killer in a neatly ironed ninja suit, came along and said the same thing, you’d likely order several double-choc ice creams before he finished his sentence. Clear intent and plenty of capability.
Artificial intelligence already has harmful capabilities. A self-driving car hit and killed a person in Arizona earlier this year (2018), but that was an accident. Sad and tragic, but an accident nonetheless.
It’s not a threat, it’s a risk like myriad others we face every time we get out of bed. It’s hard to imagine an artificially smart car deciding it would go on a rampage downtown. As Toby Walsh, a professor of artificial intelligence, says, “It’s just not in their code”.
Shall We Play a Game?
For Generation X, imprinted with movie plots like WarGames (in which, were it not for Matthew Broderick, a supercomputer would have started a thermonuclear war), artificial intelligence is laden with risks.
But it’s also jam-packed with rewards, and it’s coming whether we like it or not.
The crux of the artificial intelligence risk is not the technology itself, but rather the way we handle it.
So the question becomes not whether we should grab the tiger by the tail, but whether, now that we have, we can avoid being eaten. It will take all of humankind’s ingenuity, resilience and humanity to never let go of our tiger and to never take our eyes off it.