Opinion: When It Comes To Artificial Intelligence, How Much Is Too Much?
Matt Armitage ponders how far we should go with AI, and if the human touch is still relevant.
By Matt Armitage | May 18, 2016 | Technology
There’s no shortage of things to love about Douglas Adams’s 1970s and 80s Hitchhiker’s series. The sunglasses that go pitch black when they sense danger. The ability to declare yourself temporarily dead for tax purposes. A cow that recommends the best bits of its body and then cheerfully commits suicide once you’ve selected your cut (Adams was an environmental activist). And, of course, the Babel Fish: a small yellow fish you slip into your ear, where it feeds on your brainwaves and, by way of compensation, magically enables you to understand any spoken language.
The usual sci-fi pie in the sky. But Adams was remarkably prescient when it came to artificial intelligence (AI). Central to the story is Deep Thought, a supercomputer designed to answer the ultimate question about life, the universe and everything. When, after aeons of data crunching, Deep Thought deduces that the answer to this ultimate question is 42, his creators are understandably upset.
The answer is the simple part, replies Deep Thought; the hard part is the question. It then sets about building a successor to calculate the question. Today we have the likes of Stephen Hawking and Elon Musk decrying the dangerous and ineffable nature of artificial intelligence: we are creating beings that we will understand no better than mules understand us. For, in movie franchises like Terminator and The Matrix, we are the mules; stubborn, hardy and more than a little inbred.
Creating superior beings with the ability to create their own superior beings is the perfect Hitchhiker contrivance. In the series, the supercomputer that the supercomputer builds turns out to be planet Earth, and it is moments from completing its calculation of the ultimate question when the planet is destroyed to make way for a hyperspace bypass.
So, I’m certainly not going to argue about anything more complicated than the relative merits of staples versus paper clips (try getting your SIM card out of a smartphone with a staple) with Professor Hawking, let alone about how AI poses an existential threat to our species. But however much danger AI poses, its dumb cousin poses more.
We blindly trust and rely on the machinery, code and mathematics that drive the Internet. We depend on it for banking. For shopping. For news. For entertainment. Even our sad, sad sex lives now run on technology beyond our comprehension. When mathematicians came up with the algorithms that bundled risky mortgages into tradeable assets, they didn’t realize that the computers would simply continue to cross-underwrite the dodgy debt, sowing the seeds of a global economic meltdown.
Artificial intelligence is frightening. Artificial unintelligence is worse. A few years ago I had a car that simply stopped working. No matter what the workshop did to try to fix it, the car’s ECU chip was convinced that it was sick and refused to move. After literally thousands of ringgit in bills and the replacement of virtually anything the mechanics could remove with a crowbar, the gremlin was finally found to be nothing more than a faulty sensor, which cost a measly RM200 (and no, I didn’t get a refund on the unnecessary repair work).
We live in a world where automation is increasingly the norm. Australian surgeons, faced with a sparsely populated and enormous landmass, are pioneering surgeries conducted via video link by remote-controlled robots. Body scanners that can determine anything from a sprained ankle to a brain tumour in a matter of minutes are starting to appear in upmarket hospitals.
It’s not all bad. Robot surgeons wouldn’t have shaky hands after a binge-drinking episode. Smart home systems can talk to your sat-nav and get dinner on when you’re just around the corner. And there’s always space for emergency systems that can land a passenger plane with the crew incapacitated.
There are still some things that need, if not a human, then an intelligent touch. The diagnosis of a terminal cancer. A drone ordered to kill an enemy. A police robot deployed to a street protest. There is so much in our world that requires a nuanced response. Over millennia, our brains have developed the ability to respond to these nuances in seconds. By comparison, Google’s self-driving cars require a battery of sensors, lasers, radar, gyroscopes and widgets to do something that we, frankly, do with our eyes shut.
I don’t want to be ruled or subjugated by a sentient machine. But I’m far more afraid of being controlled by insentient devices incapable of thinking beyond their programming. If we have no ability to reason with them, what will we have left?