The most hotly anticipated blockbuster of 2015 is without doubt Avengers: Age of Ultron, about a sentient robot that turns against its creator and sets out to destroy humanity after concluding that humans are the main impediment to peace.
It’s hardly the first time Hollywood has depicted artificially intelligent machines as dangerous. As far back as 1968, the movie 2001: A Space Odyssey gave us HAL, a sentient computer that decides to kill the astronauts aboard its spaceship in order to continue with its programmed mission.
Then there was the 2004 movie I, Robot, which featured VIKI, a sentient supercomputer that deduced humanity was on a path to destruction and decided to enslave and control humans in order to protect them from themselves.
What all these films have in common is that they depict artificial intelligence as sticking to its programmed mission but making a deduction that is ultimately harmful to humans. It makes for a good Hollywood script, for sure. That’s why filmmakers return to it time and again.
But it’s something some renowned thinkers are taking seriously. Tesla CEO Elon Musk has warned about artificial intelligence on many occasions. At a recent MIT symposium, Musk called artificial intelligence an “existential threat” to the human race. At another recent event, Vanity Fair’s New Establishment Summit, he gave a somewhat humorous example: “If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans.”
Musk had the audience chuckling but the point he was trying to make was no laughing matter. No less than Stephen Hawking has warned about artificial intelligence, saying: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
It’s worth remembering, though, that it is artificial intelligence that will enable things like driverless cars. If you own an iPhone, you’ll be familiar with Siri. Well, that’s another example of artificial intelligence. Artificial intelligence has many benefits that could enhance quality of life as well as improve productivity in the workplace.
Perhaps, to prevent artificial intelligence from harming humans, one could apply the Three Laws of Robotics devised by famed science fiction author Isaac Asimov: a robot may never harm a human; it must always obey humans unless that violates the First Law; and it must protect its own existence unless that violates the First or Second Law.
But that assumes the machines are created that way in the first place. What if an evil genius decides to build a supercomputer designed to cause harm? Scientist Steve Omohundro has proposed a defence against a robot uprising: detect a malicious robot early in its life, before it acquires too many resources.
Of course, it’s possible that by the time you detect it, the robot has already acquired plenty of resources. That would be a nightmare scenario indeed. Like most things in life, there are no guarantees, and there’s good and bad in everything.
And even that is reflected in Hollywood depictions of sentient robots. Yes, you have your HALs and Ultrons. But if you watch the new movie Interstellar, which is playing in cinemas now, you’ll see two robots, TARS and CASE, which are both of great help to the astronauts whom they serve. And no, they don’t turn rogue in the end. You can also look forward to another upcoming movie, Chappie, about a sentient robot that, unlike Ultron, seems to be cute, lovable and benign.
Oon Yeoh is a new media consultant.