By Jethro Wegener
Artificial Intelligence, or ‘AI’, is rapidly becoming less science fiction and more science fact, and leading experts are worried. On a theoretical level, the eminent Stephen Hawking has warned that machines could become smarter and more powerful than humans and resist being switched off, while Elon Musk has described developing AI as ‘summoning the demon’. On a more personal level, developments like Uber’s ‘driverless’ cars, which would make it more profitable for companies to run fully automated taxis than to employ human drivers, suggest that many of today’s jobs may be at stake in the future.
There are also arguments in favour of AI. In medicine, for example, robotic radiosurgery can deliver radiation to tumours with great precision. Laborious and dangerous tasks like mining could be done by robots, reducing the risk to human life. And the type of AI most people encounter every day lives in their smartphones: personal assistant applications like ‘Siri’ can make life more convenient for users.
So the question is, is it time to start worrying about ‘Skynet’ from the Terminator movies?
Let’s take a look at where AI is now. Two of the more famous examples are the ‘Chess Terminator’, a robot arm that drew a blitz game against the world chess champion in 2010, and the Ukrainian chatbot that fooled Turing-test judges into thinking it was a 13-year-old boy. At first glance these seem quite impressive, but look closely and you see the problem: these machines don’t actually simulate human intelligence. The Ukrainian bot, for example, was programmed to use human speech patterns to deflect questions, and even then it only fooled a third of the judges. We are obviously still years away from a ‘Terminator-like’ future, but there are developments that could harm people much sooner.
The ‘driverless’ cars, for example, could be the beginning of a trend of AI putting a lot of people out of work. Jobs like chef and salesperson, and indeed any job where AI can do the task more efficiently than a human at a lower cost, are all at stake; a troubling development indeed when one considers what happens to the people who actually work in these professions.
Taking things a step further, an even scarier notion is the ‘killbot’: AI designed specifically for military use, like ‘Ultron’ from ‘The Avengers’. Stephen Hawking has warned against this, raising the concern that a robot would not be able to tell the difference between a combatant and a civilian. More upsetting still, as computer science and engineering professor Stuart Russell has pointed out, the companies involved seem interested only in getting these robots working, not in the safety measures they would need. Precautions like Asimov’s Three Laws of Robotics, the first being ‘a robot may not harm a human being’, are not being considered. Robots used for military purposes would violate that first law by design. And robots, after all, are driven by logic: if their makers program them to kill humans, who determines which humans to kill? The robots themselves? Now that is a scary thought.
The main problem with AI is that nobody really knows which way it is going. A mixture of private and public bodies are developing it, each with its own agenda; there is no single governing body regulating the field, and individual countries currently have no laws dealing with AI development, meaning it could go in whatever direction its creators choose. Though still a few decades away from super-intelligent robots like ‘Data’ from Star Trek, the technology is advancing at a rapid rate, with great strides made in the field every day. It may be only a matter of time before we face machines smarter than the greatest human intellects, and by that point it may already be too late to find a solution. With minds like Hawking’s and Gates’s telling us to take heed, perhaps it would be wise for us to do so.