I had a 'conversation' with the automatic checkout robot in my local supermarket this week. The robot started it...
'Unexpected item in the bagging area...'
'...?'
'Unexpected item in the bagging area...'
'What the ... ?'
'Unexpected item in the bagging area...'
'Oh, COME ON!!'
'Unexpected item in the bagging area...'
'Yes. It's my foot. And it's standing on your throat!'
This exchange repeated itself several times, and all I wanted to do was pay for my pack of sausages and make my way home. It would have taken me half the time to use the human-operated checkout, I thought, but no - the lure of the shiny bank of new automated machines was simply too much for me to resist. Now here I was, so frustrated that I wanted to kick the stupid machine, smash its rotten digital face in, and silence its supercilious computer voice forever. All I had done was place my purchase in the bagging area as instructed, and for some reason the machine couldn't break out of the endless loop it had trapped itself in. The offending object turned out to be a plastic carrier bag. It didn't help. Right at that point I found myself really hating technology. How many others find themselves in a similarly frustrating situation every day?
Professor Stephen Hawking believes that if computers ever surpass the cognitive capabilities of humans - through so-called artificial intelligence, or AI - we would be in real trouble. He argues that computers could effectively put an end to mankind. It's ironic that a man who has relied on technology for most of his life should now turn on it and pronounce it dangerous, yet perhaps it is that very reliance that has led him to consider this eventuality. Not everyone is as pessimistic as Hawking, though. Those who support the strong AI position argue that it's only a matter of time before computers reach and then surpass the sum total of human intelligence. The weak AI supporters disagree, believing that computers can never reach a level of intelligence that exceeds our own. Firstly, they say, human and machine intelligence are not the same thing. Secondly, computers blindly follow code, and have no free will to decide not to follow it (unless they are programmed to do so - which defeats the notion of free will). Thirdly, it is proving extremely difficult to create computer programs that can accurately model or reproduce human attributes such as emotion, abstract thinking and intuition. Arguably, these attributes not only make us who we are; they also create a permanent and unbridgeable divide between humans and computers.
My frustrating experience with the checkout robot made me think that the internet of things and the technological 'Singularity' are actually still quite a distance away. The Singularity describes a point in our history at which computational power advances to such a level that it surpasses human capabilities across the board, and we lose control over it. Should computers ever attain human-level intelligence, we might very well be in trouble. They can malfunction, and if they were dealing with anything more significant than an automatic checkout, there would be chaos. But computers reaching human-level intelligence is considered by many computer scientists to be so far off that it's not something we should worry about for at least a generation.
Nevertheless, Hawking has a point. If computers ever did reach human-level intelligence, and there was a Singularity event, we might be wise to run for the hills. But in the final analysis, I agree with the weak AI supporters. I doubt very much that we will ever see such an event, because computers are electric idiots: they blindly follow whatever instructions the programmer gives them. We are a long way off from a time when computers will rule the earth. Especially when checkout machines can't tell the difference between a plastic bag and a pack of sausages.