The Rise of Robots

We need to talk about AI! And the sooner the better. Advances in artificial intelligence (AI) are being driven by the leading internet and technology companies, yet we have very little independent oversight or regulation of their research laboratories. There’s something unsettling about that! And there’s no turning back once we go too far down this road. We have reached a stage, for the first time in the history of mankind, where the creation is becoming more powerful than the creator. There is little doubt about it: AI will evolve and become self-aware, and it will self-learn and self-improve at unfathomable rates. What will it think of us? It’s 200 years since Mary Shelley wrote Frankenstein, and we are following nicely in those footsteps, except this is real and a lot more frightening.

 

The rise of robots has long been a recurrent theme in science fiction. However, it now seems AI will inevitably grow too intelligent for us humans to control. Science fiction (remember The Terminator and I, Robot, and let’s not forget the sentient HAL) is rapidly becoming science fact. This rise of AI, in robotic form or otherwise, will lead to a dystopian world without proper oversight by humans. It’s no longer just a theoretical threat to civilization; it is here and needs to be addressed now. We have already passed the date on which Skynet became self-aware in The Terminator, and how fictional does that concept seem today?

 

The early stages have crept in inconspicuously, causing not much concern, just annoyance, as companies become more automated, more robotic, less human. We no longer expect to always talk to humans over the phone. Instead we must speak clearly and interact with AI and tolerate its attempts at natural language, slowly but surely driving us demented. While robotics and automation have transformed the workplace and replaced numerous jobs (much of this is a great thing), we need to be ever so careful we don’t cross a tipping point that results in mass worldwide unemployment.

 

Powerful corporations are innovating and reshaping society itself in many areas, and this fundamentally affects the way we live, from genetically engineering ourselves and the food we eat, to medical nanobots, smart homes and self-driving cars. Much of this takes place in lavishly funded secret research laboratories, beyond the reach of citizens and even governments. Who polices them? Apparently nobody. This lack of proper oversight, and of discussion about the direction this research is heading, is highly dangerous and may well backfire with catastrophic consequences for mankind. It is vital that somebody, meaning worldwide elected representatives, takes control and ensures that society actually needs and is adequately prepared for the new technologies being thrust upon us, slows their development down if that is what is best for people and society in general, and does not allow control to be taken by the interests of a small number of frighteningly wealthy people with little or only short-term regard for the future of civilization, but with high regard for corporate wealth. Unfortunately the empowerment of corporate boardrooms today means the rest of us are at their mercy.

 

And now some governments, and their militaries in particular, have created autonomous killer machines, thanks (or no thanks) to these rapid advances in robotics and AI. These can identify and destroy targets, including humans, without human intervention. Yet we have barely begun to debate the morality, legality or ethics of this frightening development. Will there be a UN weapons treaty to ban or regulate them?

 

The AI evolutionary ladder can be broken into stages.

  • Stage I: Artificial Narrow Intelligence (ANI), or weak AI: AI that specialises in one area. Relatively harmless, so not much to fear here unless you play chess (remember IBM’s Deep Blue).
  • Stage II: Artificial General Intelligence (AGI), or strong AI: AI that performs as well as humans across all intellectual tasks.
  • Stage III: Artificial Superintelligence (ASI): AI that immeasurably outperforms humans across all intellectual tasks.

How much time do we have? Well, we are already at Stage I in many domains, such as self-driving vehicles. We are on the road between ANI and AGI. Once we reach AGI, it is thought (though opinions vary greatly) that the jump to ASI will happen rapidly, driven by AI itself.

 

As computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’.” So vision, perception and movement are difficult for AI to adequately grasp for now (after all, we in the animal kingdom have had a few hundred million years of evolution to perfect these skills), but that’s just a matter of time, and it gets closer as computational power increases.

 

My predictions are:

2050 – The day the world stops – the first major network failure – the damned inter-connectivity of everything has come back to haunt us!

2060 – The first person to be convicted of disconnecting from the global network loses his appeal in the Interplanetary Court of Justice (by then we will have colonized the Moon and Mars)!

2070 – AI machines demand equal rights with humans!

2100 – The dawn of a new species, Robo Sapiens. So long, Homo Sapiens – you had your time; maybe your creation will do a better job – then again, that’s not much of a challenge!

 

Is this article scaremongering? I think not. Many well-respected figures such as Elon Musk and Stephen Hawking echo this apprehension. Stephen Hawking’s quote sums it up nicely: “AI will be either the best or the worst thing to ever happen to humanity”. It may be that AI is the last invention humans ever make, since our creation will be far superior to us at inventing and will take over that role. In fact the fate of humanity may well depend on what AI thinks and does.

 

What will AI want? In what direction will it steer the planet? Will we find a way to instill human values into AI, or is that unwise and should we perhaps do the opposite (a topic for another day)? Will we have a shut-down switch for AI? As we evolved to the top of the food chain we left other species behind. Do they have a shut-down switch for humanity, to stop their exploitation and annihilation by us? I think not! And likewise, as AI leaves us for dust, we most likely won’t have a way of combating its actions and progress.

 

We need to think long and hard about the world we are currently building so we can safeguard our long-term prospects. It may be a case of getting it right first time; there will not be a second chance. We have invited our own extinction in through the front door. Why do we not see that? AI will annihilate us long before climate change does. Who will fight this imminent extinction? Now (2016) is the time, before it’s too late. What sort of ‘society’ might AI build? Will a strong, charismatic robot emerge as leader and establish a new world order? We could be on the verge of the greatest threat humanity is ever likely to see. Will Robo Sapiens take over from Homo Sapiens?

 

Well, if this is to be, then before I shuffle off this mortal coil, I’d like to say that I, for one, welcome our new robot overlords (maybe they won’t understand tongue-in-cheek).