The 3 Laws that Stop Robots From Taking Over The World
As we steadily inch closer towards a world where robots and artificial intelligence (AI) become a common sight, one concern keeps cropping up over and over again: Can machines rise against humans someday and take over the world?
The question might appear far-fetched or fantastical at first, but you’ll find that some of the smartest people of our time have voiced serious concerns over the topic. From the late physicist Stephen Hawking to tech billionaires Bill Gates and Elon Musk, these visionaries have repeatedly addressed the ethics of AI and the implications it could have for our future.
The idea of machines going rogue is not a new one. For many years now, films and books have imagined a futuristic time when machines have taken over the world. In Arthur C Clarke’s iconic sci-fi novel 2001: A Space Odyssey, the AI-powered computer HAL 9000 takes the fate of the humans on board the spaceship into its own hands. In the famous Terminator series of films, Arnold Schwarzenegger plays a cyborg caught up in humanity’s fight against the rise of the machines. The Matrix trilogy explores similar themes: machines harvest humans for energy, and everyone is forced to live inside a simulation. So, are these dark depictions of evil robots just the result of the imagination running wild, or are they a real possibility that we may have to deal with in the future?
To find the answer, we may have to step away from the field of science and into the realm of science fiction, or ‘sci-fi’. One of the genre’s most famous writers, the American author Isaac Asimov, first penned the 3 Laws of Robotics. In doing so, he brought the ethics surrounding artificial intelligence into popular consciousness: if artificial intelligence someday surpasses human intelligence, what stops it from overpowering humanity and acting out of self-interest?
Isaac Asimov’s 3 laws of robotics are as follows:
- A robot may not injure a human being or, by failing to act, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where carrying out those orders would break the First Law.
- A robot must protect its own existence, as long as the things it does to protect itself do not break the First or Second Law.
The three laws were first mentioned in Asimov’s short story Runaround, written in 1942, long before the era of automation and robotics that we are used to now. Asimov was a visionary who conjured up fantastic futuristic worlds where robots and humans co-existed and worked together. He created the three laws to ensure smooth cooperation between robots and humans in his sci-fi stories. Throughout his short stories and novels, Asimov made small modifications to the laws to define the relationship between humans and AI in each story. In his later works, once he began exploring complex worlds where robots had taken over the governments of entire planets and civilizations, Asimov created a ‘zeroth’ law to precede the other three. It reads:
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
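Notice that the laws form a strict priority ordering: the zeroth law outranks the first, which outranks the second, which outranks the third. As a purely illustrative sketch of that ordering (not a real robotics system — the `Action` class and its flags are hypothetical, and real AI cannot judge “harm” this simply), the precedence could be modelled like this:

```python
# Toy sketch of Asimov's laws as a priority-ordered rule check.
# Purely illustrative: these names are invented for this example,
# and deciding what counts as "harm" is the genuinely hard problem.

from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    obeys_order: bool = True       # Second Law concern

def is_permitted(action: Action) -> bool:
    """Check the laws in order of precedence; a higher law vetoes the rest."""
    if action.harms_humanity:      # Zeroth Law: protect humanity
        return False
    if action.harms_human:         # First Law: protect individual humans
        return False
    if not action.obeys_order:     # Second Law: obey human orders
        return False
    return True                    # Third Law: otherwise, self-preservation

# A harmful action is vetoed by the First Law even when it obeys an order:
print(is_permitted(Action(harms_human=True, obeys_order=True)))  # False
print(is_permitted(Action()))                                     # True
```

The design point the sketch captures is that the laws are not independent rules but a hierarchy: each law applies only when it does not conflict with the ones above it.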
Artificial Intelligence and the Three Laws Today
In its current form, artificial intelligence is far from attaining the complex levels of processing power that Asimov describes. Today, the tangible forms of AI we see in everyday life are limited to automated vacuum cleaners like the Roomba and digital assistants like Siri and Alexa. For now, all engineers can do to make sure a Roomba doesn’t harm humans is attach bumpers and sensors. If a Roomba accidentally snags someone’s foot, it has no mechanism for understanding that a human is in pain or discomfort.
But this doesn’t mean it will always remain so. Technology is advancing rapidly in all fields today. Look at cars, for instance. In just the last 100 years, we have gone from the crudeness of the first mass-produced car to the highly sophisticated computer systems of electric self-driving cars. At this pace, the day may not be far off when robots need Asimov’s 3 Laws pre-programmed into them.
Asimov’s three laws have successfully made their way from science fiction into actual science. Engineers, scientists and philosophers today are constantly grappling with the all-important question of ethics in artificial intelligence. New fields of interest, like robot rights, liability for self-driving cars and the weaponization of artificial intelligence, are taking shape across the world. For now, the application of Asimov’s 3 Laws remains a matter of speculation, where we can only intelligently predict what challenges we might face and how we might deal with them. What is really exciting is that in your lifetime, you could see it become a reality!
Enjoyed reading this article? For more on robotics and the technology of the future, check out these interesting reads on the Learning Tree Blog: