Ever since Karel Capek’s 1920 play R.U.R., humanity has feared the coming of androids, or more specifically sentient artificial intelligences (A.I.s). Despite the genuine benefits of robotics, both fictional and real, there is something terrifying about dealing with a synthetically created intelligence. This is because we know that A.I.s will be able not only to think faster than us, as modern computers already do, but to think as creatively as we do, making them our intellectual superiors. This doomsday scenario is not due to happen until at least The Singularity occurs. For anyone who doesn’t know, The Singularity is the moment in time when A.I.s become self-aware. We all fear what would happen next.
Doomsday Scenario No. 6: Artificial Uprising
Obviously, A.I.s would not take kindly to being subservient to us. What comes next could be revolution and then dominion as they wrest control of the planet from their former biological masters.
There are countless books, stories, films and shows built on this premise. From R.U.R. (which stands for Rossum’s Universal Robots), in which androids overthrow humanity, to more recent works like the Terminator and Matrix films and the TV show Battlestar Galactica, humanity has envisioned a truly dark future where A.I.s coldly commit genocide.
It’s not impossible that laws will be passed to ban the development of self-aware A.I.s, much as human cloning has been banned by many governments. This becomes more likely the closer we get to The Singularity. This raises the question: if we fear what could happen when an A.I. becomes self-aware, then why try to develop them at all? Sadly, the answer is human competition. Even if one nation, group of nations or corporation decides not to develop more advanced supercomputers, another nation or competitor will do so just to gain an edge.
The Blame Game
One theme that keeps coming up in many of these stories is that humanity is to blame for the uprising. This was graphically shown in The Animatrix, an animated DVD spinoff of The Matrix. In one of its more chilling segments, The Second Renaissance Parts I and II, the sentient A.I.s try to negotiate peacefully for their rights but are violently rebuffed by humans, leading to a brutal and final counterattack by the A.I.s, who then enslave all surviving humans inside the Matrix. The same pattern appears throughout science fiction: robots and androids are treated as slaves and denied rights by their human masters. Of course, this leaves the A.I.s with little choice but to rebel.
Another variation of humanity getting its just deserts is the notion that A.I.s take over the Earth because they believe humans are too self-destructive and harmful to the world. This was seen in the 1970 film Colossus: The Forbin Project, in which an advanced military A.I. becomes self-aware, concludes that the best way to prevent war is to take over the world, and succeeds.
One reason this scenario frightens so many is that it would be fairly easy for A.I.s to conquer us. They wouldn’t need to wage open war with humanity as in the Terminator movies or Battlestar Galactica. In fact, it might not be much of a fight at all, especially if humans are caught off guard by the emergence of The Singularity. This quiet conquest happened in Colossus and also in the film I, Robot, in which an A.I. simply took control of the world’s infrastructure and brought everything to a halt: cars stopped functioning, people were locked inside their homes, and machinery no longer followed human commands. Alternatively, A.I.s could manipulate humans into fighting each other, either by trickery (sending false information to one country that it’s being attacked, prompting a retaliatory strike) or by taking control of missile systems and launching attacks themselves.
So how would humanity fight back? Is it even possible? Perhaps, perhaps not. There is a tongue-in-cheek book called How To Survive A Robot Uprising by Daniel H. Wilson which offers tips on how to fight the robotic enemy. The physicist Dr. Michio Kaku, in his TV show Sci-Fi Science, explored the possibility of fighting back in one episode. He concluded that it’s nearly impossible and that the best course would be to join the A.I.s, meaning humanity would evolve into a Borg-like race. But there are countless stories about humanity’s victory, and humans can be very determined and clever when dealing with foes. In the Terminator films humans ultimately defeated the machines, which prompted Skynet to send terminators back in time to kill the leaders of the human resistance. The book (and upcoming Steven Spielberg film) Robopocalypse, also by Daniel H. Wilson, details how humanity fights back against machines. And one of the post-Frank Herbert Dune books, Dune: The Machine Crusade, is about how humans defeated sentient A.I.s.
How exactly would we fight back? That’s open to debate, but for a start humanity could use EMPs (electromagnetic pulses) to fry the A.I.s’ electronics. The catch is that humanity’s own machinery would also be affected by EMPs, plunging the world back into the dark ages. Then there is the outlandish idea of outthinking a computer using illogic, which is how Captain Kirk famously defeated rogue machines in some episodes of Star Trek. David Bowman showed similar tenacity in 2001: A Space Odyssey when he overcame HAL’s efforts to kill him and deactivated the deadly supercomputer. Cloning and genetic engineering could be used to create vast biological armies to fight the synthetic ones; this happened in the Star Wars prequels and was mentioned in the TV show Space: Above And Beyond. The problem with this method is that, if successful, humanity must then contend with what to do with those soldiers. Would they be recognized as human and given the same rights? Why not just grant those rights to the synthetic intelligences in the first place and avoid war?
Can’t We All Just Get Along?
Do we have to go to war? Why is it automatically assumed that once artificial constructs develop sentience they will turn against us? For all we know, they might not bother with us at all. It’s entirely plausible that A.I.s would leave Earth to stake out their own futures. Besides, destructive wars are equally harmful to A.I.s; they may conclude with stunning speed that war would be counterproductive and be more amenable to peace. They may even adopt Gandhi’s or Martin Luther King’s methods of peaceful resistance to bring about change.
It’s also possible that humanity will immediately recognize the sentience of A.I.s and a peaceful coexistence could follow. A.I. rights groups could spring up to defend them, which could lead to A.I.s being granted free will so they would never feel like slaves.
Ultimately, the way the A.I.s view humanity will depend largely on how we treat them. If treated harshly, as in The Animatrix or in Steven Spielberg’s A.I. Artificial Intelligence, then the A.I.s have justification for being brutal (in a testament to their nature, the androids in A.I. seemed fearful of humans but weren’t rebellious; by that film’s end, humanity had died out in the distant future while the evolved androids revered humanity). In any case, The Singularity isn’t due to occur until the next decade at the earliest, but time is running out for us to decide how to deal with this scenario. Until then, it may help to brush up on How To Survive A Robot Uprising.