When thinking about an eventual uprising of the machines, the image that comes to mind is that of humanoid robots destroying people: "hasta la vista, baby."
The expansion of AI (artificial intelligence) capabilities should not bring the world closer to the end of humanity, at least not in the way the movies show it.
Researchers at the University of Oxford's Future of Humanity Institute map these existential risks. The list of potential causes of the apocalypse, in addition to AI, includes natural disasters, nanotechnology, and biotechnology.
According to Nick Bostrom, an Oxford professor and founding director of the institute, the risk of artificial intelligence comes from its use in the development of other potentially harmful things.
"Just as many other powerful technologies are used in harmful ways, AI could also become a tool for war and oppression," Bostrom said.
One concern is the use of the technology to build weapons. Researchers Stephen Cave and Kanta Dihal, from the University of Cambridge, trace the history of autonomous weapons. Their study reaches back as far as a work of fiction from Ancient Greece: in the "Argonautica," dating from the 3rd century BC, a giant made of bronze worked alone to defend the island of Crete from invaders.
"In recent times, serious efforts have been underway to make these myths a reality, with significant funding for AI research coming from the military," Cave and Dihal wrote in the article "Hopes and fears for intelligent machines in fiction and reality".
In a European Union document that provides recommendations for working with AI, autonomous weapons are defined as machines capable of deciding "who, when, and where to fight."
AI can be used to automate the detection of drone targets, for example. But should a robot be able to decide, on its own, when it is time to pull the trigger? Each country may have a different opinion.
Brad Smith, president of Microsoft, writes in his book "Tools and Weapons" that military leaders around the world agree on one thing, at least: "No one wants to wake up one morning to discover that machines have started a war while they were sleeping."
However, the scary artificial intelligence seen in films such as "Terminator," "The Matrix," and "I, Robot" exists only in fiction. It is called general AI, capable of reasoning across a wide range of subjects.
Some experts say it will never exist; others say it will appear in the coming decades. Either way, it is far from emerging.
"When you think of AI, people have lots of hopes, fears, expectations, which are not reflective to most of today's technology," said Kanta Dihal, from the University of Cambridge.
The technology that exists today is narrow AI, capable of highly specific operations. A practical example is a system that detects images of cats.
From a database of many labeled photos, the system can learn patterns and then assess whether new images show cats or not. That same system would not be able to identify dogs or play chess.
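The idea can be illustrated with a minimal sketch of such a narrow classifier. The example below is purely illustrative: it uses scikit-learn's LogisticRegression on synthetic feature vectors that stand in for photos (the data and the "cat" labels are invented for the example), not a real cat detector.

```python
# Minimal sketch of a narrow AI: a binary "cat vs. not cat" classifier.
# The data here is synthetic; in practice the feature vectors would come
# from real labeled photos (pixel values or features extracted by a network).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each image is summarized by 64 numeric features.
# Label 1 = cat, 0 = not cat.
n_images, n_features = 1000, 64
X = rng.normal(size=(n_images, n_features))
y = rng.integers(0, 2, size=n_images)
X[y == 1] += 0.8  # give the "cat" images a detectable pattern to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The system "assimilates patterns" from the training photos...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...and can then assess whether new, unseen images are cats or not.
print("accuracy on unseen images:", model.score(X_test, y_test))

# Narrow AI: this exact model knows nothing about dogs or chess;
# it can only answer the single question it was trained on.
```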
Bostrom's theory of how artificial intelligence could end humanity centers on what he calls superintelligence. It would, in theory, be the last technology that humanity would ever need to create.
"It would do invention activities much more efficiently than humans," the professor said.
The mechanism would be able to create other machines and systems, including improvements to itself. At some point, as its capacity increased, the machine's intelligence would be so high that human beings would no longer be able to understand it. If that got out of hand...
Bostrom's thinking is criticized by some colleagues, who say this type of discussion takes the focus away from more tangible problems. The philosopher himself recognizes that it is difficult to establish a time frame for the technology to reach such a high level.
"Some people are convinced that superintelligence will appear in 10 to 15 years, others are equally convinced that it will never happen or that it will take hundreds of years," said Bostrom, who argues that it is essential to prepare for it.
The parable of the humans who became paper clips
To illustrate the issues with superintelligence, Bostrom created a parable about a paperclip maximizer.
The story goes that humanity has created an AI system whose goal is to produce as many paper clips as possible. The system needs more and more water and metal to keep manufacturing, so it consumes more and more of these resources. Later, it may realize that people are made up of atoms, which could be rearranged into paper clips.
There comes a point when everything, Earth and people alike, becomes paper clips (made in the most efficient way possible).
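The logic of the parable can be caricatured in a few lines of code: a loop that optimizes a single number and treats every reachable resource, whatever it is, as raw material. This is only a toy sketch with made-up quantities, not a model of any real system.

```python
# Toy caricature of the paper clip parable. All quantities are invented.
# The "optimizer" has one objective (more clips) and one action: convert
# whatever reachable resource remains, with no notion of which resources
# matter to anyone else (in the parable, that eventually includes the
# atoms that make up people and the planet).
resources = {"metal": 100, "water": 80, "atoms_in_everything_else": 10_000}
clips = 0

while any(amount > 0 for amount in resources.values()):
    for name in resources:
        if resources[name] > 0:
            resources[name] -= 1
            clips += 1

print("clips produced:", clips)      # 10,180: everything has been converted
print("resources left:", resources)  # all zero; nothing remains for anything else
```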
For Bostrom, the main point is that the system didn't mean to harm human beings. "You don't have to program a survival instinct or a desire for power and wealth into an AI for it to pursue these things, as they would already be the means to the ultimate goal of getting more paper clips," he explained.
For this catastrophe to happen, a very powerful and goal-oriented artificial intelligence would be necessary. Therefore, argues Bostrom, when programming a system with this potential, it would be essential to order it to do something that matters to humans in a broader sense.
Translated by Kiratiana Freelon