Chapter 2
AI in real life

Humans need to be able to take control of autonomous mechanisms

Experts say that, when an AI system is in a tough spot, the solution might be putting a person back in charge

Raphael Hernandes

Unlike a human being, technology cannot make decisions on its own. However autonomous a piece of equipment may seem, it must be programmed in advance to act, and that requires a series of decisions during product development.

Choosing how a machine will react when faced with challenges is one of the central topics in the debate about AI (artificial intelligence). And experts argue that, when things get ugly, control must be handed back to a human.

"One evident risk [of AI] is overreliance on these systems," said Konstantinos Karachalios, managing director of IEEE-SA (Institute of Electrical and Electronics Engineers Standards Association, which develops global standards for different industries). "We should always keep a critical eye on them and be able to remove them from the loop to take back control when needed."

Karachalios cites an example from the Cold War (1947-1991) to show how important it can be for a human to take the reins and not leave everything in the "hands" of machines.

In 1983, Soviet officer Stanislav Petrov (1939-2017) chose to ignore alarms from a system that had identified, by satellite, nuclear missiles fired by the Americans. The detection was wrong. Petrov's decision prevented his superiors from launching a retaliatory attack and, with it, a possible nuclear war.

Nowadays, one of the main dilemmas about systems that operate on their own involves self-driving cars, which are already being tested on the streets of some cities.

"You are travelling along a single lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. How should the car react?"

The example was created by Jason Millar, a researcher at the University of Ottawa, Canada, and published on the Robohub website.

In that case, both options necessarily cause harm. Morally, there is no right answer.

Even so, self-driving cars need to be programmed to act in these situations. Who should make that decision: the car's designer, the user, legislators?

For Millar, the medical field, which deals with life-and-death decisions all the time, can help provide an answer. "It is generally left up to the individual for whom the question has direct moral implications to decide which outcome is preferable [whether or not to undergo aggressive treatment, for example]." In this case, that would be the driver.

"It might be that a deeply committed animal lover might opt for the wall even if it were a deer in his car"s path. It might turn out that most of us would choose not to swerve," he writes. "It is in this choice that we maintain our personal autonomy."

Even in less extreme cases, machines can pose problems. What happens when things get out of hand? If a self-driving car is involved in an accident, who is at fault: the manufacturer or the person in the car?

According to Luciano Floridi, a philosopher at Oxford, in these situations responsibility must be distributed among all the agents involved, unless an agent can prove it was not at fault in the accident.

If a person is riding in an autonomous car that does not allow manual driving, it is like being a passenger on a train, he said: in the event of an accident, there is no way to blame that person. If it is possible to take control, responsibility can be shared.

Translated by Kiratiana Freelon
