Chapter 1
AI limits

AI can cause damage despite good intentions, EU says

Knowing what happens behind the scenes of AI is necessary to prevent it from being a black box

Raphael Hernandes

The first steps toward greater control over the use of AI (artificial intelligence) were taken in 2019. One of the leading bodies to launch guidelines was the European Union, with its "Ethics Guidelines for Trustworthy AI".

The document calls for AI to be in the service of humanity and the common good and says it is necessary to guarantee reliability and transparency. It points out that, even with good intentions, AI systems can cause damage.

One of the most common problems with AI is that it works like a black box. The systems draw conclusions from the data presented to them, but it is not always possible to verify exactly which factors led to those conclusions.

The European Union addresses this in the document, which offers suggestions for the ethical use of the technology.

The rules under study reinforce the importance of engineers and companies creating mechanisms that show how an AI reached a specific result. With this information, a person could contest the result of an exam, a trial, a job selection process, and so on.

The EU also recommends safeguards for processing personal data, as well as for assessing the source of the data used to feed artificial intelligence systems. This is the problem known as "bias": when information drawn from the real world ends up harming groups of people.

An AI system fed with data from a company that never had many women on its team would tend to recommend fewer women than men for hiring if this bias is not corrected.
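To make that mechanism concrete, here is a minimal Python sketch, not part of the original article: the company, the numbers and the features are hypothetical (it assumes scikit-learn and NumPy are available), and it only illustrates how a model trained on historically skewed hiring records can reproduce the skew.

from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical historical records: hiring correlated with gender,
# even though skill is distributed the same way in both groups.
gender = rng.integers(0, 2, n)   # 0 = woman, 1 = man
skill = rng.normal(0, 1, n)      # comparable across groups
hired = ((skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# With identical skill, the model still favors the historically hired group,
# because it learned the pattern in the data, not a judgment about merit.
print("P(hire | woman, average skill):", model.predict_proba([[0, 0.0]])[0, 1])
print("P(hire | man, average skill):  ", model.predict_proba([[1, 0.0]])[0, 1])

Run on this synthetic data, the second probability comes out higher than the first for candidates with the same skill, which is exactly the kind of result the EU guidelines ask developers to detect and correct.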

A classic case is that of Eric Loomis. In 2013, he was sentenced to six years in prison on multiple charges related to the theft of a car. The judge responsible for the sentence admitted to using an AI system called Compas to support the verdict.

Loomis' defense appealed the decision, arguing that it was not possible to know what criteria the system had adopted in defining the sentence. The judge denied the request, saying that the sentence would have been the same even without the system.

Three years later, an analysis by the website ProPublica showed that Compas was harsher on Black defendants, who were most often classified as "high risk". In addition, the system was correct in only 20% of cases when it tried to predict who would commit violent crimes in the future.

In sensitive cases like Loomis', or in medical diagnoses, for example, experts argue that AI systems should act as aids to humans in decision-making. Whoever decides, in the end, has to be someone of flesh and blood.

Translated by Kiratiana Freelon
