Chapter 2
AI in real life

Facial recognition saves lives but limits freedom

AI-based systems help law enforcement but collide with privacy

Raphael Hernandes

Facial recognition is one of the most debated topics in discussions of the ethical limits of AI (artificial intelligence). The technology makes it possible to identify people in digital images (photos and videos).

You know how Facebook suggests tagging a friend in a photo? That is facial recognition, and, at massive scale, it can be applied to surveillance.

With just one photo of a face, cameras can pick out a specific target. The feature is beginning to appear in a range of products: cell phones, cars, front doors, and refrigerators, among others.

"Do we really need facial recognition in stores, banks, everywhere? No, probably not. Perhaps we need it in more sensitive places, such as at the airport or near a nuclear facility," said Italian Luciano Floridi, a professor at the University of Oxford.

The issue raises privacy concerns. Security camera images make it possible to identify individuals in a crowd, so it is feasible, for example, to compile a list of everyone who attended a protest.

Recognition systems need millions of photos to teach the computer what a face is. Some companies simply scrape that content from social media without people's consent.

Clearview, an American company that provides this technology to police forces, is being sued for scraping billions of images from the internet without permission.

On the other hand, the same capability can serve public safety: it can be used, for example, to find fugitives and to fight child sex trafficking.

According to Thorn, the American NGO that created it, the Spotlight tool helps identify an average of eight missing children a day in online prostitution ads.

Although the technology has evolved to the point of being, on average, more accurate than humans, a study by the US government's National Institute of Standards and Technology (NIST) found that facial recognition has difficulty identifying Black and Asian people.

The study examined 189 algorithms from 99 developers across two different facial recognition tasks.

In the first task, verification, the question was whether the person in one photo was the same person in another image. Error rates were 10 to 100 times higher (depending on the algorithm) for Asian and Black people than for white people.

In the second task, identification, the objective was to determine from a photo who the individual was. Here, false matches (the system claiming a photo shows one person when it is actually another) were more frequent for Black women. When the technology is used to identify criminal suspects, such errors can result in an innocent person being accused.
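To make the distinction between the two tasks concrete, here is a minimal Python sketch. The embed function is a hypothetical stand-in for the deep neural network a real system would use; the threshold, the 128-dimensional vector size, and the gallery are illustrative assumptions, not values from the study.

```python
# A minimal sketch of the two tasks, assuming a face can be reduced to a
# numeric "embedding" vector and that similar faces yield nearby vectors.
# embed() below is a hypothetical placeholder, not a real model.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder: map a face image to a 128-dimensional embedding."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return rng.standard_normal(128)

def distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two embeddings; smaller means more alike."""
    return float(np.linalg.norm(a - b))

# Task 1 (verification): is the person in photo_a the same as in photo_b?
def same_person(photo_a: np.ndarray, photo_b: np.ndarray,
                threshold: float = 1.0) -> bool:
    return distance(embed(photo_a), embed(photo_b)) < threshold

# Task 2 (identification): given a photo and a gallery of known faces,
# return the name whose stored embedding is closest to the query.
def identify(photo: np.ndarray, gallery: dict[str, np.ndarray]) -> str:
    query = embed(photo)
    return min(gallery, key=lambda name: distance(query, gallery[name]))
```

The risk the study points to lives in the threshold and the closest-match step: if embeddings for some demographic groups are less well separated, both tasks fail more often for those groups.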

The disparity occurs because the sets of photos used to train the algorithms, a process in which the system ingests millions of images and calibrates its parameters, are made up predominantly of white people.
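One way researchers detect such a disparity is to compute error rates separately for each demographic group. The sketch below, with an assumed record format and made-up group labels, shows the idea; it is not the study's actual methodology.

```python
# Hedged sketch: compute the false match rate per demographic group from
# (group, system_said_match, truly_same_person) records. The record format
# and group labels are assumptions for illustration only.
from collections import defaultdict

def false_match_rate(decisions):
    """decisions: list of (matched, same) booleans for comparison pairs."""
    impostor_pairs = [(m, s) for m, s in decisions if not s]
    if not impostor_pairs:
        return 0.0
    false_matches = sum(1 for m, _ in impostor_pairs if m)
    return false_matches / len(impostor_pairs)

def rates_by_group(records):
    by_group = defaultdict(list)
    for group, matched, same in records:
        by_group[group].append((matched, same))
    return {group: false_match_rate(d) for group, d in by_group.items()}

# Example with fabricated toy data: (group, matched, same_person)
records = [("A", True, False), ("A", False, False),
           ("B", True, False), ("B", True, False)]
print(rates_by_group(records))  # {'A': 0.5, 'B': 1.0}
```

A system is considered fair on this metric only when the per-group rates are close; a 10- to 100-fold gap of the kind the study reports would show up directly in such a table.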

Translated by Kiratiana Freelon
