Chapter 1
AI limits

Artificial intelligence faces ethical issues on the path to responsible evolution

Allowing computers to make their own decisions leaves space for distortion and errors

Raphael Hernandes

Is it a good idea for weapons to select their targets and pull the trigger on their own? What about allowing a robot to screen candidates for job openings at a company? How can one ensure that technology is good for human beings?

These are some of the issues in the ethical debate around AI (artificial intelligence). The technology is growing fast and is often put to questionable uses.

Several countries are rushing to master the technology for commercial and military advantage. They are creating national policies to encourage research and to foster companies specialized in the field. The United States, China, and the European Union lead development in this area.

China is the most emblematic case: the country's startups are encouraged to develop sophisticated facial recognition systems. The government uses the technology to track minorities such as the Uighurs, who are mostly Muslim.

Street cameras and cell phone apps monitor citizens' every move. The Chinese justify this on national security grounds: the objective is to prevent extremist attacks. China has even sold the surveillance system to governments in Africa, such as Zimbabwe's.

Reporting by the South China Morning Post newspaper shows that efforts to adopt the technology in schools and universities, promoted by the Chinese Ministry of Education, raise privacy concerns. In addition to facial recognition, the equipment monitors students' attention during classes.

Adopting a technology at an early stage of its development can create other problems. A student told the Chinese newspaper that the smart system fails to recognize her when she changes glasses and that facial recognition causes long lines at the entrance to the dorms.

The ethics debate in the West seeks to impose limits on artificial intelligence to keep it from getting out of hand. Other powerful technologies have needed the same treatment.

Bioethics, which helps to establish the rules for research in areas such as genetics, is often cited as a standard to be followed. The regulation comes in the wake of advances: rules for working with cloning, for example, emerged only after the disclosure of the experiment with Dolly the sheep in 1997.

Lygia da Veiga Pereira, head of the Department of Genetics and Evolutionary Biology at USP (University of São Paulo), explains that specialized committees evaluate and can block projects, both those from academia and those from private companies.

According to Pereira, an effective way to manage the risks of major advances without impeding the progress of science is consensus among specialists at conferences. They may suspend an activity worldwide for a specified period (a moratorium) and resume the discussion later, once the technology has advanced.

This idea of putting on the brakes also appears in artificial intelligence. A draft European Union document obtained by the website Politico in January shows that the bloc is considering banning facial recognition in public areas for three to five years, during which more robust rules would be created.

Translated by Kiratiana Freelon
