It isn't easy to become a pioneer in philosophy, a branch of knowledge with millennia of history. But that is what happened to the Italian Luciano Floridi, 55, professor of information ethics at the University of Oxford.
He is one of the first and most prominent names in the philosophy and ethics of information, fields concerned with computing and technology. He advises the British government in this area and has performed the same role for giant companies such as Google and China's Tencent.
His work stands out when the subject is AI (artificial intelligence) specifically. Floridi was one of the 52 authors of the European Union's "Ethics Guidelines for Trustworthy AI."
Speaking with Folha, he covered a range of topics, including the definition of artificial intelligence and ethics for technology.
For him, ethics gains importance in the digital age. "We have less religion. People tend to associate ethics with religion a little less than in the past," he said. "Ethics needs to stand on its own."
The video call lasted approximately one hour and was interrupted only once: when Floridi's wife, who is Brazilian, boarded a plane to visit her home country and he wanted to wish her a good trip.
His patient, polite tone gave way to irritation when the subject turned to the thinking of Nick Bostrom, also a philosopher at the University of Oxford, who talks about the risk of AI reaching superintelligence and destroying humanity.
"AI described in Nick Bostrom's singularity and superintelligence is not impossible," he said. "Is it likely that an extraterrestrial civilization will arrive here, to dominate and enslave humanity. Impossible? No. Are we going to plan in case that happens? It can only be a joke."
*
What is the definition of AI? They are man-made artifacts capable of doing things we would do, for us and sometimes better than us, with a unique skill we do not find in other mechanical artifacts: they learn from their own performance and improve.
One way of describing AI is as a kind of reservoir of operations for doing things, which we can apply in different contexts. We can use it to save electricity at home, to find interesting information about the people who visit my store, to improve my cell phone camera, or even to recommend, on a website, other products the consumer might like.
In academia, there are many contrasting opinions about what AI precisely is. Is the definition of AI important for discussing ethics? One kind of definition says "this is that" and "that is this," like "water is H2O" and "H2O is water," with no room for error. We don't have a definition of AI like that, but we also don't have definitions of many essential things in life: love, intelligence, even democracy, trust, friendship, and so on. We often have a good understanding; we can recognize these things when we see them. It is crucial to have a good understanding of the technology, because only then can we set rules and govern something we understand.
How important is ethics today, in the digital age? Ethics has become more and more critical because we have something more and something less. We have less religion. Ethics has become more important because it needs to support itself. You cannot justify something by saying "because the Church says so" or "because God commanded." A little less religion made the ethical debate more difficult, but more urgent. And we have something more: we talk to each other much more than at any time in the past. I'm talking about globalization. Suddenly, different views on what is right and wrong are colliding in a way that never happened before. The more technology, science, and power we have over anything - society, the environment, our own lives - the more urgent ethical issues become.
And why discuss ethics and AI? Until recently, we understood interventions in the world in terms of "divine interventions" (for people in the past who believed in God), "human interventions," or "animal interventions." Those were the possible forces. It is as if we had a chessboard on which, suddenly, a new piece appears. Clearly, this piece changes the whole game. That piece is AI.
If you have something that can do things in the world autonomously and by learning, in a way that it can change its own programming, that activity requires an understanding of right and wrong. Ethics.
How do we answer these questions and define the limits? The initial debate on ethics in AI dates back to the 1960s, when the first forms of symbolic AI emerged. The result is that, in the last year, ethical codes for AI have flourished. Two, in particular, are very important in scope. One is the European Union's. We did a good job, I think, and we have a good structure in Europe for understanding good and not-so-good AI. The other is the OECD's, a similar structure.
Critics say these documents are not specific enough. Do you see them as a first step, or do you think they are enough? It shows that at least some people somewhere care enough to produce a document about this whole story. This is better than nothing, but that's it: better than nothing. Some of them are completely useless. What happens now is that every company, every institution, every government feels that it cannot be left behind. If 100 companies have a document with their structures and rules for AI, and I am company 102, I also need to have one. I can't be the only one without it.
We need to do much more. The true guidelines, therefore, come from governments, organizations, or international institutions. If you have international institutions such as the OECD, the European Union, and Unesco intervening, we are already taking a new step in the right direction. Take, for example, AI applied to facial recognition. We have already had this debate. Do I use facial recognition in my store? At the airport? This hole has to be plugged, and people are plugging it. I tend to be a little optimistic.
And how are we doing at translating the guidelines into practical policy? In general, I see large companies developing consultancy services for their clients, helping them verify that they are following the rules and regulations and that ethical issues have been taken into account.
There are gaps, and more needs to be accomplished, but something is already available. People are moving in terms of legislation, self-regulation, policies, or digital tools to translate principles into practices. What you can do is make sure that mistakes happen as rarely as possible and that, when they happen, there is a way to rectify them.
With different entities, governments, institutions, and companies creating their own rules for the use of AI, are we not at risk of getting lost about which document to follow? A little, yes. At first, you may have disagreements or different views, but this is something we have experienced in the past.
The big tech companies are asking for regulation, which is strange, since they usually want self-regulation. You work with one of them, Google, on a committee that advises on ethical issues. Why are these technology companies interested now? There are a few reasons. The first is certainty: they want to be sure of what is right and wrong. Companies like certainty even more than they like good rules; better to have bad rules than no rules at all. The second is that they understand that public opinion calls for a good application of AI. And since it is public opinion, what is acceptable and what is not must come from society. Companies like regulations because they help.
Is there a difference when thinking about regulations for systems with different purposes? For example, is it different to think about AI regulations for self-driving cars and AI for song suggestions? Yes and no. Some regulations are common to many areas. Think about safety regulations involving electricity. It doesn't matter if it's an electric drill, an electric oven, or an electric car: it is electricity and therefore has safety regulations. This would apply equally to AI. But then you have something specific: safety linked to the brakes of the car, not the microwave, or safety measures for microwave doors that are different from those for car doors. That is very specific. I think, then, of a combination of the two: ethical principles that cover several different areas, guidelines that spread horizontally, but also guidelines that run vertically, sector by sector. It is both general and specific.
How far are we from having these guidelines established? Are we talking about months, years, a generation? Some years. I wouldn't be surprised if we had this conversation in five years, and what we said today was history. I think technology is developing so fast, with such a widespread and profound impact that the socio-political will to regulate and create a clear sense of what is right and wrong will increase rapidly.
And how does this thinking work in practice? For example, with self-driving cars, how does one decide who is responsible for an accident: the driver, the manufacturer, who? There are many contexts in the world where something happens and it depends on many agents, more or less connected, each with a role in that accident. We had this in many other contexts before AI. The recommendation is to distribute responsibility among all the agents, unless they can prove that they had nothing to do with the accident.
A very concrete example, an analogy: in Holland, if you ride a bicycle next to someone, no problem. You can ride on the street, side by side with someone, and that's fine. If a third person joins you, it is illegal. You cannot ride three abreast on a public street. Who gets the fine? All three, because when A and B were side by side and C reached them, A and B could have slowed down or stopped completely to let C pass. Now, in the same Netherlands, another example: if two boats are stopped side by side on the river, it is legal. If a third boat arrives and stops beside them, it is illegal. In that case, only the third boat is fined. Why? Because the other two boats cannot disengage; they cannot go anywhere. It is not their fault. You can see that these are two very elementary, obvious examples with three agents. In one case, the responsibility is distributed; in the other, only one party is responsible.
With AI, it's the same. In contexts where we have a crazy person using AI for something bad, it is that person's fault; there is not much debate. In many other contexts, with many agents, many circumstances, many actors, who is to blame? Everyone, unless they can prove they did nothing wrong. So the manufacturer of the car, the software maker, the driver, even the person who crossed the street in the wrong place: there may be co-responsibility that needs to be distributed among them.
Is it always a case-by-case analysis? I think it's more about types of cases. Not an isolated incident, but a group of cases. It is not as if, every time a cyclist pulls up alongside two others, we have to examine what happened. No. That is one type. The family of cases with the three boats is another.
Let's take a realistic example. A person riding in a self-driving car has no way to drive it: no steering wheel, nothing. It's like me on a train; I have zero control. Then the car gets into an accident. Whose fault is it? Would you blame a passenger for an accident the train had? Of course not. There was no control. In a case where there is a steering wheel, where there is a big red button saying "if something goes wrong, press the button"... who is responsible? The car manufacturer and the driver who didn't press the button.
We need to be very concrete and make sure there are typologies: not exactly a case-by-case analysis, but an understanding that a given case belongs to one type or another. Then we will have a clear sense of what is happening.
In your lectures, you mention the overuse and underuse of AI. What are the problems in these situations? Overuse, to give a concrete example, is like the debate we have today about facial recognition. We don't need that much facial recognition on every corner. It's like killing mosquitoes with a grenade.
Underuse is typical, for example, in the health sector. We don't use AI there because the regulation is not very clear and people are afraid of the consequences.
Will AI create the future and be in everything? We have a great opportunity to do a lot of good work, both for our social problems, inequality in particular, and for the environment, particularly global warming. It is a very powerful technology that, in the right hands and with the right governance, could do fantastic things. I am a little concerned that we are not doing this; we are missing the opportunity.
The reason ethics is so important is precisely that the good governance of AI, the correct application of this technology, will require a general project for our society. I like to call it a "human project": something society will want, the future we want to leave to the next generations. This is an ethical question. But we are concerned with other things, basically using AI to generate more money.
What about the rights of robots? Should we be thinking about that? [Laughs]. No, this is a joke. Would you give rights to your dishwasher? It's a piece of engineering. It's good entertainment [talking about robot rights], we can joke about it, but let's not talk about Star Wars.
You are critical of science fiction that deals with the end of the world through AI or superintelligence. Don't you see Nick Bostrom's idea of superintelligence as a possibility? I think people have been playing with certain tricks, tricks that we teach philosophy students in their first year. The trick is to talk about possibility, and that is exactly the word they use.
Let me give you an example: imagine that I buy a lottery ticket. Is it possible for me to win? Sure. Absolutely. I buy another ticket, from another lottery. Is it possible that I win the second time too? Yes. If you understand what "possible" means, you have to say yes. But it will not happen. It is unlikely; it is insignificantly possible. This is the kind of rationalization Nick Bostrom makes. "Ah! But you cannot exclude the possibility..." No, I cannot. The AI described in Nick Bostrom's singularity and superintelligence is not impossible. I agree. Does that mean it is likely? No.
Just as it is possible for an extraterrestrial civilization to arrive here to dominate and enslave humanity. Impossible? Hmmm. No. Are we going to plan in case that happens? It can only be a joke.
Is AI a force for good? I think so. Like most of the technologies we have developed, it is. When we talk about the Iron Age, of course we have weapons in mind, but the truth is that the Iron Age introduced ways of making tools that could break stone and sow the land. When we talk about the wheel, the alphabet, computers, electricity... these are all good things. The Internet. It's all good stuff. Can we use it for something bad? Absolutely.
I am more optimistic about technology and less about humanity. I think we'll make a mess of it [humanity]. That is why discussions like Nick Bostrom's, about the singularity and so on, are not just funny. They are distracting, and that is serious.
As we speak, there are 700 million people without access to clean water who could use AI to have a chance. And you really want to worry about some Terminator? Skynet? Ethically speaking, it is irresponsible. Stop watching Netflix and get real.
Translated by Kiratiana Freelon