Chapter 4
Interviews

Hany Farid, deepfake expert, defends government regulation of tech giants

Brexit and the US elections show that misinformation campaigns work, he says

Raphael Hernandes

Hany Farid, 54, said he doesn't know how he ended up with the nickname "modern-day Sherlock Holmes," but he has no problem with it. "I have heard this in the press for many years. I've been called much worse, so I guess I don't care."

Farid specializes in the forensic analysis of digital images and deepfakes, which he prefers to call "content synthesized by AI (artificial intelligence)" because it is "more descriptive." His work includes collaborating with DARPA (Defense Advanced Research Projects Agency) since 2016 on technologies to combat this fake content.

Last semester he began teaching at the schools of information and computer science at the University of California, Berkeley, after ten years at Dartmouth College; both institutions are in the US.

Farid explained to Folha that developing technologies to combat false content spread at scale is not enough. It must be supported by greater responsibility from platforms such as Facebook, YouTube, and Twitter, as well as by "smarter" consumers.

"We need to start realizing that we are being manipulated, whether by platforms, by bad actors, both at home and abroad, and we have to be more critical," he said.

Hany Farid, UC Berkeley professor (photo: handout)

His book "Fake Photos" (MIT Press), released in September, offers tips for identifying fake photos at minimal, intermediate, and advanced levels of technical knowledge.

One of these techniques is the so-called "reverse search" of an image: a search done using the photo itself instead of words. You can do it on sites like TinEye and Google Images, and it can help determine the origin of the content.
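As a rough illustration of the idea behind such a reverse search (not the workflow of any particular service): sites like TinEye index images with compact "perceptual" fingerprints and return close matches. The sketch below, which assumes only the Pillow library and uses placeholder file names, computes a simple average hash for two images and compares them.

```python
# Illustrative sketch of image fingerprinting, the core idea behind reverse image search.
# Assumes Pillow is installed; the file names below are placeholders, not real data.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to size x size grayscale and threshold each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits; a small distance suggests the images share an origin."""
    return sum(x != y for x, y in zip(a, b))

if __name__ == "__main__":
    suspect = average_hash("suspect_photo.jpg")
    original = average_hash("candidate_original.jpg")
    # A handful of differing bits (out of 64) usually indicates a resized or
    # recompressed copy of the same photo rather than an unrelated image.
    print("hamming distance:", hamming(suspect, original))
```

A small Hamming distance typically means the suspect photo is a copy or re-edit of the candidate original, which is how a match can point back to the content's source.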

*

How do you define deepfake? The best, most technical term is "AI-synthesized content." The reason I like this term is that it is more descriptive. First, we have to understand that fake images and fake videos are not new. What has changed is the automation of creating fake audio and video. This is important because, historically, there were half a dozen people around the world, like Hollywood studios, with this capacity. Now the masses have been given access to do this. This changes the game in terms of online misinformation campaigns.
It works like this: you have two computers, one is the synthesis engine, and the other is a detector. The synthesis engine generates an image by randomly combining pixels. It shows it to the detector and asks, "is this image fake?" The detector has millions and millions of images of people and responds, "try again." It does this repeatedly until the classifier (or detector) says, "it's perfect."
This is what democratizes access to this technology for the average person on the internet, something that used to be in the hands of Hollywood studios only.
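The back-and-forth Farid describes is a generative adversarial network (GAN). The sketch below is a minimal, self-contained illustration of that loop, assuming PyTorch and using toy 2-D points in place of images; real deepfake systems use far larger networks trained on photos and video.

```python
# Minimal sketch of the adversarial loop described above (a GAN), using PyTorch.
# Toy 2-D points stand in for images so the example runs quickly and self-contained.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    # "Real" data: a fixed 2-D Gaussian the synthesis engine must learn to imitate.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # synthesis engine
detector = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # detector / classifier

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

for step in range(2000):
    # 1) The detector learns to answer "is this sample fake?"
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()
    d_loss = (loss_fn(detector(real), torch.ones(128, 1)) +
              loss_fn(detector(fake), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The synthesis engine "tries again", adjusting until the detector is fooled.
    fake = generator(torch.randn(128, 8))
    g_loss = loss_fn(detector(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Generated points should now cluster near the real data's mean of (2.0, -1.0).
print("mean of generated points:", generator(torch.randn(1000, 8)).mean(dim=0))
```

Pitting the two networks against each other is exactly the "try again" cycle Farid describes: training stops being useful only when the detector can no longer tell real from fake.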

Was the creation of this technology the tipping point in the generation of fake content? It is difficult to pinpoint precisely, but I would say it was one of the key points. There is a lot of work in computer graphics and computer vision that comes into play.

Is there a difference when we see a fun deepfake, like putting Nicolas Cage's face in unusual situations, and when there's something like a president saying something he never said? It is exactly the same technology. The same technology that makes Donald Trump say something puts Nicolas Cage in a movie. There are many cool special effects in Hollywood used for entertainment, satire... But the same technology can be turned into a weapon.

How easy is it to create a deepfake? Because dealing with AI isn't exactly that simple for most people... Eighteen months ago it wasn't easy, but now you can just download the code. This popularization is happening because much of the code to do this is open, and people are building tools that can be used by people who are not computer science graduates.

You mentioned a change over 18 months. That gives an idea of how fast the technology is evolving. What are your expectations for the future? I think it is very difficult to predict the future, but if the trend continues like this, and this is a big "if," but all the evidence points to it, I think the sophistication and realism of fake content will keep growing. It will become easier and easier to use. And the threats will only grow.
When we talk about the future, we need to talk about some other things, because the impact of deepfakes is not just because of AI. If I had the ability to create deepfakes but couldn't distribute them, it wouldn't be too much of a problem. But of course, I can, thanks to Facebook, Twitter, and YouTube.
And there is the fact that most people consume content at incredible speed in a very polarized society. All of these pieces come together to create this misinformation apocalypse. It is not an isolated thing. It really is the collection of all these pieces together.

Is it easier to create or detect a fake? It will always be easier to create. Staying on the defense side, in this case, is difficult. Think of things like spam and viruses. If you want spam to get past the defenses, just send billions and billions of messages. Some of them will make it through.
Number two is that, on the defense side, the field changes very quickly. By the time we develop a technology to detect one kind of fake, everyone has already moved on to another.
So the goal is not to stop deepfakes and disinformation but to minimize their impact. Let's take it out of the hands of the teenager in Macedonia who was manipulating the 2016 elections in the US.

So what should we do to minimize the impact of deepfakes? Of course, we need to develop technology to distinguish the real from the fake; that's what I spend most of my time doing.
But we need platforms like Facebook, Twitter, and YouTube to take more responsibility for how they have become weapons around the world.
And we too, as consumers, need to get smarter. We need to start realizing that we are being manipulated, whether by platforms or by bad actors. We need to understand that reading a news story on WhatsApp or Facebook is not the same as reading it in your newspaper or in the New York Times.
I think a combination of technology, policies, education, and behavioral changes will be critical. Above all of this is regulatory pressure. Governments need to step in and say, "look, this is a mess."

Do you see a group developing good regulations? If so, who? I would say that, on the regulatory front, the Europeans are in the best place with the GDPR (the EU's privacy law). They have been very aggressive. The Germans, the French, the British, and the European Union as a whole have been good on these issues. That is the regulatory and legislative side.
On the policy side, big tech companies are nowhere to be found. On the technology side, we are improving. We are working hard, but it is always a constant battle. On the human side, I don't know. It looks like everyone is in the same mess right now.

Can this problem really be solved? I see it being managed or mitigated, but not solved. There has always been false news, and there always will be. But the scale and speed of it have now become incredibly dangerous. The way I define "success" here is not eliminating fake news but managing it. I suspect it will be years of work.
It will probably get worse before it gets better, because what we see here in the US with the elections and in the UK with Brexit is that fake news and false information work. Frankly, after the massive disinformation campaigns we saw here in the US, we did very little to improve the situation for 2020. I think we are still going downhill. And I say that also because technology companies, despite the mess that was the 2016 election, did very little to deal with the consequences of it.

What should technology companies have done? Is there still time before the elections? I think time is short. I think several things need to be done. One is obviously to develop the technology. The other, perhaps more important, is to think of coherent policies to deal with disinformation campaigns. And I haven't seen the platforms do that. It is very difficult to separate satire, protected speech, and political commentary from intentionally misleading content aimed at influencing elections. However, the combined value of Facebook, Google, YouTube, and Twitter is in the hundreds of billions of dollars. If you can't invest some resources in it... Frankly, I don't have the patience.


Could you talk a little more about the effects of this fake content on the real world and democracy? I worry about fake videos of presidential candidates, fraud, non-consensual pornography. How are you going to believe anything you see, read, or hear? When Donald Trump manages, whenever he doesn't like something, to say "fake news, fake news, fake news" and people believe him, where are we as a democracy?
My wife has this great saying that "before we discuss anything, we need to agree on the facts". Because we may have differences of opinion about the interpretation of the facts, but the facts are the facts. We have to start there.
People confused the "Information Age" with the "Knowledge Age." We had this idea 20 years ago that information would free us. It turns out that it is not so. There is a difference between information and knowledge, and we confuse these two.

How do we transform the "Information Age" into the "Knowledge Age"? The short answer is that I don't know. Part of it is education. We need to educate the next generation to be better digital citizens, including in how they consume online content. We have to encourage, find a way, to make technology companies do their job. I genuinely believe that the poison of the internet is the business model. The Silicon Valley business model is that everything is free, but I take all your data and deliver ads. First, you have serious privacy issues that we're still trying to understand. Second, you encourage people to stay on your platform for as long as possible.
I don't know how to change that because the truth is, it's incredibly profitable. And people, for 20 years, were happy to pay nothing and are only now waking up. My hope is that the market will fix this. That some entrepreneur will come and say, "we can do something better."

You have been working with DARPA to develop tools to fight deepfakes even before they became popular... Yeah, we started with this very early in 2015. As soon as we saw the first technology to automate the creation of counterfeits, we could see what was going to happen. We saw deepfakes appear about a year and a half ago, and we started to work aggressively on them too. Fundamentally, the problems we were dealing with have not changed. It was only the scale and scope that changed.

What kind of technology do they have, and how far is it from being put into practice? In terms of putting it into practice at internet scale, you have to understand that this is very difficult. On Facebook alone, there are one billion uploads per day; on YouTube, 500 hours of video are uploaded every minute. Operating at this scale is incredibly difficult, and we are nowhere close to doing that. The strategy is to release our tools and put them in the hands of people like you, journalists, and allow them to have one more tool in their arsenal to determine whether stories are valid or not.

If creating fake content is a lot faster than detecting it, how can technology help with that? The hope is that when a video reaches the press, and one wants to determine whether it is real or not, one can use some of the tools that we and others are developing and, basically, classify the video as fake or not.

Translated by Kiratiana Freelon
