Misinformation, Deepfakes, and the Future of AI: Talking Trends

Can we trust anything in the age of AI and deepfakes? Audrey Tang, Taiwan’s first Minister of Digital Affairs, sits down with Ikhlaq Sidhu to explore digital trust and how to combat misinformation.

 

© IE Insights.

Transcription

Ikhlaq Sidhu (IS): Hey, Audrey. Great to be able to chat with you. I was just thinking, you’ve got so many technical accomplishments. I think probably the topic of the year has got to be AI. What do you think of AI? We’re here now, it’s here now.

Audrey Tang (AT): Very much so. I often think of AI as assistive intelligence. That is to say, something that helps people communicate. Of course, I use language models all the time. On my own MacBook, I trained an email-replying language model that learns from everything I have ever written, so that it can draft my emails. I can just say, “decline very politely,” and it replies in a very polite way, the way I would do it. Because it’s trained on my laptop, I don’t have to worry about privacy, security, or things like that.
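(As a rough illustration, a local drafting assistant of this kind might look like the sketch below. The model file name and the llama-cpp-python runtime are assumptions for the example, not a description of Tang’s actual setup.)

    # Minimal sketch of a local email-drafting assistant: a chat model
    # fine-tuned on one's own mail, running entirely on the local machine.
    # "my-mail-model.gguf" is a placeholder for such a fine-tuned checkpoint.
    from llama_cpp import Llama

    llm = Llama(model_path="my-mail-model.gguf", n_ctx=4096, verbose=False)

    def draft_reply(incoming_email: str, instruction: str) -> str:
        """Draft a reply in the owner's style, e.g. 'decline very politely'."""
        out = llm.create_chat_completion(
            messages=[
                {"role": "system",
                 "content": "Draft email replies in the mailbox owner's usual style."},
                {"role": "user",
                 "content": f"Email:\n{incoming_email}\n\nInstruction: {instruction}"},
            ],
            max_tokens=400,
        )
        return out["choices"][0]["message"]["content"]

    # Nothing leaves the laptop, so privacy concerns stay local too.
    print(draft_reply("Would you speak at our conference next month?",
                      "decline very politely"))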

IS: In that case, it really is an agent completely on your behalf. And I think most of the world is still using an agent that’s somewhere in the cloud, far away from them. But in your case, you kind of just put it right on your computer. Yeah. I’m thinking that’s probably what’s going to happen for everybody in the future.

I think AI is a little bit of a suit of armor, in that you may need it to protect yourself. It can assist you, as in writing the email, but for all these things that come in that are maybe not so helpful to you, it could also be watching over you and guiding you.

AT: I think personal computing means that we are still the person, and AI just assists or augments our capabilities.

IS: Yeah. And I mean, the whole thing is pretty fascinating. A lot has happened in five years, with a big surprise last year from OpenAI and GPTs and the language models. But I sometimes think of it like there’s a library of all the work on the internet, and somehow it’s all been compressed into some billions of parameters.

And now, instead of reading any of them, you can just ask this little translator. It’s like a librarian for the internet: instead of telling you it’s on aisle 12, which is what Google would do, it just says, oh, the paragraph you want to know about basically says this.

AT: And it can interpolate between very polite, somewhat polite, and somewhat not polite as well.

IS: In its response.

AT: Yes. So, I’m very interested in this idea of alignment, tuning the models to local expectations, because what’s considered normal in Silicon Valley, or wherever it’s trained, may not be normal in my culture or in the way that I write emails. So how to steer it, or align it, is also something that’s very interesting.
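(One common way to do this kind of local alignment is parameter-efficient fine-tuning such as LoRA. The sketch below is illustrative: the base checkpoint path is a placeholder, and nothing here is specific to Tang’s setup.)

    # Steering an open base model toward local norms with LoRA adapters.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # "path/to/open-model" stands in for any locally stored checkpoint.
    base = AutoModelForCausalLM.from_pretrained("path/to/open-model")

    # LoRA trains a few small adapter matrices instead of all the weights,
    # which is why this kind of alignment can run on a laptop.
    config = LoraConfig(
        r=8,                                  # adapter rank: few new weights
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()        # typically well under 1% of the model

    # From here, an ordinary fine-tuning loop over locally written text
    # (e.g. one's own emails) nudges the model toward local expectations.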

IS: I think the amazing thing there is that the same AI model can be all personalities. You could imagine a version where all these AI models are trained differently, you know, like this one is supposed to sound like a Western cowboy, and this other one is supposed to be very empathetic or something. But with the same AI model, you can just ask it for a slice of its personality.

AT: Like a foundation model. It can evolve into various personalities.

IS: You can literally say, “tell it to me like a therapist would,” and it’ll tell it to you like that. The same model will just give you all these different variations.
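(That “slice of personality” is usually nothing more than a different system prompt on the same weights. A small illustration, again assuming a placeholder local model file:)

    # Same weights, different personas: only the system prompt changes.
    from llama_cpp import Llama

    llm = Llama(model_path="my-local-model.gguf", n_ctx=2048, verbose=False)

    def ask(persona: str, question: str) -> str:
        out = llm.create_chat_completion(
            messages=[
                {"role": "system", "content": f"Answer as {persona}."},
                {"role": "user", "content": question},
            ],
            max_tokens=200,
        )
        return out["choices"][0]["message"]["content"]

    # One foundation model, many slices of personality.
    print(ask("a gentle therapist", "Explain what a language model is."))
    print(ask("a Western cowboy", "Explain what a language model is."))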

AT: Because for the past eight years, during my tenure in public service, I published all the transcripts of my journalistic interviews, and even some internal meetings, online. Yeah. So if you ask any foundation model to, you know, phrase it in a way that Audrey Tang would, it actually already knows what Audrey Tang sounds like.

IS: Let’s say we go to that future where everyone’s got their AI embedded in their computer, in their phone, or, you know.

AT: AI PC.

IS: Yeah, exactly. The PC evolves to become the AI PC. Then is that the end of OpenAI? Because, you know, now everyone’s got their own. You don’t really have to use that API or pay that monthly fee.

AT: Yeah. I mean, in the beginning, before the personal computer, there was the mainframe. And the mainframe was very useful for a while. Yeah. And that was before, you know, VisiCalc or some of those killer applications that worked on the personal level. So I fully expect there is still going to be a year or two for the cloud-based services to be very useful, mostly because of the network effect, but, more or less, I think within a few years the AI PC idea will catch up.

IS: It’s a pretty interesting thing from a strategy point of view for those companies, right, because they are kind of going down one road map. And the one you’re probably using is Meta’s version, I’m guessing, Llama 3. Yeah, right. So that one is open source, kind of going with your philosophy on a number of things, right?

So if everyone’s got an open-source model, and Meta doesn’t charge for it, and that’s widely available everywhere, maybe those gigantic stock prices are not warranted for all the companies that are getting in there. What does that mean for misinformation, or for the possible harm that happens to truth because of AI?

AT: There’s currently nothing preventing anyone from taking Llama and tuning it on Audrey Tang’s way of speech, or taking Stable Diffusion or Stable Video Diffusion or any of those models and making a very convincing, interactive deepfake, such that people cannot really tell whether it’s Audrey Tang speaking or not. Yeah. Whereas with the API, at least they can do some safeguarding.

IS: You just kind of opened the door to: well, how can I know anything is true anymore? I mean, it seems like we were having a hard enough time just with social media and, frankly, Google’s democratized algorithms: when you type something in and PageRank shows you a result, it’s not based on whether it’s true, it’s based on whether it’s more popular.

AT: A popularity contest.

IS: Right? It’s a popularity contest for truth. And so, we were already that far along. Now it’s going to be even more hard to know when something is true or not true. What happens?

AT: Yeah. In Taiwan we’ve seen a surge in scams, in fraud cases. Right. Where you just see an advertisement on Google, YouTube, or Meta and so on that says this celebrity wants to teach you how to invest in stocks or whatever. And when you click it, it brings you to an end-to-end encrypted chat, like WhatsApp or something.

And that celebrity, in their own voice, actually responds to your questions in real time. So it’s very difficult for people to tell whether this is actually the celebrity or not. So, our way to tackle this is very simply to say that nowadays, for investment advertisements, and indeed for all advertisements, a digital signature needs to be carried by the platform.

If the platform does not carry the digital signature of the celebrity, and somebody gets conned out of 1 million, say on Facebook, and Facebook, after being notified, didn’t take it down, then Facebook will be liable for that 1 million in damages. This is re-internalizing the externality.

IS: All right. So what you’re doing is basically taking the same approach we already use to know, when you go to www.x.com, that it’s really that site: that same digital signature is also used to know that you’re actually talking to the exact person.

AT: Right, this is the KYC: Know Your Client approach.
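(In code, the platform-side check Tang describes might look roughly like the sketch below: serve an ad only if it carries a signature that verifies against the endorser’s registered public key. The registry and the ad fields are illustrative assumptions.)

    # Sketch: an ad is acceptable only if the named endorser really signed it.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Registry mapping KYC-verified identities to their public keys
    # (how keys get registered is out of scope for this sketch).
    REGISTRY: dict[str, Ed25519PublicKey] = {}

    def ad_is_acceptable(endorser: str, ad_content: bytes, signature: bytes) -> bool:
        key = REGISTRY.get(endorser)
        if key is None:
            return False              # unknown endorser: refuse to serve the ad
        try:
            key.verify(signature, ad_content)
            return True               # genuine endorsement
        except InvalidSignature:
            return False              # forged or altered: serving it creates liability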

IS: Right. Exactly. You know, I’m just thinking back to that Taylor Swift thing, and how everybody can now make pictures of everybody and voices of everyone. You don’t know where the information’s coming from, and you don’t care where it’s coming from. But when it’s flooded over the internet, it’s really affecting people. What about those situations?

AT: Yeah. For example, leading up to elections, some candidate or a minister or somebody gets deepfaked, or even cheapfaked, just taking some clips and remixing them in a way that’s not very creative but convinces people. And I think the most effective way we’ve found to counter that is called pre-bunking, because debunking happens after the fact, and it’s not going to catch up with the spread of the disinformation.

But if we pre-bunk: for example, two years ago I deepfaked myself and sent out a video, a government advertisement really, that shows how an actor can play me. And with just 12 hours of MacBook processing, it becomes very convincing. And I tell people in that deepfake video: soon it will not be 12 hours, it will be 12 minutes.

And then 12 seconds, and 12 milliseconds. And when it becomes 12 milliseconds, it becomes interactive, and you cannot tell the difference anymore. So people know that if it is not signed, if it doesn’t come from a government number, it’s fake; in Taiwan, all the government SMS messages come from a single number: 111. If it doesn’t come from that number, then it’s fake.

Previously on the internet, if we saw somebody’s video or made a video call, the default was that we assumed they were human until they showed bot-like behavior. And then we’re like, oh, that’s a bot, right? But from now on, I think we need to flip the default. Everyone should be assumed to be a bot unless we are meeting face-to-face, as we’re meeting now, or unless it’s digitally signed, or your web of trust, meaning people you already trust to be human, vouches for that person.

Otherwise we should just assume it’s a bot.
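(The “web of trust” test can be made concrete with a small sketch: treat an account as human only if someone you already trust, directly or through a short chain, vouches for it. The trust graph below is an illustrative stand-in for cryptographically signed vouches.)

    # Default to "bot" unless a short vouching chain connects me to the account.
    from collections import deque

    TRUST: dict[str, set[str]] = {
        "me": {"alice"},      # I have verified Alice face-to-face
        "alice": {"bob"},     # Alice vouches that Bob is human
    }

    def assumed_human(me: str, account: str, max_hops: int = 3) -> bool:
        queue, seen = deque([(me, 0)]), {me}
        while queue:
            person, hops = queue.popleft()
            if person == account:
                return True
            if hops < max_hops:
                for vouched in TRUST.get(person, set()):
                    if vouched not in seen:
                        seen.add(vouched)
                        queue.append((vouched, hops + 1))
        return False          # no chain of trust: assume bot

    print(assumed_human("me", "bob"))      # True: me -> alice -> bob
    print(assumed_human("me", "mallory"))  # False: the flipped default says bot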

IS: I think this idea of, do you know who is sending you information, and how do we really know? Maybe we should just assume it’s not them until there’s a better way, or at least a way to know it truly is that person. I guess that’s the world we’re really going into. And this little conversation sheds a little light on what we can expect in the future.

AT: Yeah, very much so. I think the digital signature, far from being something that people only use for e-commerce, is probably just going to be the default from this point on.

IS: Very good. All right. Thank you so much for taking the time.

 
