The Ethics of AI: Who Controls the Future?

AI’s rapid rise is raising ethical concerns and security risks, including its potential use in terrorism. David Leslie explores the challenges and explains how we can ensure responsible AI development.

 

© IE Insights.

Transcription

If a human user queries one of these large models, for instance, about how do I make a biological weapon? Is this good or service affecting the public interest to leave the control of that good or service in the hands of the private sector?

In November of 2022, the technology world as we know it really changed. That was the month that they released ChatGPT. And what happened? We might call it an industrial revolution. It was the fastest-growing commercial application in U.S. history. Almost immediately, you had large companies basically integrating these large language models into their core services.

Many, many, commercial applications, downstream commercial applications of generative AI, basically penetrate into all domains and markets. What that’s meant, for the broader, research community, is that we needed to pay attention to how this kind of scaling of AI has been met on the other side by multiplying risks. For instance, if we think about our political life, we can think of the kind of integrity of the information ecosystems. With the synthetic generation of massive amounts of text, data and image data,

Suddenly, having confidence that the information we’re receiving is authentic, authentically human. What happens with these systems, in this case, is they can hallucinate. It’s really just the systems predicting an outcome that’s hopefully going to match the expectations of users who are entering prompts. But that prediction isn’t necessarily reflecting what’s out there in reality.

Another risk that’s arisen is the risk of asymmetrical terrorism or violence. If a human user, queries one of these large models about how to do things in chemistry, how to do things in biology, how to do things, that are not legal, for instance, about how do I make a biological weapon? The response has been, among the larger tech companies, is let’s put software patches and filters on these systems.

The truth is, this has only been partially successful because different people have found ways to what’s called jailbreak the system. So find prompts that work around the types of filters and the types of constraints that are put on the systems. Some would say it’s almost an impossible problem to me. And that’s because when you’re thinking about the space of possible prompts in natural language, that space of possible prompts is absolutely infinite.

We use a team of prompt engineers to sort of stress test the system to, for instance, put in as many prompts that might trigger a system to generate an outcome about biological weapons. We’ve had an increasing digital divide, across from the Global North, to the historically-characterized Global South. And what we are experiencing now with the large-scale AI systems is a potential force multiplier for inequality.

One other, really important to mention here is if you have too rapid of a path to market with some of these systems, you have, many, many large scale threats of what we call value lock-in or cultural discrimination. There are many cultural groups that have been marginalized and their data isn’t in the model. Another really significant issue is what we might think of as the consolidation of financial and technological power. You had over the last 10 or 15 years, larger tech companies basically being able to collect a massive amount of data, build a lot of infrastructure in terms of the computing power.

And in addition to that, these larger tech companies have been able to really draw in the top-level skills in AI as a science. And so now what? What we have to confront from a policy and governance perspective is how do we put checks on this massive control? And there’s a tradition of thinking on this that’s emerged over the last, arguably 400 years, called public utility thinking.

So is this good or service affecting the public interest in a way that makes it too risky to leave the control of that good or service in the hands of the private sector or in the hands of markets? There are certain elements of industry that we need to think of as those types of public-interest-oriented utilities.

We think of, for instance, energy that way, for instance, water. But as we now move into an increasingly cyber physical reality, we need to start thinking about these in terms of public utilities. And it’s not just as a national public utility. These are global dynamics. And so if we don’t rethink the way we’re governing them in terms of a more international perspective, we simply won’t be able to respond effectively.

There’s a readiness assessment methodology for UNESCO member states to assess where they stand in terms of really using these technologies responsibly, using digital and AI responsibly. We also have a range of real innovative possibilities. For instance, large scale systems that might address issues of biodiversity drain or climate change or public health. And if we don’t have the motivations directed towards these public interest concerns, then we simply won’t have positive action.

 

Read More

WOULD YOU LIKE TO RECEIVE IE INSIGHTS?

Sign up for our Newsletter

Newsletter Subscription