IE Law School hosted Professor Wayne Holmes for a talk on AI and education

A lecturer is presenting in front of an attentive audience in a modern classroom.

Professor Holmes explored the rapid growth of AI in the classroom, raising essential questions about its implications.

On March 27th, Professor Wayne Holmes participated in IE Law School’s Lawtomation and AI and the Law initiatives to discuss the educational implications of artificial intelligence.

Professor Holmes with Carmen Pérez-Llorca, Vice Dean for Strategic Initiatives at IE Law School.

Wayne Holmes is Full Professor of Critical Studies of Artificial Intelligence and Education at University College London (UCL), and a Senior Researcher at the UNESCO International Research Centre on AI. He also serves as Associate Professor (Adjunct) at the University of Nova Gorica (Slovenia) and as Foreign Expert at Beijing Normal University Zhuhai (China).

His talk, "Future-Proofing Legal Education in the Age of AI," raised pressing questions for educators: Can AI help teachers improve education? What is the real cost of AI?

The rise of generative AI in the classroom

Professor Holmes raised concerns about young people's inability to differentiate between true and false information created by AI, emphasizing the unreliability of AI-generated content. "It looks intelligent, but it isn't," he stated simply.

Holmes highlighted one study showing that while using ChatGPT may initially boost students' performance, those students fare worse than their peers once the tool is taken away. This suggests that overreliance on AI undermines genuine learning, with potentially broader negative consequences in the long run.

"Everybody needs to be critically AI-literate."
Professor Wayne Holmes

The human element of AI

Holmes categorized AI into three dimensions: technological, practical and human. He noted, "There are lots of courses about the technological dimension and hundreds of pages about the practical dimension, but virtually nothing about the human dimension."

AI itself is merely a tool: humans train it, set the parameters for its responses and enter the prompts for what it generates. This means it can easily be misused, with deepfakes serving as a dangerous example. He pointed out that the big companies that own AI platforms have total control over how it is trained, a situation he described as "beyond democratic control."

Hidden costs

Professor Holmes in conversation with Antonio Aloisi, Law Professor at IE Law School.

Professor Holmes urged us to look below the surface of AI and consider several important questions. How will AI impact society in ways we don’t immediately see? What deeper implications should we consider?

One pressing concern is safety. Despite AI's widespread use in classrooms, "there's virtually no independent evidence at scale for the effectiveness or safety of these tools or their impact on cognition and mental health," he warned.

Another hidden cost is environmental. AI platforms like ChatGPT require massive amounts of energy and water resources, and according to Holmes, “AI is actually adding to climate change as much as it is helping.”

A top-down and a bottom-up approach

To address these challenges, Professor Holmes called for stronger regulation by governments and intergovernmental bodies, particularly to protect children in educational settings. While existing rules cover issues such as data reliability, privacy and bias, they fail to address education-specific concerns like student agency and teacher disempowerment.

At the same time, he advocated a bottom-up approach: improving AI literacy among individuals. "Everybody needs to be critically AI-literate," he stated. This means understanding not only the practical aspects but also the human implications of AI. Without this knowledge, he argued, "you're not in a position to make informed decisions about whether to use AI, where to use AI and when to use AI."