Why the Term ‘Artificial Intelligence’ Is Misleading

The term artificial intelligence furthers misconceptions about the discipline’s capabilities, origins, and nature, writes Pablo Sanguinetti.


One of the paradoxes of the current boom in artificial intelligence (AI) is that this technology has burst into public debate with such intensity and speed while so many misconceptions persist about what it truly is. How do you define AI? Not even experts agree. This is partly because AI is not a singular reality but a collection of diverse techniques and systems grouped under one label. In their reference book on the discipline, for example, Stuart Russell and Peter Norvig avoid a single definition and instead offer four different ways of explaining it. The first source of confusion stems from how we conceptualize AI: its very name.

The problematic origins of the term

Although work on the idea of ‘thinking machines’ had begun some years earlier, the term ‘artificial intelligence’ is known to have first appeared in 1955, in the proposal for a seminar to be held the following summer at Dartmouth College – an event now regarded as a foundational moment for the discipline.

The computer scientist John McCarthy coined it with pragmatic objectives that he himself explained on several occasions. On the one hand, he was looking for a ‘catchy’ label that would attract funding and renowned experts. On the other, he wanted to differentiate the new field from other disciplines such as cybernetics or ‘automata studies’, which McCarthy himself researched with Claude Shannon, the father of information theory.

While undeniably successful in hindsight, the term was controversial from the start. Some participants in the Dartmouth seminar thought it was ‘too flashy’ or even ‘phony’. Others criticized the quest for ‘artificial’ rather than ‘real’ intelligence. Two participants, Allen Newell and Herb Simon, argued that the new field of study should be called ‘complex information processing’, and continued to use this label in their work.

Both parts of the name remain contested today. In her book Atlas of AI, Kate Crawford denies that this technology is either ‘intelligent’ or ‘artificial’. Rather, she writes, it ‘is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications’. Other studies point to the eugenic origins of the term ‘intelligence’ or the fraudulent tone implied by ‘artificial’.

“Artificial intelligence” as a category mistake

Beyond these well-known criticisms, I consider that the term ‘artificial intelligence’ leads to what the philosopher Gilbert Ryle, in his classic book The Concept of Mind, calls a ‘category mistake’: the error of applying a concept to a category to which it does not belong. ‘Saturday is in bed’ is one example of a category mistake. Another, provided by Ryle, is a tourist who is shown all the buildings of Oxford University and then asks, ‘But where is the University?’, confusing the buildings with the institution to which they belong.

In my view, ‘artificial intelligence’ generates a category mistake of at least three kinds:

  1. Discipline vs entity: ‘Artificial intelligence’ is a discipline, a field of study, but the term is sometimes used with the indefinite article as if it were an individual, countable entity. For instance, phrases like ‘An AI designs materials…’ confuse a discipline with a tangible being, akin to saying ‘a medicine cures a tumor’.
  2. Aspiration vs reality: The term originally described an aspiration, a goal to be achieved in the distant future. Today, it is often used as if such intelligence already existed, as if the task were already accomplished. In 1955, the name denoted a promise, not an achievement. This is still true today.
  3. Tool vs agent: The term contributes to anthropomorphizing AI, confusing a tool with an agent, a piece of software with a being that has its own will, desires, and ideas. This is easy to see when AI is made the subject of the sentence, replacing the real agents of the action (the humans who have used AI as a tool to do something), as in: ‘AI discovers…’.

The name ‘artificial intelligence’ also fosters a more subtle but equally powerful misconception: that AI systems not only do the same things as humans, but do them in the same way and according to the same internal mechanisms. This is not true. Airplanes fly, like birds, but by very different physical principles. If they were called ‘artificial birds’, it would probably be easier to misconceive what they are and how they work. People would be more likely to debate false, non-existent problems in aeronautics and to neglect the real ones. The same can be said of AI. But the activity of thinking is less visible than that of flying, and the differences between what humans and machines do in this area are therefore harder to see. This kind of conceptual error is reflected in the well-known quote attributed to the computer scientist Edsger Dijkstra: ‘The question of whether machines can think is about as relevant as the question of whether submarines can swim’.

The impact of misconceptions

Although these misunderstandings originate in the term AI, they have spread throughout the lexicon of the discipline. To take just one relevant example, the word ‘hallucination’ has come to describe cases where a large language model includes non-existent information in its responses. Again, this is misleading. A ‘hallucination’ implies: a) the existence of a perceiving mind; and b) that this mind can exhibit unexpected deviations from its normally correct functioning. Both are false. What we call ‘hallucination’ in AI models is nothing more than an error inherent – and therefore inevitable – in the probabilistic architecture used by chatbots. The word is a clever semantic maneuver that uses an unsolvable problem of the model to deceptively magnify its capabilities.

It is true that the very vagueness of the term may have contributed to the rapid development of the discipline at one point, as a Stanford report noted in 2016. Today, however, in the midst of the AI boom, this ambiguity has questionable cultural, economic, political, and strategic consequences. For example, it is one of the causes of so-called ‘AI anxiety’, which is rooted in a mistaken conceptualization of what AI is and can be. It also presents an obstacle in other key respects: how can we use, develop, or regulate a technology whose definition we cannot even agree on?

Toward a better terminology

For that reason, analyzing the language we use to talk about AI is not merely a theoretical digression. Language matters. It is the tool with which we construct our reality. The words, the stories, and the visual representations we use for AI shape its development and the way we embrace this technology in our culture. In my book Tecnohumanismo. Por un diseño narrativo y estético de la inteligencia artificial (Technohumanism: Toward a Narrative and Aesthetic Design of Artificial Intelligence), I therefore propose the concept of ‘narrative design’ as a necessary – and often neglected – layer in the development of AI.

Other initiatives make the same point. In his study of the myths surrounding AI, the researcher Daniel Leufer includes the myth that the term ‘has only one meaning’ and offers several possible alternatives, such as ‘computational statistics’, ‘cognitive automation’, ‘applied optimisation’, ‘automated decision (making/support) systems’ or even ‘a computer program’. A stimulating intellectual exercise is to replace the term ‘AI’ with some of these alternatives in some of the most difficult and seemingly unresolvable problems posed by this technology today.

Some of the problems in conceptualizing AI are rooted in its very name. It may well already be too late to change it: language is a social and living reality that, fortunately, cannot be regulated from above. But thinking about words, and becoming aware of the different realities we create with them, is a good way to dive deeper into the challenges posed by our interaction with AI today. It is also a reminder that, without a humanistic and critical approach, we won’t be able to fully understand one of the most important technological developments in history.

 

© IE Insights.