AI Hype

We’ve seen that machine learning has produced tremendous advances in the ability of computers to detect and replicate patterns, solve complex problems, and generate new content from existing content using a combination of logic and probability. These advances are real.

We’ve also seen that these abilities have emerged from a computer science research program that began more than half a century ago with an ambitious overarching goal: to simulate human intelligence in a machine.

Finally, we’ve seen that from the beginning, this program was one where talk of simulating intelligence easily slid over into talk of creating intelligence. “AI hype” might be said to begin with the obfuscation of the difference between these two things.

For businesses that sell AI systems, the financial incentives to keep up this obfuscation are obvious. The obfuscation also benefits businesses that invest in AI services with the thought that these might replace some of their actual human workers.

But even when the sellers and buyers of AI services don’t claim that the machines providing them actually are intelligent in the same way humans are, it benefits them to exaggerate what the machines can do and to minimize their failures, such as their well-documented propensity to fabricate facts and cite sources that don’t exist. (As is often noted, the now common term for this propensity, “hallucination,” perpetuates the obfuscation described above by ascribing some kind of imaginative power to circuits and software.)

In their book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, computational linguist Emily M. Bender and sociologist Alex Hanna write,

To put it bluntly, “AI” is a marketing term. It doesn’t refer to a coherent set of technologies. Instead, the phrase “artificial intelligence” is deployed when the people building or selling a particular set of technologies will profit from getting others to believe that the technology is similar to humans, able to do things that, in fact, intrinsically require human judgment, perception, or creativity. (p. 38)

They continue,

The set of technologies that get sold as AI is diverse, in both application and construction—in fact, we wouldn’t be surprised if some of the tech being sold this way is actually just a fancy wrapper around some spreadsheets. The term serves to obscure that diversity, however, so the conversation becomes clearer if one speaks in terms of “automation” rather than “AI” and looks at precisely what is being automated. (p. 39)

Bender and Hanna distinguish five types of automation among the systems marketed as powered by AI. Their point, again, is that these technologies don’t all work the same way:

  • Decision-making: Automated decision-making systems may be used, for example, to set bail, approve loans, review résumés, or determine eligibility for certain social benefits.
  • Classification: Automated classification systems work on data to match patterns and assign the data to different categories. Facial recognition tools based on image databases and targeted advertising based on user data fall into this category.
  • Recommendation: Automated recommendation systems lie behind the organization of social media feeds (when these are determined by a platform’s algorithm) and suggestions from services like Netflix or Spotify. They generate recommendations based on a profile compiled from the user’s own data or from the profiles of similar users.
  • Transcription/translation: These automated systems translate information from one format to another. Examples include transforming speech into text, extracting text from images, and translating one language to another.
  • Text and image generation: Also known, as we saw earlier, as “generative artificial intelligence” or “GenAI,” automated text/image generation systems take a user prompt as input and generate plausible output in response.

Some of these technologies have been around for a long time; only recently, Bender and Hanna point out, have most of them been marketed under the banner of “AI.”

All of these technologies can save time and produce other benefits in personal, business, or research contexts. It isn’t hype to recognize what they can do. AI hype consists either in exaggerating their capabilities or in ignoring or minimizing the costs they impose, such as those taken up on the next page of this module under “AI Harms.”

As we’ve seen Bender and Hanna argue, AI hype can also take the form of representing these technologies, collectively, as more than the sum of their parts, particularly by suggesting that together they have put the world into, or on the cusp of, a new era in which machines will somehow awaken into consciousness.

Accordingly, let’s conclude this page on AI hype by returning to what’s problematic in the idea of “thinking machines.”

I experience my own consciousness as something internal to myself; I experience another’s consciousness only by means of that person’s behavior. I can’t inspect your thinking directly, nor can you inspect mine. Each of us is limited to observing behavior in the other that looks like it must be the product of thinking. I’m inclined to believe that your outward behavior is accompanied by the same kind of inner experience that constitutes my own consciousness. But I have no way to know, for certain, that this is so. Alan Turing acknowledged this limitation in interpersonal knowledge in his pioneering paper on computing and intelligence, writing that, without the means of proof, “it is usual to have the polite convention that everyone thinks.”

For the same reason that I can’t know for sure that you’re thinking, there’s no way for me to prove that you’re not. AI hype benefits from the impossibility of proving this particular negative about machines as well. How would I prove that the computer on whose keyboard I’m typing right now isn’t conscious? So perhaps the polite convention that applies to people should apply to my computer as well—or at least to a chatbot I access on it, which responds to my prompts with linguistic behaviors that certainly look like the behaviors of a thinking being.

Granted, there are those who hold that, in principle, there’s no reason a machine couldn’t think: either because they’re willing to equate thinking itself with external behavior, making the presence or absence of some internal sensation of thinking irrelevant, or because consciousness arises from material causes (brain cells passing electrical signals across synapses), which suggests that non-biological materials such as silicon should be able to achieve the same trick.

But there are reasons to be skeptical.

One of the most famous arguments against collapsing thinking into behavior was offered in 1980 by the philosopher John Searle, in a thought experiment known as the “Chinese Room” (described in detail in the Stanford Encyclopedia article mentioned earlier). In the experiment, a person who knows no Chinese sits in a sealed room; notes written in Chinese characters he doesn’t even recognize are passed in to him, and by consulting a rule book that tells him which characters to send back in reply to which, he produces answers that, to those outside the room, read like the responses of a fluent speaker. It would make no sense to conclude from this behavior that the person in the room understood Chinese; by the same token, Searle argued, a computer’s rule-governed manipulation of symbols is no evidence that it understands them.

A number of more recent critics have questioned the possibility of thinking machines by pointing to the irreducibly embodied and situational nature of human communication, in which intention, meaning-making, and striving play roles that are absent from the way AI systems—even those that improve themselves by means of machine learning—generate language. In “You Are Not a Parrot,” a 2023 New York magazine profile of Emily Bender, the computational linguist mentioned above, Bender points out that language “is built on ‘people speaking to each other, working together to achieve a joint understanding. It’s a human-human interaction.’” The title of the New York profile comes from a 2021 paper that Bender co-authored with three others (two of them members of Google’s “Ethical AI” team who lost their jobs as a result), “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. Thanks to Bender and her co-authors, the expression “stochastic parrot” has become a widely used shorthand for the way language-generating AI systems operate: not by striving to put thought into words but, as we saw earlier, by a process of statistical inference in which they replicate patterns in the huge quantities of text on which they’ve been trained.
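To make the “stochastic parrot” idea concrete, here is a minimal illustrative sketch of pattern replication. The training sentences, function name, and parameters below are invented for illustration, and real systems use neural networks trained on vastly more text; but the underlying idea of generating language by statistical next-word inference is the same.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": a bigram model that produces text purely by
# replicating statistical patterns in its (tiny, invented) training text.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog . the dog saw the cat ."
)

# Record which words were observed to follow each word, with repetition,
# so that sampling from the list reflects observed frequencies.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def parrot(start: str, max_words: int = 12) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(max_words):
        options = followers.get(word)
        if not options:  # no observed continuation for this word
            break
        word = random.choice(options)  # weighted by observed frequency
        output.append(word)
    return " ".join(output)

print(parrot("the"))  # e.g. "the dog sat on the mat . the cat saw the dog"
```

Even this trivial model produces locally plausible word sequences without anything resembling intention or understanding; scaling the same statistical idea up, with far more sophisticated machinery, is what gives large language models their fluency.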

The philosopher Alva Noë, in his essay “Rage Against the Machine,” has pointed out that human language generation, unlike that of computers, involves elements of “resistance,” “irritation,” and “negotiation”—negotiation not only with other speakers but with the meanings of words themselves, resulting, at times, in new or altered meanings, which in turn alter the speakers themselves.

We don’t just talk, as it were, following the rules blindly. Talking is an issue for us, and the rules, such as they are, are up for grabs and in dispute. We always, inevitably, and from the beginning, are made to cope with how hard talking is, how liable we are to misunderstand each other, although most of the time this is undertaken matter-of-factly and without undue stress. To talk, almost inevitably, is to question word choice, to demand reformulation, repetition and repair. What do you mean? How can you say that? In this way, talking contains within it, from the start, and as one of its basic modes, the activities of criticism and reflection about talking, which end up changing the way we talk. We don’t just act, as it were, in the flow. Flow eludes us and, in its place, we know striving, argument and negotiation. And so we change language in using language; and that’s what a language is, a place of capture and release, engagement and criticism, a process. We can never factor out mere doing, skilfulness, habit – the sort of things machines are used effectively to simulate – from the ways these doings, engagements and skills are made new, transformed, through our very acts of doing them. These are entangled. This is a crucial lesson about the very shape of human cognition.

The question whether machines can think may seem too abstruse, too “philosophical,” to consider very long, especially in comparison to the many questions now circulating about how AI will likely affect the job market, the economy, education, and other aspects of daily life. But as we move ahead to consider, on the next page, AI Harms, keep in mind that some of the likely effects of AI are directly connected to claims about its capabilities, including the claim that it can replace humans in doing tasks that have traditionally been assumed to require a thinking human being.

