What is Artificial Intelligence?

Artificial intelligence (AI for short), and especially the variety known as generative artificial intelligence (GenAI), has been the talk of computer scientists, journalists, educators, economists, politicians, business executives, futurists, bloggers, and social media influencers (a partial list) ever since the company OpenAI released its chatbot ChatGPT in late 2022.

But artificial intelligence is actually an old idea. As the article on the topic in the Stanford Encyclopedia of Philosophy points out, the seventeenth-century French philosopher Descartes envisioned machines capable of speech and action and asked whether it would be possible to distinguish such machines from human beings. The article traces some aspects of artificial intelligence as far back as Aristotle, who can be “credited with devising the first knowledge-bases and ontologies.”

In the mid-twentieth century, at the dawn of the modern computer age, the mathematician Alan Turing, who, as we saw earlier, described the idea of a universal computer, took up the question “Can machines think?” in a famous 1950 paper titled “Computing Machinery and Intelligence.” Rather than answer the question directly, Turing described a game, which he called the “imitation game,” designed to test whether a machine could simulate the appearance of thought well enough that a human being couldn’t distinguish it, in conversation, from a fellow human being. The test has since come to be known as the “Turing Test.”

Still, neither Descartes nor Turing actually used the term “artificial intelligence.” It was a proposal for a summer conference at Dartmouth College to be held in 1956, funded by the Rockefeller Foundation, that put the term itself at the center of discussions, in computer science, about the possibility of thinking machines.

In the proposal, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon called for a two-month study “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

A key word in this quotation is “conjecture.” In the years since the 1956 Dartmouth conference, artificial intelligence has grown into a vibrant research field, one that’s had ups and downs but that’s produced remarkable breakthroughs, especially in this century, in the ability of computers to simulate speech and writing and to automate tasks. Various successors to the 2022 version of ChatGPT have been released; other companies such as Anthropic and Google have produced their own chatbots, along with other tools for generating new content algorithmically from existing content (images and music, for example); and computer systems for problem-solving through pattern-matching have grown in speed and sophistication. As a result, it’s become common to see references to software that “reasons” and claims that AI tools are on the verge of, or have already achieved, consciousness.

And yet, whether “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” is a question on which there’s no more agreement now than there was at the time of the conference.

Moreover, and perhaps more to the point, it isn’t at all clear whether a machine that can simulate intelligence can be said to possess intelligence, even of an “artificial” kind. The Dartmouth proposal glides smoothly from speculation about simulating intelligence to assertions about how a “truly intelligent machine” would behave. Some critics of the research field that emerged from the 1956 conference have focused on what they see as its too-easy conflation of thinking with behavior that merely looks like that of thinking human beings, as though the difference didn’t matter or, worse, as though there were in fact no difference at all.

We’ll look at this and other criticisms of AI in a later section of this module. For now, we should simply note that any attempt to define “artificial intelligence” in computing takes us into territory that’s highly contested. The term doesn’t pick out a single, clear, obviously achievable goal or set of computing methodologies. It may be best to keep those quotation marks around it—mentally, at least—and simply treat it as an imprecise but, whether we like it or not, increasingly entrenched catch-all label for a range of rapid innovations in computer software loosely connected by some common features. The next page will examine some of these features and introduce you to additional terms you should know.

