What does it really mean for a machine to be intelligent?
In 1950, Alan Turing proposed his famous Imitation Game as a test for machine intelligence. The test was simple: could a computer, conversing through text alone, fool a human interrogator into believing it, too, was human? For decades, this remained a distant goal. Today, large language models (LLMs) have, by many measures, passed the test. ChatGPT can effortlessly write a sonnet on demand, a task Turing himself posed in his original paper.
But does that make it intelligent?
We present our take on this question in a forthcoming chapter of the book Designing an Intelligence, edited by George Konidaris. We argue that despite this remarkable linguistic fluency, something profound is still missing. While LLMs can manipulate language with superhuman skill, we believe they are not truly intelligent. They are more like extraordinarily sophisticated "calculators" for words. Turing's core idea, that disembodied language is a sufficient test for intelligence, is mistaken. Read more on our Substack, What to Tell the Robot, written with David Watkins!
