The Bar for AI Keeps Shifting
During my computer science studies, our introduction to artificial intelligence didn’t begin with neural networks or robotics, but with a parade of definitions:
- “We call programs intelligent if they exhibit behaviors that would be regarded intelligent if they were exhibited by human beings.” — Herbert Simon
- “AI is the study of how to make computers do things at which, at the moment, people are better.” — Elaine Rich and Kevin Knight
- “AI is the science of common sense.” — Claudson Bornstein
- “AI is the attempt to make computers do what people think computers cannot do.” — Douglas Baker
- “The study of methods by which a computer can simulate aspects of human intelligence.” — Computers and IT, 2003
- “Research scientists in AI try to get machines to exhibit behavior that we call intelligent when we observe it in human beings.” — James Slagle
- “AI is a collective name for problems which we do not yet know how to solve properly by computer.” — Bertram Raphael
In hindsight, these definitions reveal a persistent tension: is intelligence located in the mechanism itself, or in the results it produces? Should we evaluate a system by its inner workings, or by how it makes us feel?
The simple engine behind the curtain
At their core, large language models like GPT are conceptually straightforward: they predict the next token in a sequence, guided by statistical patterns. If you trace a response step by step, you won’t find “thought” as we experience it, and certainly not self-awareness.
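To make that mechanistic picture concrete, here is a deliberately toy sketch in Python: a bigram model that "predicts the next token" purely by counting which word follows which in a tiny corpus, then sampling from those counts. Everything in it (the corpus, the `predict_next` helper) is illustrative rather than anything drawn from a real model; actual LLMs replace the count table with a deep network over billions of parameters and subword tokens, but the generation loop of predict, sample, append, repeat has the same shape.

```python
# Toy illustration only: next-token prediction reduced to counting.
# Real LLMs learn these statistics with a neural network, not a table.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# For each word, count which words tend to follow it.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next token in proportion to how often it followed `word`."""
    counts = follow_counts[word]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate a short sequence, one predicted token at a time.
token = "the"
output = [token]
for _ in range(8):
    token = predict_next(token)
    output.append(token)

print(" ".join(output))
```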
Critics seize on this mechanistic truth to dismiss these systems as mere stochastic parrots: mathematical engines that remix human-authored text without genuine understanding. As a description of the mechanism, that is accurate.
Yet these same models write working code, pass professional exams, offer therapy that users find helpful, plan complex trips, interpret poetry, draft legislation, and even act as chess tutors or romantic confidants. People build relationships with these models that, to them, are entirely real.
They are reshaping law, journalism, design, education, and medicine—often in profound ways. The mechanisms may be simple, but their outputs can surprise, unsettle, amuse, and even move us. For most users, intelligence isn’t a property hidden under the hood—it’s an experience at the interface.
Black boxes everywhere
We’re surrounded by systems we only half understand. You can’t describe exactly how your pancreas regulates insulin, how your car’s fuel injection works, or how a judge arrives at a sentence. You model what you can’t see, simulating beliefs, desires, and likely actions in people, institutions, and now, machines.
The Turing Test wasn’t about proving consciousness; it was about demonstrating competence. Its power lies in shifting the question of intelligence away from internal processes and toward external behavior. By that measure—intelligence as perceived—modern language models already qualify.
The danger of moving the goalposts
Every time AI masters a new benchmark, we redefine “intelligence” to exclude it:
- Chess was a hallmark of human ingenuity—until Deep Blue defeated Kasparov.
- Language was uniquely human—until LLMs composed essays, poems, and jokes.
- Creativity was safe—until AI-generated art won competitions and went viral.
- Today, we concede mimicry but deny “true” understanding.
- Tomorrow, we may demand self-awareness as the final frontier.
This intellectual retreat obscures what is already happening and lulls us into complacency. If we keep shifting the bar, we risk ignoring systems capable of influence, persuasion, or even manipulation—simply because they don’t think “our” way.
What intelligence feels like
In everyday life, we judge people by their words and deeds, not by their brain chemistry. We call someone intelligent if they communicate clearly, solve problems deftly, or adapt to new challenges—without demanding a tour of their neural pathways. We infer intelligence from interaction and outcome.
The same standard applies to machines. When an LLM untangles a confusing question, tutors a student, or composes a lullaby, it feels intelligent. Focusing solely on “predictive text” is like calling a violin “just vibrating strings” or a novel “just ink on paper.” Mechanism matters—but meaning emerges in context.
Seeing clearly, responding wisely
To navigate this AI era, we must hold two truths simultaneously:
- Engineered artifacts: these systems are built from statistical machinery, with no intrinsic understanding or self-awareness.
- Competent actors: In interaction, they often behave indistinguishably from intelligent beings.
Ignoring either leads to mistakes: techno-mysticism (or worse) on one side, dismissive complacency on the other. Instead, we need a Turing-style perspective that recognizes intelligence as a property that emerges in the space between system and user.
That doesn’t mean abandoning rigor or skepticism. It means focusing on outcomes as well as architecture, and acknowledging what’s already changing—even if we don’t yet fully grasp how.
Because intelligence—like meaning, trust, or beauty—isn’t hidden in the mechanism. It lives in the moments when something surprises us, helps us, or moves us. And those moments are already here.