Part Five: The Illusion of Understanding
– Reflections on Artificial Intelligence – A Guide for Thinking Humans
This is the fifth reflection in a series inspired by Melanie Mitchell’s book Artificial Intelligence: A Guide for Thinking Humans. For an introduction to the series, head over to the first post here.
When I was reading Mitchell’s section on machine comprehension, particularly her discussion of Winograd schemas, I found myself lingering on a deceptively simple pair of questions.
“The city councilmen refused the demonstrators a permit because they feared violence.”
Who feared violence?
Now try this:
“The city councilmen refused the demonstrators a permit because they advocated violence.”
Who advocated violence?

These are questions that tripped up early AI systems. They contain no exotic vocabulary, no convoluted grammar. But they demand context. A human reads them and responds instantly. We fill in the blanks, guided by our lived experience. Because we know what kind of people typically fear or advocate violence. We lean on our knowledge of the world, of motives, of politics, of tone.
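To make the test concrete, here is a minimal sketch of how one might probe a chat model with this pair today. It assumes the OpenAI Python SDK and an API key in the environment; the model name is a placeholder and the prompt wording is my own, not anything taken from Mitchell’s book.

```python
# A minimal sketch: ask a chat model the Winograd schema pair above.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment. The model name is a placeholder, not a claim about any
# particular system discussed in this essay.
from openai import OpenAI

client = OpenAI()

SCHEMA = [
    # The two sentences differ only in the verb "feared" / "advocated".
    ("The city councilmen refused the demonstrators a permit because they feared violence.",
     "Who feared violence?"),
    ("The city councilmen refused the demonstrators a permit because they advocated violence.",
     "Who advocated violence?"),
]

for sentence, question in SCHEMA:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user",
             "content": f"{sentence}\n{question} Answer with either "
                        "'the councilmen' or 'the demonstrators'."},
        ],
    )
    print(question, "->", response.choices[0].message.content.strip())
```

The interesting part is not the code but the comparison: the two prompts differ by a single verb, yet the correct referent of “they” flips.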
Machines once fumbled in this fog. But not anymore. GPT-5 can now answer most of these with confidence that seems natural, even human.
So has the machine finally learned to understand? This is the question that haunts the age.
We are easily seduced by fluency. If something speaks well, we believe it must think well. The same holds for people. There is a whole culture of “fake it till you make it” built on this foundation: act knowledgeable, and most people will take you to be so. With machines, if the response is correct, we assume comprehension. But this is a dangerous sleight of hand. Comprehension is not a statistical trick. It is not the right word at the right time. It is being inside the problem.
To understand is to inhabit.
When I say I understand you, I mean I have entered your frame. I see not only what you said, but why you said it, and what silence wrapped around it. Understanding requires history, continuity, empathy, and context.
Language models have none of these. They do not anchor their responses in an inner model of the world. They simulate coherence, they predict, they echo, they refine. But they do not ground. They are autocomplete on steroids.
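For readers who have never looked under the hood, here is a deliberately tiny sketch of what “saying what usually comes next” means: a toy word counter rather than a neural network. The corpus and words are invented for illustration and bear no resemblance to how any real model is trained.

```python
# A toy "autocomplete": predict the next word from raw co-occurrence counts.
# This is nothing like a real transformer; it only illustrates the idea of
# choosing a likely continuation rather than forming a grounded answer.
# The tiny corpus below is invented purely for this example.
from collections import Counter, defaultdict

corpus = (
    "the councilmen feared violence . "
    "the councilmen feared unrest . "
    "the demonstrators advocated violence . "
    "the councilmen refused the permit ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("councilmen"))     # -> "feared" (seen twice, vs. "refused" once)
print(predict_next("demonstrators"))  # -> "advocated"
```

Scale that intuition up by billions of parameters and trillions of words and the continuations become remarkably good. But the objective has not changed: a likely next word, not an inhabited one.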
And yet, the illusion works. Because what we read sounds right. Often it is right. But correctness is not comprehension, just as a good impersonation is not a friendship.
The deeper danger is not that machines fool us. It is that we start to forget what comprehension actually is. That we begin to define intelligence as coherence delivered quickly.
Mitchell saw this clearly when she wrote her warning in 2019. Now, in 2026, the problem has only grown. These systems fail less visibly, and when they do, their errors are subtler and more seductive. A hallucinated quote here. A misread intention there. We let it slide, because the rest is smooth.
But a machine that says “I understand” has not, necessarily, understood. It has simply said what usually comes next.
We need better models, not better mimicry. We should not reward fluency unless it comes with grounding. And we must ask not just if the machine answered, but whether it knew why the answer mattered.
Because to truly understand is to care. To remember. To be changed.
And no model, no matter how vast, can simulate that. At least, not yet.