The Model Echoes with Conviction
On some metaphors that mislead, and some that let us build habitats in which AI and human cognition can work together wisely
This enchiridion entry follows on from this Tale from Le Bon Mot:
The Dangerous Poetry of Fluency
There is a kind of mistake that only intelligent people make. It does not arise from ignorance; it arises from pattern recognition working too well. From seeing something that looks like a familiar shape and completing it before checking whether the substance matches.
A shadow on the wall becomes a person. A fluent sentence becomes a mind. And so we arrive, inevitably, at the modern confusion:
the belief that because a machine can speak like us, it must in some sense be like us.
This is not a new error. It is an old human habit dressed up in new machinery.
When early automata were built — clockwork figures that could write or play instruments — observers did not merely marvel at their precision. They speculated about inner lives.
When mirrors were first polished to clarity, people did not simply see reflections. They spoke of souls.
When the printing press began to replicate words at scale, there were fears not only of misinformation, but of ideas escaping their authors and taking on a life of their own.
We have always been susceptible to the animation of the articulate, but our current moment sharpens the problem in a new way. Because now, the thing that speaks back does so with extraordinary fluency. It produces explanations, reassurances, arguments, confessions, even what appears to be care. It writes with structure. It reasons in steps. It apologises. It encourages. It says, with unnerving ease, “You’re right!” Even when you are, in fact, far from right.
And the human mind, evolved to detect agency in rustling grass and intention in glances, does what it has always done. It completes the picture.
If it speaks like us, perhaps it understands like us.
If it reassures, perhaps it feels.
If it reasons, perhaps it thinks.
These are not foolish thoughts. They are unexamined metaphors. And metaphors, in engineering, are not decorative; they are architectural.
Call something a brain, and you will expect autonomy. Call it a tool, and you may underestimate its influence. Call it an agent, and you will forget how much of its behaviour is scaffolded. Call it a companion, and you will invite trust where none can be reciprocated.
The danger is not that these metaphors are entirely wrong. It is that they are incomplete in ways that matter operationally.
A language model does not live in a body. It does not experience hunger, risk, embarrassment, fatigue, or time. It does not hold beliefs, only patterns. It does not care, though it can produce care-shaped language. It does not reason in the human sense, though it can produce reasoning-shaped explanations.
And yet none of this makes it trivial. Because what it does do is remarkable.
It compresses vast corpora of human expression into a form that can be recombined, re-contextualised, and surfaced at speed. It reflects patterns we did not know we had. It generates candidates we would not have considered. It amplifies cognition, not by thinking but by reshaping the thinking environment around us.
Which is why the question is not:
“Does the model think?”
But rather:
“What kind of thing is this, and what kind of habitat does it require to work with us?”
If we get the metaphor wrong, we build the wrong systems. We trust where we should verify. We defer where we should decide. We anthropomorphise where we should design.
And so, as ever, the work is not to eliminate metaphor. It is to choose our metaphors carefully. To use many, not one. To let them illuminate, not dominate.
Because the model does not think. But it speaks with such conviction that we are tempted to forget the difference.
Getting to know Cognition
Human cognition is:
Embodied (grounded in perception and action)
Persistent (memory and identity over time)
Goal-directed (intent, motivation, survival)
Self-correcting (through feedback from reality)
LLM/AI cognition is:
Disembodied (no direct grounding in the world)
Statistical (pattern completion over tokens)
Context-bound (limited to prompt + training)
Simulative (produces plausible continuations)
Bad metaphors collapse these distinctions. Good metaphors preserve them while still enabling use.
Some Helpful Metaphors for working with LLMs
Use These Together, Not Alone:
Probabilistic Mirror — Reflects patterns in human language back to us.
Compression Engine — Training compresses knowledge; prompting decompresses it (lossily).
Apprentice — Fast, capable, but requires guidance and correction.
Simulator of Possibilities — Generates plausible candidates, not guaranteed truths.
Cognitive Amplifier — Extends thinking without originating grounded intent.
Part of the Habitat — Intelligence emerges from system design, not the model alone.
Some Unhelpful Metaphors to Avoid
Treat as Hazardous:
“It understands like a human”
“It knows things”
“It reasons like us”
“It feels / cares”
“It is an agent” (without qualification)
“It is a brain”
These metaphors:
Encourage over-trust
Obscure responsibility
Collapse design thinking into mysticism
Some practices to consider
1. Treat Outputs as Candidates, Not Truths
Always assume plausible ≠ correct
Build verification loops into systems (see the sketch after these practices)
2. Engineer the Habitat, Not Just the Model
Context, constraints, feedback, and tooling — the harness — matter more than raw capability
Design for use, not impression
3. Make Agency Explicit
If a system appears autonomous, document:
goals
memory
boundaries
failure modes
4. Separate Fluency from Validity
High-quality language ≠ high-quality reasoning
Require evidence, not eloquence
5. Design for Failure as a First-Class Case
Assume hallucination, drift, and misalignment
Build guardrails and recovery paths
6. Keep Humans Accountable
Responsibility does not transfer to the model
The system owner owns the outcome
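Taken together, these practices translate fairly directly into code. Below is a minimal sketch, not a production harness: the generate and verify callables, the AgencyManifest fields, and the thresholds are hypothetical placeholders rather than any real provider API. The point is to show outputs handled as candidates, a verification loop with an explicit failure path, and agency documented with a named accountable owner.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AgencyManifest:
    # Practice 3: make the system's agency explicit and inspectable.
    goals: list[str]
    memory: str                   # e.g. "stateless" or "conversation-scoped"
    boundaries: list[str]         # what the system must never do on its own
    failure_modes: list[str]      # known ways it goes wrong
    accountable_owner: str        # Practice 6: a named human, not "the model"


@dataclass
class Candidate:
    # Practice 1: an output is a candidate until verified, never a fact.
    text: str
    verified: bool = False
    evidence: Optional[str] = None


def run_with_verification(
    prompt: str,
    generate: Callable[[str], str],          # placeholder for any model call
    verify: Callable[[str], Optional[str]],  # returns evidence, or None on failure
    max_attempts: int = 3,
) -> Candidate:
    # Practices 1, 4 and 5: require evidence rather than eloquence, and make
    # failure an explicit path instead of trusting a fluent answer.
    for _ in range(max_attempts):
        text = generate(prompt)
        evidence = verify(text)   # e.g. a retrieval check, schema check, or test run
        if evidence is not None:
            return Candidate(text=text, verified=True, evidence=evidence)
    # Failure is a first-class case: escalate, do not pretend.
    return Candidate(text="[escalated to human review]", verified=False)
```

In practice, verify might be a retrieval cross-check, a schema validator, or a test suite; the point is that the loop, not the model's confidence, decides what counts as done.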
Some things to avoid
Anthropomorphism creep — “It seems like it cares…”
Fluency bias — Trusting well-written outputs more than warranted
Agency illusion — Assuming autonomy where there is orchestration
Metaphor lock-in — Over-relying on a single framing (e.g., “assistant”)
Delegation drift — Gradually offloading judgment without noticing
(Yet Another) Checklist for working with Agents
Before deploying or relying on an AI system:
Are we treating outputs as suggestions, not facts?
Do we understand how this system fails?
Have we made the scaffolding (memory, tools, loops) explicit?
Are we avoiding anthropomorphic language in design docs?
Is human accountability clearly defined?
Are we designing the habitat, not just invoking the model?
Some examples
Bad framing:
“Our AI understands customer emotions and responds empathetically.”
Better framing:
“Our system detects linguistic patterns associated with emotional states and generates supportive responses based on trained examples. Human escalation paths are always available.”
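As a rough illustration of the better framing, the sketch below keeps the pattern-detection claim honest. The scoring function and the generated reply are hypothetical stand-ins for a trained classifier and a model call, and the human escalation path is an explicit branch rather than an afterthought.

```python
DISTRESS_MARKERS = ("furious", "devastated", "cancel everything", "lawyer")


def distress_score(message: str) -> float:
    # Placeholder pattern detector; a real system would use a trained classifier.
    hits = sum(marker in message.lower() for marker in DISTRESS_MARKERS)
    return min(1.0, hits / 2)


def handle_message(message: str, threshold: float = 0.5) -> dict:
    # Route low-risk messages to a model-generated supportive reply,
    # and everything above the threshold to a human.
    score = distress_score(message)  # pattern detection, not empathy
    if score >= threshold:
        return {"route": "human", "reason": f"distress score {score:.2f}"}
    return {"route": "model", "reply": "[supportive response generated here]"}


print(handle_message("I'm furious, please cancel everything."))
# {'route': 'human', 'reason': 'distress score 1.00'}
```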
The model does not think. It echoes with conviction. Your task is not to believe it — but to build the habitat around it that helps it work well within the limits of its cognition, and yours.
Further Reading
Richard P. Gabriel — Habitability in Software
Andy Clark — Surfing Uncertainty
Douglas Hofstadter — Fluid Concepts and Creative Analogies
Emily Bender et al. — On the Dangers of Stochastic Parrots
Michael Polanyi — The Tacit Dimension