The Danger of Borrowed AI Certainty
The power of the question, "Could I defend this decision without the AI present?"
This enchiridion entry follows on from a short story.
There is a danger in software development that does not arrive with alarms, outages, or angry incident calls. It takes the form of glossy roadmaps[1] and, more and more often these days, confident AI responses. It arrives politely. It formats its answers well. It uses bullet points. It sounds calm. It sounds certain.
And that is precisely the problem.
Software has always been an act of belief balanced precariously on evidence. We believe our tests are sufficient. We believe our abstractions hold. We believe this change will not break production at 3 a.m.
Proof, in the mathematical sense, is rare in our world. Evidence is abundant but incomplete. Belief fills the gaps so that work can continue and people can go home.
AI enters this fragile equilibrium like a well-spoken consultant who never sleeps and never admits uncertainty.
Used well, it is astonishing. It collapses search costs. It accelerates recall. It drafts boilerplate that once drained human joy. It spots patterns we might miss when tired, stressed, or staring at the wrong layer of the system. It augments, and even accelerates. In a profession plagued by cognitive overload, AI can act as a relief valve. Fewer blank pages. Fewer context switches. Faster movement from intent to execution.
But AI also introduces a new and subtle failure mode: borrowed certainty.
AI does not know. It does not reason in the way we reason. It does not distinguish proof from persuasion, or evidence from eloquence. It predicts what sounds right based on statistical patterns in human language. The danger is not that it is wrong — we are wrong all the time — but that it sounds so confident in the shape of correctness.
And software developers are especially vulnerable to this shape.
We are trained to trust systems that compile, tests that pass, dashboards that are green. We mistake fluency for validity. When AI produces a coherent explanation, a plausible architecture, or a convincing root cause analysis, it feels like proof. The language matches the rituals we associate with expertise.
But language is not execution. Coherence is not causality. And confidence is not correctness.
When AI output is treated as proof, teams stop interrogating assumptions. When it is treated as evidence without provenance, decisions drift away from reality. When it is treated as belief — quietly, subconsciously — engineers begin to outsource judgment rather than augment it.
This is not a moral failing. It is human ergonomics.
Under time pressure, we accept certainty wherever it appears. Under cognitive load, we grasp at answers that reduce ambiguity. AI is very good at supplying both.
The result is a new form of technical debt: not in code, but in epistemology.
Code written with borrowed certainty often looks fine. It passes review. It deploys cleanly. And then it fails in strange ways because the underlying understanding was never earned. The team cannot explain why it works, only that it was “recommended.”
Used wisely, AI is a powerful assistant: a junior pair programmer with infinite patience and a frightening vocabulary. Used carelessly, it becomes an authority figure that no one appointed, but everyone obeys.
The task of modern software development is not to reject AI, nor to worship it, but to re-separate proof, evidence, and belief — and to be explicit about which one we are using at any given moment.
Because the system that fails loudest is rarely the one with the most bugs.
It is the one built on certainty no one earned.
AI Persuades
Evidence supports. Proof compels. Belief commits. AI persuades — and persuasion is not a substitute for understanding.
AI assistants generate fluent, plausible outputs without grounding in reality, execution, or consequence. In software development, this collapses critical distinctions:
Evidence (logs, metrics, tests, behaviour)
Proof (formal guarantees, types, invariants)
Belief (engineering judgment under uncertainty, ideally with a strong relationship to evidence)
When these collapse, teams inherit confidence without comprehension.
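The three categories can be seen in miniature in code. The sketch below is illustrative only (all names are hypothetical): a constructor invariant is proof-like, a passing test is evidence, and a judgment call recorded in a comment is belief.

```python
from dataclasses import dataclass

# Proof-like: an invariant enforced at construction time. Every
# NonEmptyName that exists has, by construction, a non-blank value.
@dataclass(frozen=True)
class NonEmptyName:
    value: str

    def __post_init__(self):
        if not self.value.strip():
            raise ValueError("name must be non-empty")

# Evidence: a test exercises behaviour, but only for the cases we wrote.
def test_rejects_blank():
    try:
        NonEmptyName("   ")
        raise AssertionError("expected ValueError for blank name")
    except ValueError:
        pass

# Belief: engineering judgment under uncertainty. We *believe* upstream
# callers never pass None here; nothing in this file proves it.

test_rejects_blank()
```

The point is not the example itself but the labelling: each line of confidence in a codebase belongs to one of these categories, and an AI assistant will happily blur them unless we keep them separate.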
Some practices to consider
Treat AI output as hypothesis, not conclusion
Demand traceability: “What evidence would confirm or refute this?”
Pair AI with execution feedback (tests, production signals)
Require humans to explain AI-assisted decisions in their own words
Use AI to expand options, not to close exploration prematurely
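The first and third practices can be made concrete. In the sketch below (the function and its tests are hypothetical), an AI-suggested implementation is treated as a hypothesis and paired with execution feedback that encodes the evidence we actually have, including a failure mode a fluent explanation might gloss over.

```python
# A hypothetical AI-suggested helper: "deduplicate a list while
# preserving order." Fluent and plausible -- treated here as a
# hypothesis, not a conclusion.
def dedupe_preserving_order(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Execution feedback: tests that encode the evidence we have.
def test_dedupe():
    assert dedupe_preserving_order([]) == []
    assert dedupe_preserving_order([1, 1, 1]) == [1]
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
    # "What would make this false or unsafe?" Unhashable items, for one:
    try:
        dedupe_preserving_order([[1], [1]])
        raise AssertionError("expected TypeError for unhashable items")
    except TypeError:
        pass  # the failure mode is now known, not assumed

test_dedupe()
```

Nothing here is sophisticated; that is the point. The tests turn borrowed certainty into earned evidence, and the team can now explain why the code works rather than only that it was recommended.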
Some things to avoid
Mistaking coherent explanations for causal understanding
Accepting generated code without understanding failure modes
Letting AI summaries replace primary evidence
Using AI to resolve architectural disputes by authority
Quietly shifting from thinking-with to thinking-instead-of
A useful checklist
Before accepting AI-assisted work, ask:
What is the evidence, and where did it come from?
What would make this false or unsafe?
Is this acting as proof, evidence, or belief right now?
Could I defend this decision without the AI present?
[1] Roadmaps and their relationship to different types of evidence are the subject of a future enchiridion entry.