Optimise the Silence, Optimise your Thinking
Exploring agentic workflow design through the lens of essential and accidental cognitive load
This entry in a Software Enchiridion is a follow-on from this short story:
There is a set of fashionable questions in AI circles right now: How smart can we make an agent? How long can its chain of thought run? How autonomous can it become? How many tools can it wield without supervision?
These are the wrong questions.
The more dangerous—and more consequential—question is quieter:
What kind of thinking does this agent make unnecessary for humans? And what kind does it make possible again?
We are building systems at extraordinary speed that can, to a casual observer, think about almost anything. But thinking is not free. Human attention is finite, fragile, and easily polluted. Every system that “helps” someone think is also deciding, implicitly, what they no longer need to think about.
That decision shapes organisations far more than any model architecture.
Most failed agentic systems do not fail because the agent reasons poorly. They fail because they optimise humans out of the loop in exactly the wrong places. They remove effort, yes, but they also remove comprehension, human creativity, and human practice: exactly the things they should perhaps be amplifying. Over time, they quietly atrophy judgment, sense-making, and responsibility.
This is how you end up with teams that move fast but cannot explain why. Organisations that ship confidently but cannot reason about risk. Leaders surrounded by dashboards who feel strangely blind.
The tragedy is that none of this requires malevolence or incompetence. It happens precisely when we optimise for the wrong thing.
Accidental and Essential Cognitive Load, and Cognitive Load Redistributors
Agents are often sold as cognitive amplifiers. In reality, they are cognitive load redistributors. They take thinking from one place and move it somewhere else. Sometimes that “somewhere else” is the machine. More often, it is removed entirely, because it was accidental cognitive load all along.
Essential cognitive load supports and amplifies all the brains involved. Accidental load wears them thin until none of them are being used effectively.
When an agent gives answers without preserving understanding, humans inherit a new burden: oversight without insight. They must approve decisions they can no longer reason about. They must trust outputs they cannot reconstruct. This is not empowerment. It is abdication disguised as efficiency:
The value of an agent is not measured by what it can decide—but by what it frees humans to decide well.
That is not a technical design problem. It is a practical, moral and organisational one.
Do not ask what your agent can think about; ask what thinking it protects for humans
Human thinking worth preserving is not cheap computation. It is:
judgment under uncertainty
ethical reasoning
trade-off negotiation
strategic sense-making
learning from failure
These are not bottlenecks to eliminate. They are capabilities to defend.
An agent that removes these from daily practice does not create leverage—it creates fragility.
What Agents Should Optimise (Instead)
Well-designed AI interactions, especially agentic ones, can support and even encourage cognition in the right places, on the right things, at the right time, establishing a software development habitat that supports:
Cognitive quiet — Agents should absorb noise, repetition, and vigilance so humans can think deeply, not constantly.
Legibility — They should return outputs in forms humans can reason with, not merely consume (a rough sketch of such an output follows this list).
Better questions — A good agent ends conversations with: “This is unclear.” “This assumption is stale.” “Someone must decide.”
Temporal lift — Agents should free humans to think in longer time horizons, not just faster cycles.
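To make “legibility” and “better questions” slightly more concrete, here is a minimal sketch of what a structured agent response could look like. It is an illustration only: the class and field names (AgentBriefing, open_questions, stale_assumptions, and so on) are assumptions, not any particular framework’s API.

```python
# A minimal sketch of a "legible" agent output.
# All names here are hypothetical illustrations, not a real framework's API.
from dataclasses import dataclass, field


@dataclass
class AgentBriefing:
    """What the agent hands back: material a human can reason with, not merely consume."""
    summary: str                                                         # a narrative, not a score
    evidence: list[str] = field(default_factory=list)                    # what the summary rests on
    open_questions: list[str] = field(default_factory=list)              # "This is unclear."
    stale_assumptions: list[str] = field(default_factory=list)           # "This assumption is stale."
    decisions_needing_a_human: list[str] = field(default_factory=list)   # "Someone must decide."


briefing = AgentBriefing(
    summary="The nightly batch job is slowing down; two plausible causes, neither confirmed.",
    evidence=["Runtime up 40% over three weeks", "No schema changes in the same window"],
    open_questions=["Is the growth in input volume expected?"],
    stale_assumptions=["We still assume the upstream feed arrives before 02:00."],
    decisions_needing_a_human=["Pay down the indexing debt now, or after the release?"],
)
```

The shape is the point: the conversation ends with questions and named decisions, not a verdict to rubber-stamp.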
Some practices to consider
Design agents to surface trade-offs, not conclusions
Design agents to remove mechanism toil, not useful cognitive processes
Prefer narratives over scores
Make uncertainty explicit and unavoidable
Preserve “decision seams” where humans must intervene (see the sketch after this list)
Periodically exercise systems without the agent to test retained literacy
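One way to read “surface trade-offs”, “make uncertainty explicit”, and “preserve decision seams” together is a thin gate that presents options and blocks until a person chooses. The sketch below assumes invented names (Tradeoff, require_human_decision) and a command-line prompt purely for illustration; it is not a prescription for any particular stack.

```python
# A sketch of a "decision seam": the agent proposes, a human decides.
# Names, prompts, and the example scenario are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Tradeoff:
    option: str
    gains: str
    costs: str
    uncertainty: str  # stated in plain language, not hidden inside a score


def require_human_decision(question: str, tradeoffs: list[Tradeoff]) -> str:
    """Present the trade-offs and block until a person chooses; never auto-select."""
    print(question)
    for i, t in enumerate(tradeoffs, start=1):
        print(f"  {i}. {t.option}: gains {t.gains}; costs {t.costs}; uncertainty {t.uncertainty}")
    valid = {str(i) for i in range(1, len(tradeoffs) + 1)}
    choice = ""
    while choice not in valid:
        choice = input("Your decision (number): ").strip()
    return tradeoffs[int(choice) - 1].option


# Usage: the agent enumerates options and uncertainty; the judgment stays human.
decision = require_human_decision(
    "How should we handle the flaky integration tests?",
    [
        Tradeoff("Quarantine them", "unblocks the release", "hides real failures", "medium"),
        Tradeoff("Fix them first", "restores trust in CI", "delays the release by days", "low"),
    ],
)
print(f"Recorded human decision: {decision}")
```

The mechanics matter less than the shape: options and uncertainty are spelled out, and the function never resolves the question on its own.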
Some things to avoid
Answering machines that suppress debate
Scoreboards that replace judgment
Opaque autonomy that demands trust without understanding
Over-automation of rare but critical decisions
If humans cannot explain a decision without referencing the agent, the agent has already gone too far.
A Simple Test for Agents Gone Too Far
Ask two questions of any agentic workflow:
What higher-order question can a human now reasonably ask because this agent exists?
If this agent were removed for a week, would humans still know how to reason about the system?
If the answers are unclear, the optimisation is misplaced.
Agentic systems are not just tools, they are literacy infrastructure. Like maps, ledgers, diagrams, and printing presses, they change what kinds of thinking are viable at scale. The danger is not that machines will think for us—but that they will think instead of us, leaving humans present but hollow.
The goal is not maximum automation. The goal is useful silence. Space for judgment. Room for disagreement.
Time to think.



