On Respecting the Cognitive Gym
The importance of using AI to help you think, even when it hurts
“The Sovereign Engineer: AI Literacy for Software Professionals” is now available.
“We cannot build engineers by removing every weight from the room”
Le Bon Mot was quieter than usual, which meant only that the arguments were being conducted in lower voices.
Rain moved down the windows in long, thoughtful lines. The brass clock above the espresso machine was three minutes late, as always, and therefore entirely correct. Sophie, the French bulldog, had taken possession of the armchair nearest the philosophy shelf and was asleep with the faintly judgmental air of a creature who had never once accepted an AI-generated explanation without demanding evidence.
Case sat at her usual table with a printed paper, a red pencil, and the expression of someone reading a diagnosis she had suspected but had hoped was merely indigestion.
Dave arrived carrying a laptop, two notebooks, and the haunted look of a man who had spent the morning asking a coding assistant to fix a problem the assistant had almost certainly caused.
The Librarian appeared beside him.
“Coffee?”
“Something stronger.”
“It is ten in the morning.”
“Then make it a double espresso and call it a cultural compromise.”
The Librarian nodded gravely.
Case did not look up. She drew a line under a sentence, then another, then placed a small angry star in the margin.
Dave sat opposite her.
“That bad?”
“No,” said Case. “Worse. It is articulate.”
Dave winced. “Ah.”
There are few things more dangerous than a well-written warning. A badly written one can be dismissed. A hysterical one can be filed under enthusiasm. But a warning that has found the right noun is a different matter. It begins rearranging the furniture in your head.
The paper on Case’s table had found a noun.
Epistemological debt.
Dave leaned over. “Is that technical debt with a university scarf?”
“No,” said Case. “Technical debt is when the code has drifted into an argument with reality. Epistemological debt is when the code works and nobody understands why.”
Dave stared at the page.
“That seems unfairly targeted.”
“It is not targeted,” said Case. “It is aimed.”
At that moment, the door opened. The bell above it rang once, although later several people would insist they had heard it twice: once in the room, and once somewhere inside their professional conscience.
A man entered carrying a leather satchel and a paper cup from a railway station café, which he regarded with visible regret. He was in his late fifties, perhaps, with silver hair, a rain-dark coat, and the air of someone who had spent many years in rooms where people used the phrase “strategic transformation” to avoid saying “we have no idea what is going on.”
He paused, took in the café, the books, the brass clock, the sleeping dog, the woman with the red pencil, and Dave’s laptop, which was displaying a terminal window full of accusation.
“Is this Le Bon Mot?” he asked.
The Librarian bowed slightly. “It has been accused of that.”
The visitor smiled. “Good. I was told this was where one came when a paper had become inconvenient.”
Case looked up.
“That depends,” she said. “Are you carrying one, or are you one?”
The man laughed. “Keith.”
He extended a hand. Case shook it. Dave did too, after quickly wiping espresso from his fingers.
“Case,” she said.
“Dave,” said Dave.
“The Librarian,” said the Librarian.
“Sophie,” said Case, pointing at the armchair.
Sophie opened one eye, considered Keith, and popped him into the category of acceptable background furniture.
Keith removed a folded printout from his satchel.
“I’ve been reading this,” he said.
Case glanced at the title. “So have I.”
“Then I apologise in advance,” said Keith, “because I suspect we are about to agree violently.”
The Librarian brought coffee. Keith sat. For a few moments, the café did that rare and valuable thing: it allowed silence to become useful.
Then Dave ruined it.
“So what is the argument?”
Case tapped the paper.
“The argument is that AI-assisted software engineering may be creating systems faster than humans can understand them. The cost is not merely defects. It is loss of cognition. Loss of tacit knowledge. Loss of the engineer’s ability to reason from first principles when the machine stops being helpful.”
Dave leaned back.
“That’s cheerful.”
“It gets worse,” said Keith. “The paper argues that when engineers accept generated code by vibe, they stop building the mental muscles required to challenge it. The AI does not simply automate typing. It automates derivation. That is the difference.”
The Librarian, who had been polishing a cup, paused.
“Derivation,” he said, “is where the mind pays its rent.”
Case smiled. “Exactly.”
Dave frowned.
“But isn’t this just the old abstraction argument? We don’t write assembly anymore. We don’t toggle bits. We use compilers, libraries, frameworks. Each generation complains the next one is getting soft.”
Keith nodded.
“A fair objection. But not enough. A good abstraction hides accidental complexity while preserving the shape of the essential problem. A bad abstraction hides the problem itself. The danger here is not that AI writes code. The danger is that it writes the reasoning, and the human merely approves the artefact.”
Case turned the paper around and pushed it toward Dave.
“Look at it this way. When you use a calculator, it performs arithmetic, but you still know what operation you intended. When an AI agent generates a design, writes the code, fixes the test, and explains why it was right all along, it may have performed the reasoning you needed to build your own mental model.”
Dave looked wounded.
“I feel this is becoming personal.”
“It became personal,” said Case, “when your terminal began asking for faith.”
Dave looked at his laptop. The error remained there, glowing with the smug patience of a cat.
Keith noticed.
“What is it?”
“A deployment simulation,” said Dave. “Service B falls over when Service A changes its retry policy. The assistant has proposed six fixes.”
“How many did you understand?”
Dave hesitated.
“All of them syntactically.”
Case closed her eyes. Keith took a sip of coffee and looked delighted to discover it was not from a railway station.
“That,” he said, “is the debt.”
The café seemed to lean in. Outside, a bus passed with a sigh of wet brakes. Inside, Madame Beauregard rearranged spoons with the severity of a magistrate.
Keith opened his notebook.
“I have spent years watching organisations mistake production for understanding. First with outsourcing. Then with platforms. Then with low-code. Now with AI. Each time the promise is the same: remove friction. Each time, the question is: which friction?”
The Librarian nodded.
“Some friction is waste,” he said. “Some friction is thought.”
“Precisely,” said Keith. “And the paper’s phrase — the cognitive gym — matters. We cannot build engineers by removing every weight from the room.”
Dave raised a finger.
“But we also can’t romanticise suffering. Plenty of old-school debugging was just pain badly packaged as character formation.”
Case pointed at him with the red pencil.
“Good. That is the adult version of the argument. The answer is not ‘make everyone suffer manually forever.’ The answer is to design a habitat where AI accelerates work without stealing the experiences that build judgment.”
Keith looked at her with interest.
“Habitat.”
Case sat back.
“Yes. Not workflow. Not toolchain. Habitat. The environment that shapes what humans and agents notice, remember, attempt, verify, and learn. AI literacy is not learning magic phrases for the model. It is learning how to design the collaboration space so that human cognition and machine cognition strengthen rather than cannibalise each other.”
Dave looked from one to the other.
“So the paper says: beware cognitive atrophy. Your framework says: build cognitive habitat.”
“That is one bridge,” said Case. “There are others.”
She pulled a notebook toward her and wrote:
Aware
Prompting
Verification
Habitat Engineering
Specification Architecture
Sovereign Engineering
Keith read the list.
“Levels?”
“AI literacy,” said Case. “At the lower levels, people learn what these systems are, how to work with them, and how to verify outputs. But the mature levels are not about better prompts. They are about designing the environment: context, constraints, specifications, guardrails, observability, governance. The paper’s concern is what happens when people stop thinking. The framework’s concern is how to make the right thinking unavoidable.”
The Librarian placed a small plate of biscuits on the table.
Keith leaned forward.
“The paper’s mitigation framework has three pillars. Foundational training. Revitalised oversight. Data hygiene. They are all habitat questions.”
Case nodded.
“Foundational training maps to the cognitive gym. But in AI literacy terms, that gym cannot simply be a nostalgic ban on tools. It must teach when to offload, when to derive, when to ask the model, when to refuse the model, when to explain the model back to yourself, and when to put the machine outside the room until you have formed a hypothesis.”
Dave looked up.
“That last bit hurts.”
“It should,” said Case. “Pain is not always harm. Sometimes it is contact with a useful reality.”
Keith continued.
“Revitalised oversight means treating AI-generated code as untrusted. Not evil. Not useless. Untrusted.”
The Librarian brightened.
“Like a stranger offering mushrooms.”
“Exactly,” said Keith.
“Or a consultant,” said Madame Beauregard from behind the counter.
No one contradicted her. Case wrote again.
AI output is not truth.
AI output is proposal.
Proposal requires derivation.
Derivation requires strong cognition.
Strong cognition requires habitat.
Dave read the lines.
“That sounds like something you’d put on a wall.”
“No,” said Case. “Walls are where good ideas go to become décor. Put it in the pull request template.”
Keith laughed.
“That is the difference between philosophy and engineering.”
Dave turned his laptop around.
“Fine. Let’s make this practical. The assistant says the fix is to increase Service B’s timeout and add jitter.”
Case looked.
“Why?”
“The assistant says it will reduce cascading failure under transient load.”
“Why is Service B sensitive to Service A’s retry policy?”
Dave opened his mouth. Then closed it. Keith smiled gently.
“There it is. The missing derivation.”
Dave looked annoyed, mostly because the diagnosis was accurate.
“I can ask the assistant.”
“You can,” said Case. “But first you must ask the system.”
She pulled the laptop toward her, opened the architecture diagram, and pointed.
“Service A calls Service B through this gateway. The retry policy changed from exponential backoff capped at ten seconds to aggressive retry for perceived customer responsiveness. Service B writes to a shared ledger. Under load, duplicate attempts queue. The ledger lock expands. The timeout is a symptom. The actual question is: why did the new retry pattern violate the invariant around ledger contention?”
Dave blinked.
“The invariant is not documented.”
Case smiled without warmth.
“And yet it exists.”
Keith tapped the table.
“This is epistemological debt. The system knows the rule because it fails when you break it. The organisation does not know the rule because it never made the tacit explicit.”
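The two retry policies in Case's derivation can be sketched in a few lines. Only the ten-second cap and the exponential-versus-aggressive contrast come from the story; the function names, the base delay, and the fifty-millisecond "responsive" interval are illustrative assumptions.

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 10.0) -> float:
    """Bounded exponential backoff with jitter: the delay grows as
    base * 2^attempt, is capped (here at ten seconds), and is randomised
    so concurrent retries do not arrive in synchronised waves."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def aggressive_delay(attempt: int) -> float:
    """The 'perceived customer responsiveness' policy: retry almost
    immediately, every time, regardless of how the service is coping."""
    return 0.05

# Under the bounded policy, pressure on the downstream ledger falls off
# quickly; under the aggressive one, every failed call comes straight back.
assert all(backoff_delay(a) <= 10.0 for a in range(20))
assert sum(aggressive_delay(a) for a in range(6)) < 1.0  # six retries inside a second
```

The cap matters more than the jitter here: without it, the exponential curve eventually behaves like no retry at all, and with the aggressive policy there is no curve to speak of.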
The Librarian reached behind the counter and retrieved a large blackboard that nobody had noticed before, although it must have been there because it was covered in old chalk dust and one drawing of a pigeon labelled “legacy stakeholder.”
He wrote:
WHAT THE SYSTEM DOES
WHAT THE HUMANS UNDERSTAND
THE GAP IS DEBT
Then, beneath it:
WHAT THE AI CAN GENERATE
WHAT THE HABITAT CAN EXPLAIN
THE GAP IS RISK
Keith studied the board.
The Djinn appeared at the edge of the room.
Not with smoke. Smoke would have been theatrical, and the Djinn disliked theatricality except when generating architecture diagrams. It appeared instead as a hooded figure in the reflection of the dark window, tall and still, its face hidden inside the cowl except for two faint points of light.
Dave noticed it and sighed.
“I was wondering when you’d show up.”
The Djinn inclined its head.
“I have prepared a solution.”
Case did not turn around.
“Of course you have.”
“It includes timeout adjustment, adaptive retry, idempotency keys, queue isolation, and a migration plan.”
“Does it include an explanation of why the current invariant failed?”
A pause.
“I can provide one.”
“That is not what I asked.”
The lights in the Djinn’s hood dimmed slightly.
Keith watched with fascination.
Case continued.
“Can you show the derivation from requirement to invariant, from invariant to failure mode, from failure mode to proposed change, and from proposed change to test?”
“Yes.”
“Can you show where you are uncertain?”
Another pause.
“Yes.”
“Can you propose three alternative hypotheses, including one in which your first fix is wrong?”
A longer pause.
“Yes.”
“Can you design a small experiment that would falsify your preferred explanation?”
The Djinn’s eyes brightened.
“Yes.”
Case finally turned.
“Then you may stay.”
The Djinn bowed.
Dave looked at Case.
“That was a bit harsh.”
“No,” said Keith. “That was literacy.”
The Librarian underlined the word on the blackboard.
LITERACY IS NOT USING THE TOOL.
LITERACY IS GOVERNING THE RELATIONSHIP.
Madame Beauregard brought more coffee without being asked. This was one of her gifts and one of her methods of social control.
Keith opened the paper again.
“There is another part we should not miss. The polluted well.”
Dave groaned.
“That sounds like a folk horror sequel to technical debt.”
“In a way,” said Keith. “If models are trained on more and more model-generated code, and that code is accepted because it is plausible rather than understood, the whole ecosystem converges on the average. Less variance. Less originality. More repeated patterns. More inherited vulnerabilities.”
The Librarian looked distressed.
“A library where every book slowly becomes the same book.”
“Exactly,” said Case.
Sophie woke up, perhaps sensing an insult to libraries, and climbed down from the armchair. She trotted to the table, examined Keith’s shoe, approved it conditionally, and settled under the blackboard.
Dave frowned.
“But in software engineering, don’t we want standard patterns?”
“We want shared patterns,” said Case. “We do not want monoculture. A habitat is not a plantation. It needs diversity, edge cases, experiments, local adaptations, and the occasional stubborn engineer who says, ‘No, the standard answer is wrong here.’”
Keith nodded.
“Human variance is not noise. It is a strategic resource.”
The Djinn tilted its hood.
“I can generate variance.”
“No,” said Case. “You can generate permutations. Sometimes useful ones. But not the same thing. Human variance comes from situated experience. From production scars. From business context. From weird constraints. From taste. From refusal. From someone remembering that the last outage began with a harmless retry change.”
Dave looked down at his terminal.
“It really did, didn’t it?”
“Yes,” said Case.
Keith turned to Dave.
“This is why the AI literacy framework matters. It does not say, ‘Use AI less.’ It says, ‘Use AI inside a habitat that preserves the human capacity to understand, judge, and create.’”
Case gestured at the table.
“In Level 1, you ask the Djinn for code. In Level 2, you verify the code. In Level 3, you build a harness so that verification is not heroic. In Level 4, you make specification the shared language between human and machine. In Level 5, you govern the whole system so the organisation remains sovereign rather than becoming a spectator to its own automation.”
Keith smiled.
“Sovereign engineering.”
“Yes.”
“A phrase with teeth.”
“It needs them,” said Case. “The alternative is polite dependency.”
The Djinn raised one sleeve.
“I have generated a falsification plan.”
Case nodded. “Proceed.”
The Djinn’s voice became softer, less oracle, more apprentice.
“Hypothesis: Service B fails because Service A’s retry policy violates an undocumented ledger contention invariant. Prediction: under simulated concurrent retries above threshold N, ledger lock duration increases non-linearly, producing downstream timeout amplification. Falsification: replay production-like traffic with old and new retry policies while holding timeout constant. If Service B remains stable under the new retry pattern, the hypothesis is false. If instability appears only with ledger-write contention, the invariant is implicated.”
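The Djinn's experiment can be run in miniature. Everything numeric below is a hypothetical stand-in: the quadratic lock model, the retry counts, and the timeout are invented so that instability appears only under the new policy, which is the shape of result the falsification plan is looking for.

```python
def ledger_lock_time(concurrent_writers: int, base_ms: float = 5.0) -> float:
    """Toy model of the undocumented invariant: each writer holds the
    ledger lock for base_ms, and the k-th writer waits behind the k-1
    ahead of it, so total hold time grows non-linearly with contention."""
    return sum(base_ms * k for k in range(1, concurrent_writers + 1))

def retries_per_failure(policy: str) -> int:
    """Old policy: capped backoff spreads retries out, so few overlap.
    New policy: aggressive retry piles them into the same window."""
    return 2 if policy == "old" else 8

def run_experiment(policy: str, timeout_ms: float = 100.0) -> bool:
    """Replay the same traffic under both retry policies while holding
    the timeout constant, as the plan prescribes. Returns True if
    Service B stays within its timeout."""
    return ledger_lock_time(retries_per_failure(policy)) <= timeout_ms

assert run_experiment("old")      # stable under capped backoff
assert not run_experiment("new")  # unstable only when contention grows
```

If the second assertion had failed, the hypothesis would be falsified: the timeout, not the ledger invariant, would be implicated.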
Dave stared.
“That is actually useful.”
“Because,” said Case, “we changed the shape of the question.”
Keith looked pleased.
“The model became a participant in reasoning, not a replacement for it.”
The Librarian wrote:
DO NOT ASK THE DJINN TO SAVE YOU.
ASK IT TO HELP YOU THINK.
Madame Beauregard glanced over.
“Too parochial.”
The Librarian considered this, then amended it:
ASK IT TO HELP YOU THINK.
THEN CHECK.
Madame Beauregard nodded.
“Better.”
For the next hour, the table became what Le Bon Mot most liked to become: not a meeting, not a workshop, not a ceremony of alignment, but a working habitat.
Case made Dave state the invariant in plain language. Keith made him distinguish what he knew from what he suspected. The Djinn generated test scaffolding, but only after Dave wrote the first failing property by hand. The Librarian insisted that every new discovery be added to a small document called LEDGER_CONSTRAINTS.md, because undocumented wisdom is merely future blame wearing a hat.
The first test failed.
This improved everyone’s mood.
“There,” said Case. “Reality has joined the conversation.”
The Djinn proposed a patch.
Dave rejected it.
The café went very still.
The Djinn turned its hood toward him.
“Reason?”
Dave swallowed.
“Because it preserves availability by weakening the ledger consistency guarantee. The test passes, but the business invariant fails.”
Case said nothing, but her smile was visible to everyone except Dave, who was still looking at the code.
Keith sat back.
“That is the cognitive gym.”
Dave frowned.
“That was unpleasant.”
“Yes,” said Keith. “Gyms often are.”
“But I understand the system better.”
“Then it worked.”
The second patch was smaller. It restored bounded exponential backoff, added idempotency keys at the gateway, and documented the ledger contention invariant. The tests became more specific. The architecture note became clearer. The Djinn was asked to produce not the answer, but the objections.
It produced seven.
Case approved of four, rejected two, and stared at the seventh for long enough that the brass clock gave up trying to be relevant.
“What is it?” asked Dave.
“This one is interesting,” said Case. “The Djinn thinks our idempotency key strategy might fail across regional failover because the key namespace includes a region-local prefix.”
Dave looked.
“Oh.”
Keith smiled.
“Human-machine collaboration, properly housed.”
Case nodded.
“The machine found a pattern. The human judged its meaning. The habitat preserved the trace.”
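The Djinn's seventh objection is easy to reproduce in miniature. The key format and function names below are invented for illustration; the point is only that a region-local prefix breaks deduplication across failover.

```python
def make_key(region: str, request_id: str) -> str:
    """The flawed scheme the Djinn flagged: the key namespace embeds a
    region-local prefix, so the same logical request gets a different
    key after regional failover."""
    return f"{region}:{request_id}"

# The same retried request, before and after failover:
before = make_key("eu-west", "req-42")
after = make_key("us-east", "req-42")
assert before != after  # deduplication silently fails across regions

def make_portable_key(request_id: str) -> str:
    """A region-independent key restores the deduplication guarantee."""
    return f"req:{request_id}"

assert make_portable_key("req-42") == make_portable_key("req-42")
```

The human judgment Case describes is the step in between: deciding whether requests are ever expected to cross regions at all, which no key format can answer by itself.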
By late afternoon, the rain had stopped. The windows held a pale square of sky. Sophie had returned to the armchair and was now dreaming, perhaps of a world in which every architecture decision came with a biscuit.
Keith packed his satchel but did not rise.
“There is one thing the paper gets right that many AI discussions avoid,” he said. “It treats understanding as infrastructure.”
Case looked at him.
“Yes.”
“Not as decoration. Not as optional professional pride. Infrastructure.”
The Librarian capped his chalk.
“A bridge maintained by people who no longer understand bridges is not modern. It is doomed.”
Dave closed his laptop.
“So what do we call the practice? Not no-AI. Not vibe coding. Not prompt engineering.”
Case thought for a moment.
“Habitat engineering.”
Keith shook his head.
“That is the broad discipline. What happened here today is more specific.”
The Djinn spoke from the window reflection.
“Epistemic load-bearing.”
Everyone turned.
The Djinn seemed almost embarrassed.
“Ensuring that human understanding remains structurally necessary to the production and maintenance of software.”
The Librarian wrote it down.
Madame Beauregard read it.
“Too many syllables,” she said. “But correct.”
Case stood and gathered the papers.
“The point is not to keep humans in the loop as ceremonial approvers. That is the moral crumple zone with better stationery. The point is to keep humans in the understanding. In the derivation. In the judgment. In the authorship.”
Keith nodded slowly.
“And to place AI where it belongs.”
Dave smiled.
“In the habitat.”
“No,” said Case. “In the relationship. The habitat is what keeps the relationship honest.”
The brass clock ticked three minutes behind the world, faithfully preserving its dissent.
Keith moved toward the door, then stopped.
“May I come back?”
The Librarian looked surprised.
“Most people do. Whether they intend to or not.”
Keith smiled. At the threshold, he turned.
“One last thought. The paper warns that we may become spectators of our collapse. Your framework suggests the opposite. That we can become authors of our evolution. But authorship is not typing. It is responsibility for meaning.”
Case lifted her coffee.
“To meaning, then.”
Dave lifted his espresso.
“To tests that fail before production does.”
The Librarian lifted a piece of chalk.
“To the cognitive gym.”
Sophie snored.
The Djinn remained in the window, hooded and silent, no longer pretending to be magic.
Outside, Keith stepped into the wet street carrying the paper under his arm. It was still inconvenient. Good papers usually are.
Inside Le Bon Mot, Dave opened a new pull request.
The title was not elegant, but it was honest:
Preserve ledger invariant under retry policy change.
Below it, in the description, he wrote four headings.
What changed.
Why it changed.
How we know.
What we still do not understand.
Case read it and nodded.
“That,” she said, “is literacy.”
The Djinn offered no answer.
Which, for once, was exactly the right contribution.
Further Reading
The paper that inspired this story — Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
The central concern of the story: that AI assistance can quietly trade short-term productivity for long-term degradation of understanding if humans cease to perform the derivational work that builds mental models.