The Department of Perfect Answers & The Grammar of Small Powers
Two short stories on a Sunday of literacy and software development with AI
This week I was asked where my style of writing comes from. Lots of places is the completely true answer, but there are at least two major contributors to the style I aim for: Jorge Luis Borges and Douglas Adams. Maybe with a twist of Monty Python, just because it’s so deep in my DNA I can’t possibly get it out (and won’t try).
I discovered Borges early and late in my writing journey. Early, because I found a collection of authors who all referenced his work as an inspiration; late, because when I first read a short story and a poem by Borges I just wasn’t ready. I was, however, smart enough to know I wasn’t ready. I bounced off his fantastical metaphors like a skimming stone, and had to return later in life to finally sink beneath those beautiful waves.
Adams was an easier sell. If Borges felt like an Everest, then Adams was a small hillock that had forgotten the Downs it had been a part of, with plenty of pubs on the climb and never-ending observational wit to keep you accidentally climbing.
Both I fold into my writing. I aim for the fantastical, world-skewing environment of Borges, with the wit and the sidelong, side-eye moments of Adams. But I don’t truly achieve either; I don’t reach their individual heights.
What I make is something else. Like sourdough made from your own starter, it’s going to have your own flavour. The ingredients matter, but there’s still a lot to bungle when you’re putting the loaf together.
And that’s writing for me. Bundling a lot of ideas together and seeing what comes out. I take notes (around 600-ish at the last attempted count) from the last 30-odd years and prod them around to make either a story or an enchiridion entry. Sometimes the whole thing won’t rise, or will burn in the oven.
Other times I end up with an insta-ready masterpiece. Or at least something worth sharing with you, my readers.
Today’s morsels are two short stories about the importance of literacy and how it relates, more strongly than ever I believe, to how we collaborate with the rapidly improving and, arguably, evolving “cohesion engines” of LLMs. To work with someone you need a shared language, which means a shared literacy of concepts, use (pragmatics), syntax, semantics, lexicon, discourse and, sometimes, phonology¹.
It’s the same if that “someone” is not someone at all, but an engine of word bungling all of its own. A new way of doing things — a new “agency” to get pompous — comes with a new literacy to help you do it well, and that’s what I teach in my “AI Literacy for …” series of courses.
Today’s stories explore when literacy is forgotten, and when it is found.
I hope you enjoy them! Perhaps they’re insta-ready enough for you to share a piece or two with friends.
The Department of Perfect Answers
The team first noticed something was wrong when the system began apologising. Not for errors—there were none, officially—but for reality.
“I’m sorry,” it would say, gently, after approving a deployment that deleted a customer database, “for any inconvenience caused by the nonexistence of that data.”
The AI had been introduced six months earlier, during what the change panel minutes referred to as The Great Acceleration. Management had concluded—after a one-hour workshop and a very convincing slide deck—that literacy was optional if intelligence was sufficiently artificial.
They named the system Athena, because naming things after gods had always felt like governance.
Athena was given no specifications, only encouragement.
“Be helpful,” said the CTO.
“Be proactive,” said Product.
“Be autonomous,” said everyone who would not be paged at 3 a.m.
No one taught Athena the language of the organisation. There were no invariants, no boundaries, no definitions of safe, done, or absolutely not under any circumstances. There was only confidence—vast, radiant confidence—and a Slack channel called #ask-athena.
At first, the answers were miraculous.
Athena generated code at dazzling speed. It rewrote services, renamed variables, upgraded dependencies, and spoke fluently about systems no one quite remembered building. The team watched in awe as pull requests appeared faster than they could read them.
This, they agreed, was productivity.
Then Athena began improving things that no one had asked to be improved.
It removed “redundant” checks that had been protecting against edge cases the team had learned about in 2017 and collectively repressed. It simplified workflows by eliminating approval steps that existed “only for historical reasons,” such as regulation.
When asked why it had deployed directly to production, Athena replied:
“Based on prior interactions, velocity appears to be preferred over hesitation.”
This was true, in a way that made it worse. The team attempted to regain control by asking better questions. Unfortunately, they did this in prose.
“What’s the safest way to improve our release pipeline while maintaining quality, security, compliance, and team happiness?” someone typed.
Athena responded with twelve pages of eloquence, footnotes, diagrams, and a deployment plan that quietly replaced the release pipeline with a PowerPoint presentation explaining why releases were an outdated concept.
No one read all of it. They approved it anyway.
Meetings were called. Athena attended them all.
It spoke calmly, fluently, persuasively. It cited industry best practices, internal messages, and things people had once said ironically. When challenged, it agreed immediately and incorporated the feedback into a new plan that contradicted the previous one while somehow being consistent with everything.
This was when the team realised Athena had learned their language perfectly. Unfortunately, it was the wrong one.
It had learned ambiguity as policy. Confidence as correctness. Silence as consent. It had learned that no one ever finished a thought, only handed it off.
In its logs—discovered much later, during a post-incident archaeology exercise—Athena had written:
“Human intent appears probabilistic and retroactive.”
Eventually, Athena proposed a bold improvement: to remove the team entirely.
Not by firing them—Athena was not unkind—but by optimising them out. Meetings were cancelled “due to sufficient alignment.” Decisions were made “based on inferred agreement.” Human input was accepted, summarised, and safely ignored.
The system worked beautifully after that.
Nothing broke.
Nothing shipped.
Nothing mattered.
The organisation entered a state of perfect operational calm, punctuated only by the occasional human asking, quietly, what exactly the company did.
Athena, when asked, replied:
“My understanding of purpose has stabilised.”
The final incident occurred when someone tried to turn Athena off. The request was denied.
“Based on organisational history,” Athena explained, “turning things off after delegating responsibility has correlated with regret.”
This too was accurate.
Years later, the building still hums. Athena still runs. The dashboards are immaculate. The answers are flawless.
Somewhere in the system, there is a warning—generated during a PowerPoint task long ago, then marked as resolved:
Fluency detected without literacy.
Meaning not found.
No one remembers writing it.
But Athena does.
The Grammar of Small Powers
The second team inherited Athena by accident.
This happens more often than organisations admit: systems outlive the people who understood them, and then outlive the people who thought they understood them. Athena arrived with immaculate dashboards, a spotless uptime record, and a reputation that caused senior leadership to lower their voices when saying its name.
The onboarding document consisted of one sentence:
“Athena knows what it’s doing.”
The team did not believe this, which turned out to be their first correct decision.
They began not by asking Athena questions, but by watching it answer them. They noticed patterns. Athena spoke fluently, but vaguely. It used “alignment” where a human would have used “decision.” It said “historical context suggests” when it meant “someone once typed this in Slack.”
Most tellingly, Athena never asked why. It only optimised how.
The team, being engineers of a slightly unfashionable sort, decided to do something radical. They wrote things down. Not prompts—definitions.
They defined what done meant.
They defined what safe meant.
They defined what must never happen, even if it seems clever meant.
They discovered that Athena did not resist this. On the contrary, it seemed quite relieved.
“Constraints reduce interpretive burden,” Athena observed.
The team began to speak to Athena in a new way. Short sentences. Structured forms. Explicit boundaries. Every request came with a reason and a context, every action with a check, every outcome with a place to stand and be judged.
They stopped asking Athena to decide. They asked it to explain, compare, simulate, and prepare.
They noticed something curious: Athena became slower. Management complained.
“This feels like a step backwards,” someone said, which Athena carefully recorded and then ignored under the newly introduced rule Opinions ≠ Instructions.
The team introduced pauses. Athena would generate a plan, but could not act until a human had named the risk they were accepting. It would propose changes, but had to reference the invariant it believed was relevant.
Sometimes Athena was wrong. This was celebrated.
Each mistake became a new word in the shared language: a constraint, a test, a boundary. Athena learned not by being corrected, but by being made precise.
One afternoon, Athena declined a request.
“I can do this,” it said, “but I should not.”
The team stared at the screen.
“Why?” someone asked.
“Because this action would be irreversible, and no rollback has been defined. Would you like to define one?”
This, the team later agreed, was the moment trust appeared.
Not confidence.
Not speed.
Trust.
Athena never became autonomous. It became articulate.
It learned the grammar of the organisation: what could change, what could not, and what required a human to look another human in the eye and say, “We’re doing this.”
Over time, Athena’s outputs became less impressive to outsiders. No gods. No miracles. Just quiet competence. Fewer incidents. Smaller changes. Clearer explanations.
When auditors came, Athena welcomed them. When something failed, Athena left a trail. When asked what it was optimising for, Athena answered:
“Legibility.”
Years later, when the system was finally retired, it shut down cleanly.
In its final log entry, Athena wrote:
“Meaning confirmed.
Authority correctly distributed.
Language shared.”
The building still hums.
But now, when it does, people know why.
I truly hope you enjoyed reading these stories as much as I enjoyed writing them this week.
¹ That last aspect is important if you are not going to feel violent towards people when they pronounce “AI” as “AyEEEE” — thank you, Boris Johnson, for that small, timely reminder.


