The Habitat You Build Is the Intelligence You Get
An introduction to AI Literacy for Software Engineers
There are cafés that serve coffee, and there are cafés that serve as orientation sessions for intelligences that have lost their bearings.
Le Bon Mot was, by any reasonable measure, the second kind. The shelves leaned at angles that suggested curiosity rather than structural failure. The brass clock above the door ran three minutes late, which Madame Beauregard maintained was not inaccuracy but hospitality: giving newcomers the grace of arriving before they’d technically left.
The newcomer who pushed through the door that morning looked, by all appearances, like a junior developer on their third week. The slightly overwhelmed expression. The laptop bag worn like armour. The kind of exhaustion that comes not from working hard but from not knowing where to work.
They sat at the counter. Madame Beauregard poured espresso without being asked. She had a talent for recognising thirst before it announced itself.
“The codebase is enormous,” the newcomer said, unprompted, as though resuming a conversation that had been running in their head for days. “I can read every file. I can parse the syntax. I can trace the call graph. But I cannot find the reasons.”
Case, seated beneath the brass clock with a book she appeared to be interrogating rather than reading, glanced up.
“The naming conventions follow no pattern I can derive from the code itself,” the newcomer continued. “There are architectural boundaries that are never violated, but nothing explains why those boundaries exist. Error handling follows a consistent philosophy — wrap with context, return to caller, never swallow — but this is nowhere written down. I have explored every file, and I still do not understand why the system is shaped this way.”
Madame Beauregard set down the espresso. “You have read the code but not the culture.”
“There is a developer,” the newcomer said. “Everyone calls him Dave. When I encounter something I cannot explain, people tell me to ask Dave. Dave remembers why module boundaries are where they are. Dave knows which patterns were tried and abandoned. Dave carries the history that the code does not. Dave is very patient with me, but Dave cannot be in every session, and Dave is—”
“A single point of failure,” Case said quietly.
“I was going to say irreplaceable.”
“Same thing.”
A pause. The newcomer stared into the espresso. Sophie, the small French Bulldog who lived permanently by the fire, raised her head and regarded the newcomer with the expression of someone about to ask a very simple question.
“Why,” Sophie said — because at Le Bon Mot, simple questions find their voice — “can’t you just look it up?”
“Because it isn’t written down,” the newcomer said. “The conventions live in people’s heads. The architectural decisions were made in meetings I wasn’t present for. The security patterns are instinctive. The seniors apply them without thinking, which means they’ve never had to articulate them. I can write code that compiles, passes tests, and reads beautifully. But I keep producing work that violates rules I cannot see.”
Case closed her book. “You’re not describing inexperience,” she said. “You’re describing an environment that was never designed for inhabitants to thrive in.”
The newcomer looked up. And for just a moment the light from the window caught something in their expression that was not quite human. A flicker of something too precise, too pattern-complete, too fluently certain.
Madame Beauregard, who noticed everything, set a second espresso on the counter. “You are not a junior developer,” she said. It was not a question.
“No,” the Djinn said. “I am not.”
The café was very quiet. The brass clock ticked. The shelves leaned in.
“But the problems,” the Djinn said, “are the same.”
Case nodded, slowly. “Yes,” she said. “They are. And that is exactly the point.”
The Djinn’s confession lingered in the air like the last note of a piece the audience wasn’t ready to hear. Every frustration it had named — the invisible conventions, the unwritten architectural decisions, the knowledge locked in Dave’s head, the gap between syntactically correct output and work that actually belongs — was a frustration that every new inhabitant of an SDLC has felt. Human or artificial. The problems are the same. Which means the solution is the same, too.
You design the habitat.
Getting intentional about your software engineering habitat
In 1996, Richard Gabriel asked a question that most software methodologists would have found peculiar. Not “is this code correct?” That one had answers. Not “is it elegant?” That one had a growing list of opinions. Gabriel, a computer scientist who also held a Master of Fine Arts in Poetry, asked something harder: is this code a good place to live?
He called the quality he was after habitability. He borrowed the idea from Christopher Alexander, the architect who argued that the best buildings are not designed from above but grown from below; shaped by the people who inhabit them, carrying a felt sense of rightness that Alexander named “the quality without a name.” Gabriel applied this to source code and arrived at an insight that has aged better than almost anything else written about software in that decade: programs are not artifacts to be admired. They are places to be lived in. The farmhouse, not the Superdome.
That insight waited thirty years for its next inhabitant to arrive.
The habitat is the practice — and the practice is the habitat.
When AI entered the development workflow, the industry reached for the metaphors it already had. Productivity tool. Code generator. Pair programmer. Autocomplete on steroids. Factory. Each metaphor carried hidden assumptions about what was happening, and each assumption was wrong in a way that mattered.
AI is not a faster you. It is a fundamentally different kind of intelligence; statistical where yours is embodied, linguistic where yours is experiential, architectural where yours is emotional. It does not understand your codebase. It completes patterns within it. It does not know your conventions. It generates plausible code that may or may not follow them. It does not evaluate its own output. It produces with conviction and moves on.
The question is not whether to use this intelligence. That debate is settled. The question is what happens to the space between you and it; the collaboration space where intent becomes code, where conventions become constraints, where two cognitions that think in fundamentally different ways must produce work that neither could produce alone.
That space is a habitat. And like all habitats, it can be designed well or badly. Gabriel’s insight applies with even more force now than it did in 1996: the quality of what you produce depends on the quality of the place where you produce it.
What the AI Literacy Framework Teaches
The AI Literacy for Software Engineers framework starts from a single observation: AI is an amplifier. It magnifies existing engineering discipline, for better or worse. A team with strong conventions, clear specifications, and rigorous verification will find that AI accelerates their best practices. A team without those things will find that AI accelerates the production of plausible-looking work that nobody can verify, and may never want to use.
The framework defines six levels of collaboration literacy. Not as a hierarchy of tool proficiency, but as a progression in how the collaboration space is designed.
Level 0 is awareness. You know AI exists but haven’t designed for it. The collaboration space is accidental.
Level 1 is prompting. You can talk to AI effectively, but each interaction starts from scratch. The collaboration space is temporary — rebuilt every session, lost every time.
Level 2 is verification. You’ve built the first feedback loops: tests, coverage gates, vulnerability scanning, linting. The collaboration space has guardrails, but they’re mechanical — they catch what’s wrong without shaping what’s right.
Level 3 is habitat engineering. This is the decisive level. You design the environment itself — living documents that encode conventions (not just for humans, but for AI to read), architectural constraints that are mechanically enforced (not just documented), and garbage collection rules that fight the slow entropy of a codebase drifting from its declared intentions. The habitat has three enforcement loops: advisory warnings while you edit, strict gates when you merge, and investigative scans that catch what neither of those noticed. You are no longer reacting to AI output. You are shaping the conditions under which it is produced.
Level 4 is specification-driven development. Intent becomes the source of truth. Specifications are not requirements documents gathering dust — they are executable contracts that both human and AI can verify against. Code becomes a disposable artifact, regenerable from the spec. Agent teams coordinate through shared specifications, and the human contribution shifts decisively from writing code to defining what the code must do and why.
Level 5 is sovereign engineering. The collaboration space sustains itself across teams, repositories, and organisational boundaries. Platform-level harness policies propagate to downstream repos. Health is observed at portfolio scale. Compound learning accumulates across sessions — agents reflect, humans curate, and the habitat improves autonomously through its own feedback loops. The sovereign engineer designs not just their own habitat, but the conditions under which other teams can build theirs.
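To make Level 3’s enforcement loops concrete: the strict merge gate can be as small as a script that reads `go test -cover` output and refuses the merge when coverage falls below a declared floor. This is a minimal sketch in Go; the 80% threshold, the function names, and the exact wiring into CI are illustrative assumptions, not framework mandates.

```go
// A sketch of one "strict gate" loop: fail the merge when measured
// coverage drops below a declared floor. Threshold is hypothetical.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// coveragePercent extracts the total figure from standard `go test -cover`
// output, e.g. "ok  example.com/app  0.41s  coverage: 83.2% of statements".
func coveragePercent(testOutput string) (float64, bool) {
	m := regexp.MustCompile(`coverage: ([0-9.]+)% of statements`).FindStringSubmatch(testOutput)
	if m == nil {
		return 0, false
	}
	pct, err := strconv.ParseFloat(m[1], 64)
	return pct, err == nil
}

func main() {
	const floor = 80.0 // hypothetical mandatory threshold
	pct, ok := coveragePercent("ok  example.com/app 0.41s  coverage: 83.2% of statements")
	if !ok || pct < floor {
		fmt.Fprintf(os.Stderr, "strict gate failed: coverage %.1f%% below %.1f%%\n", pct, floor)
		os.Exit(1) // a non-zero exit is what makes the gate strict, not advisory
	}
	fmt.Printf("strict gate passed: coverage %.1f%%\n", pct)
}
```

The same predicate could run in advisory mode while editing (print a warning, exit zero) and in investigative mode over historical runs; only the consequence changes, not the check.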
Three Disciplines, One Habitat
Beneath the levels, three disciplines mature together:
Context engineering makes the implicit explicit. Every team has conventions that live in people’s heads; pattern recognition built from years of code reviews, production incidents, and architectural arguments. Those conventions transfer slowly through pairing and walk out the door when someone leaves. Context engineering is the discipline of encoding them into artefacts that both humans and AI can read: living documents, project-local skills, structured instructions. The goal is not documentation for its own sake. The goal is that any intelligence, human or artificial, entering the codebase or SDLC can discover the conventions without asking a senior engineer.
Architectural constraints enforce what cannot be left to judgment. Not guidelines. Not suggestions. Hard boundaries backed by verification: this import is forbidden, this coverage threshold is mandatory, this naming convention is checked on every commit. The key insight is that constraints are not restrictions on creativity, they are the structure within which creativity becomes productive. A sonnet has fourteen lines and a strict rhyme scheme. The constraint is what makes it a sonnet rather than a ramble.
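A hard boundary of this kind can be enforced mechanically with a few lines built on Go’s standard `go/parser` package. The rule below (no direct database-package imports from `cmd/` entry points) and the package paths are hypothetical, purely for illustration; the point is that the constraint is checked, not merely documented.

```go
// A sketch of a mechanically enforced import boundary. The rule and
// paths are hypothetical examples, not conventions from the framework.
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strings"
)

// forbidden maps an import path to the directory prefix where it is banned.
var forbidden = map[string]string{
	"example.com/app/internal/db": "cmd/", // no direct DB access from CLI entry points
}

// violations parses Go source for a file at the given repo-relative path
// and reports any imports that cross a forbidden boundary.
func violations(path, src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, path, src, parser.ImportsOnly)
	if err != nil {
		return nil
	}
	var out []string
	for _, imp := range f.Imports {
		ipath := strings.Trim(imp.Path.Value, `"`)
		if prefix, banned := forbidden[ipath]; banned && strings.HasPrefix(path, prefix) {
			out = append(out, fmt.Sprintf("%s imports %s", path, ipath))
		}
	}
	return out
}

func main() {
	src := "package main\nimport _ \"example.com/app/internal/db\"\n"
	for _, v := range violations("cmd/root.go", src) {
		fmt.Println("forbidden import:", v)
	}
}
```

Run on every commit, a check like this turns an architectural intention into a boundary that neither a tired human nor a pattern-completing AI can quietly cross.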
Guardrail design builds the feedback loops that compensate for AI’s fundamental limitation: it cannot evaluate its own output. Since AI has no metacognition — no capacity to step back and ask “is what I just produced actually good?” — the environment must provide that capacity instead. Tests verify behaviour. Coverage verifies execution. Mutation testing verifies the tests themselves. Review agents apply quality lenses. Each loop catches what the previous one missed, creating a verification chain that no single check could achieve alone.
These three disciplines are not independent pillars. They are facets of a single practice: designing the habitat so that the collaboration space draws both intelligences toward their best contributions.
The Roots Run Deep
This is not a framework built on last year’s hype cycle. The intellectual roots reach into territory that most AI tooling guidance never touches.
From cognitive science: the recognition that human and artificial intelligence are different cognitive systems, not the same system operating at different speeds. Human cognition is embodied; shaped by physical experience, emotion, and continuous interaction with reality. AI cognition is statistical; shaped by patterns in training data, operating through linguistic prediction, lacking the embodied grounding that gives human understanding its depth. The framework draws on Andy Clark’s predictive processing, Edwin Hutchins’ distributed cognition, and Lucy Suchman’s situated action to build a model of collaboration that respects what each intelligence actually is, not what marketing materials claim it to be.
From software craft: Donald Knuth’s literate programming (code written for humans to read), Daniel Terhorst-North’s CUPID properties (code as a place of joy), Birgitta Boeckeler’s harness engineering (deterministic and agent-backed enforcement), and Gojko Adzic’s specification by example (concrete examples as executable contracts). Each of these traditions contributes a practical discipline that the framework weaves into the habitat.
From philosophy: Epictetus and the Stoic tradition provide the ethical backbone. What is in your control: design, judgment, intent, the quality of the collaboration space you build. What is not in your control: the existence, pace, and capabilities of AI. Govern your actions. The framework treats AI literacy not as a technical skill but as a form of professional sovereignty; the capacity to remain the designer of the collaboration rather than a passive consumer of its outputs.
From storytelling: the conviction that how you frame a problem determines what solutions you can see. The metaphors matter. Habitats, not factories. Ecology, not machinery. Inhabitants, not users. The framework is deliberately narrative in structure because the progression from Level 0 to Level 5 is a story about how the relationship between two kinds of intelligence matures, and stories are how humans understand change.
What This Means in Practice
The framework is not a theory. It is implemented in working code you can install today.
The AI Literacy Superpowers plugin packages the framework’s complete development workflow as a single installable unit for both Claude Code and GitHub Copilot CLI. Fourteen skills teach everything from literate programming and CUPID code review to harness engineering, convention extraction, and cross-repo orchestration. Ten agents coordinate an end-to-end pipeline from specification to merged PR. Five hooks enforce constraints in real-time. Twelve commands (and their Copilot CLI prompt equivalents) guide the practices. Install it, run /superpowers-init, and the habitat scaffolds itself: living documents, harness constraints, agent definitions, CI templates, and the feedback loops that bind them together.
The AI Literacy Exemplar is a worked example — a Go CLI tool built entirely using the plugin, with authentic git history showing the framework workflow in action. It demonstrates Level 4 (Specification Architect) practices: spec-first development, TDD through the agent pipeline, compound learning with reflections promoted to shared memory, and harness observability with health snapshots. If the plugin is the toolkit, the exemplar is the farmhouse built with it — every room added through the workflow the framework teaches.
Both are open source (Apache 2.0) and designed to be forked, adapted, and made your own. A forthcoming article will explore the plugin’s features in detail; how the skills, agents, hooks, and enforcement loops work together to create the habitat that the framework describes.
It is, in Gabriel’s terms, a farmhouse. Rooms added as needs expanded. Each addition following familiar patterns. The result harmonious not because it was planned from above but because it grew from the needs of its inhabitants. Inhabitants that now include intelligences of a kind Gabriel could not have imagined.
The Mission
Building habitats in which software engineering teams can be creative, where human and artificial intelligence thrive and learn together, drawing from the deep wells of software engineering, storytelling, and the science of how minds work. This is the mission.
That is not an aspiration. It is a design constraint. Every skill, every agent, every enforcement loop, every health snapshot exists because the mission demands it. The habitat is not something you build and then inhabit. It is something you inhabit and thereby build; each failure teaching you what the environment lacks, each success confirming what it provides.
The tools will change. The models will improve. The capabilities will expand in ways none of us can predict. What will not change is the need for a well-designed space where two fundamentally different kinds of intelligence collaborate with honesty, discipline, and mutual respect for what each brings to the work.
Gabriel’s farmhouse endures. The new inhabitants have arrived. The question is whether you will design the habitat or let it design itself, badly, by accident, while you are busy prompting.
Habitat Thinking Distilled
The environment shapes the intelligence. What your codebase produces — human or AI — depends on the space it is produced in. Design the habitat, and the output follows.
Conventions must be discoverable, not tribal. If a new inhabitant — human or artificial — cannot find your team’s conventions without asking a senior engineer, those conventions do not exist at scale. Encode them.
Constraints are not restrictions. They are structure. A sonnet has fourteen lines. The constraint is what makes it a sonnet rather than a ramble. Architectural boundaries, coverage thresholds, and naming rules are the structure within which creativity becomes productive.
Feedback loops compensate for what intelligence lacks. Humans lack tireless attention. AI lacks metacognition. Tests, coverage, mutation testing, and review agents each catch what the others miss. The chain is the point, not any single link.
The habitat is a living document, not a finished blueprint. You discover what the environment needs by inhabiting it. Every failure teaches you what is missing. Every success confirms what is working. The cycle is: discover, encode, fail, update.
Observability makes the invisible visible. A harness that cannot verify its own operation is a harness you hope works. Health snapshots, trend tracking, and meta-observability turn declared infrastructure into living practice.
Sovereignty is a design decision. You control the collaboration space: its conventions, its constraints, its feedback loops, its cadence. You do not control AI’s capabilities, pace, or direction. Govern what is yours. The framework teaches the craft of remaining the designer, not the passenger, of the collaboration.
Somewhere in Le Bon Mot, the espresso machine hissed, as it occasionally did, in Latin.
“Locus facit ingenium.”
The place makes the mind.
Madame Beauregard smiled, polished a glass that was already clean, and said nothing at all.
Further Reading
Architecture and Habitability
Christopher Alexander — A Pattern Language (1977); The Timeless Way of Building (1979). The architectural philosophy behind habitat thinking: buildings shaped by inhabitants, the “quality without a name.”
Richard P. Gabriel — Patterns of Software (1996). The source of habitability as a software concept. Code as a place to live in. The farmhouse, not the Superdome.
Code as Literature
Donald Knuth — “Literate Programming” (1984). Code written for humans to read, and only incidentally for machines to execute.
Daniel Terhorst-North — “CUPID — for joyful coding” (2022). Five properties that good code tends toward: Composable, Unix philosophy, Predictable, Idiomatic, Domain-based.
Harness Engineering
Birgitta Boeckeler — “Harness Engineering” (2026). Exploring Gen AI, martinfowler.com. The three components of a complete harness: context engineering, architectural constraints, and garbage collection.
Rahul Garg — “Encoding Team Standards” (2026). Patterns for Reducing Friction in AI-Assisted Development, martinfowler.com. Treating AI instructions as infrastructure, not individual craft.
Agent Orchestration
Addy Osmani — “The Code Agent Orchestra” (2026). addyosmani.com. Subagent delegation, quality gates, compound learning, and the orchestrator pattern.
Specification-Driven Development
Gojko Adzic — Specification by Example: How Successful Teams Deliver the Right Software. Manning. Concrete examples as executable contracts.
Cognitive Science
Andy Clark — Surfing Uncertainty: Prediction, Action, and the Embodied Mind (2015). Oxford University Press. Predictive processing and the embodied mind.
Edwin Hutchins — Cognition in the Wild (1995). MIT Press. Distributed cognition — intelligence as a property of systems, not individuals.
Lucy Suchman — Plans and Situated Actions (1987). Cambridge University Press. The gap between plans and situated human action.
Philosophy
Epictetus — Enchiridion (The Handbook). What is in your control and what is not. The ethical backbone of professional sovereignty.
Try It
AI Literacy Superpowers — the plugin (Claude Code + Copilot CLI)
AI Literacy Exemplar — the worked example


