The reconstruction tax.
Every time someone leaves, joins, switches projects, or asks an AI to help — someone pays a tax. They spend hours, sometimes weeks, reconstructing context that already existed in someone's head.
This isn't a new problem. But it used to be absorbed as overhead — an invisible cost of doing business. AI made it visible because AI fails loudly when the context is wrong.
What it costs you.
The tax shows up differently depending on where you sit. But everyone pays it.
Leadership
You're making decisions based on concepts your teams define differently. The board report mixes definitions nobody's aligned on. You find out when something breaks, not before.
Go-to-market
Every new hire, every agency, every AI tool gets a different version of what the product is. The first draft is always wrong because the brief was incomplete. Three rounds of review is a context failure, not a quality issue.
Engineering
AI agents hallucinate APIs that don't exist because nothing defines the boundary. New developers take weeks to become productive because the "how things actually work here" lives in one person's head. Senior engineers spend more time explaining than building.
Data & Finance
"Revenue" means three different things depending on who's asking. The board deck, the analytics dashboard, and the finance model use different definitions and nobody's flagged it. Every report is an exercise in reconstruction.
It compounds.
Every AI interaction without bounded context multiplies the tax. Every new hire without it extends the ramp. Every reorg without it restarts the clock. The cost isn't static — it grows with the organisation.
And now there's a dimension most organisations haven't considered: sensitivity. Your CUSTOMER_CONTEXT.md contains pricing logic and churn patterns marked Confidential. An AI agent drafting an external partnership proposal has no way to know that — unless the capsule tells it.
The cost isn't just wrong answers. It's the right answer in the wrong place.
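To make the sensitivity idea concrete, here is a minimal sketch of how an agent could filter a capsule by audience before drafting anything external. The capsule format shown is hypothetical — sections tagged with a sensitivity level in an HTML-style comment — and the function names are illustrative, not part of any real Context Capsules tooling.

```python
"""Sketch: a sensitivity-aware capsule filter (hypothetical format)."""
import re

# Illustrative capsule: markdown sections, each tagged with a
# sensitivity level in a comment. Untagged sections default to internal.
CAPSULE = """\
## Pricing logic
<!-- sensitivity: confidential -->
Enterprise discounts are negotiated case by case.

## Product overview
<!-- sensitivity: public -->
The product is a context-capsule toolchain.
"""

def sections_for_audience(capsule_text: str, audience: str) -> list[str]:
    """Return the section titles an agent may draw on for this audience.

    External audiences only see sections tagged public; internal
    audiences see everything.
    """
    allowed = {"public"} if audience == "external" else {"public", "internal", "confidential"}
    usable = []
    for sec in re.split(r"(?m)^## ", capsule_text):
        if not sec.strip():
            continue
        match = re.search(r"<!-- sensitivity: (\w+) -->", sec)
        level = match.group(1) if match else "internal"
        if level in allowed:
            usable.append(sec.splitlines()[0].strip())  # keep the section title
    return usable

print(sections_for_audience(CAPSULE, "external"))  # pricing logic is withheld
print(sections_for_audience(CAPSULE, "internal"))  # everything is available
```

The point is not the twenty lines of Python — it's that the sensitivity label travels with the context, so the agent drafting the partnership proposal doesn't have to already know what it isn't allowed to say.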
This isn't new thinking.
Engineers have been doing this for decades — drawing strict boundaries around services so each one has a complete, self-contained definition of what it does and what it doesn't. It works. Systems built this way are easier to reason about, easier to change, and dramatically easier for AI to work with.
Context Capsules take the same principle and apply it to everything else — products, teams, metrics, companies, processes. The format is new. The thinking isn't.
The hardest part isn't the format.
It's writing down the thing everyone knows and nobody says.
That the CRM workaround is load-bearing. That the metric the board sees isn't the metric the team tracks. That onboarding is a 45-minute brain dump from the one person who knows.
Capsules work because they make this operational — not a confession, but a risk note. The owner is accountable for accuracy, not blamed for the problems they describe.
So how do you actually start?
The format. The tests. The first capsule to write.