Every engineering team I've spoken to this year has the same standup moment. A PM asks what a feature does in a specific edge case. Three engineers look at each other. The one who shipped it last week opens Cursor to re-read the code they wrote themselves.
Except they didn't, really. Cursor wrote most of it. Claude Code or Copilot filled in the rest. The human read through the change, accepted it, ran the tests, and hit release.
We have decoupled shipping from understanding, and nothing in the development process is catching it.
AI broke code review
Code review used to mean comprehension. When a human wrote every line, reading the change was reading their thinking. You could tell what they meant by how they structured it — which edge cases they had considered, which ones they had not. The review was a second pass at the logic, not just the syntax.
Now the change is an agent's output, and the review is "does it compile, does CI pass, does the naming look reasonable". That is not comprehension. That is quality control on a machine's work.
The DORA 2024 report found that AI adoption correlates with lower delivery stability — more code shipped, more rework required. The speed gain is real. The understanding gain is not.
Comprehension debt is the new technical debt
Technical debt is known shortcuts. Someone chose a quick fix, they know where it lives, they can point at it in the codebase. It shows up on a board somewhere, eventually.
Comprehension debt is different. It is shipped behaviour nobody on the team can currently explain. Not because it is complex — because no human ever fully held it in their head in the first place. Port.io's 2025 research found that only 3% of engineers completely trust their internal documentation. That number will get worse, fast. Documentation used to lag the code; now it never described the code in the first place.
You can catch technical debt by reading the code. You cannot catch comprehension debt the same way, because reading the code is precisely the step that got skipped.
Specs used to be the input. Now they need to be the output.
The old model worked like this: a PM wrote a spec, engineers built against it, the spec became the reference. It drifted from reality over time, but at least it described what someone intended to build. The drift was structural, but the starting point was a human artefact.
The new model does not start with a spec. An engineer opens Cursor, prompts an agent, iterates on the output, and releases. The agent writes five or ten times the volume a human would in the same time. Intent drifts from the first prompt to the final release, often invisibly. By the time it ships, there is no artefact anywhere that describes what actually got built — only a ticket describing what someone hoped it would do.
The only artefact that can represent what was actually built is one derived from the code itself. That is what a living product specification is — not a document someone maintains, but an output of the build process. Every release, the scenarios update. The Context/Action/Outcome for each feature reflects the code, not someone's memory of the conversation they had about it three weeks ago.
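To make the shape concrete, here is a minimal sketch of the idea in Python. Every name and structure below is hypothetical, an illustration of a Context/Action/Outcome entry and a release-time update step, not Specsight's actual implementation:

```python
# Hypothetical sketch: scenarios derived from the released code replace
# the spec's previous view of each feature, so the spec reflects the
# build rather than anyone's memory. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    feature: str   # the feature the behaviour belongs to
    context: str   # the state the system is in
    action: str    # what the user or system does
    outcome: str   # the behaviour the shipped code actually implements

SpecKey = tuple[str, str]  # (feature, action) identifies one scenario

def update_spec(
    spec: dict[SpecKey, Scenario],
    derived: list[Scenario],
) -> dict[SpecKey, Scenario]:
    """Merge freshly derived scenarios into the living spec.

    The latest release wins: a scenario derived from the new code
    replaces any stale entry for the same feature and action.
    """
    merged = dict(spec)
    for s in derived:
        merged[(s.feature, s.action)] = s
    return merged
```

Run at the end of every release pipeline, a step like this keeps the spec an output of the build rather than a document someone maintains; a stale entry survives only until the next release touches that behaviour.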
What "knowing your product" looks like now
If the code is machine-written, the human's job shifts. Writing was always the hard part; now writing is cheap and understanding is the bottleneck. The question "what does our product actually do?" used to have a person who could answer it. Now it needs an artefact.
That artefact has to be derived, not written. It has to update on release, not in a doc-sprint. And it has to be readable by a PM, a customer success manager, an engineering manager — not just by whoever shipped it. Otherwise the comprehension debt keeps compounding with every AI-assisted release.
The speed of AI coding tools is not going to reverse. The gap between what is built and what is understood is going to widen. The teams that ship fastest will need a living spec the most, because they are the ones least able to rebuild the mental model from the code when they need to.
Specsight reads your codebase and generates a living spec in Context/Action/Outcome — updated on every release, readable by everyone on the team. The demo project shows what that looks like with a real codebase. No account required. If you want to connect your own repository, get started free.
