
The call, as the trope goes, may come from inside the house.
In enterprise systems, it already has.
The field nobody remembers creating. The flow labeled DO NOT MODIFY. The permission set at 98% utilization (😨). All of these harbingers of tech debt amount to a slow accumulation that turns a working system into an artifact, and I’m here to argue that they’re the reason enterprises will get disrupted: from the inside, not by some startup hiding behind the shower curtain.
Most explanations for disruption start in the same place: speed.
Startups can move faster. Enterprises move slower. New tools beat old ones. Case closed.
It’s a clean story. It’s also completely incomplete.
In most enterprise systems, the internal slowdown doesn’t happen when teams try to build something new. It happens earlier — when they try to understand what’s already there.
The hidden work
A not-so-secret industry secret is that the majority of systems work happens before anything gets built.
Teams spend their time tracing dependencies, checking permissions, mapping flows, and asking some version of the same question: What happens if I touch this?
Obviously this isn’t wasted effort. It’s necessary to the job. In complex systems, understanding is the prerequisite for safe change. But it’s also where velocity goes to die.
A simple change — a field update, a workflow adjustment — rarely starts with just doing the damn thing. It starts with reconstruction. What depends on this field? What automations fire? Who owns it? What breaks if that picture is wrong or incomplete?
When those answers aren’t immediately available, a one-hour task becomes an eleven-hour investigation. When a system can’t be easily understood, teams are forced to drag the enormous weight of the enterprise’s systems out, lay it on the table, and get to hunting.
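To make “What happens if I touch this?” concrete, here’s a minimal sketch of the traversal teams do by hand during those investigations. The component names and the hand-built dependency dict are invented for illustration; in a real org this graph would come from the platform’s metadata, not a dict.

```python
from collections import deque

# Hypothetical metadata graph: component -> components that depend on it.
DEPENDENTS = {
    "Account.Region__c": ["flow:RouteLeads", "report:PipelineByRegion"],
    "flow:RouteLeads": ["queue:EMEA_Leads"],
    "report:PipelineByRegion": [],
    "queue:EMEA_Leads": [],
}

def blast_radius(component: str) -> list[str]:
    """Everything downstream of a component: what might break if it changes."""
    seen, order = set(), []
    frontier = deque(DEPENDENTS.get(component, []))
    while frontier:
        node = frontier.popleft()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        frontier.extend(DEPENDENTS.get(node, []))
    return order

print(blast_radius("Account.Region__c"))
# → ['flow:RouteLeads', 'report:PipelineByRegion', 'queue:EMEA_Leads']
```

The traversal itself is trivial. The eleven hours go into assembling the graph — which is exactly the cost that compounds as the system grows.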
(And AI? AI is just making it worse. Salesforce’s decision in Headless 360 to remove friction, probably necessary friction, from changing the system makes this even more true.)
This is why enterprises slow down
Early-stage systems don’t have this problem. When a system is small, the people building it still hold the context in their heads. Dependencies are limited. Changes are local. You don’t need to ask what something does — you already know.
This is why lightweight teams feel fast: they don’t have to stop and figure out what they already built.
As systems grow, as new users are added, as logic builds, as AI speeds up the build process, all of that changes.
Every new field, integration, permission, and flow adds more to the surface area. The relationships between components multiply. The original intent behind decisions fades. Documentation falls behind. Institutional knowledge fragments.
Over time, the system tips past the drag point and becomes something else entirely: no longer a tool, but an artifact. And a broken one at that.
Once that happens, speed becomes a function of understanding — not execution.
The ratio breaks
This gap eventually widens as companies grow. The cost of building stays relatively constant. A developer can still create a field in seconds. AI can generate flows in minutes. Execution keeps getting faster.
But the cost of understanding rises with every layer of complexity.
So the ratio snaps in half. Anybody can generate new components faster than you can understand the ones that already exist. Every change adds more unknowns. Every unknown adds more time required to make the next change safely.
Eventually, teams enter a permanent state of investigation.
That’s a tax on velocity. And at scale, it becomes a serious competitive disadvantage.
Where disruption actually happens
This is why smaller, more adaptable teams win out.
They rarely have better tools, but they can move from question to answer faster.
When a team instantly understands how their system works (what depends on what, what will break, what’s safe to change, etc.), they move cleanly from understanding to execution.
In an odd way, a faster competitor doesn’t need to outbuild you to beat you: they just need to out-understand you.
Why “replace it” doesn’t work
The typical response to this problem is replacement. Rip out the old system. Start fresh. It feels like progress, like relief even. (Sometimes it is both things.)
But replacement resets the system. It doesn’t improve the underlying dynamic: the loss of important context.
If the new system accumulates changes without keeping understanding, it follows the same trajectory. Dependencies expand. Knowledge fragments. Investigation grows. Velocity drops.
You end up in the same place, just on a (much) shinier platform.
Refactor, don’t replace
The alternative is harder, but more durable. Instead of resetting the system, you make it legible.
You reduce the cost of understanding:
- Make dependencies visible
- Capture the “why” behind changes
- Turn investigation into something structured and reusable
- Ensure that context lives with the system, not outside it
This makes the system itself queryable — so that any question about how it works can be answered instantly, in context.
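What “context lives with the system” might look like, as a minimal sketch: the owner, the intent, and the dependencies are stored on the component itself, so the question is a lookup instead of an archaeology dig. Every name and field here is hypothetical, not a real platform API.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    owner: str
    why: str                            # intent captured at creation time
    depends_on: list[str] = field(default_factory=list)

# Hypothetical registry: context recorded with each component.
REGISTRY = {
    c.name: c
    for c in [
        Component("Account.Region__c", "rev-ops", "territory routing, Q3 2022"),
        Component("flow:RouteLeads", "sales-eng", "auto-assign EMEA leads",
                  depends_on=["Account.Region__c"]),
    ]
}

def explain(name: str) -> str:
    """Answer 'why does this exist and what does it touch?' in one call."""
    c = REGISTRY[name]
    return (f"{c.name}: owned by {c.owner}; exists because: {c.why}; "
            f"depends on {c.depends_on or 'nothing'}")

print(explain("flow:RouteLeads"))
```

The design choice is the point, not the code: capturing the “why” at write time is cheap; reconstructing it at read time, years later, is where the eleven-hour investigations come from.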
When teams can see clearly, they move differently. Planning accelerates. Execution follows. Risk drops. The system becomes something you can reason about quickly and confidently.
This is the space in which companies win. In the AI era, as the need for context explodes, teams need processes for understanding that are fast enough to keep up with AI.
How quickly you can build is no longer the real issue. The issue is how quickly you can understand what you’re building on.
In the end, enterprises that get disrupted will simply be out-understood — usually by a team half their size, with a quarter of their budget, that can still answer a straight question about its own system without convening a war room two weeks from today.


