Salesforce orgs don’t usually break because someone built the wrong Flow. They break because no one — human or machine — can say with confidence who has the authority to do what, and why.

That condition has a name in the Salesforce Entropy Index: Fragmented Authority.

It’s one of the seven “Entropy Drivers,” and it explains why permissions-related questions consistently produce the highest uncertainty — for admins, auditors, and now AI agents. This is not because permissions are exotic, but because they sit at the most unforgiving boundary in the system: the point where reasoning turns into action.

What “Fragmented Authority” actually means

Fragmented Authority exists when system behavior is governed by multiple overlapping mechanisms, none of which is clearly authoritative.

In Salesforce, authority is rarely located in one place. It’s distributed across profiles, permission sets and groups, object- and field-level security, sharing rules, role hierarchy, execution context, and layers of Flow, Apex, or managed package overrides. Each of these mechanisms is internally coherent. Each was designed to solve a real problem.

Taken together, they form something else entirely.

There is no single layer where authority lives. There is only runtime resolution. A moment-by-moment reconciliation of rules authored at different times, by different people, with different assumptions. Authority, in practice, is something Salesforce calculates, not something it stores.
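That “computed, not stored” distinction can be illustrated with a toy model. This is a deliberately simplified sketch — the layer names and grant strings are hypothetical, and real Salesforce resolution spans far more mechanisms — but it shows why no single place in metadata holds the answer:

```python
from dataclasses import dataclass, field

# Toy model: each layer of grants was authored independently, at a
# different time, by a different person. The effective answer only
# exists once every layer is combined at runtime.
@dataclass
class GrantLayer:
    name: str
    grants: set = field(default_factory=set)  # e.g. {"Account.edit"}

def effective_permissions(layers):
    """Authority is computed, not stored: union the grants across layers."""
    result = set()
    for layer in layers:
        result |= layer.grants
    return result

profile = GrantLayer("Standard User profile", {"Account.read"})
ps_deal = GrantLayer("Permission set added to unblock a deal", {"Account.edit"})
ps_support = GrantLayer("Permission set from a support fix", {"Case.edit"})

perms = effective_permissions([profile, ps_deal, ps_support])
assert "Account.edit" in perms                # true only because of the deal-era grant
assert "Account.edit" not in profile.grants   # no single layer states it
```

The point of the sketch: ask any one layer “can this user edit Accounts?” and it answers incorrectly. Only the runtime union is right.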

Authority isn’t missing — it’s smeared

This is the subtlety that often gets lost.

Fragmented Authority doesn’t mean no one is in charge. It means authority has been smeared across the system over time. A permission added to unblock a deal. A Flow switched to system context to fix a support issue. A managed package that quietly asserts its own rules.

None of these changes feel dangerous in isolation. But collectively, they create a system where the answer to “who can do what” depends on how the action is triggered, where it originates, and which layer executes last.

That pattern doesn’t stop at permissions.

It shows up anywhere the system has to decide what happens next — routing logic, automation precedence, lifecycle state changes, even who is allowed to change the system itself. Permissions just happen to be where this ambiguity becomes impossible to ignore.

Why Fragmented Authority produces high entropy

From an entropy perspective, permissions questions differ in kind, not just in degree, from other questions about the system.

To answer something as simple as “Can this user do this thing?”, the system has to reconcile multiple metadata sources, apply precedence rules that are rarely explicit, resolve execution context dynamically, and account for exceptions that have accumulated over time. The answer may even change depending on whether the action is taken manually, via automation, or through an API.
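That last point — the answer changing with the execution path — is worth making concrete. The following toy resolver (all names and the logic are hypothetical, not Salesforce’s actual evaluation) shows how the same user can get different answers depending on how the action is triggered:

```python
def can_edit(user_grants, channel, flow_runs_in_system_mode=False):
    """Toy resolver: the answer depends on how the action is triggered,
    not just on who the user is."""
    if channel == "flow" and flow_runs_in_system_mode:
        return True  # system context sidesteps the user's own grants
    return "Account.edit" in user_grants

grants = {"Account.read"}  # this user cannot edit Accounts directly

assert can_edit(grants, channel="ui") is False
assert can_edit(grants, channel="api") is False
# ...yet the same user succeeds through a Flow switched to system context:
assert can_edit(grants, channel="flow", flow_runs_in_system_mode=True) is True
```

Three channels, one user, two different answers — and nothing in the user’s own grants predicts the third result.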

Crucially, there is no static, authoritative answer encoded anywhere in metadata.

The same is true in other authority-heavy domains. Who owns this lead? Which automation wins? When does an opportunity actually become qualified? In each case, the system resolves authority at runtime — and only at runtime.

That’s why permissions-related interactions routinely score a 4 or 5 on the Entropy Index. Not because Salesforce is poorly designed, but because authority is computed, not declared. And anything that must be computed dynamically is harder to reason about safely.

What the Salesforce Entropy Index reveals

Across thousands of anonymized AI agent interactions, a clear pattern emerged: tasks like explanation, auditing, and dependency tracing consistently cluster around medium entropy. Even deeply nested Flows or complex Apex classes remain explainable with enough context.

But every very high-entropy interaction involved access, permissions, or some other authority boundary — ownership, routing, enforcement, or change control. No amount of sophistication elsewhere in the system produced the same failure mode.

This isn’t a gradual slope where complexity slowly increases risk. It’s a cliff. Permissions just happen to sit closest to the edge.

Why humans cope (and AI doesn’t)

Experienced admins often feel confident answering permission questions. That confidence is real, but it rarely comes from the system itself.

It comes from institutional memory. From knowing which permission set “actually matters.” From trial-and-error testing. From folklore like “we don’t touch that profile” or “this Flow only works because it runs in system mode.” From knowing who to Slack when something behaves strangely.

The same coping mechanisms exist in other fragmented authority domains. Humans learn the edges. They remember the exceptions. They route around ambiguity socially.

None of that lives in metadata.

AI agents don’t struggle here because they’re reckless or underpowered. They struggle because the system itself cannot state the truth unambiguously. Fragmented Authority is where years of human compensation behavior have been quietly masking systemic uncertainty.

The risk to autonomous systems

As Salesforce becomes a system of action — not just a system of record — this uncertainty stops being theoretical.

AI risk doesn’t scale with complexity. It scales with epistemic uncertainty.

An agent can safely explain a deeply nested Flow. It can reason about routing logic or lifecycle definitions. But the moment an agent crosses an authority boundary — assigning ownership, changing access, enforcing rules — it stops reasoning about the system and starts acting within it.

That’s where uncertainty becomes irreversible.

This is why permissions fail first. Not because they’re the only problem, but because they’re where ambiguity turns into consequence.

Designing for governed autonomy

The answer isn’t to slow everything down to a crawl. And it certainly isn’t to ban AI from touching sensitive areas wholesale.

It’s to be precise.

In practice, governed autonomy means allowing low- and medium-entropy interactions to proceed autonomously, while requiring review for high-entropy interactions — especially those involving authority. By classifying interactions by entropy before execution, organizations can remove a disproportionate share of AI risk without limiting everyday automation.
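A minimal sketch of that gate might look like the following. The action names, scores, and threshold here are hypothetical illustrations, not the actual Entropy Index scale or any particular product’s API:

```python
# Hypothetical entropy scores per interaction type (1 = low, 5 = very high).
ENTROPY = {
    "explain": 2,
    "audit": 3,
    "trace_dependencies": 3,
    "assign_ownership": 4,   # crosses an authority boundary
    "change_access": 5,      # crosses an authority boundary
}

REVIEW_THRESHOLD = 4  # at or above this score, a human reviews first

def route(interaction):
    """Classify before execution: autonomous below the threshold,
    reviewed at or above it. Unknown interactions fail closed."""
    score = ENTROPY.get(interaction, REVIEW_THRESHOLD)
    return "autonomous" if score < REVIEW_THRESHOLD else "needs_review"

assert route("explain") == "autonomous"
assert route("change_access") == "needs_review"
assert route("something_new") == "needs_review"  # fail closed
```

The design choice that matters is the default: an interaction the classifier has never seen routes to review, so ambiguity costs a pause rather than an irreversible action.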

You move fast where authority is legible. You slow down where it isn’t.

That’s governed speed — not fear-based control.

Where Sweep fits

Sweep exists to make high-entropy domains legible.

By continuously mapping metadata, dependencies, execution context, and change history, Sweep gives both humans and agents a clearer picture of where authority is fragmented — whether in permissions, routing, automation, or system change itself. More importantly, it shows where certainty collapses, and where autonomy is actually safe.

Entropy can’t be eliminated. Salesforce will always be powerful, flexible, and layered.

But once you can see Fragmented Authority, you can finally design around it — instead of trusting what the system can’t actually guarantee.

Want to find out more? Book a demo.
