Perplexity’s new dataset of 100M+ agent interactions reveals a pattern anyone working in tech will recognize immediately: people trust agents quickly, but agents struggle the moment systems behave in ways they can’t see or predict. That isn’t a model problem. It’s a context problem.

TL;DR

  • Perplexity’s dataset shows users escalate quickly to high-stakes delegation when the environment is predictable.
  • Agents fail when systems are ambiguous, undocumented, or drifting.
  • Salesforce could be the ideal agent environment — but only if metadata, lineage, and dependencies are continuously understood.

AI agents are surging — but they are all hitting the same wall

Perplexity’s research from this week makes one thing clear: users adopt agents enthusiastically, push them into meaningful workflows, and escalate complexity fast. But usage plateaus at the exact moment a task demands even a sliver of system-level understanding.

Across every category, the same thing happens:

Agents thrive in places that have structure; agents falter in places of ambiguity.

This is the part of the conversation that rarely gets the airtime it deserves. The models are no longer the bottleneck; they have the skills. It’s the systems that are holding the enterprise back, and systems fail for one simple reason: they can’t make their own logic legible to the agents operating inside them.

What Perplexity’s study actually shows

1. People use agents for real work, not experiments

36% of interactions sit in productivity: editing, interpreting, managing, organizing.

2. Trust grows (surprisingly) quickly

Sessions shift toward harder tasks as soon as users feel the environment is predictable.

3. High-stakes delegation is already happening

Purchases, emails, document changes — users don’t wait for “enterprise readiness.”

4. Adoption concentrates in digital and ops-heavy roles

The same roles that live inside Salesforce. (Want to go deeper? Read about making your Salesforce AI-ready here.)

5. Agents only perform well in stable, structured environments

Google Docs, YouTube, LinkedIn — systems with consistent formatting, clear relationships, and predictable behavior.

The implication here is unavoidable:

AI agents fail when the system cannot express its own logic, dependencies, and history.

Salesforce should be an agent’s dream, but only if the metadata is fully intact

Salesforce has everything an agent should love: structured objects, explicit flows, defined relationships, and mission-critical automations.

And everything that breaks automation at scale:

  • drift that builds up without notice,
  • logic held together by tribal knowledge,
  • undocumented dependencies buried five admins deep,
  • naming conventions that change with every re-org.

A concrete example:

Ask an agent to update Lead Status.

Seems simple. But in many real orgs, that one update triggers:

  • seven downstream flows,
  • two territory adjustments,
  • a third-party enrichment callout,
  • a validation rule written years ago that nobody wants to touch.

The agent doesn’t see any of that. It just sees a field.
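
One mitigation is to make the agent (or the harness around it) ask the org what depends on a field before touching it. Here is a minimal sketch against the Salesforce Tooling API’s MetadataComponentDependency object, which is still a Beta object, requires a filtered query, and has limited coverage of standard fields like Lead.Status; the org URL, API version, and token below are placeholders.

```python
import requests

# Placeholders: supply your own org URL, API version, and OAuth token.
INSTANCE_URL = "https://yourorg.my.salesforce.com"
API_VERSION = "v59.0"
ACCESS_TOKEN = "<access-token>"

def dependencies_on(field_name: str) -> list[dict]:
    """Ask the Tooling API which metadata components reference a field.
    MetadataComponentDependency is Beta and its coverage of standard
    fields is limited, so treat an empty result as 'unknown', not 'safe'."""
    soql = (
        "SELECT MetadataComponentName, MetadataComponentType "
        "FROM MetadataComponentDependency "
        f"WHERE RefMetadataComponentName = '{field_name}'"
    )
    resp = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/tooling/query",
        params={"q": soql},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]

# Surface the flows, validation rules, and Apex that hang off the field
for dep in dependencies_on("Status"):
    print(dep["MetadataComponentType"], dep["MetadataComponentName"])
```

Even this crude check turns “it just sees a field” into “it sees seven flows and a validation rule” before anything gets written.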

For anyone responsible for uptime, compliance, and architectural coherence, this is the real challenge: the system is opaque, and that opacity is what makes AI unsafe.

Why metadata governance becomes the real prerequisite for enterprise AI

To operate safely, an agent needs a reliable understanding of:

  • how the system works,
  • how it is changing,
  • and what depends on what.

Salesforce doesn’t expose these things cohesively. Audit logs are incomplete. Documentation falls behind reality. Lineage is scattered across mental models and Slack threads.
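
To make that concrete, here is one hypothetical shape such an understanding could take: a per-component context record handed to the agent before it acts. The schema and field names are illustrative only, not any vendor’s format.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ComponentContext:
    """Hypothetical, minimal context an agent would need before
    acting on one component. Names are illustrative only."""
    api_name: str                    # e.g. "Lead.Status"
    component_type: str              # e.g. "StandardField", "Flow"
    depends_on: list[str] = field(default_factory=list)      # upstream inputs
    depended_on_by: list[str] = field(default_factory=list)  # downstream flows, rules, callouts
    last_changed: datetime | None = None
    last_changed_by: str | None = None
    change_note: str | None = None   # the "why" that usually lives in Slack

    def requires_review(self) -> bool:
        # Crude gate: escalate to a human when lineage is unknown
        # or anything downstream depends on this component.
        return self.last_changed is None or bool(self.depended_on_by)
```

An agent that refuses to act while requires_review() is true is slightly less capable on paper and considerably more trustworthy in practice.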

Perplexity’s data shows trust scales with predictability, as you might expect. In the enterprise, predictability comes from metadata clarity: not just structure, but history, relationships, meaning, and an awareness of drift.

Without that, the agent is not “autonomous.” It’s just ungoverned.

How Sweep gives agents the context they’ve been missing

Sweep maintains the living model of your Salesforce environment that AI agents rely on. Not as a report, but as an always-current understanding of:

  • what’s connected,
  • what changed,
  • what might break,
  • and why the system behaves the way it does.

When metadata drifts, Sweep surfaces it before an agent learns from the wrong behavior.

When a single field change has a dozen downstream implications, Sweep maps those dependencies so actions are deliberate, not speculative.

When processes evolve, as they inevitably do, Sweep preserves the operational lineage agents need to reason instead of guess.
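
At its simplest, drift detection of this kind is a diff over successive metadata snapshots. A generic sketch of the idea, keyed on content hashes (this is an illustration of the pattern, not Sweep’s implementation):

```python
def diff_snapshots(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    """Compare two {component_name: content_hash} metadata snapshots
    and bucket the drift. Generic illustration, not any vendor's code."""
    return {
        "added":   sorted(after.keys() - before.keys()),
        "removed": sorted(before.keys() - after.keys()),
        "changed": sorted(k for k in before.keys() & after.keys()
                          if before[k] != after[k]),
    }

# Example: last night's snapshot vs. this morning's
yesterday = {"Flow:LeadRouting": "a1f3", "ValidationRule:LeadStatusGuard": "9c2e"}
today     = {"Flow:LeadRouting": "b7d0", "ValidationRule:LeadStatusGuard": "9c2e",
             "Flow:TerritoryAssign": "5e11"}

print(diff_snapshots(yesterday, today))
# {'added': ['Flow:TerritoryAssign'], 'removed': [], 'changed': ['Flow:LeadRouting']}
```

The hard part in a real org is not the diff; it is keeping the snapshots complete and current enough that the diff means something.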

In the end, users don’t adopt Sweep to “add more AI.” They adopt Sweep to eliminate architectural uncertainty — the real blocker to agent reliability.

A single strategic takeaway

Perplexity’s data validates both the rise of agents AND the need for systems that can explain themselves.

Enterprises that invest in metadata clarity will unlock safe, scalable automation. Enterprises that don’t? Well, they'll experience “AI readiness” as a series of outages, rollbacks, and emergency freeze windows.

The orgs that win the Agent Era will be the ones whose systems are understandable to humans and machines alike.

Learn More