Right before Dreamforce 2025, Salesforce published a post that, on the surface, looks like an analytics announcement: a commitment to an “open semantic layer” and the launch of the Open Semantic Interchange (or OSI) alongside partners like Snowflake and dbt Labs.

But read more closely and it’s something bigger than a standards initiative. It’s an admission.

Salesforce is saying — quite explicitly, even — that the biggest blocker to the adoption of agentic AI isn’t model quality, tooling, or even access to data. It’s meaning. Context. And more specifically: inconsistent, fragmented, drifting meaning inside enterprise systems.

That framing matters.

A lot.

The shift hidden in the headline

For years, enterprise analytics focused on outputs: dashboards, reports, charts. If the numbers looked right, the system was assumed to be healthy.

Agentic AI breaks that illusion.

When agents don’t just analyze data but reason over it and act on it, ambiguity stops being an inconvenience and starts becoming a risk. A definition mismatch isn’t just a bad chart; it’s a bad decision executed at machine speed.

Salesforce names this problem the “data meaning disconnect.” One metric, multiple definitions. Slight variations in logic. Semantic sprawl creeping in over time. Trust eroding quietly until leaders stop believing what they see.

This is an important moment because it aligns with something many operators already feel: AI doesn’t fail loudly at first. It fails subtly, through confidence decay.

Open semantics are necessary but not sufficient

OSI’s core promise is sound: define business meaning once, and let it travel cleanly across tools. Metrics, dimensions, hierarchies, and governance rules should move without being reinterpreted or rebuilt.
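
To make that concrete, here’s a minimal sketch of the pattern in Python. It is illustrative only, not the OSI format itself (which the post doesn’t detail), and the table and column names are invented: one canonical metric definition, expressed as data, that downstream tools compile instead of re-deriving.

    # Illustrative only: not the actual OSI schema. The idea is a single
    # canonical metric definition, expressed as data, that every tool
    # compiles instead of re-deriving on its own.
    CHURN = {
        "name": "churn_rate",
        "description": "Share of customers lost during the period",
        "numerator": "SUM(CASE WHEN status = 'churned' THEN 1 ELSE 0 END)",
        "denominator": "COUNT(DISTINCT customer_id)",
        "source": "analytics.customer_snapshots",  # hypothetical table
        "grain": "month",
    }

    def compile_metric(m: dict) -> str:
        """Render the shared definition as SQL. If every tool calls this
        instead of hand-writing its own query, definitions can't drift."""
        return (
            f"SELECT DATE_TRUNC('{m['grain']}', snapshot_date) AS period, "
            f"{m['numerator']} * 1.0 / {m['denominator']} AS {m['name']} "
            f"FROM {m['source']} GROUP BY 1"
        )

    print(compile_metric(CHURN))

When the canonical definition changes, every consumer picks up the change at the source instead of reinterpreting it downstream.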

That’s a meaningful step forward. Interoperability matters. Vendor-locked semantics don’t scale in a best-of-breed world.

But there’s a deeper issue OSI doesn’t fully address: Semantic consistency explains what something means. It doesn’t explain why a system behaves the way it does.

And for agentic AI, that distinction is critical to making the right calls.

The missing layer: behavioral legibility

Most real-world AI failures in systems like Salesforce don’t come from misunderstood metrics. They come from misunderstood behavior.

Why did this Flow fire?
Why did this permission override that one?
Why did an automation act in system mode instead of user mode?
Why did an agent have access here but not there?

These aren’t semantic questions in the analytics sense. They’re questions of precedence, inheritance, execution context, and historical drift. They live in metadata, not dashboards.
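Those answers live in objects like flow definitions, permission sets, and sharing rules rather than in any metric store. As a rough sketch of what reading that layer looks like (this assumes Salesforce’s standard Tooling API query endpoint; the org URL and token are placeholders):

    # Rough sketch: reading behavioral metadata (which flows can fire)
    # from a Salesforce org via the Tooling API. The instance URL and
    # token are placeholders; error handling is omitted for brevity.
    import requests

    INSTANCE = "https://example.my.salesforce.com"  # placeholder org
    TOKEN = "..."                                   # placeholder token

    resp = requests.get(
        f"{INSTANCE}/services/data/v59.0/tooling/query",
        params={"q": "SELECT DeveloperName, ActiveVersionId FROM FlowDefinition"},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()

    # An active version means the flow can fire in production: behavioral
    # context that no shared metric definition will ever carry.
    for flow in resp.json()["records"]:
        state = "active" if flow["ActiveVersionId"] else "inactive"
        print(f"{flow['DeveloperName']}: {state}")

The point isn’t this particular query; it’s that the answers are queryable at all.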

You can standardize the definition of “churn” across tools and still have no idea what will happen when an agent updates a record in production.

This is where the agentic future gets dangerous.

Agents don’t just need shared meaning; they need explainable systems

Salesforce’s post includes a crucial line: “Even correct AI behavior can create unintended outcomes.”

To us, that's the biggest tell.

If an agent is operating inside an opaque system, correctness is irrelevant. Confidence collapses when no one (human or machine) can explain what the system will do before it does it.

Agentic readiness, then, isn’t just about open semantics. It’s about system legibility. “Legibility” here means being able to answer, ahead of time:

  • What will change?
  • What depends on it?
  • Who is affected?
  • What rules actually apply?
  • Where does authority truly live?

Without that, agents may be exceedingly well-informed — and yet still unsafe to use.
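
What would answering those questions look like in practice? Here’s a purely hypothetical sketch: every name in it is invented, and nothing here is a real Salesforce or OSI API. The point is the shape of the answer an agent needs before it acts.

    # Hypothetical sketch: the pre-flight answer an agent should produce
    # before acting. All names are invented; the point is the shape of
    # the answer, not any real API.
    from dataclasses import dataclass

    @dataclass
    class ImpactReport:
        change: str                  # what will change?
        dependents: list[str]        # what depends on it?
        affected_users: list[str]    # who is affected?
        applicable_rules: list[str]  # what rules actually apply?
        authority: str               # where authority truly lives

    def preflight(change: str, metadata_graph: dict) -> ImpactReport:
        """Answer the five legibility questions from a metadata graph
        before any write executes; if an answer is unknown, don't act."""
        node = metadata_graph.get(change, {})
        return ImpactReport(
            change=change,
            dependents=node.get("dependents", []),
            affected_users=node.get("users", []),
            applicable_rules=node.get("rules", []),
            authority=node.get("owner", "unknown"),
        )

The mechanics will differ by platform; the requirement is that these five fields are answerable before the write, not reconstructed after the incident.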

What this moment signals for the enterprise

To us, Salesforce’s OSI announcement should be read as a macro signal of more to come, not a feature launch.

The industry is finally acknowledging that AI scale depends on shared understanding, not just shared data. That’s progress.

The next step is harder: making complex systems readable enough that agents can act with confidence, not just context.

Open semantics help meaning travel between platforms. System legibility determines whether agents should act at all.

The agentic future will be built on systems we can actually understand, not dashboards.

And that work is only just beginning.
