• Most enterprise AI deployments fail at the metadata layer — agents improvise when definitions, dependencies, and logic are invisible or inconsistent.
• A real AI readiness audit checks seven things: metadata inventory, definitional consistency, dependency mapping, logic and drift, permissions, reversibility, and continuous freshness.
• Run the audit before launch, not after the first incident. The goal isn't to slow AI down — it's to give it ground to stand on.

*****

Most enterprises think they're AI-ready because they have clean data. They're wrong about what AI runs on.

AI agents aren't deterministic; they operate on meaning, on the fields, relationships, definitions, automations, and business logic that tell a system what those rows actually are. That layer is metadata. And in most enterprise systems, it's a mess: drifted, undocumented, inconsistently defined, and usually full of legacy logic nobody remembers writing.

That's why the first Agentforce deployment so often fails the same way: the data was fine. The context wasn't. The agent improvised, made wrong assumptions, took the wrong action, or gave a confidently wrong answer. The post-mortem points at the model. The actual problem was upstream.

This post will teach you how to find that out before you deploy — by running an AI readiness audit on your systems, not just your data.

What enterprise AI readiness actually means

Enterprise AI readiness has three honest tests:

  1. Can your AI see what's actually in your systems? Not the LucidChart diagram from two quarters ago — the real, current state of every object, field, automation, and dependency.
  2. Will your AI interpret those things consistently? If qualified means four different things in four different flows, the agent will average across all four and answer with confidence it shouldn't have.
  3. Can your AI act safely? When an agent updates a record, calls an API, or triggers a flow, do you know what breaks? Can you reverse it? Can you reconstruct what happened in an audit?

If you can't answer all three with a defensible yes, your enterprise isn't AI-ready yet — regardless of how good your data warehouse is or how senior your AI hires are.

Why most enterprises fail the first time

The shortest explanation: enterprise systems accumulate logic faster than they document it.

A flow gets added during a Q3 push. A validation rule gets layered on for a one-off compliance ask. A custom field gets created and immediately abandoned. Apex from 2019 quietly updates fields nobody's tracking.

By the time anyone points an AI agent at the system, the metadata layer looks less like a blueprint and more like sediment. The agent doesn't see your business — it sees the geological record of every admin who ever touched the org.

You can't agent your way out of that. You have to surface it.

How to run an AI readiness audit

A real AI readiness audit checks the substrate, not just the surface. Here's the seven-point version we recommend running before any production AI deployment.

1. Metadata inventory

Map everything. Every object, field, flow, validation rule, Apex class, trigger, integration, and managed package. Not a sample. Everything. If your audit doesn't surface "we found 1,600 fields nobody's used in three years," it isn't comprehensive.
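As a rough sketch of what "comprehensive" means in practice, the check below scans an exported field inventory for fields with no recorded use in three years. The inventory shape (object, name, last-used date) is a hypothetical simplification; a real export from the Salesforce Metadata or Tooling APIs carries far more attributes.

```python
from datetime import date, timedelta

# Hypothetical shape of an exported field inventory; real exports
# carry many more attributes per field.
fields = [
    {"object": "Lead", "name": "Legacy_Score__c", "last_used": date(2021, 4, 2)},
    {"object": "Lead", "name": "Status", "last_used": date(2025, 5, 1)},
    {"object": "Account", "name": "Old_Region__c", "last_used": None},
]

def stale_fields(inventory, cutoff_days=3 * 365, today=None):
    """Return fields with no recorded use inside the cutoff window."""
    today = today or date.today()
    cutoff = today - timedelta(days=cutoff_days)
    return [
        f for f in inventory
        if f["last_used"] is None or f["last_used"] < cutoff
    ]

for f in stale_fields(fields, today=date(2025, 6, 1)):
    print(f"{f['object']}.{f['name']}")
```

The inventory is only as good as its coverage: run the same pass over flows, validation rules, Apex classes, and triggers, not just fields.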

2. Definitional consistency check

Pick five business terms that matter: qualified, customer, active, churned, opportunity. Find every place they're defined. If those definitions don't match, your AI inherits the contradictions and amplifies them.
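A minimal sketch of the consistency check: group every automation by the definition it actually uses for a term. More than one group means the term has drifted. The catalog below is hypothetical; in practice it comes from parsing flow metadata, report filters, and Apex.

```python
# Hypothetical catalog mapping each automation to the filter logic
# it uses for the term "qualified".
definitions = {
    "Flow: Lead Routing":      "Lead.Status = 'MQL'",
    "Flow: Rep Notifications": "Lead.Status = 'MQL'",
    "Report: Pipeline Review": "Lead.Score__c >= 80",
    "Apex: QuotaRollup":       "Lead.Status = 'SQL'",
}

def consistency_report(defs):
    """Group sources by the definition they use; >1 group means drift."""
    groups = {}
    for source, logic in defs.items():
        groups.setdefault(logic, []).append(source)
    return groups

report = consistency_report(definitions)
print(f"'qualified' has {len(report)} competing definitions")
```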

3. Dependency mapping

For every important field and flow, know what touches it upstream and what depends on it downstream. Without this, every agentic action is a coin flip. The point of mapping isn't documentation theater — it's so an agent (or a human) can answer "what breaks if I change this?" before the change ships.
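"What breaks if I change this?" is a graph traversal once the dependency edges exist. The sketch below walks every downstream component of a field; the edges shown are hypothetical stand-ins for what a real metadata crawl would produce.

```python
from collections import deque

# Hypothetical dependency edges: "A feeds B" means B reads A or is
# triggered by changes to A.
feeds = {
    "Lead.Score__c":         ["Flow: Lead Routing", "Report: Pipeline"],
    "Flow: Lead Routing":    ["Lead.Owner", "Apex: AssignTerritory"],
    "Apex: AssignTerritory": ["Account.Territory__c"],
}

def blast_radius(component, graph):
    """Collect everything downstream: what breaks if this changes."""
    seen, queue = set(), deque([component])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(blast_radius("Lead.Score__c", feeds)))
```

The same traversal, run in reverse, answers the upstream question: what feeds this field, and can an agent trust it.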

4. Logic and drift assessment

What automation is actually up and running? What was built three years ago and forgotten? What's contradicting what? Salesforce's native audit trail covers about six months — anything older is invisible unless you've been tracking it. Drift is where AI quietly starts being wrong.
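Drift only becomes visible when you diff snapshots over time. A bare-bones version, assuming you've been capturing periodic inventories of active automation (the snapshot contents here are invented for illustration):

```python
# Two hypothetical snapshots of active automation and their versions,
# one from a prior audit, one from today.
last_quarter = {"Flow: Lead Routing": "v3", "Trigger: LeadTrigger": "v1"}
today        = {"Flow: Lead Routing": "v5", "Flow: Rep Alerts": "v1"}

def drift(old, new):
    """Diff two snapshots: what appeared, vanished, or changed."""
    return {
        "added":   sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }

print(drift(last_quarter, today))
```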

5. Permissions and governance review

Who can do what, through what mechanism? Profiles, permission sets, sharing rules, ABAC policies, agent permissions, connected app scopes. AI raises the stakes here because agents act fast. A misconfigured permission that used to be a slow leak becomes a fast one.
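In Salesforce, effective access is the union of everything granted across a user's profile and permission sets, so the review has to look at the union, not each grant in isolation. A toy illustration with invented grant names:

```python
# Hypothetical grants; real ones span object, field, and system
# permissions plus sharing rules and connected app scopes.
grants = {
    "Profile: Sales User":     {"Opportunity.read", "Opportunity.edit"},
    "PermSet: Forecast Admin": {"Opportunity.delete"},
    "PermSet: Agent Actions":  {"Lead.read", "Lead.edit"},
}

def effective_access(assigned, all_grants):
    """Union every assigned grant into one effective permission set."""
    perms = set()
    for name in assigned:
        perms |= all_grants.get(name, set())
    return perms

agent = effective_access(
    ["Profile: Sales User", "PermSet: Forecast Admin"], grants
)
print(sorted(agent))
```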

6. Reversibility and audit trail

If an agent makes a wrong call at 2 a.m., what's the recovery path? Can you reconstruct what it did, why it did it, and which downstream systems were affected? If the answer is "we'd have to dig through logs for a week," you're not ready to give an agent write access.
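One way to make that recovery path concrete is an action ledger: every agent write records the prior value, so a rollback plan is just the ledger replayed in reverse. The record shape below is an assumption for illustration, not a Salesforce or Agentforce API.

```python
from datetime import datetime, timezone

ledger = []

def record_action(agent, record_id, field, old, new):
    """Log a write with enough context to audit and reverse it."""
    ledger.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "agent": agent, "record": record_id,
        "field": field, "old": old, "new": new,
    })

def undo_plan(record_id):
    """Reverse order of writes gives the rollback sequence."""
    return [
        (e["field"], e["old"])
        for e in reversed(ledger) if e["record"] == record_id
    ]

record_action("quote-bot", "006xx01", "StageName", "Negotiation", "Closed Lost")
print(undo_plan("006xx01"))
```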

7. Continuous freshness check

Your audit isn't done when the report ships. Systems drift the day after you finish. Whatever process you use to audit needs to be repeatable, ideally continuous — because agents will be running against whatever state your system is in today, not the state you audited last quarter.

The piece nobody talks about

Most AI readiness conversations focus on the model, the data warehouse, or the use case. Those matter. They're not where deployments fail.

Deployments fail at the metadata layer — the place where your business logic lives, where definitions are made and broken, where agents have to find their footing. You can have the best model on earth, the cleanest warehouse, and a beautifully scoped use case, and still ship an agent that improvises wrong because the context layer underneath it is invisible or stale.

Getting that right is what we mean by an agentic layer: a continuously indexed model of how your systems actually work, one your team and your agents both operate inside. Not a snapshot. Not a documentation site. A live substrate.

Before you deploy

If you're staring down a production AI launch, run the audit before the launch — not after the first incident.

The seven points above will tell you, honestly, whether your systems are ready to be acted on by something that doesn't ask permission.

Most enterprises aren't, the first time. That's fine if you find out before you deploy.

But it's expensive if you find out after.

Mat Kennedy, Sweep engineer