
Last week at Gartner D&A 2026, something happened that I've been waiting on for about three years.
The phrase "context layer" appeared in the wild. Rita Sallam called it critical infrastructure on par with cybersecurity. The opening keynote framed it as the architectural foundation that determines whether your AI investment pays off. Andres Garcia-Rodeja predicted that 60% of agentic analytics projects built on MCP alone will fail by 2028 without a real semantic layer underneath.
I'm not going to pretend I'm surprised.
This is the bet we made when we started Sweep.
But I want to say something more useful than "we told you so," because the validation is the easy part. The hard part is what comes next, and most of the conversations I'm having with CROs and CIOs right now suggest the industry is about to learn it the expensive way.
The foundations-to-tools ratio: Data point of the year
The single most important number to come out of the summit, in my view, wasn't the 60% MCP failure prediction (scary enough in its own right). It was Gartner's finding that the highest-performing AI organizations spend about 1.78x more on foundations than on tools, nearly 30% above their lower-performing peers. In those organizations, foundations consume around 60% of total AI spend.
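Those two figures hang together arithmetically. Reading "1.78x more" as a foundations-to-tools spending ratio of roughly 1.78:1, foundations take 1.78 / (1 + 1.78) ≈ 64% of combined spend, right in line with the "around 60%" number. A back-of-the-envelope check (the helper name is mine, not Gartner's):

```python
# Sanity check: a 1.78:1 foundations-to-tools ratio implies foundations
# take ratio / (1 + ratio) of the combined foundations + tools spend.

def foundations_share(ratio: float) -> float:
    """Fraction of combined spend going to foundations, given the ratio."""
    return ratio / (1 + ratio)

print(f"{foundations_share(1.78):.0%}")  # 64%, consistent with "around 60%"
```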
The companies actually getting ROI from AI are the ones who understood, before everyone else, that an agent without context is a confident liar with API access.
Most enterprises did the opposite. They deployed agents first. They wired up MCP servers to systems whose metadata nobody had cleaned up in eight years and called it transformation. Now they're staring at adoption metrics that look fine on paper while decision quality quietly degrades underneath.
This is especially true in Salesforce, and it's why we built what we built.
Here's the question I've been asking customers: where in your enterprise does the most consequential business context actually live? Not the most data. The most context — the records that touch revenue, pipeline, customer relationships, renewals, forecasts, and the operational logic your business runs on every day.
For almost every company we work with, the answer is Salesforce.
And Salesforce is also, by a wide margin, the most chaotic metadata environment in the modern enterprise. Years of admins, consultants, acquisitions, and well-intentioned automation have produced orgs where nobody — not the RevOps team, not the CIO, and certainly not an AI agent — can answer "what does this field mean, who depends on it, and what breaks if I change it?"
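The "who depends on it" half of that question is, at least in principle, machine-answerable. Here's a minimal sketch against the Tooling API's MetadataComponentDependency object (Salesforce's Dependency API, long in beta); the token, instance URL, and field Id are placeholders for your own org's values:

```python
# Minimal sketch: ask a Salesforce org "who depends on this field?"
# via the Tooling API's MetadataComponentDependency object.
import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "00D..."                             # placeholder OAuth token
FIELD_ID = "00N..."  # placeholder Id of the custom field you're about to change

soql = (
    "SELECT MetadataComponentName, MetadataComponentType "
    "FROM MetadataComponentDependency "
    f"WHERE RefMetadataComponentId = '{FIELD_ID}'"
)

resp = requests.get(
    f"{INSTANCE_URL}/services/data/v60.0/tooling/query",
    params={"q": soql},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Each record is one component (flow, report, Apex class...) that
# references the field and could break if it changes.
for dep in resp.json()["records"]:
    print(f"{dep['MetadataComponentType']}: {dep['MetadataComponentName']}")
```

And that's exactly the point: one field, one direction of the question, one API call with a hard cap on returned rows. Multiply by tens of thousands of fields, flows, and reports, and you don't have a query problem. You have a graph problem.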
This is the gap Gartner is now naming as critical infrastructure. We've just been calling it "the reason your Salesforce-connected agent is going to make a $400K mistake."
"Too confident about bad data" is the category-defining phrase
Alteryx and Gartner found that 89% of US firms increased AI spending while 28% report zero confidence in the data quality feeding those systems. That contradiction is the one the next 18 months will resolve, one way or another. The lead architect quoted in that report nailed it: in 2024, we worried about AI making things up. In 2026, the problem is AI being too confident about bad data.
A confident agent acting on bad Salesforce metadata doesn't generate a wrong dashboard. It updates an opportunity stage. It triggers a workflow. It tells a CRO that pipeline is healthy when it isn't. The blast radius of a context failure scales with the autonomy of the system consuming that context. Agentic AI made the cost of ignoring it grow with every new deployment.
What the floor told you, and what it should tell every founder building in this space
The most striking thing in the Gartner write-up, to me, was the floor reporting. Leaders showed up with specific frustrations: governance platforms with no adoption, talk-to-data deployments that don't work reliably, agents already running in production without guardrails, and no clear answer to who in the org actually owns this.
Every one of those frustrations is a symptom of the same root cause: you cannot govern, query, or automate against a system whose metadata is illegible to both humans and machines. And you cannot fix that with another tool layered on top. You fix it by making the underlying context machine-readable, business-aware, and continuously maintained… which is infrastructure work.
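To make "machine-readable, business-aware, continuously maintained" concrete, here's an illustrative sketch of what a single context record might carry. The schema and names are mine, for illustration only; this is not Sweep's actual data model:

```python
# Illustrative only: the shape of a machine-readable context record
# for one Salesforce field. A sketch of the problem, not a product schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FieldContext:
    api_name: str                  # e.g. "Opportunity.ARR__c"
    business_meaning: str          # what the field means, in plain language
    owner: str                     # the human accountable for it
    depends_on: list[str] = field(default_factory=list)   # upstream components
    consumed_by: list[str] = field(default_factory=list)  # flows, reports, agents
    last_verified: datetime | None = None  # staleness is a first-class property

ctx = FieldContext(
    api_name="Opportunity.ARR__c",
    business_meaning="Annualized contract value at close, in USD.",
    owner="revops@yourco.example",
    consumed_by=["Flow: Renewal_Alert", "Report: Q3_Pipeline", "Agent: forecast-bot"],
    last_verified=datetime(2026, 3, 2),
)
```

The schema itself is trivial. What makes it infrastructure is the last field: every one of those properties has to stay current as the org changes, automatically, or the record is just more stale documentation.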
Where this goes from here
Gartner has now given every data and AI leader the language they need to make the foundational case to their board. "Context is critical infrastructure" has become an analyst directive backed by spending data and failure predictions with dates attached.
The companies that move now, treating their Salesforce metadata, their semantic layer, and their context graph as infrastructure, are the ones that will deploy agents that are actually trustworthy. The ones that keep buying tools without building the foundation will spend 2027 and 2028 doing remediation work… and writing post-mortems.
We've been building Sweep for the world Gartner just described.
We kept watching enterprise AI projects fail in the same place, for the same reason, over and over: the system on the other end of the agent had no idea what it was looking at.
That world is here now. The category has a name. The budget exists. The only question left: are you going to build your context layer before your agents need it, or after they've already broken something?