At TDX in San Francisco, Ronen Idrisov — our CPO at Sweep — gave a talk that cut through a lot of the noise around AI in Salesforce.

Not a hype piece. Not a prediction. Just a clear look at what actually happened over the last year as teams started using AI to build inside real orgs.

The title: “From Vibe Coding to Agentic Engineering.”

What follows is a written version of that talk, condensed and lightly edited.

****

At Dreamforce last year, you could feel it in the air. Long lines at developer demos. People describing features in plain English and watching them come to life. A new phrase spreading across the ecosystem: vibe coding.

The promise was intoxicating. Describe what you want. Let AI generate the code. Don’t overthink it. Steer by feel. Accept by vibes.

For simple things, it worked. Buttons, small UI tweaks, lightweight automation—teams saw quick wins. It felt like the future had arrived early.

Then teams tried to use it in real Salesforce orgs.

That’s where things broke...

The Moment Vibe Coding Hit Reality

Salesforce environments are not blank canvases. They’re living systems. Thousands of fields, overlapping automations, legacy workflows, partial migrations, edge-case logic no one fully remembers.

Vibe coding doesn’t see any of that.

It generates in a vacuum.

And when teams started applying it to real work—migrations, automation changes, production updates—the gap became obvious. Code looked correct but didn’t follow actual org constraints. Flow XML failed to deploy. “Self-fixes” made problems worse instead of better. Teams moved faster, but only in one direction: toward more technical debt.

The deeper issue wasn’t quality. It was context.

AI wasn’t asking what already existed. It wasn’t checking dependencies. It wasn’t verifying assumptions. It was generating first and hoping the system would cooperate.

Salesforce doesn’t cooperate.

Even the Inventor Moved On

Vibe coding didn’t fail because it was a bad idea. It failed because it was incomplete.

Even the person who popularized it recognized that quickly. Within a year, the conversation shifted. The focus moved away from generation toward orchestration. Away from “just prompt it” toward structured systems.

The new term: agentic engineering.

Instead of writing code directly, engineers orchestrate agents that do the work. Instead of prompting loosely, they define specs. Instead of trusting output, they validate it.

And most importantly, instead of generating blindly, the system understands its environment before acting.

That last part is where Salesforce teams either succeed—or break production.

Why Salesforce Requires a Different Approach

Salesforce is not just code. It’s metadata.

Objects, fields, picklists, flows, Apex, integrations, permissions—everything is interconnected. Changing one element can trigger five others. Deactivating one workflow can break a downstream process that no one documented.

Any AI system that ignores this structure will fail, no matter how good the model is.

This is why agentic engineering matters more in Salesforce than almost anywhere else.

The challenge isn’t generation. It’s alignment with reality.

What Agentic Engineering Actually Looks Like in Practice

In theory, agentic engineering sounds simple: define the problem, break it into tasks, let AI execute, validate the results.

In Salesforce, that only works if every step is grounded in the org itself.

Before anything is built, the system needs to read the org. It needs to understand which objects exist, which workflows are active, what dependencies are in play, and what partial work has already been done. Without that, even the best plan is based on incorrect assumptions.
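To make the “read before you build” step concrete, here is a minimal sketch in Python. The structures and names (`OrgInventory`, the example rule and Flow names) are hypothetical stand-ins; a real implementation would populate them from the Salesforce Metadata or Tooling APIs rather than hard-coded dicts.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified view of org metadata. A real system would
# populate these structures from the Salesforce Metadata/Tooling APIs.
@dataclass
class OrgInventory:
    objects: dict = field(default_factory=dict)    # object name -> list of field names
    workflows: dict = field(default_factory=dict)  # rule name -> {"object": ..., "active": ...}
    flows: dict = field(default_factory=dict)      # flow name -> {"object": ..., "status": ...}

    def active_workflows(self, obj: str) -> list:
        """Workflow rules still firing on a given object."""
        return [name for name, w in self.workflows.items()
                if w["object"] == obj and w["active"]]

    def draft_flows(self, obj: str) -> list:
        """Partial work that already exists and must not be duplicated."""
        return [name for name, f in self.flows.items()
                if f["object"] == obj and f["status"] == "Draft"]

org = OrgInventory(
    objects={"Opportunity": ["StageName", "Amount"]},
    workflows={"Opp_Stage_Update": {"object": "Opportunity", "active": True}},
    flows={"Opp_Stage_Flow": {"object": "Opportunity", "status": "Draft"}},
)
# Any plan built without checks like these starts from wrong assumptions.
print(org.active_workflows("Opportunity"))
print(org.draft_flows("Opportunity"))
```

The point is not the data model; it is that every later step queries this inventory instead of assuming an empty org.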

Planning comes next, but not as a single prompt. Work is scoped into smaller units, often by object or feature area. Humans review the plan before anything is generated, not after. The goal is to catch mistakes at the architecture level, where they’re cheap to fix.
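The planning step can be sketched the same way: scope work into per-object units, then gate each unit behind human review before any generation happens. The task descriptions and the `approve` callback here are illustrative assumptions, not a real review workflow.

```python
# Hypothetical planning step: group raw tasks into reviewable units,
# keyed by object, so mistakes are caught at the architecture level.
def scope_plan(tasks):
    """Group task descriptions into per-object units."""
    units = {}
    for task in tasks:
        units.setdefault(task["object"], []).append(task["description"])
    return units

def review(units, approve):
    """Only units a human approves proceed to the build phase."""
    return {obj: work for obj, work in units.items() if approve(obj, work)}

tasks = [
    {"object": "Opportunity", "description": "Migrate stage-update rule"},
    {"object": "Account", "description": "Replace territory workflow"},
    {"object": "Opportunity", "description": "Merge duplicate email alert"},
]
units = scope_plan(tasks)
# Stand-in for a human reviewer rejecting the Account unit.
approved = review(units, approve=lambda obj, work: obj == "Opportunity")
print(approved)
```

Rejecting a unit at this stage costs a conversation; rejecting it after generation costs a rollback.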

Then comes the build phase. This is where most AI tools focus—but in agentic systems, generation is constrained by context. The system doesn’t guess field names. It uses real ones. It doesn’t invent picklist values. It references existing ones. It doesn’t ignore dependencies. It resolves them.
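A guardrail like that can be as simple as validating every generated reference against the real schema before it goes anywhere. The `SCHEMA` dict below is a hypothetical snapshot of org metadata; the idea is to reject an invented field or picklist value at build time instead of letting a deploy surface it later.

```python
# Hypothetical schema snapshot, as the inventory step would capture it.
SCHEMA = {
    "Opportunity": {
        "fields": {"StageName", "Amount"},
        "picklists": {"StageName": {"Prospecting", "Closed Won"}},
    },
}

def validate_reference(obj, field_name, picklist_value=None):
    """Return a list of problems with a generated field/picklist reference."""
    problems = []
    meta = SCHEMA.get(obj)
    if meta is None:
        return [f"unknown object {obj!r}"]
    if field_name not in meta["fields"]:
        problems.append(f"unknown field {obj}.{field_name}")
    elif picklist_value is not None:
        allowed = meta["picklists"].get(field_name, set())
        if picklist_value not in allowed:
            problems.append(f"invented picklist value {picklist_value!r}")
    return problems

print(validate_reference("Opportunity", "StageName", "Closed Won"))  # []
print(validate_reference("Opportunity", "Stage_Name__c"))            # unknown field
```

Constrained generation is just this check applied relentlessly: nothing the model emits is trusted until it resolves against the org.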

Finally, deployment is controlled, not hopeful. Changes are applied in atomic steps. Each unit is verified. Humans approve before anything reaches production.
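The deployment loop can be sketched as a function that applies one unit at a time and halts at the first unverified or unapproved change. The `verify` and `approve` callbacks are placeholders for real validation runs and a real human gate.

```python
# Hypothetical controlled deployment: atomic steps, verified and
# approved individually, halting instead of pushing everything at once.
def deploy(units, verify, approve):
    """Apply units in order; return (deployed, halted_on)."""
    deployed = []
    for name, change in units:
        if not approve(name):      # human gate before production
            return deployed, name
        if not verify(change):     # e.g. run validations / tests
            return deployed, name
        deployed.append(name)      # only now does the change land
    return deployed, None

units = [("Opp_Flow_v1", {"valid": True}), ("Acct_Flow_v1", {"valid": False})]
done, halted = deploy(units,
                      verify=lambda change: change["valid"],
                      approve=lambda name: True)
print(done, halted)
```

A failed second unit leaves the first one deployed and intact, which is exactly the property a hopeful all-at-once push gives up.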

Same idea as vibe coding — use AI to move faster. Completely different execution.

The Difference Shows Up Immediately

Take a common task: migrating workflow rules to Flow.

In a typical org, you might have dozens of rules across multiple objects. Some overlap. Some conflict. Some already have partial Flow equivalents.

A vibe-coded approach treats this as a single prompt. “Convert all workflow rules to Flow.” The system generates output without checking what exists. It doesn’t group by object. It doesn’t detect existing Flows. It doesn’t plan merges. It just generates.

That’s how you get broken deployments.

An agentic approach starts by inventorying everything. It identifies how many rules exist, where they live, which are active, and whether draft Flows already cover part of the logic. It groups work by object. It proposes whether to merge into existing Flows or create new ones.

A human reviews that plan.

Only then does the system generate changes, one scoped unit at a time. Each step is validated. Each deployment is deliberate.

Same task. Opposite outcome.
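The agentic first pass on that migration can be sketched as a planning function: inventory the rules, group by object, and propose merge-into-existing-Flow versus create-new-Flow for a human to review. All rule and Flow names here are invented examples.

```python
# Hypothetical migration inventory, as the read step would produce it.
rules = [
    {"name": "Opp_Alert", "object": "Opportunity", "active": True},
    {"name": "Opp_Update", "object": "Opportunity", "active": False},
    {"name": "Case_Escalate", "object": "Case", "active": True},
]
existing_flows = {"Opportunity": ["Opp_Stage_Flow"]}  # partial coverage already in the org

def propose_migration(rules, existing_flows):
    """Return a per-object plan for a human to review before any generation."""
    plan = {}
    for rule in rules:
        if not rule["active"]:
            continue  # inactive rules need a human decision, not a blind conversion
        obj = rule["object"]
        entry = plan.setdefault(obj, {"rules": [], "strategy": None})
        entry["rules"].append(rule["name"])
        entry["strategy"] = ("merge into " + existing_flows[obj][0]
                             if existing_flows.get(obj) else "create new Flow")
    return plan

print(propose_migration(rules, existing_flows))
```

Note what the plan already encodes before a single line is generated: which rules are in scope, which object each belongs to, and whether existing draft Flows change the strategy.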

Where Sweep Fits Into This Shift

This is exactly the gap we built Sweep to solve.

The missing layer in AI-assisted Salesforce work isn’t another model. It’s context.

Sweep connects directly to your org and gives AI the ability to read before it writes. It understands your metadata, your dependencies, your existing automations. It turns AI from a generator into a participant in your system.

That changes everything.

Instead of guessing, it queries. Instead of generating blindly, it plans with awareness. Instead of suggesting risky deployments, it structures changes into safe, reviewable steps.

You remain the architect. The AI becomes the builder.

The Real Lesson From the Last 12 Months

The Salesforce ecosystem has started to learn how to use AI correctly.

Vibe coding showed what was possible. It made AI accessible. It proved that natural language could drive real work.

But production systems don’t run on vibes.

They run on structure, verification, and context.

The teams that succeed with AI in Salesforce won’t be the ones who generate the fastest. They’ll be the ones who understand their systems deeply enough to guide the generation process.

So for now, read before you write. Verify before you deploy. And keep humans in the loop when decisions matter.

Think engineered, not vibed.