Teaching Agents Like You'd Teach a New Hire (Because That's Literally What You're Doing)
You wouldn't throw a new hire into an enterprise account on day one with nothing but a login and a "good luck." No onboarding doc. No playbook. No explanation of why your team prices things the way it does, or which fields in Salesforce actually matter, or what happens after a contract is signed. You wouldn't do that because the outcome is obvious: they'd make confident, well-intentioned mistakes at speed, and you'd spend weeks cleaning up after them.
So why is that exactly what most companies are doing with AI agents?
The Enablement Problem Wearing a Technology Costume
I wrote a few months ago about the enablement gap in our industry — how companies refuse to hire teachers, refuse to be teachers, and then act surprised when their people go looking for answers in public forums instead of internal knowledge bases. The argument was simple: enablement isn't a luxury. It's strategy.
That argument just got significantly more urgent, because now the "new hire" is a machine that can execute at scale before anyone notices it's doing the wrong thing.
Here's what I keep seeing in client environments: a company decides to deploy Agentforce or some other agentic AI tool. They've read the press releases. They've sat through the demos. They're excited. And the first thing they do is point the agent at their existing Salesforce org — the same org with inconsistent field usage, tribal knowledge trapped in three people's heads, and a process map that was last updated during the Obama administration — and expect intelligence to emerge.
It doesn't.
Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027, citing escalating costs, unclear business value, and inadequate risk controls. And the failure mode is almost never "the AI isn't smart enough." It's that the AI was dropped into an environment it was never equipped to navigate. Fragmented systems. Brittle workflows. Decades of accumulated process debt that nobody documented because the humans knew how to work around it.
This isn't hypothetical. Salesforce itself forecast fiscal 2026 revenue below Wall Street expectations earlier this year, weighed down by slower-than-expected Agentforce adoption. As Rebecca Wettemann, CEO of Valoir, put it: companies aren't writing blank checks until they see Agentforce actually work. The technology isn't the bottleneck. Readiness is.
Sound familiar? It should. It's the same reason your new hires struggle. The only difference is that a new hire will eventually corner someone in the hallway and ask. An agent won't. It'll just do its best with what you gave it — which, if what you gave it is garbage, means you now have garbage at scale.
A Prompt Is Just an Onboarding Document for a Machine
Let's make this concrete. In Agentforce, you control agent behavior through three building blocks: topics, instructions, and actions. These aren't abstract AI concepts. They're the same things you'd put in a new hire's onboarding packet, just formatted differently.
Topics are the agent's job description. What is this agent responsible for? What's in scope and what's not? If you can't clearly articulate that for a machine, I'd wager you can't clearly articulate it for a person either. And that's not an AI problem. That's an organizational clarity problem you've been living with for years.
Instructions are the playbook. When a customer asks about pricing, what do you do? When a lead comes in from this channel versus that channel, how does routing work? What are the exceptions? What are the edge cases? This is where most implementations fall apart, because — and I cannot stress this enough — most companies do not have this written down. They have it stored in the heads of their senior reps. They have it in the muscle memory of the ops person who's been there since the Series A. They have it everywhere except somewhere an agent can access it.
Actions are the tools the agent can use. These are relatively straightforward — API calls, Salesforce automations, external system integrations. The technology part. And predictably, this is the part companies spend 90% of their implementation time on, because it feels like "real work." Building the connectors. Configuring the integrations. Writing the code.
Meanwhile, the instructions — the part that determines whether the agent actually does the right thing — get written in an afternoon by someone who's already moved on to the next project.
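To make that imbalance concrete, here's a minimal sketch in plain Python (this is not the actual Agentforce configuration format; the topic name, instructions, and action names are all hypothetical) of what the three building blocks amount to when you put them side by side:

```python
from dataclasses import dataclass, field

@dataclass
class AgentTopic:
    """One topic: a job description, plus its playbook and its tools."""
    name: str                                              # what the agent owns
    scope: str                                             # what's in and out of bounds
    instructions: list[str] = field(default_factory=list)  # the playbook: the judgment calls
    actions: list[str] = field(default_factory=list)       # the tools: API calls, automations

# A hypothetical pricing topic. Note the ratio: the actions are short and
# mechanical; the instructions carry all the institutional judgment.
pricing = AgentTopic(
    name="pricing_inquiries",
    scope="Answer pricing questions for current products; route custom "
          "or legacy contract questions to a human.",
    instructions=[
        "Quote list price from the price book; never improvise a discount.",
        "Accounts acquired before 2021 do not get the Standard tier; escalate.",
        "If a contract has custom terms, hand off to the account owner.",
    ],
    actions=["lookup_price_book", "create_escalation_case"],
)

print(f"{len(pricing.actions)} actions, {len(pricing.instructions)} instructions")
```

Two lines of actions, a growing list of instructions. If your draft has the opposite ratio, you've built the connectors and skipped the onboarding packet.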
This is the enablement problem in a new costume. We've always been bad at documenting how work actually gets done. We've always been bad at transferring institutional knowledge. We've always treated onboarding as a checkbox rather than an investment. And it was expensive before. According to SHRM, companies with weak onboarding programs lose 25% of all new employees within the first year. Gallup found that only 12% of employees strongly agree their organization does a great job with onboarding. And the cost per failed hire? Enboarder's research puts it at $25,000 according to HR managers and closer to $50,000 according to C-level executives — while CareerBuilder reports that 41% of businesses say a single bad hire cost them at least $25,000.
But at least a failed human hire is one person. A failed agent deployment is that same bad onboarding applied to every customer interaction the agent touches, simultaneously, at machine speed.
The Tribal Knowledge Tax
Here's a pattern I see in almost every RevOps engagement we run at Alternative Partners: there is a person — sometimes two or three people — who hold the entire operation together through sheer institutional memory. They know that the "Standard" pricing tier doesn't actually apply to accounts acquired before 2021. They know that one particular integration breaks if a certain field is left blank, so they always fill it in manually. They know that when the CRM says "Closed Won," it doesn't really mean closed won until finance confirms the PO.
None of this is documented. It lives in their heads. And the business runs on it every single day.
When you deploy an AI agent into that environment, you are deploying it without access to any of that knowledge. The agent will apply the "Standard" pricing tier to everyone, because that's what the system says. It won't fill in the field that prevents the integration from breaking, because nobody told it to. It will treat "Closed Won" as closed won, because that's what the label says.
And here's the part that should keep you up at night: it will do all of this confidently, at volume, without raising its hand to ask if something feels off.
The tribal knowledge tax isn't new. But it used to be a slow leak — new hires ramping for six months before they figured things out, a few dropped balls during the learning curve, the occasional billing error that got caught in QA. Annoying, but survivable.
With agents, the leak becomes a flood. Every undocumented exception, every informal workaround, every piece of knowledge that exists only in someone's head becomes a failure mode that an agent will hit repeatedly, without learning from the mistake, until someone notices and manually intervenes.
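The only durable fix is to write the tribal knowledge down in a form that can be checked. As a sketch (the rules come straight from the examples above; the field names are hypothetical placeholders), here's what those three pieces of institutional memory look like once they're explicit:

```python
# Three pieces of tribal knowledge from above, written down as explicit,
# testable rules. Field names are hypothetical.

def effective_pricing_tier(account: dict) -> str:
    # "Standard" doesn't apply to accounts acquired before 2021,
    # no matter what the system says.
    if account["tier"] == "Standard" and account["acquired_year"] < 2021:
        return "Legacy"
    return account["tier"]

def is_really_closed_won(opportunity: dict) -> bool:
    # "Closed Won" doesn't mean closed won until finance confirms the PO.
    return opportunity["stage"] == "Closed Won" and opportunity.get("po_confirmed", False)

def prepare_for_sync(record: dict) -> dict:
    # The integration breaks when this field is blank, so the ops person
    # fills it in by hand. Now the workaround is code, not memory.
    record.setdefault("sync_region", "default")
    return record

print(effective_pricing_tier({"tier": "Standard", "acquired_year": 2019}))  # Legacy
print(is_really_closed_won({"stage": "Closed Won"}))                        # False
```

Whether these end up as agent instructions, validation rules, or test fixtures matters less than the fact that they now exist somewhere other than someone's head.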
What "AI Readiness" Actually Means
The industry loves to talk about AI readiness in terms of technology. Is your data clean? Are your APIs documented? Do you have the right integrations? Those things matter. But they're table stakes.
Real AI readiness is an enablement question. It's whether your organization has done the hard, unglamorous work of documenting how things actually work — not how the process map says they work, not how the training deck from 2019 says they work, but how they actually work today, including the workarounds, the exceptions, and the judgment calls.
If you've done that work — if you have clear, current, well-maintained process documentation — then deploying an AI agent is a configuration exercise. You already have the playbook. You just need to translate it.
If you haven't done that work, then deploying an AI agent is an expensive way to discover everything you don't have written down. You'll learn a lot. It'll just cost you in customer experience, data quality, and trust while you're learning.
The data backs this up. Brandon Hall Group found that organizations with a strong onboarding process improve new hire retention by 82% and productivity by over 70%. The same logic applies to your agents: the quality of the "onboarding" — the instructions, the documentation, the institutional context — directly determines the quality of the output.
Where to Start
If you're reading this and realizing that your organization is about to hand an agent the equivalent of a blank onboarding packet, here's where I'd tell you to begin:
Audit your tribal knowledge before you touch the technology. Sit down with the three to five people who really run your revenue operations — not the people with the titles, the people who actually know where the bodies are buried — and document what they know. Every exception. Every workaround. Every "oh, we don't do it that way anymore, but the system still says..." moment. This is your agent's real instruction set.
Map the full process, including the parts you're embarrassed about. Your process map probably looks clean on a whiteboard. The reality involves spreadsheets, manual steps, Slack messages that serve as approvals, and at least one critical handoff that happens via email because nobody ever built the integration. An agent will expose every one of these gaps. Better to find them yourself first.
Write the instructions like you're onboarding someone who's never worked here. Not someone who's never worked in your industry — someone who's never worked here. The difference matters. Industry knowledge is general. Institutional knowledge is specific. Your agent needs the specific stuff.
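The difference is easy to see side by side. A hedged illustration (both instructions are invented):

```python
# Industry knowledge: generic, and the model already has it. Useless as an instruction.
too_generic = "Follow standard SaaS renewal best practices."

# Institutional knowledge: specific to this company. This is what the agent needs.
specific = (
    "Accounts acquired before 2021 keep their original contract terms; "
    "the 'Standard' tier in the CRM does not apply to them."
)
```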
Test with your most complicated scenarios, not your simplest ones. Everyone demos the happy path. Your agent's instructions need to handle the customer who's on a legacy contract with custom terms, being billed on a non-standard cycle, with a discount that was supposed to expire two quarters ago. That's the scenario that will break things. Test it first.
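In practice, that means writing the hard case as a test before the happy path. A minimal sketch, assuming a hypothetical quote_renewal function standing in for whatever behavior your agent's instructions drive:

```python
def quote_renewal(account: dict) -> dict:
    """Hypothetical quoting logic standing in for the agent's behavior."""
    discount = 0.0 if account.get("discount_expired") else account.get("discount", 0.0)
    return {"tier": account["tier"], "discount": discount}

def test_legacy_contract_with_expired_discount():
    # The hard case: legacy contract, custom terms, non-standard billing,
    # and a discount that should have expired two quarters ago.
    hard_case = {
        "tier": "Legacy",
        "custom_terms": True,
        "billing_cycle": "non-standard",
        "discount": 0.15,
        "discount_expired": True,
    }
    quote = quote_renewal(hard_case)
    assert quote["discount"] == 0.0, "expired discounts must not carry forward"

test_legacy_contract_with_expired_discount()
print("hard case passes")
```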
Treat the instructions as a living document. This is the same mistake companies make with human onboarding — they build the deck once and never update it. Your agent's instructions will need to evolve as your processes change, your products change, and your edge cases multiply. If nobody owns maintaining the instructions, they'll be outdated within a quarter.
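Ownership becomes enforceable when the instructions carry metadata. A sketch, assuming each instruction records an owner and a review date (all values hypothetical):

```python
from datetime import date, timedelta

# Instructions with an owner and a review date, so staleness is
# detectable instead of silent. All values are hypothetical.
INSTRUCTIONS = [
    {"rule": "Escalate pricing questions on pre-2021 accounts.",
     "owner": "revops@example.com", "last_reviewed": date(2025, 1, 15)},
    {"rule": "Treat Closed Won as provisional until the PO is confirmed.",
     "owner": "finance-ops@example.com", "last_reviewed": date(2024, 6, 1)},
]

MAX_AGE = timedelta(days=90)  # one quarter, per the point above

for item in INSTRUCTIONS:
    if date.today() - item["last_reviewed"] > MAX_AGE:
        print(f"STALE: {item['rule']!r} (ping {item['owner']})")
```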
The Bigger Point
This isn't really about AI. It's about something this industry has been bad at for a long time: taking enablement seriously.
We've always treated documentation as overhead. We've always assumed that smart people will figure it out. We've always valued building over teaching. And the cost of that was real but diffuse — slower onboarding, inconsistent execution, knowledge walking out the door when people leave.
AI agents just turned the volume up. The same organizational weaknesses that made human onboarding mediocre will make agent deployments fail. The same investment in documentation, process clarity, and institutional knowledge transfer that makes humans effective will make agents effective.
If you won't hire teachers, be a teacher. That advice hasn't changed. It just applies to a much larger classroom now.