Foundational Protocol

Own the Playbook, Rent the Tech

Your processes are the asset, not the AI tools you run them in. The playbook-first approach - a tool-agnostic AI SOP - means your operations survive when the tech changes.

There’s a pattern we see in many teams that adopted AI early. They jumped in, made real progress fast, and then hit a ceiling they can’t explain.

They’ve built GPTs, saved skills, set up automations. They’ve spent months refining outputs and training the AI to understand their work. By any reasonable measure, they’re ahead. But things break and they’re not sure why. They have a nagging sense that what they’ve built is fragile.

They’re doing everything that seemed right… and yet, they’ve plateaued.

The reason, almost always, is that they put the value in the wrong place.

Most teams adopt AI backwards.

They pick a tool, learn the tool, build their processes inside the tool. The tool becomes the foundation. Their thinking, their standards, their process logic: it all ends up living inside a product they don’t control.

This seems obvious once you say it, but it took us a while to see it clearly: the people who are most stuck are often the ones who went furthest, fastest.

They built a lot inside the tool layer and almost nothing in the thinking layer.

The tool layer and the thinking layer.

There are two layers to any AI-powered process. There’s the tool layer: which AI tool you’re using, which automation moves the pieces, which databases you rely on, and where your documents are stored. But then there’s the thinking layer: what are we trying to do, what are the steps, what counts as good.

The thinking layer is the asset. We call it a playbook. You might call it an SOP, a runbook, or a process doc.

A playbook is the structured capture of how work gets done: inputs, steps, outputs, decision logic, quality standards, written down in a document you own. And we’re serious when we say a document - we write our playbooks in Google Docs and they work just fine.

If the thinking layer is the asset, what’s the tool layer? Think of it like a rental.

You feed the playbook to whichever AI you’re using as context and instructions. If the playbook is written well, the tool is interchangeable.

Operations people already know this.

Processes should be documented. And a good process works even when the main person is out sick and someone new has to step in and execute (in AI terms: the tool changed).

But something strange happens with AI: teams skip the documentation step because the tool feels like it’s doing the thinking for them. It’s not.

When you prompt AI and get it to do a task, you’re still the one who decided what “good” looks like. You refined the output until it matched your standards.

But because you built it inside the tool, iteratively, through conversation and feedback, you never wrote any of it down. And you were more likely to be reactive in your process definition than proactive.

Don’t believe us? Open up the instructions of a GPT or a skill, or the memory in one of these tools, and start counting how many odd lines, rules, or decisions show up.

That’s why we start by writing in a Google Doc for any playbook we create.

The act of slowing down and thinking - what is the process you actually want AI to follow - is where something interesting happens. When you force yourself to write down the inputs, the steps, the standards, you have to name decisions that were previously just instinct.

Most teams discover things about their own process they didn’t know, identify gaps they couldn’t see, and get stronger operations as a result. In these cases, the documentation doesn’t just protect the process. It improves it.

And there’s a bonus, though it’s not the main point: because your playbooks live in documents you control, you’re never locked into a tool. If something changes, you upload your playbooks into a new tool and keep moving.

Spot if your team is falling into this trap.

If you have shared AI tools, pick one and ask someone on your team to explain how it works without running it. Not what it produces. How it works. What decisions it makes, what it checks for. If the best they can do is open the tool and show you, that’s your answer. The process logic is inside the product, not inside your organization.

If each person runs their own AI tools, look at where the AI knowledge actually lives. Is it in one person’s account? In saved skills tied to individual logins? In chat threads only one person knows exist? If that person left tomorrow, ask yourself honestly what you’d lose. Most teams find the answer uncomfortable.

Automations are the same. Can anyone on the team explain the full chain of a Zapier or n8n workflow? Could you extend it if you needed to? There’s a version of this where nobody wants to touch the automation, and everyone calls that “stable.” It’s not stable. It’s fragility disguised as reliability.

The most revealing test, though, is to open up your AI’s memory or instructions or saved context and start counting. How many odd rules, preferences, and decisions are in there that nobody consciously put there? That’s your thinking, absorbed into a tool over months of use. It’s doing something. You’re just not sure what.

The pattern across all of these is the same. The work happened inside the tool, iteratively, and nobody pulled it out. The value accumulated in a place you can’t see, can’t inspect, and can’t hand to anyone else.

Fix your foundation to own your playbooks.

Once you start pulling thinking out of tools and into playbooks, you’ll probably start seeing how fragile those processes were. This is your next opportunity: fix the foundation of your AI operations now by making sure your playbooks get specific enough.

How do you do that? We use something we call the Smart Intern Check. Could a smart intern follow these instructions? Not an expert. Not someone who already knows how you work. A sharp person who’s never done this before, reading your written steps for the first time.

The difference is easy to see in practice. “Use the Zapier integration to automatically create a task from the Claude output” is implementation advice tied to one specific tool stack. A smart intern can’t do anything with that.

But “create one task per idea in the ‘Team Board’ in ClickUp, with a descriptive 5-10 word title, assigned to Rachel for review, and the content idea in the description” is something anyone could follow.

This test also catches tool dependency. If instructions only make sense inside one specific AI or automation tool, a smart intern won’t be able to follow them.

If a playbook works better in one tool than another, the instructions aren’t clear enough.

We tell teams: get that tool to help you tighten the playbook up. The goal is instructions so clear that the tool genuinely doesn’t matter. Strong operations are built on reliable processes. AI operations are no different.

A simple way to avoid starting from scratch.

If you’re already using AI for various workflows, start there.

  1. Ask the AI to write down what it does: what inputs it expects, what steps it follows, what standards it applies.
  2. The AI produces a first draft of its own playbook.
  3. Then you inspect it. Is that actually what you want it to do? Is anything missing? Is anything wrong?
  4. That inspected, corrected version becomes the playbook. It lives in a document now. Anyone on the team can read it, improve it, or hand it to a different tool.
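For teams comfortable with a few lines of script, the “is anything missing?” part of step 3 can even be partially automated. Here’s a minimal sketch in Python, assuming the five playbook sections named earlier appear as labeled headings; the section names and the draft below are purely illustrative, not a prescribed format:

```python
# Illustrative only: check a playbook draft for the five sections
# this article names (inputs, steps, outputs, decision logic,
# quality standards). Adapt the list to your own playbook format.
REQUIRED_SECTIONS = [
    "Inputs",
    "Steps",
    "Outputs",
    "Decision logic",
    "Quality standards",
]

def missing_sections(playbook_text: str) -> list[str]:
    """Return the required sections the draft doesn't mention yet."""
    lower = playbook_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lower]

draft = """
Inputs: raw meeting notes
Steps: summarize, then extract action items
Outputs: one task per action item
"""
print(missing_sections(draft))  # ['Decision logic', 'Quality standards']
```

A check like this only catches missing headings, not vague content; the human inspection in step 3 still does the real work.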

The most common objection is “the AI already knows what to do, why would we rewrite it?” Because what the AI “knows” is opaque. You’re hoping it remembers. A playbook means you know. And once it’s written down, you can actually improve it deliberately, which is where the compounding value of good operations lies.

The hard part isn’t the writing. It’s the shift: accepting that the work you did inside the tool needs to come out and live somewhere you own.

Building a full playbook-first operation, where every new workflow starts in a document before it ever touches a tool, is a bigger transformation - but it’s what we see as the true path to AI Momentum. Reach out to us to learn more.

This is a Foundational protocol. It changes the way you work, not just the way you think about work.

The question it teaches you to ask: “Where do our AI operations actually live right now, and do we own them?”

Know someone dealing with this? If you’ve got a colleague who’s been “training” their AI for months and still feels like it could fall apart, send them this. Or someone whose team has plateaued and can’t figure out why. The answer is almost always the same: the value is in the wrong place.

Read next: Protocol: The Five AI Ops Roles

Figure out your path: Who Is This For?

Stay connected:

I lead a team and want to build an AI-capable organization. [Subscribe to Leading Momentum]

I’m operationally minded and want to build AI operations skills. [Subscribe to the Operator newsletter]
