Your AI strategy is a deck, not a system
Most AI strategies we review are a list of use cases with a timeline. That is not a strategy. A strategy is a set of bets with kill criteria.
We reviewed a lot of AI strategies in early 2024. Decks, mostly. 30-60 slides. A few had Notion docs. One had a Miro board the size of a city block.
They all looked roughly the same. A list of use cases. A vendor comparison matrix. A timeline with phases — “quick wins” in Q1, “medium-term” in Q2-Q3, “transformative” in Q4. Maybe a slide about responsible AI. Maybe a slide about data governance. Definitely a slide about ROI projections that nobody believed.
These were not strategies. These were wishlists with a Gantt chart.
What a strategy is not
A strategy is not a list of things you could do with AI. Every team can generate that list. The list is infinite. You could summarize documents, classify tickets, generate emails, power search, automate onboarding, build a chatbot, score leads, detect anomalies, predict churn. The list writes itself — and that’s the problem.
A list of use cases is the starting point, not the strategy. The strategy is what you cut from the list, and why.
A strategy is not a vendor comparison. “We evaluated OpenAI, Anthropic, Google, and Cohere and selected X” is a procurement decision, not a strategy. The model is a component. It will change. Your strategy should survive a model swap.
A strategy is not a timeline. “We’ll do RAG in Q1, agents in Q2, fine-tuning in Q3” is a project plan. Project plans are useful. They are not strategies. A strategy tells you what to do when the plan falls apart — and it will fall apart, because this is AI and the ground is moving under your feet.
What a strategy actually is
A strategy is a set of bets with kill criteria.
A bet has three parts. What you’re building. How you’ll know if it works. When you’ll kill it if it doesn’t.
That’s it. Most of the decks we reviewed had the first part — what you’re building. Almost none had the second — how you’ll know if it works. And we never saw the third — when you’ll kill it.
The absence of kill criteria is the tell. It means the organization has not confronted the possibility that any of these initiatives might fail. And in AI, the failure rate is high. Not because the technology doesn’t work — it often does — but because the integration is hard, the data is messy, the use case doesn’t generate the value you expected, or the users don’t adopt it.
The three questions
Every AI initiative should answer three questions before it starts:
What is the success metric? Not “we’ll improve efficiency.” A number. “We’ll reduce average ticket resolution time from 14 minutes to 9 minutes.” Or “we’ll increase the percentage of customer queries resolved without human escalation from 30% to 50%.” If you can’t name a number, you’re not ready to build.
What is the measurement timeline? When will you check? Not “at the end of the project.” A date. “We’ll measure after 4 weeks of production traffic.” This forces you to define what “production traffic” means, which forces you to define what “launched” means, which forces a dozen useful conversations you’d otherwise skip.
What are the kill criteria? Under what conditions do you stop? “If we haven’t hit 40% of target improvement after 6 weeks, we stop and reallocate the team.” This is the one that hurts. This is the one that separates a strategy from a wishlist. A wishlist never dies. A strategy has conditions under which you walk away.
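The three answers are small enough to write down as data instead of slideware. A minimal sketch in Python, using the escalation example from above; the field names, dates, and thresholds are illustrative, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Bet:
    name: str          # what you're building
    metric: str        # the success metric, as a number you can measure
    baseline: float    # where the metric is today
    target: float      # where it has to get to
    measure_on: date   # the date you check, committed to in advance
    kill_below: float  # stop if the measured value is below this on that date

self_serve = Bet(
    name="Self-serve query resolution",
    metric="% of queries resolved without human escalation",
    baseline=30.0,
    target=50.0,
    measure_on=date(2024, 6, 1),  # hypothetical: after 4 weeks of production traffic
    kill_below=38.0,              # 40% of the 20-point target improvement
)
```

Writing it this way forces the awkward questions (what is the baseline? who measures it?) before anyone ships anything.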
Why kill criteria matter
Without kill criteria, AI projects become zombies. They don’t die. They don’t succeed. They linger. The team keeps working on them because nobody explicitly decided to stop. The feature is in production but nobody’s using it. The model is running but the results aren’t good enough to trust. The dashboard exists but nobody looks at it.
We saw this pattern repeatedly. A team ships an AI feature. Adoption is low. Quality is mediocre. But the initiative was on the roadmap, it was in the strategy deck, the VP mentioned it in the all-hands. So it stays. It absorbs engineering time. It creates maintenance burden. It blocks the team from working on something that might actually work.
Kill criteria prevent this. They make it safe to stop. They make stopping an expected outcome — not a failure, but a planned checkpoint. “We said we’d measure at 6 weeks and kill it below threshold X. We’re below threshold X. We’re stopping. That’s the system working.”
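In code terms, the checkpoint is a comparison, not a meeting. A sketch that reuses the hypothetical Bet record from the previous section:

```python
from datetime import date

def checkpoint(bet: "Bet", measured: float, today: date) -> str:
    # Assumes the Bet dataclass from the earlier sketch is in scope.
    # The thresholds were written down before work started; they decide.
    if today < bet.measure_on:
        return "too early: wait for the date you committed to"
    if measured < bet.kill_below:
        return "kill: below threshold; stopping is the system working"
    if measured >= bet.target:
        return "success: target hit"
    return "continue: above the kill line, short of target; set the next checkpoint"
```

The decision was made before anyone had a result to defend. On the review date, you just run it.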
The portfolio view
Once you have individual bets with kill criteria, you can think about the portfolio.
A good AI portfolio has a mix of bets at different risk levels. A few near-certain things — classification, extraction, structured output. These have well-defined inputs and outputs, they’re easy to eval, and the failure mode is obvious. They build organizational muscle.
A few medium-risk things — RAG, summarization, conversational interfaces. These require more integration work, the eval is harder, and the failure modes are subtle. They’re where the real value often is.
And maybe one high-risk thing — something genuinely novel, where you’re not sure it’ll work. An agent workflow, a generative feature, something creative. This is the one that might be transformative — or might be a waste. That’s why it needs the clearest kill criteria of all.
The portfolio view also helps you sequence. You don’t start with the high-risk bet. You start with the near-certain things, because they build the infrastructure — evals, deployment pipelines, monitoring — that the harder bets need. Teams that jump straight to the ambitious use case skip the boring work that makes ambitious work possible.
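One way to keep the mix honest is to make it checkable. A toy sketch, with made-up initiatives and risk tiers:

```python
from collections import Counter

# Hypothetical portfolio, tagged by risk tier.
portfolio = {
    "ticket classification": "near-certain",
    "invoice field extraction": "near-certain",
    "support-docs RAG": "medium",
    "call summarization": "medium",
    "refund agent workflow": "high",
}

mix = Counter(portfolio.values())
assert mix["near-certain"] >= 2, "start with bets that build the boring infrastructure"
assert mix["high"] <= 1, "one moonshot at a time, with the clearest kill criteria"
```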
The org problem underneath
The reason most strategies are decks instead of systems is organizational, not technical.
Building a deck is a planning exercise. One person or a small team can do it. It requires research, synthesis, maybe some vendor conversations. It produces a deliverable that looks good in a leadership review.
Building a system — bets, metrics, kill criteria, portfolio management — is a governance exercise. It requires cross-functional alignment. It requires someone with the authority to kill initiatives. It requires ongoing measurement and honest reporting. It requires admitting that some things aren’t working.
Most orgs aren’t structured for this. The AI strategy is owned by someone who doesn’t have kill authority. Or the metrics aren’t instrumented. Or the honest reporting culture doesn’t exist. The deck is a symptom of the org, not the cause.
The heuristic
Open your AI strategy document. For each initiative, check whether it has a success metric, a measurement date, and written kill criteria. If any of the three are missing, you don’t have a strategy — you have a deck.
The fix takes an afternoon. Sit down with the people who own each initiative. Write down the three answers. Put them somewhere visible. Review them on the date you wrote down. Kill what needs killing. That’s the system.
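The output of that afternoon can literally be a file you re-run on the review date. A sketch of the audit; the structure and field names are invented for illustration:

```python
# Each initiative gets the three answers. Anything missing one is a deck entry.
initiatives = [
    {"name": "Ticket triage", "metric": "avg resolution: 14 -> 9 min",
     "measure_on": "after 4 weeks of production traffic",
     "kill_criteria": "stop if above 12 min at 6 weeks"},
    {"name": "Lead scoring", "metric": None, "measure_on": None, "kill_criteria": None},
]

required = ("metric", "measure_on", "kill_criteria")
for item in initiatives:
    missing = [field for field in required if not item.get(field)]
    verdict = "strategy" if not missing else "deck (missing: " + ", ".join(missing) + ")"
    print(f"{item['name']}: {verdict}")
```

Anything that prints “deck” goes back to its owner for the three answers.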
tl;dr
The pattern. Most AI strategies are a list of use cases with a Gantt chart — they name what to build but never define how you’ll know if it works or when you’ll stop if it doesn’t.

The fix. For every initiative, write down three things before work starts: a specific success metric, a date you’ll measure it, and the threshold below which you kill the project.

The outcome. Initiatives that aren’t working get stopped instead of becoming zombies, teams are freed to pursue bets that might actually land, and “we have an AI strategy” means something more than a slide deck.