№ 25 scope Apr 11, 2025 · 8 min read

When to kill an AI project

The hardest decision in AI is not what to build. It is what to stop building. Here are the five signals that a project should be killed, and why most teams see them too late.


The hardest decision in AI is not what to build. It is what to stop building. Every team we work with has at least one project that should have been killed months ago. They know it. Their engineers know it. But nobody wants to say it out loud because saying it out loud means admitting the last six months were a write-off.

They were not a write-off. But the next six months will be if you keep going.

The problem with AI projects

AI projects fail differently than software projects. A software project that is going badly shows obvious symptoms — missed deadlines, broken builds, escalating bugs. An AI project that is going badly looks like progress. The model improves from 70% to 80%. Then from 80% to 83%. Then from 83% to 84.5%. The team is working hard. The demos are getting better. The charts go up and to the right.

But the charts are lying. The difference between 84.5% and the 95% you need for production is not 10 percentage points of effort. It is a fundamentally different problem — one that might require different data, different architecture, different people, or a different approach entirely. And nobody on the team wants to be the one to say that.

This is how AI projects become zombies. Not dead, not alive. Just consuming resources and producing demos.

The five kill signals

We have seen dozens of AI projects across different companies and domains. The ones that should have been killed — and eventually were — all showed at least two of these signals.

1. The accuracy plateau

You have been at 85% accuracy for three sprints. Each sprint, the team tries something new — more data, different preprocessing, a bigger model, a fancier training recipe. Each time, the needle moves a fraction of a point. Sometimes it moves backward.

This is the most common kill signal and the hardest to act on. The team is doing real work. The experiments are legitimate. But the results are asymptotic. You are converging on a ceiling that is below where you need to be.

The question to ask: is the remaining gap a problem of scale — more data, more compute — or is it a problem of kind? If adding 2x the training data moved you from 80% to 85%, you would need roughly 16x more data to get to 95%. Do you have 16x more data? Can you get it? At what cost? If the answer is no, the accuracy gap is telling you something about the problem itself, not about your execution.
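That back-of-envelope estimate can be made concrete. Here is a minimal sketch, assuming test error falls as a power law in dataset size (a common empirical pattern, but by no means guaranteed for your problem): fit the exponent to the one observed 2x jump, then extrapolate to the target.

```python
import math

def data_multiple_needed(acc_before, acc_after, target_acc):
    """Fit error ~ N^(-alpha) to a single observed 2x data increase,
    then extrapolate the multiple of the *current* dataset size
    needed to reach target_acc under that same curve."""
    err_before = 1 - acc_before   # error at 1x data
    err_after = 1 - acc_after     # error at 2x data
    alpha = math.log(err_before / err_after) / math.log(2)
    err_target = 1 - target_acc
    return (err_after / err_target) ** (1 / alpha)

# 2x data moved 80% -> 85%; how much more to reach 95%?
print(round(data_multiple_needed(0.80, 0.85, 0.95)))  # → 14
```

Under this particular curve the answer comes out near 14x, the same order as the rough 16x above. Either way the conclusion holds: the last few points are exponentially expensive.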

2. The integration wall

The model works. In a notebook. With clean data. On your test set.

Now you need to connect it to your actual systems — the ERP that exports CSV over SFTP, the CRM with a rate-limited API that returns XML, the data warehouse that updates on a 6-hour lag. The model that worked beautifully in isolation needs to handle missing fields, stale data, encoding issues, and a schema that changes without notice.

This is the integration wall. The model was the easy part. The plumbing is the hard part. And the plumbing is not an AI problem — it is a systems engineering problem that was hidden by the excitement of the model working.

The question to ask: is the integration cost proportional to the value the model delivers? If you are spending 3 months integrating a model that saves 20 minutes of manual work per day, the math does not work. Kill it or simplify the integration to something you can ship in a week.

3. The champion left

Every successful AI project has an executive sponsor — someone who fights for budget, clears organizational blockers, and shields the team from “can you also make it do X” requests. When that person leaves, gets reassigned, or loses interest, the project enters a quiet death spiral.

Nobody cancels it. The team keeps working. But the air cover is gone. Other priorities start pulling team members away. The project slips from “strategic initiative” to “that thing the ML team is doing.” Within two quarters, it is a line item that nobody can justify but nobody wants to kill.

The question to ask: who is the new champion? If you cannot name a specific person — not a team, not a function, a person with a name and a title — the project is already dead. It just does not know it yet.

4. The use case shifted

You started building a model to predict customer churn. Halfway through, the business pivoted to a new pricing model that makes churn less relevant. Or you started building a document classification system, and then the company switched document platforms and the old categories no longer apply.

The use case shifted but the project did not. The team is still building the thing they scoped six months ago because that is what the roadmap says. Nobody updated the roadmap because nobody wants to admit the original scope is no longer the right scope.

The question to ask: if you were starting from scratch today — no sunk cost, no existing code, no commitments — would you build this exact thing? If the answer is no, you are building the wrong thing. Stop.

5. The cost math broke

The pilot worked. Ten users, curated data, generous latency budget. The results were good. Everyone was excited. Then you ran the production cost model.

The pilot cost $200/month. Scaling to 10,000 users would cost $200,000/month. The business case assumed $20,000/month. The gap is a full order of magnitude, and no amount of optimization will close it. You can cache aggressively, batch requests, use a smaller model — and you might cut costs by 3x. You are still 3x over budget.

The question to ask: do the unit economics work at production scale? Not at pilot scale. Not with “future cost reductions we expect from model providers.” Does the math work today, with today’s costs, at today’s scale? If not, you are betting on the market to make your business case viable. That is a venture capital strategy, not an engineering strategy.
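The arithmetic above fits in a few lines. A hedged sketch, using the hypothetical figures from this example and the naive assumption that cost scales linearly with users:

```python
def monthly_cost(pilot_cost, pilot_users, target_users, savings_factor=1.0):
    """Linear extrapolation of pilot cost to production scale,
    divided by an assumed optimization factor (3.0 = a 3x cost cut)."""
    return pilot_cost * (target_users / pilot_users) / savings_factor

budget = 20_000                                  # business case, $/month
raw = monthly_cost(200, 10, 10_000)              # → 200,000.0 per month
optimized = monthly_cost(200, 10, 10_000, 3.0)   # caching, batching, smaller model
print(f"{optimized / budget:.1f}x over budget")  # → 3.3x over budget
```

Run the check before the pilot ships, not after. The honest version of this model uses today's per-request prices, and treats any optimization factor above 2–3x as a hope, not a plan.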

Killing is a leadership act

Killing a project is not a failure. Continuing a project that should be dead — that is a failure. Every month you spend on a zombie project is a month you are not spending on the project that would actually work.

The best AI leaders we have worked with treat project kills the same way they treat launches. They do a retro. They document what was learned. They celebrate the team for the work, not the outcome. And they move fast — the longer you wait to kill a project, the harder it gets, because the sunk cost grows and the emotional investment deepens.

How to extract value from a killed project

A killed project is not wasted work if you are intentional about what you keep.

The data. The data you collected and cleaned is almost certainly useful for something else. Label it, document it, store it somewhere accessible. Future projects will thank you.

The evals. If you built an evaluation framework — and you should have — it transfers. The methodology, the tooling, the habit of measuring things rigorously. That is organizational muscle that survives the project.

The team’s skills. The engineers who worked on the project learned things. They learned what does not work, which is often more valuable than knowing what does. They built intuition about the problem space. That intuition goes with them to the next project.

The relationships. The stakeholders you worked with, the domain experts who labeled data, the ops team that helped with integration — those relationships are assets. Maintain them. You will need them again.

The worst thing you can do with a killed project is pretend it never happened. The second worst thing is let it continue.

tl;dr

The pattern. Teams keep AI projects alive long past their expiration date because killing feels like failure — so the project becomes a zombie that consumes resources and produces demos.

The fix. Watch for the five kill signals — accuracy plateau, integration wall, champion departure, use case shift, broken cost math — and act when you see two of them.

The outcome. You free up your best people and your limited budget for the project that will actually ship and compound.

