Features your users didn't ask for and won't use
Your team is building AI features because AI is exciting, not because users need them. The tell: nobody can name the user who asked for it.
Here is a conversation we have had, in some variation, at least once a month for the past year.
Us: “Why are you building this?”
Them: “Users will love it.”
Us: “Which users?”
Them: “… Users in general.”
Us: “Has a specific user asked for this?”
Them: “Not exactly, but it’s an obvious improvement.”
The feature is always something AI-powered. An AI summarizer for a dashboard nobody reads. An AI-generated insight panel on a page with a 4-second average dwell time. A chatbot overlay on a product that already has a perfectly functional search bar.
The team is not dumb. They are excited. AI is genuinely powerful and the urge to apply it everywhere is understandable. But excitement is not a product strategy, and “we could” is not the same as “we should.”
The pattern
The pattern has a specific shape, and it repeats with remarkable consistency.
Step 1: Someone on the team — usually an engineer, sometimes a PM — sees a demo or reads a blog post about a new AI capability. They get excited. The excitement is genuine and well-founded. The capability is real.
Step 2: They map the capability to their product. “We have a lot of text data. We could summarize it.” “We have user questions. We could answer them automatically.” “We have reports. We could generate insights.” The mapping is logical. It makes sense on a whiteboard.
Step 3: They build it. Sometimes in a hackathon, sometimes as a side project, sometimes as an official initiative. The prototype is impressive. Demos go well. Leadership is excited.
Step 4: They ship it. Usage is low. Not zero — some users try it because it is new and shiny. But sustained usage is low. The feature does not become part of anyone’s workflow. It sits there, consuming compute, requiring maintenance, and slowly becoming the thing nobody wants to own.
Step 5: Six months later, someone asks whether the feature can be removed. The answer is always “not yet, because some users might be using it.” Nobody checks. The feature persists.
Why AI features are especially prone to this
Every product team ships features that do not land. That is not new. But AI features are uniquely susceptible to this pattern for a few reasons.
AI demos are disproportionately impressive. A summarizer that condenses a 10-page document into three bullet points looks magical in a demo. It looks less magical when the user already knows what is in the document because they wrote it. Demos show the capability in isolation. Users experience it in context.
AI features are expensive to build and maintain. A traditional feature that nobody uses wastes engineering time but is cheap to run. An AI feature that nobody uses wastes engineering time and burns compute on every invocation. LLM calls are not free. An unused AI feature has a recurring cost that a static UI element does not.
AI features create quality obligations. Once you ship an AI feature, you are on the hook for its output quality indefinitely. The model might degrade. The data might change. Edge cases will surface. Each one requires attention. You are not just maintaining code — you are maintaining behavior. That is harder and less predictable.
AI creates a false sense of user value. “We added AI to it” feels like a value proposition. It is not. “We solved a user problem” is a value proposition. The AI is an implementation detail. If you cannot articulate the user problem independent of the technology, you are selling the technology, not the solution.
The tell
The single most reliable indicator that an AI feature will fail: nobody on the team can name a specific user who asked for it, or point to a specific user behavior that suggests they need it.
This does not mean every feature needs to come from a user request. Sometimes you build things users did not know they wanted. But in those cases, you should be able to point to a behavior — something users are currently doing manually, inefficiently, or painfully — that the feature addresses. “Users spend 20 minutes reading these reports every morning” is a reason to build a summarizer. “We have reports” is not.
The naming test is simple. Before you commit to building an AI feature, sit down with the team and ask: “Who is this for? Name them. What are they doing today without this feature? What will they do differently with it?”
If the answers are abstract — “knowledge workers,” “data analysts,” “busy professionals” — the feature is speculative. It might work. But the odds are against it.
If the answers are specific — “Sarah on the compliance team, who spends three hours every Friday manually cross-referencing these two reports” — the feature has a fighting chance. You know who to test it with. You know what success looks like. You know how to measure adoption.
The cost of keeping it
The insidious part of AI feature creep is not the features that fail obviously. It is the features that half-succeed — the ones with just enough usage to justify not removing them, but not enough usage to justify the maintenance cost.
These features accumulate. Each one adds a small amount of ongoing work: monitoring model performance, updating prompts when the model version changes, handling edge cases that surface slowly, answering support tickets from confused users.
Individually, the cost is small. Collectively, it is a tax on the team’s velocity. We see teams where 30-40% of AI engineering time is spent maintaining features that serve a tiny fraction of users. The team is too busy maintaining yesterday’s experiments to build tomorrow’s products.
The fix is ruthless prioritization. Set a usage threshold for AI features — something concrete, like “if fewer than X% of eligible users engage with this feature weekly after 90 days, we remove it.” Apply this retroactively to existing features. Yes, some users will complain. That is fine. More users will benefit from the engineering time freed up.
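A policy like this is easy to automate once it is concrete. Here is a minimal sketch of the threshold check, assuming a hypothetical usage log with one row per (feature, user, ISO week); the feature names, eligible-user count, and 2%-weekly threshold are all illustrative placeholders, not recommendations:

```python
from collections import defaultdict

# Hypothetical usage log: (feature, user_id, iso_week).
# In practice this would come from your analytics store.
USAGE = [
    ("ai_summarizer", "u1", "2024-W10"),
    ("ai_summarizer", "u1", "2024-W11"),
    ("ticket_triage", "u2", "2024-W10"),
    ("ticket_triage", "u3", "2024-W10"),
    ("ticket_triage", "u4", "2024-W10"),
    ("ticket_triage", "u2", "2024-W11"),
    ("ticket_triage", "u3", "2024-W11"),
]
ELIGIBLE_USERS = 100   # users who can see the feature
THRESHOLD = 0.02       # flag if < 2% engage in an average week

def flag_for_removal(usage, eligible, threshold):
    """Return features whose average weekly engagement is below threshold."""
    weekly = defaultdict(set)  # (feature, week) -> distinct active users
    for feature, user, week in usage:
        weekly[(feature, week)].add(user)
    per_feature = defaultdict(list)
    for (feature, week), users in weekly.items():
        per_feature[feature].append(len(users) / eligible)
    return sorted(
        f for f, rates in per_feature.items()
        if sum(rates) / len(rates) < threshold
    )

print(flag_for_removal(USAGE, ELIGIBLE_USERS, THRESHOLD))
# -> ['ai_summarizer']
```

The point is not the arithmetic; it is that the check runs without a meeting. A feature either clears the bar or it is on the removal list, and "some users might be using it" stops being an unanswerable objection.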
Building for pull, not push
The best AI features we have seen share a common trait: they were built in response to an observed user need, not a technology capability.
A customer support team was drowning in ticket volume. They needed help triaging — not answering, just triaging. The AI feature they built did one thing: classify incoming tickets by urgency and route them to the right team. It was not flashy. It did not demo well. But it saved 6 hours of manual work per day and the team adopted it immediately.
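The shape of that feature is worth noting: classification plus routing, and nothing else. A sketch of that shape, with illustrative categories, keywords, and team names (a real system would replace the keyword-based `classify_urgency` with a model call, but the routing skeleton stays the same):

```python
# Minimal triage sketch: classify a ticket's urgency, then route it.
# Terms, categories, and queue names are hypothetical placeholders.

URGENT_TERMS = {"outage", "down", "data loss", "security"}

ROUTES = {
    ("billing", "high"): "billing-oncall",
    ("billing", "normal"): "billing-queue",
    ("technical", "high"): "eng-oncall",
    ("technical", "normal"): "support-queue",
}

def classify_urgency(text: str) -> str:
    """Crude stand-in for the model: keyword match on the ticket body."""
    text = text.lower()
    return "high" if any(term in text for term in URGENT_TERMS) else "normal"

def route(ticket: dict) -> str:
    """Pick a destination queue from (category, urgency)."""
    urgency = classify_urgency(ticket["body"])
    return ROUTES.get((ticket["category"], urgency), "support-queue")

print(route({"category": "technical", "body": "Site is down since 9am"}))
# -> eng-oncall
```

Notice how little surface area there is: one decision, one output, a safe fallback queue. That narrowness is why the feature was easy to trust and easy to adopt.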
Compare that to another team that built an AI-powered “insight engine” that generated natural-language summaries of business metrics. The demo was stunning. The product page was beautiful. Usage after launch: negligible. Users already had dashboards. They did not want a natural-language overlay on data they could already read.
The difference was not technical quality. The insight engine was well-built. The difference was demand. One feature was pulled by user need. The other was pushed by technology capability.
The heuristic
Before building any AI feature, name the specific user or user behavior it serves. If you cannot point to a real person or a real workflow, you are building for the technology, not the user. The feature might still work — but you are gambling, and the house edge on speculative AI features is steep.
Build for the pull. The push features are the ones you will be maintaining — and apologizing for — a year from now.
tl;dr
The pattern. Teams build AI features because the technology is exciting, not because a specific user asked for them or exhibits a behavior that signals the need.
The fix. Before committing to any AI feature, name the specific person or workflow it serves; if you can only describe the user in abstract terms like “knowledge workers,” stop building.
The outcome. Features built for named users with observable needs get adopted; features built for the technology get maintained indefinitely by a team that’s too busy to build the next thing.