AI is not a department
The moment you put AI in a silo, you have guaranteed it will not work. AI is a capability that lives inside your existing teams, not a team that lives beside them.
Someone on your leadership team is going to suggest creating an AI department. It will sound reasonable. You have AI work scattered across three teams. Nobody owns the model infrastructure. The data scientists report to different managers with different priorities. Centralizing makes sense — in theory.
In practice, it is the single most reliable way to ensure your AI investments produce nothing of value.
We have seen this pattern at a dozen companies. The AI department gets formed with great fanfare. Smart people are hired. A roadmap is produced. Six months later, the AI team has built impressive internal tools that nobody uses, the product teams are still doing AI work on their own because the AI team’s queue is 8 weeks long, and the CEO is asking why the AI investment has not moved any business metrics.
The problem is not the people. The problem is the org chart.
Why centralized AI teams fail
A centralized AI team fails for the same reason a centralized “innovation lab” fails. It separates the people with the technical capability from the people with the business context.
They build things nobody asked for. Without daily proximity to customers, product managers, and business problems, the AI team gravitates toward technically interesting work. They build a better embedding model. They create a prompt management framework. They prototype a multi-agent system. These might be impressive artifacts. They are not products. Nobody in the business asked for them because nobody in the business knows they exist.
Product teams cannot use what they build. The AI team builds a recommendation engine. They hand it to the product team. The product team looks at the API, realizes it does not handle their edge cases, does not integrate with their data pipeline, and returns results in a format their frontend cannot consume. The AI team says “that’s an integration problem, not an AI problem.” The product team says “it’s your problem because you built it.” The recommendation engine sits unused.
They become a service org. Eventually, the product teams do start sending requests. Now the AI team is drowning. They have 14 requests from 6 teams, no product ownership of any of them, and no ability to prioritize because they do not understand the business context well enough to know which request matters most. They become an internal agency — taking briefs, estimating timelines, delivering work that satisfies the brief but misses the intent. This is the worst possible configuration for AI work, which requires tight iteration loops and deep domain understanding.
The talent gets frustrated. Good AI engineers want to ship products, not service tickets. When the centralized team becomes a service org, the best people leave. They go to companies where they sit on product teams and ship things that users touch. You are left with the people who are comfortable being an internal consultancy — which is rarely the talent profile you need.
The embedded model
The fix is embedding. Not the vector kind — the organizational kind.
The AI engineer sits on the product team. They attend the standups. They know the customers. They understand the data. They ship with the product team and are on-call with the product team. Their manager is the product team’s engineering manager, not a central AI lead.
This is not a new idea. It is how the best companies have always organized specialized engineering capability. Infrastructure engineers sit on product teams. Security engineers sit on product teams. Data engineers sit on product teams. AI engineers should sit on product teams.
When the AI engineer is embedded, the iteration loop is tight. The product manager describes the problem. The AI engineer proposes a solution. They prototype it together. They test it with users. They ship it. The whole cycle takes days, not months. There is no handoff, no requirements document, no integration gap.
The embedded engineer also develops something no centralized team can replicate — domain expertise. After 6 months on a product team, the AI engineer understands the domain as well as anyone on the team. They know which edge cases matter. They know where the data is messy. They know what “good enough” looks like for this specific product. This domain expertise is the difference between an AI feature that works in a demo and an AI feature that works in production.
The role of the central function
Some companies need a central AI function. But its role is different from what most companies assume.
The central AI function does not build features. It builds the platform that feature teams build on. There is a meaningful difference.
Standards. The central function sets standards for how AI systems are built, evaluated, and monitored. Which models are approved for production use. How evals are structured. What monitoring is required. What the incident response process looks like. These standards keep the embedded engineers from reinventing the wheel and ensure consistency across teams.
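Standards like these are easiest to keep when they are encoded as data that a CI check can enforce, rather than as a wiki page. A minimal sketch of that idea — the model names, metric names, and thresholds below are illustrative placeholders, not recommendations:

```python
# Hypothetical production standards encoded as data so a CI gate can enforce them.
# Model names and thresholds are placeholders, not real recommendations.
APPROVED_MODELS = {"model-a", "model-b"}

REQUIRED_METRICS = {
    # metric name -> (direction, threshold)
    "eval_pass_rate": ("min", 0.95),
    "latency_p95_ms": ("max", 2000),
}

def check_deployment(config: dict) -> list[str]:
    """Return the list of standards violations for a proposed deployment."""
    violations = []
    if config.get("model") not in APPROVED_MODELS:
        violations.append(f"model {config.get('model')!r} is not on the approved list")
    metrics = config.get("metrics", {})
    for name, (direction, threshold) in REQUIRED_METRICS.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"missing required metric {name!r}")
        elif direction == "min" and value < threshold:
            violations.append(f"{name} below minimum: {value} < {threshold}")
        elif direction == "max" and value > threshold:
            violations.append(f"{name} above maximum: {value} > {threshold}")
    return violations
```

An embedded engineer’s deploy pipeline can call a check like this before release; the central function changes the data, not every team’s pipeline.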
Shared infrastructure. The central function maintains the shared infrastructure — the model gateway, the eval platform, the prompt management system, the cost monitoring dashboard. This infrastructure is used by every product team. Building it centrally avoids duplication and ensures quality.
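To make the model-gateway idea concrete, here is a deliberately minimal sketch: one entry point that every product team calls, so routing, logging, and cost tracking live in one place instead of being re-implemented per team. The class name, prices, and the injected `call_fn` are assumptions for illustration, not a real provider SDK:

```python
from dataclasses import dataclass, field

@dataclass
class ModelGateway:
    """Minimal sketch of a shared model gateway.

    Prices are illustrative placeholders; a real gateway would also handle
    auth, retries, logging, and model routing.
    """
    cost_per_1k_tokens: dict[str, float]
    spend: dict[str, float] = field(default_factory=dict)

    def complete(self, model: str, prompt: str, call_fn) -> str:
        # call_fn stands in for the real provider SDK: (model, prompt) -> (text, tokens)
        text, tokens = call_fn(model, prompt)
        cost = tokens / 1000 * self.cost_per_1k_tokens.get(model, 0.0)
        self.spend[model] = self.spend.get(model, 0.0) + cost
        return text
```

Because every call flows through one object, the cost monitoring dashboard reads `spend` instead of chasing per-team integrations.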
Hiring and career development. The central function owns the AI engineering discipline. They define the role, run the hiring process, set the career ladder, and facilitate knowledge sharing across embedded engineers. The embedded engineers report to their product team managers for day-to-day work, but the central function ensures they are connected to a community of practice.
The eval platform. This deserves special emphasis. A shared eval platform — where every team runs their AI evaluations in a consistent way, with comparable metrics and shared tooling — is the single most valuable thing a central AI function can build. It is the thing that turns AI development from “we think it works” to “we know it works.”
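“Comparable metrics and shared tooling” can be as simple as a common case format and a common harness. A sketch of that minimum, under the assumption that a case is an input/expected pair and a system is any callable from string to string (both are simplifications for illustration):

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    """One eval case in the shared format every team uses."""
    input: str
    expected: str

def run_eval(cases: list[EvalCase], system) -> dict:
    """Run `system` (any callable str -> str) over the cases and return
    metrics that every team reports the same way."""
    passed = sum(1 for c in cases if system(c.input).strip() == c.expected.strip())
    return {
        "total": len(cases),
        "passed": passed,
        "pass_rate": passed / len(cases) if cases else 0.0,
    }
```

The value is not the grading rule — real platforms need richer graders — but that `pass_rate` means the same thing on every team, so results can be compared across products.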
What the central function does not do: take feature requests from product teams. Build features for product teams. Own the roadmap for any product team’s AI work. Those responsibilities stay with the product teams, where they belong.
The three org models
There are three ways to organize AI capability. Each has a place.
Centralized
All AI engineers report to a single AI leader. The AI team takes requests from product teams and delivers solutions.
When it works: early stage, 1-3 AI engineers, one or two use cases. The team is small enough that communication overhead is low, and there are not enough AI engineers to embed in multiple teams.
When it fails: at scale. The moment you have more than 3 product teams requesting AI work, the centralized team becomes a bottleneck. Prioritization is political. Delivery is slow. Domain context is thin.
Embedded
All AI engineers report to product team engineering managers. No central AI function exists.
When it works: when your AI engineers are senior enough to set their own standards and your product teams are mature enough to manage specialized talent. This is the leanest model and produces the fastest iteration.
When it fails: when standards diverge. Team A uses one eval framework, Team B uses another, Team C does not eval at all. There is no consistency, no shared learning, and no way to compare AI performance across teams. The AI capability becomes fragmented.
Hybrid
AI engineers are embedded in product teams but a small central function (2-4 people) sets standards, maintains shared infrastructure, and facilitates community. The embedded engineers report via a dotted line to the central function for discipline and via a solid line to their product team manager for delivery.
When it works: at most scales. This is the model we recommend to most companies with 5+ AI engineers. It combines the speed and domain context of embedding with the consistency and infrastructure of a central function.
When it fails: when the dotted line becomes a solid line — when the central function starts pulling embedded engineers into central projects, or when product team managers treat the AI engineer as a temporary resource rather than a permanent team member. The model requires organizational discipline to maintain.
The transition
If you already have a centralized AI team and want to move to an embedded model, do it gradually. Pick one product team — the one with the clearest AI use case and the most receptive engineering manager. Move one AI engineer to that team. Give it a quarter. Measure the output. If it works — and it usually does — move the next engineer. Let the central team shrink naturally as engineers embed.
The central team’s remaining members become the platform team. They stop building features and start building infrastructure. This is a career transition for some people, and not everyone will want it. That is okay. The people who want to build features should be on product teams. The people who want to build platforms should be on the platform team. Let people self-select.
The transition takes 2-3 quarters to complete. Resist the temptation to do it all at once. Organizational change is a deployment — you roll it out incrementally, monitor the results, and adjust.
tl;dr
The pattern. Companies create centralized AI departments that build impressive demos nobody uses — because the team with the AI capability is disconnected from the teams with the business context.
The fix. Embed AI engineers directly into product teams where they ship alongside product managers and engineers, with a small central function that owns standards, shared infrastructure, and the eval platform.
The outcome. AI features ship in days instead of months, domain expertise compounds inside product teams, and your AI investment shows up in business metrics instead of internal showcases.