Build versus buy is the default frame every board uses to think about AI, and it is the wrong frame. The decision is not binary. It is layered. A good AI operating model buys some layers, integrates some layers, and builds some layers, and knows the difference.
Most wealth firms we meet have either bought everything (and now sit inside a vendor's product with no differentiation) or have tried to build everything (and have burned eighteen months reinventing work somebody else has already solved). Neither outcome is inevitable. Both come from skipping the layered question.
The three layers of an AI stack
Every AI capability in a wealth firm sits on top of three layers. The build-versus-buy decision should be taken independently at each layer.
Layer one. The foundation model. The large language model, the speech model, the vision model. GPT, Claude, Gemini, Llama, the rest. These are built by a very small number of companies with capital structures and research teams no advice firm could replicate. Nobody sensibly builds these. Everyone buys or accesses them.
Layer two. The orchestration and platform layer. The software that sits between the foundation model and the workflow. Prompt management, retrieval, vector stores, guardrails, tool calls, audit logging, user interface, identity. This is where most of the product market is crowded right now. Some firms build this themselves. Some use an orchestration platform. Some use their core CRM or planning software's AI capabilities if those exist.
Layer three. The workflow and operating model layer. The specific advice workflow the AI supports. Client onboarding. Annual review. File note generation. Compliance triage. SOA drafting. AML monitoring. This is the layer closest to the business and the one where every firm is different.
Build versus buy as a single question collapses these three layers into one answer. That is why the answer is usually wrong.
Layer one is always bought
Foundation models are a capital and research category that is out of reach for every firm in wealth management. That is not a concession. That is the shape of the market.
The real questions at layer one are not build versus buy. They are.
Which provider. Which model. Which region. Which retention policy. Which pricing model. Which contract terms. Which fallback when the primary provider has an outage.
These are procurement questions with technical implications, not engineering decisions. Treat them as such.
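One of those procurement questions, the fallback when the primary provider has an outage, is worth making concrete. The sketch below is illustrative only: `ModelGateway`, the provider names, and the retry numbers are assumptions for demonstration, not any real vendor SDK.

```python
import time


class ModelGateway:
    """Routes requests to an ordered list of model providers, primary first,
    and falls back to the next provider when one fails."""

    def __init__(self, providers):
        # providers: ordered list of (name, callable) pairs; the callable
        # takes a prompt string and returns the model's text response
        self.providers = providers

    def complete(self, prompt, retries_per_provider=2):
        last_error = None
        for name, call in self.providers:
            for attempt in range(retries_per_provider):
                try:
                    return {"provider": name, "text": call(prompt)}
                except Exception as exc:  # real code would catch provider-specific errors
                    last_error = exc
                    time.sleep(0.2 * (attempt + 1))  # simple linear backoff
        raise RuntimeError(f"all providers failed; last error: {last_error}")
```

The point of the sketch is contractual, not technical: if every workflow calls the gateway rather than a provider's SDK directly, the answers to "which provider" and "which fallback" stay procurement decisions rather than rewrites.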
The one place where this gets more nuanced is for large firms with enough volume to consider private model instances, dedicated endpoints, or fine-tuned variants hosted inside the firm's own environment. That decision is not usually about capability. It is about data boundary, cost predictability, and regulatory posture. For APRA-regulated entities in particular, the CPS 234 and CPS 230 implications of where the model runs are worth the architecture time.
For most advice firms though, layer one is settled. Buy. Choose well. Move on.
Layer three is always built
The workflow and operating model layer is the opposite. Nobody sells it. Nobody can.
Every advice firm has a different client service proposition, a different licensee framework, a different review cadence, a different template set, a different paraplanner operating model. A generic AI workflow that works across every firm is a workflow that fits nobody.
The mistake at layer three is to treat vendor-supplied workflow templates as the operating model. They are not. They are starting points. The actual workflow, the one that matches how your advisers run a review, how your paraplanners assemble evidence, how your compliance team approves documents, is something the firm has to design.
This is also why AI implementations that skip the operating model conversation stall. The technology layer deploys. The workflow layer is not redesigned. Work continues roughly as before, the AI sits on the side, and the efficiency gain is 10 to 15 per cent where it should be double that.
Layer three is always built. Not because firms have to write code. Because firms have to rewrite process.
Layer two is where the real decision lives
The interesting question is always layer two.
The orchestration and platform layer is where every vendor is competing, every firm is confused, and where build versus buy actually means something. Three options matter.
Option one. Use a vendor's end-to-end product. A vendor has packaged the foundation model, the orchestration, and a set of workflow templates into one product. You buy it. You configure it. You deploy.
The upside. Fastest to value. Vendor handles the hard engineering problems. Regular feature releases. Typically cheaper than building for firms under a certain scale.
The downside. Your data goes through the vendor. Your workflow ends up looking like the vendor's opinion. Switching cost rises over time. You are buying into the vendor's roadmap, which may or may not match yours.
Right for. Small to mid-sized practices. Firms with limited engineering capability. Firms where the efficiency gain on day one matters more than long-term differentiation.
Option two. Use an orchestration platform and configure on top. A platform supplier provides the prompt management, retrieval, guardrails, and integrations. You provide the workflow logic and configuration. You still buy the foundation model separately.
The upside. More control over the workflow layer. Less dependency on a single vendor's roadmap. You keep the business logic.
The downside. You need someone in your team, or a partner, who can configure and maintain this. The platform is not a full product and requires active operation.
Right for. Licensees. Dealer groups. Mid-sized to large advice firms. Firms that have decided AI is a strategic capability rather than a productivity feature. Firms with at least light engineering or configuration capability in-house.
Option three. Build the orchestration layer in-house. Your engineering team builds the platform from open-source or primitive cloud components. You operate the whole stack other than the foundation model itself.
The upside. Maximum control. Maximum differentiation. Every piece can be tuned to the specific regulatory and operational context. Your data does not leave systems you control.
The downside. Significant engineering investment. Ongoing platform maintenance. You are now running a technology company inside a wealth firm, which is an operating model most firms are not structured for.
Right for. Very few firms. Mostly larger integrated platforms, vertically integrated wealth groups with meaningful technology teams, or firms whose AI is genuinely a customer-facing product rather than an internal tool. If you are reading this to decide whether this applies to you, it probably does not.
The hidden fourth option most firms should consider
There is a fourth option that gets less attention than it deserves.
Buy the platform, build the integration layer.
The idea. You buy a capable orchestration platform. You also build a thin integration layer that sits between your core systems (CRM, planning software, document store, email, compliance) and the platform. The integration layer is yours. The platform underneath can be swapped.
Why this matters. The integration layer is where your regulatory context, your system of record, your data hierarchy, and your audit trail live. If those things are embedded in a vendor's platform, you are locked in. If they are embedded in a thin integration layer you control, the vendor choice underneath becomes reversible.
This is the pattern we push clients towards when they have the engineering capacity for it. The core IP of an AI-native advice firm is not the model and not the platform. It is the integration and data layer that knows exactly how this firm's data, workflows, and controls fit together. That is what is worth owning.
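The thin-integration-layer pattern can be sketched in a few lines. Everything here is hypothetical: `OrchestrationPlatform`, `VendorAPlatform`, and `AdviceFirmGateway` are illustrative names, and a real vendor SDK would sit behind the adapter.

```python
from typing import Protocol


class OrchestrationPlatform(Protocol):
    """The only surface the firm's workflows are allowed to call."""

    def run_workflow(self, workflow_id: str, payload: dict) -> dict: ...


class VendorAPlatform:
    """Adapter: translates the firm's contract onto one vendor's API.
    Vendor-specific calls, auth, and quirks live here and nowhere else."""

    def run_workflow(self, workflow_id: str, payload: dict) -> dict:
        # a real adapter would call the vendor SDK here
        return {"workflow": workflow_id, "status": "completed",
                "audit_ref": "vendor-a-0001"}


class AdviceFirmGateway:
    """Firm-owned integration layer: the data boundary, the audit trail,
    and the mapping to the system of record belong to the firm."""

    def __init__(self, platform: OrchestrationPlatform):
        self.platform = platform
        self.audit_log = []

    def annual_review(self, client_id: str, crm_record: dict) -> dict:
        payload = {"client_id": client_id, "source": "CRM", **crm_record}
        result = self.platform.run_workflow("annual_review", payload)
        self.audit_log.append((client_id, result.get("audit_ref")))
        return result
```

Because `AdviceFirmGateway` only depends on the `OrchestrationPlatform` contract, swapping vendors means writing one new adapter, not rewriting workflows, which is exactly the reversibility the pattern is buying.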
A decision framework
For each AI capability you are considering, answer five questions.
One. Is this capability differentiating for my firm, or is it a shared industry utility?
A client onboarding workflow that enforces your specific licensee controls is differentiating. Generic email drafting is a utility. Buy the utilities. Build the differentiators.
Two. How sensitive is the data involved, and who are my regulators?
The more sensitive the data and the more concentrated the regulatory scrutiny, the more the firm should own the data boundary. For APRA-regulated firms and firms handling trust or superannuation data, that often tilts the answer towards more in-house control.
Three. How fast does this capability need to be in production, and what is the opportunity cost of delay?
A firm six months behind its competitors on AI-assisted reviews faces a real commercial cost. Buying a vendor product that ships in six weeks may be right even if the long-term differentiation is lower.
Four. What is my engineering and configuration capacity today, and what will it be in eighteen months?
Honest answers only. "We plan to hire a head of AI" is not capacity. Capacity is people who are employed today, know your systems, and can deliver this year. If the capacity does not exist, building is a fantasy.
Five. What is my reversal cost if this decision turns out to be wrong in two years?
A built platform is hard to replace. A bought product with a clean integration layer is easy. A bought product tightly coupled to your core systems is somewhere between "very painful" and "impossible" to replace. Optimise for reversibility early.
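The five questions can be turned into a rough scoring sketch. The 1-to-5 scales, weights, and thresholds below are assumptions for illustration, not a calibrated model; the hard-stop on capacity reflects the point above that building without capacity is a fantasy.

```python
def build_vs_buy(differentiation, data_sensitivity, speed_pressure,
                 capacity, lock_in_risk):
    """Each input is a 1-5 rating of the five framework questions.
    Higher differentiation, sensitivity, capacity, and lock-in risk
    push towards building; higher speed pressure pushes towards buying."""
    if capacity <= 2:
        # no honest engineering capacity today: building is off the table
        return "buy"
    toward_build = differentiation + data_sensitivity + capacity + lock_in_risk
    toward_buy = 2 * speed_pressure  # delay cost weighted, per question three
    score = toward_build - toward_buy
    if score >= 8:
        return "build more in-house"
    if score <= 2:
        return "buy"
    return "buy platform, build thin integration layer"
```

Run on a typical mid-sized firm's honest inputs, the middle answer tends to dominate, which matches the pattern described below.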
The pattern we see working
Successful firms tend to follow a consistent pattern, regardless of size.
Buy the foundation model. Buy or use an orchestration platform, with a strong preference for one that can be swapped. Build a thin integration layer that owns the firm's data boundary and audit trail. Build the workflow and operating model logic, because nobody else can.
This pattern gives the firm speed where speed matters (layers one and two), keeps control where control matters (integration and data), and reserves engineering effort for the layer where it cannot be avoided (workflow).
It is also the pattern that scales. A firm that starts here in year one can add capability, swap vendors, or move deeper into in-house components over time without a platform rewrite.
The question that cuts through
The single question that determines the right answer for your firm is this.
In three years, do we want to be a firm that uses AI, or a firm whose operating model is AI-native?
A firm that uses AI can buy most things. Vendor product. Configured lightly. Efficiency gains captured. The firm remains a firm that sells advice, with a smarter tooling layer.
A firm whose operating model is AI-native cannot buy most things, because most things do not fit. The workflows are rebuilt around what AI can do. The adviser role is rethought. The paraplanner role is redesigned. The client experience is different. That shape needs to be built, because it is the shape of your particular firm's future.
Both are valid strategies. Neither is universally right. But the board needs to decide which one the firm is pursuing, because the technology decisions flow from that answer.
What to do on Monday morning
For the next AI capability you are considering, run it through three filters before making the buy or build call.
- Identify which layer you are actually deciding on. Foundation, orchestration, or workflow.
- Rate the capability for differentiation, data sensitivity, speed-to-value, internal capacity, and reversal cost.
- Choose the option that gives you the most reversibility for the lowest first-year cost, consistent with the regulatory posture the firm needs.
Ninety per cent of the time, the answer for a mid-sized advice firm will be: buy the model, use an orchestration platform, build a thin integration layer, and build the workflow.
That is not a compromise. That is the pattern that actually ships.
Build versus buy as a single question is a decision nobody can get right.
Layer by layer, with honest inputs, the answer almost writes itself.