Platforms · 12 April 2026 · ChipStack Research

Custom Silicon Could Become the Second Wave of AI Capex

The first phase of AI spending rewarded general-purpose accelerator leaders. The next phase may increasingly reward the companies that can design custom silicon around specific workloads, economics, and deployment constraints.

The first leg of the AI buildout was easy to understand. Demand exploded for the best general-purpose training accelerators, and the market concentrated its attention on the companies with the most obvious leverage to that surge.

The next leg may be more nuanced. As AI deployments mature, hyperscalers and platform companies have stronger incentives to optimise around their own workloads, software stacks, and unit economics. That is where custom silicon starts to matter.

Why custom silicon becomes more attractive over time

In the early innings of a platform shift, buyers care most about speed to deployment. They want a proven ecosystem, leading performance, and the shortest path to productising AI demand. General-purpose accelerators dominate that phase because they reduce friction.

But once workloads become persistent and large enough, incentives change. Operators start asking different questions:

  • Can inference be delivered more cheaply per token?
  • Can networking and memory be tailored more tightly to the actual workload?
  • Can power efficiency improve at scale?
  • Can the platform capture more value by reducing dependence on third-party roadmaps?

Custom silicon becomes attractive precisely because AI is moving from experimentation toward industrial deployment.

The economics are the point

This is not only a technology story. It is an economics story.

If a hyperscaler serves billions of inference requests, even modest gains in performance per watt or performance per dollar can matter enormously. The more stable and repeatable the workload, the more rational it becomes to design hardware around it.
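The arithmetic behind that claim can be sketched in a few lines. All of the figures below are illustrative assumptions (request volume, tokens per request, blended cost, and the size of the efficiency gain), not reported numbers from any operator:

```python
# Hypothetical illustration of how a modest perf/$ gain compounds at
# hyperscale. Every constant here is an assumption for illustration only.

DAILY_REQUESTS = 2_000_000_000       # assumed inference requests per day
TOKENS_PER_REQUEST = 500             # assumed average tokens served per request
COST_PER_MILLION_TOKENS = 0.40       # assumed blended serving cost, USD
EFFICIENCY_GAIN = 0.15               # assumed 15% performance-per-dollar gain

def annual_inference_cost(cost_per_m_tokens: float) -> float:
    """Annual serving cost in USD for the assumed workload."""
    daily_tokens = DAILY_REQUESTS * TOKENS_PER_REQUEST
    return daily_tokens / 1e6 * cost_per_m_tokens * 365

baseline = annual_inference_cost(COST_PER_MILLION_TOKENS)
optimised = annual_inference_cost(COST_PER_MILLION_TOKENS * (1 - EFFICIENCY_GAIN))

print(f"Baseline:  ${baseline / 1e9:.2f}B per year")
print(f"Optimised: ${optimised / 1e9:.2f}B per year")
print(f"Savings:   ${(baseline - optimised) / 1e9:.2f}B per year")
```

Under these assumed inputs, a 15% efficiency gain translates into savings on the order of tens of millions of dollars a year for a single stable workload; the point is not the specific figures but that the savings scale linearly with token volume, which is exactly why repeatable workloads justify bespoke hardware.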

That does not mean custom silicon replaces the leading merchant accelerator vendors overnight. In fact, the opposite is more likely: the ecosystem grows larger and more layered.

General-purpose accelerators will remain critical for frontier training and many high-performance workloads. But custom silicon can increasingly capture value in areas where the economics of scale justify tighter optimisation.

Who benefits

The obvious winners are not only the platform companies designing the chips. The broader beneficiary set can include:

  • design enablement ecosystems,
  • foundries and packaging partners,
  • networking and interconnect suppliers,
  • memory vendors,
  • software and orchestration layers that help mixed hardware environments operate efficiently.

In other words, custom silicon is not a separate theme from the AI supply chain. It is another branch of the same industrial system.

What changes for investors

The market narrative around AI has been dominated by the idea that one category of accelerator captures most of the economic value. That was always too simple. The more durable framing is that AI infrastructure is a hierarchy of systems, and each phase of deployment changes which part of the stack deserves attention.

Custom silicon matters because it marks a transition from buying capacity to optimising economics.

That transition is important. It implies the AI trade will widen rather than narrow. It also suggests investors should pay closer attention to the companies enabling bespoke hardware strategies, not only the firms supplying the most visible merchant chips.

The buildout is becoming less about one universal product and more about a portfolio of architectures tuned for different parts of the AI economy. That makes the next phase more complex — and potentially more interesting.


If you are new to the framework, read the core AI Supply Chain Thesis. For the physical constraints around deployment, pair this note with our work on power and networking.