Jensen Huang has built Nvidia into what many consider the defining infrastructure company of the AI era. But when Nvidia itself projects it will sell a total of $1 trillion worth of chips based on its Blackwell and Vera Rubin architectures in 2026 and beyond, that number doesn’t just signal confidence — it signals a target. And the most serious entity taking aim isn’t AMD. It isn’t Intel. It isn’t even Broadcom, despite that company’s very real ambitions in custom silicon. The threat is Alphabet.
As someone who spends most of my working hours thinking about how AI systems are architected at the hardware and inference layer, I’ll admit that sentence still carries a certain weight every time I write it. Alphabet. The search company. The YouTube company. Except, of course, it hasn’t really been just those things for a long time.
Why the Usual Suspects Keep Missing the Point
The conversation around Nvidia competition tends to follow a predictable script. AMD releases a new GPU. Intel announces a new accelerator. Analysts write about market share. Investors get briefly nervous. Then Nvidia posts another record quarter and the cycle resets.
This framing misses something structurally important. AMD and Intel are competing with Nvidia on Nvidia’s terms — building chips they hope to sell to the same cloud providers, enterprises, and AI labs that currently buy from Nvidia. That is an extraordinarily difficult position to win from. You are asking customers to switch vendors for hardware that sits at the center of their most critical workloads, with software ecosystems, toolchains, and institutional knowledge all pointing toward CUDA.
Alphabet is doing something different. It isn’t trying to sell you a chip. It is building chips so it doesn’t have to buy yours.
The Vertical Integration Play Nobody Took Seriously Enough
Google’s Tensor Processing Units (TPUs) have been in development for over a decade. What started as an internal tool for accelerating Google’s own machine learning workloads has matured into a serious piece of silicon that now underpins significant portions of Google’s AI infrastructure — including the training and serving of its Gemini model family.
From an architectural standpoint, this matters enormously. TPUs are purpose-built for the specific mathematical operations that dominate modern AI workloads: large matrix multiplications, attention mechanisms, and the kind of high-throughput, low-latency inference that agentic systems demand. They are not general-purpose accelerators trying to be good at AI. They are AI accelerators, full stop — designed from first principles around the workload.
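To make that workload concrete, here is a minimal sketch in public JAX, the open framework Google builds on top of its XLA compiler. The function and shapes are illustrative, not drawn from any real model; the point is the shape of the computation — two large matrix multiplications wrapped around a softmax — which is exactly what a TPU’s systolic matrix units are built to saturate.

```python
import jax
import jax.numpy as jnp

def attention(q, k, v):
    # Scaled dot-product attention: the matmul-dominated pattern that
    # TPU matrix units are designed around. Shapes are illustrative.
    scores = q @ k.T / jnp.sqrt(q.shape[-1])   # large matrix multiply
    weights = jax.nn.softmax(scores, axis=-1)
    return weights @ v                          # another large matmul

key = jax.random.PRNGKey(0)
q, k, v = (jax.random.normal(jax.random.fold_in(key, i), (1024, 128))
           for i in range(3))

# jax.jit hands the whole function to XLA, which fuses the operations
# and emits code for whatever backend is present: TPU, GPU, or CPU.
fast_attention = jax.jit(attention)
print(fast_attention(q, k, v).shape)  # (1024, 128)
```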
When you control the chip, the compiler, the framework, the model, and the cloud platform it all runs on, you gain a degree of cross-stack optimization that no external vendor can match. Alphabet has all of those pieces.
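A small sketch of what that looks like at the public API layer, again in JAX. The mesh axis name and shapes here are mine, purely for illustration, and this is the open-source surface, not anything Google-internal: the programmer names a logical device mesh, and XLA, which Alphabet also controls, decides how to partition the arithmetic across the physical TPU topology.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# A logical 1D mesh over whatever accelerators are attached. On a TPU
# pod slice these would be TPU cores; on a laptop, a single CPU device.
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))

# Shard the batch dimension across the mesh; replicate the weights.
x = jax.device_put(jnp.ones((8 * len(devices), 512)),
                   NamedSharding(mesh, P("data", None)))
w = jnp.ones((512, 256))

@jax.jit
def forward(x, w):
    return x @ w  # XLA partitions this matmul across the mesh for us

print(forward(x, w).sharding)  # output inherits the row sharding
```

The same few lines scale from a single chip to a pod slice because the partitioner, the runtime, and the interconnect they target are designed together. That is the kind of co-design an external chip vendor can influence but never fully own.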
What This Means for the AI Chip Space in 2026
Nvidia’s $1 trillion projection for Blackwell and Vera Rubin sales is a real number, and I don’t doubt the demand signals behind it. The appetite for AI compute is not slowing down. But the more interesting question is: who is not in that customer pool, and why?
Every TPU Alphabet deploys internally is an accelerator it didn’t buy from Nvidia. At the scale Google operates — training frontier models, serving billions of queries, running one of the world’s largest cloud platforms — that displacement is not trivial. It compounds over time. And as Google Cloud makes TPUs available to external customers through its infrastructure, the competitive surface expands further.
The other hyperscalers are watching this closely. Amazon has its Trainium and Inferentia lines. Microsoft is building its Maia accelerators. Meta has its MTIA chips. The pattern is consistent: the largest consumers of AI compute are all, to varying degrees, trying to reduce their dependence on any single external supplier.
A Different Kind of Competition
What makes Alphabet the most credible threat in this group is the depth of its vertical integration and the maturity of its silicon program. This isn’t a company that recently decided to build chips because AI got hot. This is a company that has been doing it quietly, seriously, and at scale for years.
For Nvidia, the risk isn’t that Alphabet takes its customers. The risk is that Alphabet — and the hyperscalers following its model — simply stop being customers for the highest-volume, highest-margin workloads. That’s a different kind of competitive pressure, and it doesn’t show up cleanly in traditional market share analysis.
As AI systems grow more agentic, more persistent, and more deeply embedded in infrastructure, the question of who controls the underlying compute becomes more consequential. Alphabet understood that early. The rest of the industry is still catching up to what that means.