Are you investing in AI companies, or are you investing in a circular financing scheme that happens to produce some AI?
This question keeps me up at night as I watch my portfolio—Nvidia, Microsoft, and Meta—navigate what might be the most consequential inflection point in computing history. As someone who spends my days analyzing agent architectures and intelligence systems, I’m not concerned about whether AI works. I’m concerned about who’s actually building sustainable moats versus who’s riding a capital carousel.
The Circular Economy Problem
Recent analysis has exposed an uncomfortable truth about AI economics: Microsoft invests billions in OpenAI, which then spends those billions on Azure compute and Nvidia chips. Nvidia’s revenue looks spectacular, but how much of it represents genuine demand versus vendor financing dressed up as customer spending? When your biggest customers are funded by your other biggest customers, you’re not observing a market—you’re observing a closed loop.
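The loop above can be made concrete with a toy calculation. Every number below is invented purely for illustration, not actual company financials: the point is that when an investee recycles its funding back into the investor's products, the same dollars show up as "revenue" on both sides of the deal.

```python
# Toy model of circular AI financing. All figures are hypothetical,
# chosen only to illustrate the accounting effect, not real data.

def circular_revenue_share(investment, recycle_rate, organic_revenue):
    """Fraction of a vendor's reported revenue that traces back to
    its own (or a partner's) investment in the customer."""
    recycled = investment * recycle_rate   # investee spends it back on the vendor
    reported = organic_revenue + recycled  # vendor books all of it as revenue
    return recycled / reported

# Hypothetical: $10B invested, 80% of it spent back on the investor's
# cloud/chips, alongside $15B of genuinely external demand.
share = circular_revenue_share(10e9, 0.8, 15e9)
print(f"{share:.0%} of reported revenue is recycled capital")  # → 35%
```

Even in this generous scenario, over a third of the headline number is the investor's own money coming home, which is exactly why top-line growth alone can't distinguish a market from a closed loop.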
This matters enormously for portfolio strategy. The question isn’t whether these companies are technically impressive. They are. The question is whether their current valuations reflect sustainable business models or temporary capital flows that could reverse rapidly.
Nvidia: Holding, But Watching the Exits
I’m holding my Nvidia position, but I’ve stopped adding to it. Here’s why: Nvidia has built genuine technical moats in CUDA, in their compiler stack, in their networking fabric. These aren’t trivial to replicate. But their current valuation assumes that AI training demand will continue growing exponentially, and I’m increasingly skeptical of that assumption.
The shift from training to inference changes everything. Inference workloads favor different architectures—ones where Nvidia’s advantages are less pronounced. Google’s TPUs, custom ASICs, and even CPU-based inference are all viable alternatives. More importantly, as models become more efficient, the compute requirements per query drop. Nvidia’s revenue might grow, but probably not at rates that justify current multiples.
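As a rough sketch of why this matters, consider a back-of-envelope model with invented parameters: if per-query cost falls faster than query volume grows, total inference spend can shrink even while usage explodes. None of these numbers are forecasts or measured costs; they only illustrate the dynamic.

```python
# Back-of-envelope inference economics. All parameters are hypothetical
# illustrations, not forecasts.

def annual_compute_spend(queries_per_day, cost_per_query):
    """Total yearly inference spend for a given query volume and unit cost."""
    return queries_per_day * 365 * cost_per_query

# Year 0: 1B queries/day at a hypothetical $0.002 per query.
y0 = annual_compute_spend(1e9, 0.002)

# Year 2: usage triples, but efficiency gains (distillation, quantization,
# better kernels, cheaper accelerators) cut per-query cost 5x.
y2 = annual_compute_spend(3e9, 0.002 / 5)

print(f"Year 0: ${y0 / 1e9:.2f}B  Year 2: ${y2 / 1e9:.2f}B")  # → $0.73B vs $0.44B
```

Under these assumed numbers, spend falls roughly 40% despite a tripling of usage. Whether real efficiency gains outpace real demand growth is the open question, but it's the ratio of the two, not raw usage, that determines chip revenue.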
I’m holding because Nvidia remains the best positioned for the next 18-24 months of AI infrastructure buildout. But I’m watching carefully for signs that inference economics are maturing faster than expected.
Microsoft: Time to Reduce Exposure
I’m actively reducing my Microsoft position, and the reasoning is straightforward: they’re paying the highest prices for the least differentiated position in AI.
Microsoft’s AI strategy is essentially “rent OpenAI’s models and integrate them everywhere.” That’s not a moat—that’s a distribution play that assumes OpenAI maintains a decisive technical lead. But we’re already seeing that lead narrow. Anthropic, Meta’s Llama, and others are closing the gap. When model capabilities converge, Microsoft’s $13 billion OpenAI investment starts looking less like strategic positioning and more like overpaying for temporary access.
The Azure AI revenue numbers look impressive until you realize how much of it is subsidized by Microsoft’s own investments in OpenAI. They’re essentially paying themselves, booking revenue, and calling it growth. That’s not a sustainable business model—it’s financial engineering.
Meta: Doubling Down on the Contrarian Bet
I’m adding to my Meta position, and I suspect I’m in the minority here. The market seems to view Meta’s AI spending as wasteful because it’s not immediately monetizable. I view it as the most strategically sound AI investment among the three.
Meta is building genuine AI infrastructure for their own products—recommendation systems, content moderation, ad targeting. These systems generate real revenue today. Their Llama models are open weights, which means they’re not trying to monetize the models directly. Instead, they’re commoditizing the complement: if powerful AI models are free, Meta’s massive distribution and data advantages become more valuable, not less.
More importantly, Meta isn’t dependent on the circular financing that props up much of the AI ecosystem. They’re spending their own cash flow on AI that improves their existing products. When the music stops and the circular deals unwind, Meta will still be standing with real AI capabilities integrated into real revenue-generating products.
What This Means for Agent Intelligence
From an agent architecture perspective, the companies that will win aren’t necessarily those with the biggest models or the most compute. They’re the ones with the best data flywheels and the clearest path from AI capability to user value.
Meta has both. Their recommendation agents get better as more people use their platforms. Their ad targeting agents generate measurable ROI. Microsoft is renting capabilities. Nvidia is selling shovels in a gold rush that might be ending.
The AI bubble concerns aren’t about whether AI is real—it obviously is. They’re about whether current capital allocation reflects sustainable economics. My portfolio adjustments reflect a simple thesis: own the companies building real AI products with real business models, not the ones participating in circular financing schemes that happen to involve AI.