Remember when getting a McKinsey report meant six-figure invoices and months of consultants camping in your conference rooms? Those days might be ending, though not quite in the way the consulting giants expected. AI startup Rocket has launched a platform that promises McKinsey-style strategic reports at a fraction of the traditional cost, and it’s forcing us to ask some uncomfortable questions about what strategy consulting actually is.
From my perspective as someone who spends most days thinking about agent architectures and reasoning systems, Rocket represents something more interesting than just “cheaper consulting.” It’s a test case for whether we can automate the kind of synthesis and strategic thinking that has historically commanded premium prices precisely because it seemed irreducibly human.
The Architecture of Strategic Thinking
What Rocket is attempting isn’t trivial from a technical standpoint. Generating consulting-style product strategies requires more than just language generation. You need systems that can ingest disparate data sources, identify patterns across market dynamics, competitive positioning, and operational constraints, then synthesize recommendations that account for organizational context. That’s a multi-step reasoning problem with significant uncertainty at each stage.
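To make that pipeline concrete, here is a minimal sketch of the ingest → pattern-identification → synthesis loop described above, with uncertainty carried through each stage. Everything here is an illustrative assumption of mine (the stage names, data shapes, and confidence arithmetic), not Rocket's actual architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    confidence: float  # uncertainty carried forward from each stage

@dataclass
class StrategyReport:
    findings: list
    recommendations: list = field(default_factory=list)

def ingest(sources: dict) -> list:
    """Normalize disparate inputs (market data, competitor notes, ops constraints)."""
    return [Finding(claim=f"{k}: {v}", confidence=0.9) for k, v in sources.items()]

def identify_patterns(findings: list) -> list:
    """Cross-reference findings; a combined claim inherits the weakest confidence."""
    patterns = []
    for a in findings:
        for b in findings:
            if a is not b:
                patterns.append(Finding(
                    claim=f"link({a.claim} ~ {b.claim})",
                    # Uncertainty compounds across reasoning steps.
                    confidence=min(a.confidence, b.confidence) * 0.8,
                ))
    return patterns

def synthesize(patterns: list, threshold: float = 0.5) -> StrategyReport:
    """Keep only patterns confident enough to justify a recommendation."""
    kept = [p for p in patterns if p.confidence >= threshold]
    return StrategyReport(
        findings=kept,
        recommendations=[f"Act on: {p.claim}" for p in kept],
    )

report = synthesize(identify_patterns(ingest({
    "market": "segment growing 12% YoY",
    "competition": "two incumbents, slow release cadence",
})))
```

The point of the toy confidence arithmetic is the one that matters for real systems: uncertainty compounds across stages, so a long reasoning chain degrades fast unless each step is individually reliable.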
The platform focuses on helping businesses decide their next moves, which is consultant-speak for “we’ll tell you what to do.” But the interesting question is how the underlying agent system handles the ambiguity inherent in strategic decision-making. Traditional consulting firms charge what they do partly because they’re selling judgment under uncertainty. Can an AI system replicate that, or is it just pattern-matching against historical strategy frameworks?
Trust and the Agentic Era
McKinsey’s own 2026 AI Trust Maturity Survey reveals something telling: organizations are making progress in trust maturity, but persistent gaps remain in strategy and governance. This creates a paradox for platforms like Rocket. Businesses are being asked to trust AI-generated strategic recommendations at precisely the moment when trust frameworks for AI decision-making are still immature.
The shift to what McKinsey calls “the agentic era” means we’re moving from AI as a tool to AI as a decision-maker. That’s a fundamentally different trust relationship. When a consultant gives you bad advice, you can interrogate their reasoning, challenge their assumptions, and hold them accountable. When an AI agent generates a strategy report, the reasoning chain is often opaque, the training data is unknown, and accountability is murky at best.
The Reshaping of Knowledge Work
Data suggests around 75% of current roles will need to be reshaped as AI embeds across workflows. Strategy consulting is just one domain, but it’s a particularly interesting one because it sits at the intersection of analysis, synthesis, and persuasion. If AI can handle this, what knowledge work is actually safe?
From an agent intelligence perspective, I’m watching to see how Rocket handles the meta-problem: not just generating strategies, but understanding which strategies will be persuasive to which stakeholders. Traditional consultants spend enormous effort on presentation and narrative framing. The content of the recommendation often matters less than how it’s packaged and delivered. Can an AI system learn that kind of organizational psychology?
What This Means for AI Development
Industry forecasts for 2026 point to significant advances in AI capability, and Rocket is positioned to ride that wave. But the real test will be whether businesses actually act on AI-generated strategies with the same confidence they’d have in human-generated ones. My suspicion is that we’ll see a hybrid model emerge: AI systems like Rocket handling the analytical heavy lifting and framework generation, with human consultants providing the judgment layer and client relationship management.
The technical challenge isn’t just building agents that can write convincing strategy documents. It’s building systems that can reason about causality in complex business environments, handle contradictory objectives, and generate recommendations that are both analytically sound and politically feasible within specific organizational contexts. That’s a harder problem than it might appear.
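One small piece of that problem, handling contradictory objectives, can at least be sketched as a weighted trade-off. The candidates, objective names, and weights below are made-up assumptions for illustration; real systems would need far richer models of organizational context than a weighted sum.

```python
# Illustrative sketch: scoring candidate recommendations against
# objectives that pull in opposite directions, e.g. analytical
# soundness vs. political feasibility within the organization.

def score(candidate: dict, weights: dict) -> float:
    """Weighted sum over objectives; the weights encode the trade-off."""
    return sum(weights[k] * candidate[k] for k in weights)

candidates = [
    {"name": "restructure", "soundness": 0.9, "feasibility": 0.3},
    {"name": "incremental", "soundness": 0.6, "feasibility": 0.9},
]
weights = {"soundness": 0.5, "feasibility": 0.5}

best = max(candidates, key=lambda c: score(c, weights))
```

Even this toy version surfaces the hard part: with equal weights, the analytically weaker but politically easier plan wins, and choosing the weights is exactly the judgment call consultants are paid for.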
Rocket’s bet is that most of what consulting firms charge for is actually commoditizable pattern recognition dressed up in expensive formatting. They might be right. But if they’re wrong, we’ll learn something valuable about the limits of current agent architectures and where human judgment still matters. Either way, the experiment is worth watching closely.