Mistral’s infrastructure bet and the two strategies for winning the AI race
Mistral AI made its first acquisition this week, snapping up Paris-based infrastructure startup Koyeb to accelerate its cloud computing ambitions. The deal brings 13 employees and platform expertise to support Mistral Compute as the company targets €1 billion in revenue by end of 2026.
But this milestone raises a fundamental question about what it actually takes to win the AI race, and whether Mistral is choosing the right strategy for its size and the capital available to it.
What Elon Musk gets right
On John Collison’s Cheeky Pint podcast earlier this month, Elon Musk made perhaps the clearest case yet that the AI race is fundamentally a hardware and energy race, not a model race. His argument is simple: chip output is growing exponentially, but electricity output outside China is flat. You can’t turn chips on without power. The turbine blades needed to generate that power are made by three casting companies globally, all sold out through 2030. Fabs are maxed out. Memory prices are surging.
Musk’s response is total vertical integration. SpaceX launches the satellites and will provide orbital data centers. Tesla makes the solar panels and humanoid robots. xAI builds the models. He’s even talking about manufacturing turbine blades in-house because the supply chain can’t keep up. His prediction: within 30-36 months, space will be the cheapest place to run AI inference.
Whether or not you believe Musk’s timeline, the underlying insight is important. The companies that control their compute infrastructure will have a structural advantage over those that rent it. Google clearly agrees.
The capex giants: Google and Musk
Google is spending $175-185 billion in capex in 2026 alone — more than double the $91 billion it spent in 2025. This isn’t a one-time surge; it’s a utility buildout. Google has built its own TPU chips across seven generations, giving it a vertically integrated stack from silicon to model to distribution. Its cloud backlog has surged to $240 billion. CEO Sundar Pichai says the company remains “supply-constrained” — it is selling AI compute as fast as it can build it.
The Musk approach takes this further by going beyond the data center entirely. By merging xAI with SpaceX, he’s betting that whoever solves the energy constraint wins the AI race. The companies positioned to do this are those with existing physical infrastructure at scale: Google, Meta ($115-135 billion capex in 2026), Microsoft ($37.5 billion in a single quarter), and the Musk conglomerate.
These are trillion-dollar balance sheets making hundred-billion-dollar bets. The common thread: they all concluded that controlling infrastructure is not a distraction from AI but the actual AI race.
The partnership model: Anthropic and OpenAI
Anthropic and OpenAI have taken a different approach. Rather than building infrastructure, they’ve secured access to it through massive multi-cloud partnerships.
Anthropic has committed $30 billion in Azure compute purchases with Microsoft, signed a 1GW+ deal with Google Cloud for TPU access, and has Amazon’s $11 billion Project Rainier cluster built specifically for its workloads. Claude is now the only frontier model available across all three major cloud platforms. Nvidia and Microsoft have invested $10 billion and $5 billion respectively in Anthropic, partly to ensure it keeps buying their chips.
OpenAI has gone even further, committing to an estimated 26 gigawatts of total hardware across Nvidia, AMD, Broadcom (custom chips), and the $500 billion Stargate initiative with SoftBank and Oracle. Notably, OpenAI is also designing its own AI chips with Broadcom — 10 gigawatts of custom accelerators deploying from late 2026 through 2029. This suggests even the partnership-first players are being pulled toward vertical integration as scale demands it.
The logic of this model is compelling: let the hyperscalers fight the power and permitting wars while you focus all R&D resources on the thing that actually differentiates you — model capabilities. Anthropic maintains laser focus on frontier research and safety while Microsoft, Google, and Amazon compete to host its workloads. That’s a good position to be in.
But it comes with dependencies. When you rent your compute, your infrastructure partners are also your competitors. Microsoft hosts Claude on Azure while investing billions in OpenAI. Google provides Anthropic with TPU access while building Gemini. These partnerships work today because frontier model talent is scarce and demand exceeds supply. Whether they survive a world where infrastructure becomes the binding constraint is an open question.
Why Mistral’s bet is the hardest one
This brings us back to Mistral. The company's €1 billion capex commitment and Koyeb acquisition signal a move toward vertical integration — the same strategic direction as Google and Musk. The case for European AI sovereignty makes this strategy appealing: building European-controlled infrastructure reduces dependence on American cloud providers.
But Mistral is attempting this with 500 employees and €2.9 billion in total funding. Its infrastructure commitments now include a €1.2 billion data center investment in Sweden, the Koyeb acquisition, and the broader Mistral Compute platform buildout. Taken together, that is over 40% of its lifetime capital directed to infrastructure in a single year. Compare that resource base to the players running the same strategy: Google at $175 billion in annual capex, Meta at $115-135 billion, and Musk with SpaceX, Tesla, and xAI as complementary infrastructure pieces.
There is a meaningful difference between vertical integration from a position of strength and vertical integration from a position of resource constraint. Google can build TPUs, data centers, and frontier models simultaneously because it generates $400 billion in annual revenue. Musk can integrate across energy, launch, and compute because each company — SpaceX, Tesla, xAI — was already operating at scale in its vertical before the integration began.
Mistral has neither the revenue base nor the complementary infrastructure. Every engineer assigned to cloud platform integration is an engineer not working on model improvements. Every euro spent on data center buildout is a euro not spent on training runs. In a race where technological leadership shifts on 12-18 month cycles, this resource allocation creates real risk. That said, this may still be the only bet available to Mistral, however hard.
The European dilemma
Mistral’s choice reflects Europe’s broader strategic bind. The partnership model requires trusting American hyperscalers with your compute. For a company positioned as Europe’s AI champion with a €12.6 billion valuation that partly reflects a geopolitical premium, that dependency is uncomfortable.
But the vertical integration model requires capital that European companies simply don’t have at the scale this race demands. Google’s 2026 capex budget alone exceeds the total market capitalization of most European tech companies.
For the moment, Europe cannot replicate the American infrastructure buildout. The question is whether it can build enough sovereign capacity to matter while partnering pragmatically for the rest. Mistral may be right that it needs its own infrastructure, but it also needs to stay competitive on models, which is what made it Europe's most valuable AI company in the first place.
What comes next
Mistral Compute's pricing will be decisive. If it can't achieve meaningful premiums over AWS or Google Cloud, the infrastructure strategy becomes a cost center rather than a competitive advantage. Meanwhile, the partnership-first players face their own test: OpenAI is already designing custom silicon, and Anthropic's escalating compute commitments bind it ever more tightly to the hyperscalers. Whether the multi-cloud model proves durable or merely transitional will shape the next phase of this race.
The AI race is increasingly a capital allocation question. For Europe and for Mistral, the question is whether European customers are willing to pay a sovereignty premium for long enough that Mistral can reach the scale at which it can compete with the US tech giants.