Mira Murati's Thinking Machines Strikes Multiyear Nvidia Deal with Gigawatt-Scale Compute Access
Thinking Machines, founded by former OpenAI CTO Mira Murati, has secured a multiyear deal with Nvidia that includes a significant investment and at least one gigawatt of compute capacity built on next-generation Vera Rubin chips, giving the young enterprise AI startup an enormous compute runway.
Thinking Machines, the AI startup founded by former OpenAI CTO Mira Murati, has struck a multiyear deal with Nvidia that includes a significant new investment and access to at least one gigawatt of compute capacity built on next-generation Vera Rubin chips, giving the young company an enormous compute runway as it develops enterprise AI systems.
The Nvidia Partnership
The deal's most significant component is compute access: one gigawatt of Vera Rubin GPU capacity is comparable to the resources the largest AI labs use for frontier model training. For a startup still in its early stages, that level of access is extraordinary, and it signals Nvidia's belief that Thinking Machines could become a major customer for its next-generation hardware. The investment component, while not publicly disclosed, further aligns Nvidia's financial interests with the startup's success.
Murati's Vision
Murati departed OpenAI in September 2024, citing a desire to build AI systems more directly integrated into enterprise workflows. Thinking Machines is building models designed specifically for enterprise use cases: not general-purpose chatbots, but specialized systems that can reason about business data, execute complex multi-step tasks, and integrate with existing enterprise software. The company emphasizes reliability and predictability over raw capability, arguing that enterprises need systems they can trust to perform consistently rather than ones that occasionally produce brilliant results.
Competitive Positioning
Thinking Machines competes with OpenAI's enterprise offerings, Anthropic's Claude for Enterprise, and Google's Gemini for Workspace in the enterprise AI market. Murati's differentiation is vertical specialization: rather than offering a general-purpose model that enterprises adapt to their needs, Thinking Machines is building models pre-trained on industry-specific data and optimized for industry-specific tasks. The Nvidia partnership provides the compute resources needed to train these specialized models at scale.