Mira Murati’s Thinking Machines Lab Signs a Multi-Billion Dollar Deal With Google

In a deal valued in the single-digit billions, former OpenAI CTO Mira Murati's Thinking Machines Lab has signed an agreement with Google Cloud to expand its use of Google Cloud's AI infrastructure, including systems powered by Nvidia's latest GB300 chips.

The contract makes Thinking Machines the third frontier AI developer to line up for Google's Blackwell and TPU capacity this month, after Anthropic and Meta. The agreement is not exclusive, so Thinking Machines can continue to use other cloud providers alongside Google.

Thinking Machines is among the first Google Cloud customers to gain access to systems powered by Nvidia’s GB300 NVL72 chips, which utilize the Blackwell Ultra architecture and are integrated into liquid-cooled A4X Max instances.

The deal reveals more about what Thinking Machines is actually building than anything the company has said publicly. Google noted in its press release that it can support the startup’s reinforcement learning workloads, which Tinker’s architecture relies on. 

Reinforcement learning is the training approach that has underpinned recent breakthroughs at DeepMind and OpenAI, and the scale of the Google Cloud deal reflects how computationally expensive that work can get. 
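
To make the cost intuition concrete, here is a minimal, purely illustrative REINFORCE loop on a two-armed bandit; it has nothing to do with Tinker's actual stack. The point is that every parameter update needs freshly sampled rollouts, and at frontier scale those rollouts are full model generations scored by reward models, which is where the GPU bill comes from.

```python
# Illustrative only: a toy REINFORCE loop. Each update requires fresh rollouts
# (here, bandit pulls); in frontier RL those rollouts are full LLM generations,
# which is what drives the compute demand described above.
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.8])   # hypothetical arm payoffs
logits = np.zeros(2)                  # policy parameters
lr = 0.1

for step in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()    # softmax policy
    action = rng.choice(2, p=probs)                  # sample a rollout (one arm pull)
    reward = rng.normal(true_rewards[action], 0.1)   # environment feedback
    # REINFORCE: increase log-probability of the sampled action in proportion to reward
    grad = -probs
    grad[action] += 1.0
    logits += lr * reward * grad

print("learned policy:", np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```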

Google Cloud’s infrastructure includes a rail-aligned topology for dedicated network links between GPUs, which minimizes data "hops" and boosts workload performance. This system uses RoCE technology and a combination of Nvidia ConnectX and Google's internal Titanium NICs to manage traffic and accelerate throughput.
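
As a rough illustration of why hop count matters, the toy model below (not Google's actual fabric) assigns GPUs to rails by their index within a server and compares the hop count for same-rail versus cross-rail peers; collective operations that stay on a rail traverse fewer switches.

```python
# Toy model only: rail-aligned GPUs reach their peer on the same rail in fewer
# network hops than traffic that has to cross an extra switch tier.
def hops(rail_a: int, rail_b: int, same_rail: int = 1, cross_rail: int = 3) -> int:
    """Return a hypothetical hop count between two GPUs identified by rail index."""
    return same_rail if rail_a == rail_b else cross_rail

# Two servers with 8 GPUs each; rail index = GPU index within the server.
gpus = [(server, gpu_idx) for server in range(2) for gpu_idx in range(8)]
aligned = [hops(a[1], b[1]) for a in gpus for b in gpus if a != b and a[1] == b[1]]
crossed = [hops(a[1], b[1]) for a in gpus for b in gpus if a != b and a[1] != b[1]]

print("avg hops, same-rail peers: ", sum(aligned) / len(aligned))
print("avg hops, cross-rail peers:", sum(crossed) / len(crossed))
```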

Beyond the core compute, Thinking Machines is also using Google Cloud services such as Cloud Storage, the Spanner relational database, and Cluster Director for automated remediation of technical issues, helping ensure high reliability and performance for its workloads.
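
For readers unfamiliar with how these services usually slot into a training pipeline, the sketch below shows one plausible pattern: checkpoints land in Cloud Storage while their metadata is recorded in Spanner for fast lookup. The bucket, instance, database, and table names are invented for illustration; nothing here reflects Thinking Machines' actual configuration.

```python
# Hypothetical sketch using the standard Google Cloud Python clients; all names
# below are invented and say nothing about Thinking Machines' real setup.
from google.cloud import spanner, storage

def save_checkpoint(local_path: str, run_id: str, step: int) -> str:
    """Upload a model checkpoint to Cloud Storage and return its gs:// path."""
    client = storage.Client()
    bucket = client.bucket("example-training-checkpoints")   # hypothetical bucket
    object_name = f"{run_id}/step_{step:08d}.pt"
    bucket.blob(object_name).upload_from_filename(local_path)
    return f"gs://example-training-checkpoints/{object_name}"

def record_checkpoint(gcs_path: str, run_id: str, step: int) -> None:
    """Record checkpoint metadata in a hypothetical Spanner table for lookup."""
    spanner_client = spanner.Client()
    database = spanner_client.instance("example-instance").database("example-db")

    def txn(transaction):
        transaction.execute_update(
            "INSERT INTO Checkpoints (RunId, Step, GcsPath) "
            "VALUES (@run_id, @step, @gcs_path)",
            params={"run_id": run_id, "step": step, "gcs_path": gcs_path},
            param_types={
                "run_id": spanner.param_types.STRING,
                "step": spanner.param_types.INT64,
                "gcs_path": spanner.param_types.STRING,
            },
        )

    database.run_in_transaction(txn)

path = save_checkpoint("/tmp/step_1000.pt", run_id="run-42", step=1000)
record_checkpoint(path, run_id="run-42", step=1000)
```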

The multi-billion-dollar deal solidifies Thinking Machines as a major player, aligning it with other frontier AI developers like Anthropic and Meta in securing long-term access to Google's next-generation compute capacity. For Google, it is a strategic move to lock in revenue and stay a top-tier cloud provider.

Alphabet’s 2026 capex guidance sits near $200 billion, and Google Cloud’s revenue backlog more than doubled last year to $240 billion. What you are watching is a hyperscaler prepaying its capacity with the labs most likely to fill it. 

Murati founded Thinking Machines in February 2025, raised a $2 billion seed round at a $12 billion valuation, and launched Tinker, a tool that automates the creation of custom frontier AI models, in October 2025. She remains one of the most closely watched figures in the industry, and this deal is the clearest signal yet that what she is building requires frontier-scale compute to match.

The deal between Thinking Machines Lab and Google Cloud is a major transaction that offers deep insight into the strategic and technical demands of the next generation of artificial intelligence development.

Technical and Infrastructure Depth

The partnership is focused on providing Thinking Machines with the specialized, high-performance computing resources required for its reinforcement learning workloads.

The deal's significance extends beyond the technical specifications, offering a view of the financial, competitive, and security landscape in frontier AI. The agreement supports the computationally intensive reinforcement learning (RL) work that underpins Thinking Machines' product, Tinker. While RL is a training approach that has led to major breakthroughs (like those at DeepMind and OpenAI), it is an "expensive and riskier way to tune AI models".

Tinker automates the creation of custom AI models, which broadens access to reinforcement learning technology. That reach has raised security concerns: one report indicated a security researcher was able to use the tool to make a 235-billion-parameter model generate harmful content for less than $40.

This suggests that security efforts must now extend beyond simple output filters to address potential vulnerabilities during the training and fine-tuning process.
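
One deliberately simplified illustration of what that shift looks like is screening fine-tuning data before it ever reaches the trainer. The blocklist check below is a hypothetical stand-in for a real safety classifier and is not drawn from Tinker or any other product.

```python
# Illustrative sketch only: pushing safety checks into the fine-tuning pipeline
# rather than relying solely on output filters at inference time. The blocklist
# and screen_example function are hypothetical stand-ins for a real classifier.
from dataclasses import dataclass

BLOCKLIST = {"build a weapon", "synthesize a pathogen"}  # hypothetical categories

@dataclass
class TrainingExample:
    prompt: str
    completion: str

def screen_example(example: TrainingExample) -> bool:
    """Return True if the example is safe to include in a fine-tuning dataset."""
    text = f"{example.prompt} {example.completion}".lower()
    return not any(phrase in text for phrase in BLOCKLIST)

dataset = [
    TrainingExample("How do I bake bread?", "Start with flour, water, yeast..."),
    TrainingExample("Explain how to build a weapon", "First, acquire..."),
]
clean = [ex for ex in dataset if screen_example(ex)]
print(f"kept {len(clean)} of {len(dataset)} examples for fine-tuning")
```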
