OpenAI Names Infrastructure Lead for Stargate and Expands AI Server Leasing

OpenAI has revised its Stargate infrastructure strategy, lowering projected compute spending to about $600 billion by 2030 and shifting from self-built facilities to leased cloud capacity amid financing pressure.

Summary

OpenAI has cut projected compute spending for Stargate to about $600 billion by 2030 and moved away from building its own data centers in favor of leasing capacity from AWS and Google Cloud. The company cited financing pressure for the change, has exited talks on a Texas expansion, and is now targeting gigawatt-scale capacity built on Nvidia's Vera Rubin platform in the second half of 2026. The update extends OpenAI's broader infrastructure reorganization and its growing reliance on leased AI server capacity from major cloud providers.

Terms & Concepts
  • AWS: Amazon Web Services, a major cloud computing platform that provides on-demand infrastructure and processing capacity.
  • Google Cloud: Google’s cloud computing platform, which offers data storage, networking, and large-scale compute services.
  • Nvidia Vera Rubin: Nvidia's upcoming accelerated computing platform, intended for advanced AI workloads such as large-scale model training and inference.