AI ASIC Tracking – June 21, 2025
OpenAI and Google Reach Collaboration
- OpenAI is one of the major customers of Nvidia AI chips and has recently started leasing Google’s AI chips (TPUs) to support products like ChatGPT. This marks OpenAI’s first significant use of non-Nvidia chips.
- OpenAI aims to reduce inference computing costs by leasing TPUs from Google Cloud, as its computing demands are rapidly increasing with the growing number of paid and free users of ChatGPT.
- Previously, OpenAI primarily leased Nvidia server chips through Microsoft and Oracle for model development, training, and running ChatGPT, incurring substantial expenses. Its spending on AI chip servers is expected to approach $14 billion in 2025.

Google’s TPU Chips and Related Business Strategy
- Google has a long-term technology and business development strategy across AI-related software and hardware, and this collaboration reflects the effectiveness of that strategy.
- Initially, Google did not lease its most powerful TPU chips to OpenAI, reserving them mainly for its own AI team for the development of the Gemini model. It is currently unclear whether OpenAI intends to use TPUs for AI training.
- In addition to OpenAI, several companies are leasing TPUs from Google Cloud, partly because some of their employees previously worked at Google and are familiar with operating TPUs.
- Google opened TPUs to cloud customers in 2017, and this collaboration with OpenAI marks a significant advancement in expanding TPU applications.
Industry Background and Related Impacts
- Competition with Nvidia: Nvidia’s chips remain unmatched for AI training, but many companies are developing inference chips to reduce dependence on Nvidia and lower costs. The collaboration between OpenAI and Google can be seen as a challenge to Nvidia and demonstrates the industry’s push to reduce reliance on it.
- Impact on Microsoft: Microsoft, once a close partner and early backer of OpenAI, has invested heavily in developing its own AI chips, which it hoped OpenAI would use. However, setbacks in chip development have delayed the release, and even once completed, its chips may not be competitive with Nvidia’s.
- Situation of Cloud Service Providers and AI Developers: Other major cloud providers such as Amazon and Microsoft, as well as large AI developers such as OpenAI and Meta, have begun developing their own inference chips, with mixed results. When promoting alternatives to Nvidia chips, cloud providers often need to offer financial incentives to attract large customers.