OpenAI Says It Has No Plans to Use Google’s In-House AI Chips at Scale
OpenAI has clarified that it currently has no plans to deploy Google’s in-house AI chips, known as Tensor Processing Units (TPUs), at scale, pushing back on earlier reports that suggested a broader partnership between the two tech giants.
OpenAI Denies Large-Scale Use of Google AI Chips
In a statement issued on Sunday, OpenAI said it is only in early testing stages with Google’s TPUs and has no immediate plans to adopt them widely for powering ChatGPT or its other AI models.
“We are conducting early testing with Google’s TPUs, but there are no active plans to use them at scale,” said an OpenAI spokesperson.
The clarification comes after a Reuters exclusive reported that OpenAI had signed up for Google Cloud services to meet growing demand for compute capacity, sparking speculation that OpenAI might adopt Google’s TPUs for production workloads.
OpenAI Continues to Rely on Nvidia and AMD AI Chips
OpenAI continues to rely heavily on Nvidia GPUs, the industry standard for AI hardware, and is also leveraging AMD’s emerging AI chip portfolio to power its expanding services, including ChatGPT and GPT-4o. In addition, OpenAI is developing its own custom AI chip, a project expected to reach the “tape-out” milestone this year. Tape-out is the point in chip design when the finalised blueprint is sent to a fabrication plant for manufacturing.
Google TPUs Gaining Traction Among Other AI Players
While OpenAI is not ready to embrace Google TPUs at scale, other major tech companies and AI startups have begun adopting them. Google has been expanding external access to its TPUs, historically reserved for internal use.
Clients now include:
- Apple
- Anthropic and Safe Superintelligence, both founded by former OpenAI leaders
This broader TPU availability has helped Google Cloud position itself as a viable alternative to Nvidia-dominated compute options, particularly amid global GPU shortages.
OpenAI and Google: A Surprising Cloud Collaboration
Even without large-scale TPU usage, OpenAI’s decision to use Google Cloud infrastructure represents a notable collaboration between two leading competitors in the AI industry. For now, however, most of OpenAI’s compute workloads still run on GPU servers from CoreWeave, a fast-growing “neocloud” provider that specialises in AI infrastructure.
Key Takeaways:
- OpenAI is testing but not deploying Google’s TPUs at scale.
- The company still uses Nvidia GPUs and AMD chips for most AI workloads.
- OpenAI’s in-house AI chip is on track to reach the tape-out milestone in 2025.
- Google TPUs are being adopted by other major players, including Apple and Anthropic.
- OpenAI’s arrangement with Google Cloud covers compute capacity only and does not signal a move to Google’s AI hardware.