
Google Cloud launches two new AI chips to compete with Nvidia

Google Cloud has announced the launch of two new generations of Tensor Processing Units (TPUs), marking a significant escalation in its competition with Nvidia for dominance in the AI compute market.

Daily Neural Digest Team · April 23, 2026 · 6 min read · 1,094 words
This article was generated by Daily Neural Digest's autonomous neural pipeline — multi-source verified, fact-checked, and quality-scored.

The News

Google Cloud has announced the launch of two new generations of Tensor Processing Units (TPUs), marking a significant escalation in its competition with Nvidia for dominance in the AI compute market [1]. Unveiled at Google Cloud Next, the eighth-generation TPUs are designed to accelerate workloads related to the burgeoning “agentic era” of AI, a period characterized by increasingly autonomous and complex AI systems [4]. While specific architecture and performance metrics remain undisclosed, Google emphasized that the new TPUs offer improved speed and cost-effectiveness compared to their predecessors [1]. Notably, despite this internal chip development, Google continues to utilize Nvidia GPUs within its cloud infrastructure, a strategic decision that underscores the complexity of the current AI hardware landscape [1]. The announcement was made during a private gathering in Las Vegas, highlighting its strategic importance [3].

The Context

The development of these new TPUs is rooted in a decade-long collaboration between Google and Nvidia [2]. This partnership has involved co-engineering a full-stack AI platform, encompassing optimized libraries, frameworks, and cloud services [2]. While Google leverages Nvidia’s GPUs for certain workloads, its commitment to custom silicon—specifically TPUs—represents a long-term strategy to reduce reliance on external vendors and gain greater control over its AI infrastructure [3]. The decision to develop TPUs wasn’t sudden; as VentureBeat notes, "One chip a year wasn’t enough" [3]. Google’s internal need for specialized hardware stems from its extensive use of AI across core products like Search, Gmail, and Google Docs, all running on the same infrastructure as Google Cloud Platform (GCP). GCP provides modular services including computing, data storage, analytics, and machine learning, alongside management tools.

The rise of large language models (LLMs) and the subsequent demand for compute power has created a bottleneck in the AI ecosystem [3]. Most AI labs now ration electricity and compute, primarily purchased from suppliers like Nvidia at premium prices [3]. This has led to what VentureBeat calls the "Nvidia tax," referring to Nvidia’s near-monopoly position in high-end AI accelerators and its substantial gross margins [3]. Google’s TPU development is partly an attempt to circumvent this "tax" and gain a competitive edge in AI services [3]. The seventh-generation Ironwood TPU, launched in 2025, laid the groundwork for this eighth-generation release, but the shift toward agentic AI necessitated further architectural evolution [4]. Agentic AI, as described by Ars Technica, demands increased efficiency and specialized capabilities to manage complex workflows and adapt to dynamic environments [4]. The new TPU architecture is designed to meet these demands, though details remain scarce [4].

The popularity of models like NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 (1,437,787 downloads) and NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 (1,174,684 downloads) from HuggingFace demonstrates the ongoing reliance on Nvidia’s ecosystem, despite Google’s TPU efforts. Similarly, the popularity of NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 (752,985 downloads) highlights demand for efficient inference solutions. These figures, while not directly comparable to TPU usage data (which is not publicly available), illustrate Nvidia’s entrenched position in the AI development community.

Why It Matters

The launch of the new TPUs has several layers of impact, affecting developers, enterprises, and the broader AI ecosystem. For developers and engineers, TPUs present both opportunities and potential friction [1]. While they offer performance gains for specific AI workloads, they require adaptation of existing codebases and a learning curve for those unfamiliar with TPU architecture [1]. The shift to TPUs may necessitate retraining models and optimizing algorithms to fully leverage their capabilities, potentially increasing development time and costs initially [1].
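The adaptation cost described above can be modest when code targets a backend-agnostic framework. A minimal sketch in JAX (the function name and shapes here are illustrative, not drawn from any cited source) shows why: the same jit-compiled function runs unchanged on CPU, GPU, or TPU, with the XLA compiler handling the hardware-specific lowering.

```python
import jax
import jax.numpy as jnp

@jax.jit
def scaled_dot(a, b):
    # A single fused matmul; XLA compiles this for whichever
    # backend (cpu, gpu, or tpu) the runtime detects.
    return jnp.dot(a, b) / jnp.sqrt(a.shape[-1])

a = jnp.ones((4, 8))
b = jnp.ones((8, 4))
out = scaled_dot(a, b)

# Reports "tpu" on a Cloud TPU VM; "cpu" or "gpu" elsewhere.
print(jax.default_backend())
print(out.shape)
```

Where code instead depends on CUDA-specific kernels or GPU-only libraries, the migration effort is correspondingly larger, which is part of the friction the sources describe.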

Enterprises and startups stand to benefit from potential cost savings associated with TPUs [3]. By reducing reliance on Nvidia GPUs, Google Cloud can offer more competitive pricing for AI compute services, potentially lowering the barrier to entry for smaller companies and fostering innovation [3]. However, migrating workloads to a different hardware platform can be complex and disruptive, requiring careful planning and execution [1]. Furthermore, the specialized nature of TPUs may limit their applicability to a narrower range of AI tasks compared to the more general-purpose capabilities of GPUs [1].

The emergence of Google’s enhanced TPU capabilities creates a bifurcated landscape. Nvidia remains the dominant force for a wide range of AI tasks, particularly those requiring maximum flexibility and broad software support [2]. Google, meanwhile, is carving out a niche by offering a cost-effective and potentially more efficient alternative for specific workloads aligned with its internal AI strategies [3]. This competition is likely to benefit consumers by driving down prices and spurring innovation across the AI hardware market [3].

The NVIDIA Omniverse AI Animal Explorer Extension, while seemingly unrelated, exemplifies the broader trend of AI integration across industries. Described simply as a tool for creating 3D animal meshes, with no pricing disclosed, it highlights the expanding applications of AI beyond traditional machine learning tasks. This diversification further underscores the need for specialized hardware like TPUs to support increasingly complex AI workloads [4].

The Bigger Picture

The launch of these new TPUs fits into a broader trend of hyperscale cloud providers developing custom silicon to reduce costs and gain greater control over their infrastructure [3]. Amazon Web Services (AWS) and Microsoft Azure have also invested in custom chips, albeit with varying degrees of public visibility [3]. This trend signals a move away from reliance on third-party hardware vendors like Nvidia and toward a more vertically integrated AI ecosystem [3]. The competition between Google, Nvidia, AWS, and Microsoft is intensifying, driving innovation and ultimately benefiting consumers [3].

Looking ahead to the next 12–18 months, the AI hardware landscape is likely to see further specialization and diversification [4]. We can expect more companies to develop custom chips tailored to specific AI workloads, and a continued blurring of the lines between hardware and software [4]. The rise of agentic AI will likely accelerate this trend, as the need for specialized hardware to support increasingly complex systems becomes more pressing [4]. The ongoing development of frameworks like NVIDIA's Python-based NeMo (16,855 stars and 3,357 forks on GitHub) demonstrates the continued emphasis on software optimization to maximize hardware utilization.

Current GPU pricing on platforms like Vast.ai, RunPod, and Lambda Labs, though aggregate data remains undisclosed, reflects ongoing demand and supply constraints in the AI hardware market. This dynamic is likely to remain a key factor influencing the adoption of TPUs and other alternative compute solutions [3].


References

[1] TechCrunch — Original article — https://techcrunch.com/2026/04/22/google-cloud-next-new-tpu-ai-chips-compete-with-nvidia/

[2] NVIDIA Blog — NVIDIA and Google Cloud Collaborate to Advance Agentic and Physical AI — https://blogs.nvidia.com/blog/google-cloud-agentic-physical-ai-factories/

[3] VentureBeat — Google doesn't pay the Nvidia tax. Its new TPUs explain why. — https://venturebeat.com/orchestration/google-doesnt-pay-the-nvidia-tax-its-new-tpus-explain-why

[4] Ars Technica — Google unveils two new TPUs designed for the "agentic era" — https://arstechnica.com/ai/2026/04/google-unveils-two-new-tpus-designed-for-the-agentic-era/
