
Nvidia's effective $20 billion acquisition of Groq, an AI chip startup focused on inference, is unlikely to affect only the AI chip market.

Many analysts say the deal is also worth watching for its impact on high-performance memory such as HBM and on cooling technology, both of which have emerged as variables in the AI infrastructure market.

Tech newsletter UncoverAlpha recently summarized five main reasons for Nvidia's Groq acquisition in a report.

They were: energy bottlenecks; HBM bottlenecks; CoWoS bottlenecks (Chip-on-Wafer-on-Substrate, TSMC's packaging technology that places chips on a wafer and then bonds them to a substrate, bundling multiple high-performance chips into one package); liquid-cooled data center bottlenecks; and competition.

Unlike Nvidia's latest Blackwell GPUs, the LPU (Language Processing Unit) provided by Groq does not require liquid cooling.

UncoverAlpha said air-cooled data centers still far outnumber liquid-cooled ones. Because Blackwell and many of Nvidia's future products aim for maximum performance, they will be designed mostly for liquid cooling, it said, while much of the cloud market runs in air-cooled data centers that cannot switch to liquid cooling.

"Nvidia would want all data centers to run on liquid cooling, but reality is different," UncoverAlpha stressed. "Liquid cooling adds complexity, which can make it difficult for many data center operators. Nvidia's reliance on liquid-cooled data centers could also become a growth problem. That is why an air-cooling option is important for Nvidia. AWS's Trainium, which is emerging as an alternative to Nvidia GPUs, is also air-cooled."

The acquisition also appears to be drawing strong interest from experts from the perspective of HBM (High Bandwidth Memory), a key component of Nvidia's AI chips.

Aakash Gupta, who runs an AI product management newsletter and podcast, wrote on X (formerly Twitter) that Nvidia's acquisition of Groq was a strategy to hedge supply chain risks stemming from shortages in the DRAM market.

Nvidia GPUs depend heavily on HBM. Each H100 includes 80 GB of HBM3, and B200 systems require even larger capacity. That makes rising DRAM prices a direct variable for Nvidia's GPU costs and production capacity, not just for PC makers, Gupta explained.
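Gupta's point can be made concrete with a back-of-envelope sketch. The H100 capacity figure comes from the article; the B200 capacity (192 GB of HBM3E) is widely reported; the price-per-GB values are purely illustrative assumptions, not actual market prices:

```python
# Back-of-envelope sketch: how a DRAM/HBM price rise feeds into GPU cost.
# Capacities: H100 figure per the article; B200 figure as widely reported.
# Price-per-GB values are illustrative assumptions only.

def hbm_cost(capacity_gb: float, price_per_gb: float) -> float:
    """Estimated HBM bill-of-materials cost for one accelerator."""
    return capacity_gb * price_per_gb

h100_hbm_gb = 80    # HBM3, per the article
b200_hbm_gb = 192   # HBM3E, widely reported (assumption)

base_price = 10.0    # $/GB, illustrative assumption
spiked_price = 15.0  # $/GB after a hypothetical 50% DRAM price rise

for name, cap in [("H100", h100_hbm_gb), ("B200", b200_hbm_gb)]:
    before = hbm_cost(cap, base_price)
    after = hbm_cost(cap, spiked_price)
    print(f"{name}: ${before:,.0f} -> ${after:,.0f} (+${after - before:,.0f})")
```

Because HBM capacity per accelerator keeps growing generation over generation, the same percentage rise in DRAM pricing hits each new Nvidia product harder in absolute terms.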

As Google, AWS, and AMD also expand their AI chip programs and buy more HBM, supply bottlenecks have already appeared, and the situation is worsening.

Groq LPUs, by contrast, do not require HBM. Through Groq, Nvidia has effectively secured an opportunity to increase infrastructure sales in the AI inference market without being constrained by HBM supply issues.

Groq chips have no external memory and instead use embedded SRAM, allowing them to be made on older process nodes. "Because Groq chips have no external memory, they do not need the highest-density transistors to run at high speed," UncoverAlpha said. "In fact, Groq's latest-generation LPU is produced on GlobalFoundries' 14 nm process node. The ability to make high-performance chips on older nodes, outside TSMC, is another major advantage for a company like Nvidia. It bypasses another bottleneck: TSMC and CoWoS. With the Groq acquisition, Nvidia opened a new growth path that does not face the constraints confronting Blackwell and others," it reported.
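The trade-off behind that design is capacity: on-chip SRAM is far smaller than HBM, so model weights must be sharded across many chips. A rough sketch, using the roughly 230 MB of SRAM per chip that Groq has reported for its first-generation LPU (the model size and 8-bit weight format are illustrative assumptions):

```python
# Sketch of the SRAM-vs-HBM capacity trade-off behind Groq's LPU design.
# SRAM-per-chip (~230 MB) is Groq's reported first-generation figure;
# the 70B-parameter model at 1 byte/param is an illustrative assumption.
import math

def chips_needed(model_params_billions: float, bytes_per_param: float,
                 memory_gb_per_chip: float) -> int:
    """Minimum chips required to hold all weights in on-device memory."""
    weight_gb = model_params_billions * bytes_per_param  # billions of params -> GB
    return math.ceil(weight_gb / memory_gb_per_chip)

SRAM_PER_LPU_GB = 0.23  # ~230 MB per chip, as reported for the 14 nm LPU

# Hypothetical 70B-parameter model stored at 1 byte per parameter (8-bit):
print(chips_needed(70, 1.0, SRAM_PER_LPU_GB))  # -> 305 (hundreds of LPUs)
print(chips_needed(70, 1.0, 80))               # -> 1 (one 80 GB HBM GPU)
```

The arithmetic shows why the approach sidesteps the HBM and CoWoS supply chain entirely: capacity comes from scaling out cheap, older-node chips rather than from advanced memory stacks packaged next to the die.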

It added, "Nvidia knows well that if the HBM, energy, liquid cooling, and CoWoS bottlenecks squeeze the market and lead to a severe shortage of computing resources, customers and competitors will start looking for alternatives that bypass them. Groq, which does not face supply chain bottlenecks from the same factors, is the most likely candidate. So Nvidia made the move itself before Meta or Microsoft could acquire Groq and open up an alternative outside GPUs."

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.