As global leaders debate the sustainability of the artificial intelligence boom, companies are pouring billions of dollars into capitalising on its growth, and Nvidia is the unsurprising front-runner in that race.
The tech giant is collaborating with multiple other companies, and recently, Nvidia CEO Jensen Huang spoke about the “invaluable” contribution of Micron Technology in enabling the next generation of AI breakthroughs.
Speaking about Micron, Huang said that the company's leadership in high-performance memory is invaluable to enabling the next generation of AI breakthroughs that Nvidia is driving. He added that the mission-critical nature of Micron's memory technology could increase as the industry's focus shifts from training AI models to inference.
Micron’s critical role in high-bandwidth memory for AI
Nvidia’s GPUs remain the gold standard for powering AI models, but Jensen Huang made it clear that these chips alone cannot achieve optimal AI performance without high-speed, low-latency memory.
Generative AI and large language models in particular demand massive data throughput alongside raw processing power.
This is where Micron, as a key member of the global “memory oligopoly,” plays a critical role. Alongside Samsung and SK Hynix, Micron leads the HBM market and is the only major US-based supplier of this advanced memory technology.
Micron’s HBM3E memory, marketed as the fastest and highest-capacity memory for AI innovation, has allowed the company to surpass Samsung and become the second-largest HBM supplier globally, with a 21% market share.
Huang said that as AI shifts from just training models to actually running them in the real world, Micron’s memory tech will only grow in importance.
With AI adoption exploding, demand for high-performance memory is soaring, and there are only a handful of players who can deliver the kind of advanced chips needed to power massive AI systems.
Micron and Nvidia partnership fuels AI infrastructure advancement
Micron’s high-bandwidth memory chips are integrated into Nvidia’s latest AI-driven products, including the RTX 50 series gaming chips and Nvidia’s Blackwell AI architecture systems.
The collaboration lets Nvidia’s GPUs handle AI workloads and graphics at the same time, which opens the door to new levels of performance in both consumer and enterprise AI.
During his CES 2025 keynote, Huang noted that Micron’s memory, which can reach speeds of up to 18 terabytes per second, is critical to getting the most out of Nvidia’s AI hardware and to making future AI systems faster and more scalable.
The growing Nvidia-Micron alliance is a sign of where AI hardware is headed: GPUs and memory now have to advance in lockstep.
With the HBM market projected to be worth billions of dollars by 2030, Micron’s progress positions it as a key supplier in the AI chip world and supports Nvidia’s push to dominate both AI hardware and the broader AI infrastructure stack.
The post Nvidia’s Jensen Huang says this company is invaluable for next-gen AI breakthroughs appeared first on Invezz