
Nvidia’s Game-Changing AI Chips: Meet the Blackwell Ultra GB300 and Vera Rubin

March 19, 2025

Nvidia is making waves in the AI world with its latest announcement: two groundbreaking AI GPUs. If you’re keeping an eye on tech advancements, you’ll want to know about the Blackwell Ultra GB300, which is hitting the market later this year, and the Vera Rubin, set for release in the latter half of 2026. These chips are part of Nvidia’s ongoing quest to stay at the forefront of AI technology—a field where they’re currently raking in an impressive $2,300 in profit every second.
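To put that per-second figure in perspective, here is a quick back-of-the-envelope calculation. It is purely illustrative and uses only the $2,300-per-second number cited above:

```python
# Back-of-the-envelope: what does $2,300 of profit per second imply per year?
# Illustrative only; the only input is the per-second figure cited in this article.

profit_per_second = 2_300            # USD per second
seconds_per_year = 365 * 24 * 3600   # roughly 31.5 million seconds in a year

annual_profit = profit_per_second * seconds_per_year
print(f"Implied annual profit: ${annual_profit / 1e9:.1f} billion")
# -> Implied annual profit: $72.5 billion
```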

At the recent GTC keynote, Nvidia revealed the Vera Rubin architecture, which it says will deliver 3.3 times the performance of the Blackwell Ultra. The announcement follows Nvidia's earlier commitment to an annual release cadence for its AI chips, a sign that the company intends to keep the upgrades coming.

The Blackwell Ultra GB300 is a significant upgrade over the original Blackwell, offering 20 petaflops of AI performance and 288GB of HBM3e memory, up from 192GB on its predecessor. For anyone working with large models or data sets, that extra memory matters. The Blackwell Ultra DGX GB300 "Superpod" cluster, meanwhile, packs 300TB of memory, a sizable leap over the previous generation.
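As a rough illustration of why 288GB of on-chip memory matters, here is a simple sketch of how much memory model weights alone consume at different precisions. The model sizes and precisions below are hypothetical examples, not figures from Nvidia:

```python
# Rough sketch: how much memory do model weights alone consume at different precisions?
# The model sizes below are hypothetical examples, not Nvidia figures.

HBM_PER_GPU_GB = 288  # Blackwell Ultra GB300 memory cited in the article

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just for the weights, ignoring KV cache and activations."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (70, 180, 400):                          # hypothetical model sizes, in billions of parameters
    for precision, nbytes in (("FP8", 1), ("FP4", 0.5)):
        need = weight_memory_gb(params, nbytes)
        fits = "fits" if need <= HBM_PER_GPU_GB else "needs multiple GPUs"
        print(f"{params}B params @ {precision}: {need:.0f} GB -> {fits} in {HBM_PER_GPU_GB} GB")
```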

Nvidia is also emphasizing the Blackwell Ultra’s superiority over the 2022 H100 chip, particularly in enhancing AI reasoning. A standout feature is the NVL72 cluster’s ability to process 1,000 tokens per second—much faster than the previous generation.
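To see what 1,000 tokens per second means in practice, here is a quick illustrative calculation; the response length used below is an arbitrary example, not an Nvidia benchmark:

```python
# What does 1,000 tokens per second mean for response time?
# Illustrative arithmetic; the response length below is an arbitrary example.

tokens_per_second = 1_000      # NVL72 figure cited in the article
response_tokens = 500          # hypothetical length of a chatbot answer

seconds_per_token = 1 / tokens_per_second
response_time = response_tokens * seconds_per_token
print(f"{seconds_per_token * 1000:.0f} ms per token, "
      f"~{response_time:.1f} s for a {response_tokens}-token answer")
# -> 1 ms per token, ~0.5 s for a 500-token answer
```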

For those looking for a desktop solution, Nvidia’s got you covered with the DGX Station. This desktop computer features a single GB300 Blackwell Ultra chip, delivering 20 petaflops of AI performance, 784GB of unified system memory, and 800Gbps networking. Expect to see versions of this desktop from major players like Asus, Dell, and HP.
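For a sense of what 800Gbps networking buys you, here is a rough estimate of how long it would take to move the DGX Station's full 784GB of memory over that link. It is idealized, ignoring protocol overhead and assuming the link is fully saturated:

```python
# Rough estimate: time to move 784 GB over an 800 Gbps link.
# Idealized; ignores protocol overhead and assumes the link is fully saturated.

memory_gb = 784                 # DGX Station unified memory cited in the article
link_gbps = 800                 # networking speed cited in the article

link_gigabytes_per_second = link_gbps / 8          # 800 Gbit/s -> 100 GB/s
transfer_seconds = memory_gb / link_gigabytes_per_second
print(f"~{transfer_seconds:.1f} s to move {memory_gb} GB at {link_gbps} Gbps")
# -> ~7.8 s to move 784 GB at 800 Gbps
```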

Looking ahead, the Vera Rubin and Rubin Ultra chips are set to push the envelope even further. Rubin offers 50 petaflops of FP4 compute, while Rubin Ultra, due in 2027, doubles that to 100 petaflops and pairs it with 1TB of memory. A full NVL576 rack of Rubin Ultra promises 15 exaflops of FP4 inference and 5 exaflops of FP8 training.
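Lining up the per-chip numbers quoted in this article gives a rough sense of the scaling. Note that the Blackwell Ultra figure is quoted simply as "AI performance" rather than explicitly as FP4, so this is an approximate comparison, not an official Nvidia table:

```python
# Generation-over-generation per-chip scaling, using only figures quoted in this article.

petaflops_per_chip = {
    "Blackwell Ultra GB300": 20,   # "AI performance" figure cited above; precision not specified
    "Rubin": 50,                   # petaflops of FP4 per chip
    "Rubin Ultra": 100,            # petaflops of FP4 per chip
}

prev = None
for name, pf in petaflops_per_chip.items():
    note = f" ({pf / prev:.1f}x the previous generation)" if prev else ""
    print(f"{name}: {pf} PF{note}")
    prev = pf
# Blackwell Ultra GB300: 20 PF
# Rubin: 50 PF (2.5x the previous generation)
# Rubin Ultra: 100 PF (2.0x the previous generation)
```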

In 2025 alone, Nvidia has already generated $11 billion in revenue from its Blackwell chips, with the top four buyers snapping up 1.8 million chips. Nvidia’s CEO, Jensen Huang, highlighted the growing demand for computing power, noting that the industry needs “100 times more” than previously anticipated.

Looking even further into the future, Nvidia has announced that its next architecture, planned for 2028, will be named Feynman, honoring the renowned physicist Richard Feynman. It's clear that Nvidia is not just keeping pace with the future of AI; they're setting the stage for it.

 
