Nvidia Unveils Game-Changing ‘Blackwell’ Chip with 30x Faster AI

Touted as the “world’s most powerful AI chip,” Nvidia’s newly unveiled Blackwell B200 GPU is the company’s latest push at the boundaries of AI computing. Designed to bring trillion-parameter AI models within reach of more businesses, the new GPU could change AI as we know it.

The Blackwell platform stands out for how efficiently it runs large language models (LLMs): Nvidia claims up to 25x lower cost and energy consumption than the previous generation. Its GPU architecture is built on six new technologies that accelerate computation across domains such as data processing, engineering simulation, and generative AI.

Nvidia Blackwell Architecture (Via: Nvidia)

Much of the credit for that performance goes to the chip’s 208 billion transistors. For perspective, its predecessor, the H100, packed only 80 billion. Compared to the H100, Nvidia says the new GPU is up to 25x more cost- and energy-efficient for LLM inference.

B200 vs the H100 (Via: Nvidia)

The Blackwell chip is fabricated on a custom TSMC 4NP process, and its support for 4-bit floating-point (FP4) AI inference lets it double both the compute throughput and the model sizes it can handle compared with the previous generation.
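To get a feel for why 4-bit weights translate into larger models in the same memory footprint, here is a minimal back-of-the-envelope sketch in Python. The memory budget and the assumption that only weight storage matters are illustrative simplifications, not figures from Nvidia.

```python
# Rough estimate of how many model parameters fit in a fixed memory budget
# at different weight precisions. Purely illustrative; ignores activations,
# KV cache, and other runtime overhead.

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def max_params_trillions(memory_gb: float, precision: str) -> float:
    """Roughly how many trillion parameters fit in `memory_gb` of memory."""
    return (memory_gb * 1e9) / BYTES_PER_PARAM[precision] / 1e12

if __name__ == "__main__":
    memory_gb = 192  # hypothetical per-GPU memory budget, for illustration only
    for precision in ("FP16", "FP8", "FP4"):
        print(f"{precision}: ~{max_params_trillions(memory_gb, precision):.2f}T parameters")
```

Halving the bits per weight doubles the number of parameters that fit in the same memory, which is the sense in which FP4 inference “doubles” the model sizes the hardware can serve.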

The new Blackwell B200 GPU delivers up to 20 petaflops of FP4 compute. The GB200 “superchip,” which pairs two B200 GPUs with a Grace CPU, promises a substantial improvement in energy efficiency and up to 30x faster LLM inference.

GB200 superchip (Via: Nvidia)

Thanks to these upgrades and the next-generation NVLink switch, which lets up to 576 GPUs communicate with one another, Nvidia has reached previously unseen levels of AI performance. The GB200 NVL72 and other larger GB200-based configurations show how seriously Nvidia is pursuing that goal.

GB200 NVL72 (Via: Nvidia)

A single rack housing 72 B200 GPUs and 36 Grace CPUs can provide 1,440 petaflops of FP4 inference or 720 petaflops of AI training compute. Nvidia’s DGX SuperPOD with DGX GB200 systems scales that up to an astounding 11.5 exaflops (11,500 petaflops) of FP4 processing, along with 240 TB of memory, 288 CPUs, and 576 GPUs.
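The rack and SuperPOD numbers follow directly from the per-GPU figure quoted above; the quick sanity check below redoes the arithmetic, assuming the SuperPOD combines eight NVL72 racks as the article describes in the next paragraph.

```python
# Sanity-check the rack and SuperPOD figures from the per-GPU numbers above.
# Assumes eight NVL72 racks per DGX SuperPOD, as described in the article.

fp4_pflops_per_gpu = 20       # B200: up to 20 petaflops of FP4 compute
gpus_per_rack = 72            # GB200 NVL72
cpus_per_rack = 36
racks_per_superpod = 8

rack_pflops = fp4_pflops_per_gpu * gpus_per_rack
print(rack_pflops)                          # 1440 petaflops of FP4 inference per rack
print(rack_pflops * racks_per_superpod)     # 11520 petaflops, i.e. ~11.5 exaflops
print(gpus_per_rack * racks_per_superpod)   # 576 GPUs
print(cpus_per_rack * racks_per_superpod)   # 288 CPUs
```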

FP4 and FP6 (Via: Nvidia)

With the DGX SuperPOD, which combines eight DGX GB200 systems in one, Nvidia also offers a one-stop solution for high-performance computing. It is arguably the most straightforward option for a business that wants to integrate AI into its regular operations.

Blackwell doesn’t compromise on security either: AI models and customer data can stay encrypted without a meaningful hit to performance.

Blackwell-based systems will be available from partners later this year, although Nvidia has not yet made clear which rack configurations will ship first. We can also expect Nvidia to bring the Blackwell architecture to its gaming GPUs, potentially the forthcoming RTX 50 series, which is expected to launch in late 2024 or early 2025.
