
Microsoft Azure today introduced the new NDv6 GB300 VM series, delivering the industry’s first supercomputing-scale production cluster of NVIDIA GB300 NVL72 systems, purpose-built for OpenAI’s most demanding AI inference workloads.
This supercomputer-scale cluster features over 4,600 NVIDIA Blackwell Ultra GPUs connected via the NVIDIA Quantum-X800 InfiniBand networking platform. Microsoft’s distinctive systems approach applied radical engineering to memory and networking to provide the massive scale of compute required to achieve high inference and training throughput for reasoning models and agentic AI systems.
Today’s achievement is the result of years of deep partnership between NVIDIA and Microsoft, purpose-building AI infrastructure for the world’s most demanding AI workloads and for the next frontier of AI. It marks another leadership moment, helping ensure that cutting-edge AI drives innovation in the United States.
“Delivering the industry’s first at-scale NVIDIA GB300 NVL72 production cluster for frontier AI is an achievement that goes beyond powerful silicon: it reflects Microsoft Azure and NVIDIA’s shared commitment to optimize all parts of the modern AI data center,” said Nidhi Chappell, corporate vice president of Microsoft Azure AI Infrastructure.
“Our collaboration helps ensure customers like OpenAI can deploy next-generation infrastructure at unprecedented scale and speed.”
Inside the Engine: The NVIDIA GB300 NVL72
At the heart of Azure’s new NDv6 GB300 VM series is the liquid-cooled, rack-scale NVIDIA GB300 NVL72 system. Each rack is a powerhouse, integrating 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs into a single, cohesive unit to accelerate training and inference for massive AI models.
The system provides a staggering 37 terabytes of fast memory and 1.44 exaflops of FP4 Tensor Core performance per VM, creating a massive, unified memory space essential for reasoning models, agentic AI systems and complex multimodal generative AI.
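Taken together with the cluster size quoted earlier, these per-rack figures imply some simple aggregate numbers. The sketch below is back-of-envelope arithmetic only, assuming the cluster totals scale linearly across racks (nominal peak figures, not measured performance):

```python
# Back-of-envelope cluster arithmetic from the figures quoted in the article.
# Assumption: cluster-level totals scale linearly across racks (illustrative only).

GPUS_TOTAL = 4608            # Blackwell Ultra GPUs in the cluster
GPUS_PER_RACK = 72           # per GB300 NVL72 rack
FAST_MEM_TB_PER_RACK = 37    # terabytes of fast memory per rack-scale VM
FP4_EXAFLOPS_PER_RACK = 1.44 # FP4 Tensor Core performance per rack-scale VM

racks = GPUS_TOTAL // GPUS_PER_RACK
print(racks)                                     # 64 racks
print(racks * FAST_MEM_TB_PER_RACK)              # 2368 TB (~2.4 PB) of fast memory
print(round(racks * FP4_EXAFLOPS_PER_RACK, 1))   # 92.2 exaflops FP4 (nominal peak)
```

So "over 4,600 GPUs" corresponds to roughly 64 NVL72 racks, with a nominal aggregate on the order of 2.4 petabytes of fast memory and about 92 FP4 exaflops.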
NVIDIA Blackwell Ultra is supported by the full-stack NVIDIA AI platform, including collective communication libraries that tap into new formats like NVFP4 for breakthrough training performance, as well as technologies like NVIDIA Dynamo for the best inference performance in reasoning AI.
The NVIDIA Blackwell Ultra platform excels at both training and inference. In the recent MLPerf Inference v5.1 benchmarks, NVIDIA GB300 NVL72 systems delivered record-setting performance using NVFP4. Results included up to 5x higher throughput per GPU on the 671-billion-parameter DeepSeek-R1 reasoning model compared with the NVIDIA Hopper architecture, along with leadership performance on all newly introduced benchmarks, such as the Llama 3.1 405B model.
The Fabric of a Supercomputer: NVLink Switch and NVIDIA Quantum-X800 InfiniBand
To connect over 4,600 Blackwell Ultra GPUs into a single, cohesive supercomputer, Microsoft Azure’s cluster relies on a two-tiered NVIDIA networking architecture designed for both scale-up performance within the rack and scale-out performance across the entire cluster.
Within each GB300 NVL72 rack, the fifth-generation NVIDIA NVLink Switch fabric provides 130 TB/s of direct, all-to-all bandwidth between the 72 Blackwell Ultra GPUs. This transforms the entire rack into a single, unified accelerator with a shared memory pool, a critical design for massive, memory-intensive models.
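The rack-level figure can be sanity-checked against the per-GPU NVLink bandwidth. A minimal sketch, assuming the 130 TB/s is divided evenly across the 72 GPUs:

```python
# Sanity check: per-GPU share of the rack-level NVLink fabric bandwidth.
# Assumption: the quoted 130 TB/s aggregate divides evenly across 72 GPUs.

NVLINK_RACK_TB_PER_S = 130   # TB/s all-to-all across the NVL72 rack (quoted above)
GPUS_PER_RACK = 72

per_gpu_tb_per_s = NVLINK_RACK_TB_PER_S / GPUS_PER_RACK
print(round(per_gpu_tb_per_s, 2))   # ~1.81 TB/s per GPU
```

That works out to roughly 1.8 TB/s per GPU, consistent with fifth-generation NVLink’s per-GPU bandwidth.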
To scale beyond the rack, the cluster uses the NVIDIA Quantum-X800 InfiniBand platform, purpose-built for trillion-parameter-scale AI. Featuring NVIDIA ConnectX-8 SuperNICs and Quantum-X800 switches, the platform provides 800 Gb/s of bandwidth per GPU, ensuring seamless communication across all 4,608 GPUs.
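The per-GPU figure implies a large aggregate scale-out bandwidth. The sketch below is illustrative arithmetic only, assuming each GPU’s 800 Gb/s injection bandwidth simply adds up (it ignores topology, oversubscription and protocol overhead):

```python
# Illustrative scale-out bandwidth arithmetic for the InfiniBand tier.
# Assumption: per-GPU injection bandwidth sums linearly across the cluster;
# real achievable bandwidth depends on topology and traffic pattern.

GPUS = 4608
GBIT_PER_GPU = 800                    # Gb/s of InfiniBand bandwidth per GPU

total_gbit = GPUS * GBIT_PER_GPU      # aggregate injection bandwidth in Gb/s
total_tbyte = total_gbit / 8 / 1000   # convert Gb/s -> TB/s (8 bits per byte)

print(total_gbit)         # 3686400 Gb/s
print(round(total_tbyte)) # ~461 TB/s aggregate injection bandwidth
```

In other words, the scale-out tier offers on the order of 3.7 Pb/s (about 460 TB/s) of nominal aggregate injection bandwidth across the cluster.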
Microsoft Azure’s cluster also uses NVIDIA Quantum-X800’s advanced adaptive routing, telemetry-based congestion control and performance isolation capabilities, as well as NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) v4, which accelerates collective operations to significantly boost the efficiency of large-scale training and inference.
Driving the Future of AI
Delivering the world’s first production NVIDIA GB300 NVL72 cluster at this scale required a reimagining of every layer of Microsoft’s data center, from custom liquid cooling and power distribution to a reengineered software stack for orchestration and storage.
This latest milestone marks a major step forward in building the infrastructure that will unlock the future of AI. As Azure scales toward its goal of deploying hundreds of thousands of NVIDIA Blackwell Ultra GPUs, even more innovations are poised to emerge from customers like OpenAI.
Learn more about this announcement on the Microsoft Azure blog.

