NVIDIA Blackwell GPUs & Servers Launch for Enterprise AI & Robotics

Artificial Intelligence

NVIDIA is poised to significantly expand the reach of its accelerated computing platform, announcing that its new RTX PRO 6000 Blackwell Server Edition GPU will soon be integrated into enterprise servers from major providers. Cisco, Dell Technologies, HPE, Lenovo, and Supermicro are set to offer various configurations of these powerful GPUs within their 2U server lineups. This rollout aims to deliver substantial performance and efficiency gains across a spectrum of demanding applications, including advanced AI model training, sophisticated graphics rendering, complex simulations, data analytics, and critical industrial operations.

According to Jensen Huang, NVIDIA’s founder and CEO, artificial intelligence is instigating a fundamental shift in computing, a transformation not seen in six decades. What began as a cloud-centric phenomenon is now reshaping the very architecture of on-premises data centers. With the support of leading server manufacturers, NVIDIA intends for its Blackwell RTX PRO Servers to become the standard platform for enterprise and industrial AI workloads.

While millions of servers sold annually for business operations still rely predominantly on traditional CPUs, the introduction of RTX PRO Servers marks a pivotal shift towards GPU acceleration for common business workloads. NVIDIA asserts that these new Server Edition GPUs can deliver up to 45 times better performance and 18 times higher energy efficiency than CPU-only systems, dramatically boosting capabilities in analytics, simulations, video processing, and rendering. The RTX PRO line is specifically designed for companies establishing “AI factories,” where constraints on space, power, and cooling are paramount.

These servers also form the foundational infrastructure for NVIDIA’s AI Data Platform, supporting advanced storage systems. Dell, for instance, is updating its AI Data Platform to leverage NVIDIA’s architecture, with its PowerEdge R7725 servers featuring two RTX PRO 6000 GPUs, NVIDIA AI Enterprise software, and integrated NVIDIA networking. The new 2U servers, which can house up to eight GPUs, were initially unveiled at COMPUTEX in May.

At the heart of these new servers lies NVIDIA’s advanced Blackwell architecture. Key features include fifth-generation Tensor Cores and a second-generation Transformer Engine, which, with FP4 precision, can execute AI inference tasks up to six times faster than the previous L40S GPU. For visual computing, fourth-generation RTX technology provides up to four times the performance of the L40S GPU in photo rendering. The architecture also incorporates robust virtualization capabilities and NVIDIA Multi-Instance GPU technology, enabling each GPU to handle up to four separate workloads concurrently. Furthermore, improved energy efficiency helps lower overall data center power consumption.
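As a rough illustration of how the Multi-Instance GPU capability surfaces to software, here is a minimal sketch, assuming the nvidia-ml-py (pynvml) bindings are installed (the announcement does not name specific tooling), that reports whether MIG mode is enabled on each visible GPU. It only queries state; actually partitioning a GPU into the up-to-four concurrent workload instances described above is done through NVIDIA’s management tooling.

```python
# Minimal sketch: query MIG mode on each visible GPU via the NVML Python bindings.
# Assumes the nvidia-ml-py package (imported as pynvml) and an NVIDIA driver are present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            # Returns the current and pending MIG modes for this device.
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"
        print(f"GPU {i}: {name}, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```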

Beyond traditional enterprise applications, the RTX PRO Servers are engineered to power physical AI and robotics. NVIDIA’s Omniverse libraries and Cosmos world foundation models, running on these servers, facilitate complex digital twin simulations, sophisticated robot training routines, and the creation of large-scale synthetic data. They also support NVIDIA Metropolis blueprints, enabling advanced video search and summarization, alongside vision language models crucial for real-world physical environments.

NVIDIA has enhanced its Omniverse and Cosmos offerings with new Omniverse SDKs and expanded compatibility with MuJoCo (MJCF) and Universal Scene Description (OpenUSD), potentially opening robot simulation capabilities to over 250,000 MJCF developers. New Omniverse NuRec libraries introduce ray-traced 3D Gaussian splatting for constructing models from sensor data, while the updated Isaac Sim 5.0 and Isaac Lab 2.2 frameworks, available on GitHub, add neural rendering and new OpenUSD-based schemas for robots and sensors. NuRec rendering is already integrated into the CARLA autonomous vehicle simulator and adopted by companies like Foretellix for generating synthetic AV testing data. Voxel51’s FiftyOne data engine, used by automakers such as Ford and Porsche, now also supports NuRec. Prominent adopters of these libraries and frameworks include Boston Dynamics, Figure AI, Hexagon, and Amazon Devices & Services.
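The MJCF compatibility noted above refers to MuJoCo’s XML robot-description format. For context on what such an asset looks like to those 250,000-plus developers, here is a minimal sketch using the open-source mujoco Python bindings (not NVIDIA’s Omniverse importer) to load and step a trivial, invented MJCF scene.

```python
import mujoco

# A minimal, made-up MJCF (MuJoCo XML) description: one free-falling box above a plane.
MJCF_XML = """
<mujoco model="minimal">
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="box" pos="0 0 0.5">
      <joint type="free"/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MJCF_XML)
data = mujoco.MjData(model)

# Advance the simulation and report the box's height as it falls under gravity.
for _ in range(100):
    mujoco.mj_step(model, data)
print("box z-position after 100 steps:", data.body("box").xpos[2])
```

The same XML description is what an MJCF import path into OpenUSD-based tools like Isaac Sim would ingest, though the announcement does not spell out that workflow.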

The Cosmos World Foundation Models (WFMs) have seen over two million downloads, primarily aiding in the generation of synthetic training data for robots using text, image, or video prompts. The new Cosmos Transfer-2 model significantly accelerates the generation of image data from simulation scenes and spatial inputs like depth maps, with companies such as Lightwheel, Moon Surgical, and Skild AI already leveraging it for large-scale training data production. NVIDIA has also introduced Cosmos Reason, a 7-billion-parameter vision language model designed to empower robots and AI agents by integrating prior knowledge with an understanding of physics. This model can automate dataset curation, support multi-step robot task planning, and enhance video analytics systems. NVIDIA’s own robotics and DRIVE teams utilize Cosmos Reason for data filtering and annotation, while Uber and Magna have deployed it in autonomous vehicles, traffic monitoring, and industrial inspection systems.
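The announcement does not describe how Cosmos Reason is packaged or served. Purely as an illustrative sketch, the code below assumes an OpenAI-compatible chat endpoint of the kind NVIDIA’s NIM microservices typically expose, with a hypothetical URL and model name, posing the sort of physical-plausibility question a video analytics pipeline might ask.

```python
# Illustrative sketch only: endpoint URL and model identifier below are hypothetical,
# not taken from the announcement. Assumes an OpenAI-compatible chat completions API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local inference endpoint
    api_key="not-needed-for-local",       # placeholder credential
)

response = client.chat.completions.create(
    model="cosmos-reason-7b",  # hypothetical model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is the forklift's path clear of people? Answer yes or no, then explain."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/warehouse_frame.jpg"}},
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```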

For large-scale AI agent deployments, RTX PRO Servers are capable of running the newly announced Llama Nemotron Super model. When operating with NVFP4 precision on a single RTX PRO 6000 GPU, these servers offer up to three times better price-performance compared to using FP8 precision on NVIDIA’s H100 GPUs, underscoring their efficiency for demanding AI workloads.
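For readers unfamiliar with the metric, “price-performance” is simply throughput divided by cost, and the claimed factor is the ratio of two such figures. The sketch below illustrates the arithmetic with entirely hypothetical placeholder numbers, not NVIDIA benchmark data.

```python
# Illustrative only: how a price-performance comparison is typically computed.
# All figures below are hypothetical placeholders.

def price_performance(tokens_per_second: float, cost_per_hour: float) -> float:
    """Throughput delivered per unit of cost (tokens/sec per $/hour)."""
    return tokens_per_second / cost_per_hour

rtx_pro_nvfp4 = price_performance(tokens_per_second=3000.0, cost_per_hour=2.0)  # hypothetical
h100_fp8 = price_performance(tokens_per_second=4000.0, cost_per_hour=8.0)       # hypothetical

# With these placeholder inputs the ratio works out to 3.0x.
print(f"relative price-performance: {rtx_pro_nvfp4 / h100_fp8:.1f}x")
```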