
NVIDIA Partners With Computer Industry For AI Factories, Data Centers

newszii.com 2024/10/5

Leading computer manufacturers have introduced a diverse lineup of Blackwell-powered systems, showcasing Grace CPUs, NVIDIA networking, and infrastructure.

These comprehensive portfolios cover a wide spectrum of computing needs, including cloud, on-premises, embedded, and edge AI systems.

From single to multi-GPUs, x86 to Grace architecture, and air to liquid cooling options, the offerings cater to various preferences and requirements.

At COMPUTEX, NVIDIA, alongside leading computer manufacturers, introduced a range of systems powered by the NVIDIA Blackwell architecture, featuring Grace CPUs and NVIDIA networking infrastructure.

These systems aim to empower enterprises in constructing AI factories and data centers, propelling the next era of generative AI advancements.

During his COMPUTEX keynote, NVIDIA founder and CEO Jensen Huang revealed that ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn will roll out various AI systems—cloud, on-premises, embedded, and edge—utilizing NVIDIA GPUs and networking.

Huang emphasized, “The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar traditional data centers to accelerated computing and build a new type of data center — AI factories — to produce a new commodity: artificial intelligence.”

He highlighted the industry-wide preparation for Blackwell to fuel AI-driven innovation across diverse sectors.

To cater to a broad spectrum of applications, the offerings will span from single to multi-GPUs, x86 to Grace-based processors, and air to liquid-cooling technologies.

Furthermore, to expedite the development of systems of varying sizes and configurations, the NVIDIA MGX™ modular reference design platform now supports NVIDIA Blackwell products.

Notable among them is the new NVIDIA GB200 NVL2 platform, designed to boost performance for mainstream large language model inference, retrieval-augmented generation, and data processing.

GB200 NVL2 targets fast-growing market segments such as data analytics, where companies invest heavily each year.

Leveraging NVLink®-C2C interconnects and dedicated decompression engines within the Blackwell architecture, it accelerates data processing by up to 18x while achieving 8x better energy efficiency compared to x86 CPUs.
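
As a rough illustration of the decompression-heavy analytics workloads such hardware targets, the sketch below uses RAPIDS cuDF, NVIDIA's GPU dataframe library, to read a compressed Parquet file and aggregate it entirely on the GPU. The file name and column names are hypothetical, and the example makes no claim about the specific 18x figure; it simply shows the class of workload in question.

```python
# Hypothetical illustration: GPU-accelerated data processing with RAPIDS cuDF.
# The file name and column names are made up for this example.
import cudf

# Read a compressed Parquet file; decompression and parsing run on the GPU.
df = cudf.read_parquet("events.parquet")

# A typical analytics step: group and aggregate on the GPU.
summary = (
    df.groupby("region")["revenue"]
      .sum()
      .sort_values(ascending=False)
)

print(summary.head())
```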

NVIDIA MGX

In response to the ever-evolving demands of data centers worldwide, NVIDIA MGX offers computer manufacturers a versatile reference architecture, facilitating the swift and cost-efficient creation of over 100 system design configurations.

Manufacturers initiate the process by establishing a foundational system architecture for their server chassis.

They then have the flexibility to customize their system by selecting GPUs, DPUs, and CPUs tailored to specific workloads.

The adoption of MGX has witnessed significant growth, with more than 90 systems either released or in development by over 25 partners, a substantial increase from the previous year’s 14 systems from six partners.

Leveraging MGX can lead to remarkable cost savings, reducing development expenses by up to three-quarters and streamlining development timelines to as little as six months.

Notably, both AMD and Intel are endorsing the MGX architecture by introducing their CPU host processor module designs for the first time.

This includes AMD’s forthcoming Turin platform and Intel’s Xeon 6 processor with P-cores, previously codenamed Granite Rapids.

These reference designs empower server system builders to expedite development processes while ensuring consistency in both design and performance.

NVIDIA’s latest addition to the lineup, the GB200 NVL2, further integrates MGX and Blackwell technologies.

Its scale-out, single-node design supports a broad range of system configurations and networking options, allowing accelerated computing to be integrated seamlessly into existing data center infrastructure.

The GB200 NVL2 complements the existing Blackwell product range, featuring NVIDIA Blackwell Tensor Core GPUs, GB200 Grace Blackwell Superchips, and the GB200 NVL72, reinforcing NVIDIA’s commitment to modular and scalable solutions for accelerated computing.

A Unified Ecosystem

NVIDIA’s extensive partner network is a cornerstone of its ecosystem, featuring collaborations with TSMC, a global leader in semiconductor manufacturing and an integral NVIDIA foundry partner.

Additionally, prominent electronics manufacturers play a crucial role, supplying components essential to building AI-driven infrastructure.

These components encompass cutting-edge technologies, ranging from server racks to power delivery systems and advanced cooling solutions, sourced from renowned companies such as Amphenol, Asia Vital Components (AVC), Cooler Master, Colder Products Company (CPC), Danfoss, Delta Electronics, and LITEON.

This collaborative effort facilitates the swift development and deployment of new data center infrastructure, effectively catering to the evolving demands of enterprises worldwide.

This momentum is further propelled by innovative technologies like Blackwell, NVIDIA Quantum-2 or Quantum-X800 InfiniBand networking, NVIDIA Spectrum™-X Ethernet networking, and NVIDIA BlueField®-3 DPUs, seamlessly integrated into servers crafted by leading system manufacturers including Dell Technologies, Hewlett Packard Enterprise, and Lenovo.

Moreover, enterprises gain access to the NVIDIA AI Enterprise software platform, a comprehensive solution empowering them to build and operate production-grade generative AI applications with ease.

This includes leveraging NVIDIA NIM™ inference microservices, enabling seamless creation and execution of AI-driven workflows tailored to specific business needs.
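
As a rough sketch of what that looks like in practice, LLM-serving NIM microservices expose an OpenAI-compatible API, so an application can call a deployed endpoint with a standard client. The endpoint URL, model name, and API key below are placeholders for illustration, not values from the article.

```python
# Minimal sketch of calling an NVIDIA NIM inference microservice through its
# OpenAI-compatible API. Endpoint URL, model name, and API key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed locally deployed NIM endpoint
    api_key="not-needed-for-local-nim",   # placeholder; a hosted endpoint would need a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # example model served by a NIM container
    messages=[
        {"role": "user", "content": "Summarize our Q2 support tickets in three bullets."}
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```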
