NVIDIA Hopper H100 GPU Enters Full Production, Ada Lovelace Comes to L40 Server GPU, Grace CPU Superchip Detailed Further


NVIDIA’s GTC 2022 keynote was overshadowed by the gaming announcements made earlier today, which are definitely worth checking out here, but during the main GTC keynote, CEO Jensen Huang revealed new products such as the Ada Lovelace L40 GPU and the OVX and IGX systems, and confirmed that Hopper H100 GPUs are now in full production.

NVIDIA Shows Availability of Hopper H100, Ada Lovelace L40 GPU, IGX/OVX Systems, and Grace CPU Superchips at GTC 2022

Starting with the flagship Hopper chip, NVIDIA has confirmed that the H100 GPU is now in full production, and its partners will roll out the first wave of products in October this year. It has also been confirmed that Hopper’s global rollout will consist of three phases: the first consists of pre-orders for NVIDIA DGX H100 systems and hands-on labs offered to customers directly by NVIDIA, with systems such as Dell’s PowerEdge servers now available on NVIDIA LaunchPad.

NVIDIA Hopper in full production

Phase 2 will include major OEM partners who will begin shipping in the coming weeks with over 50 servers available in the market by the end of the year. Finally, the company expects dozens more to enter the market by the first half of 2023.

Hopper Global Rollout

For customers who want to try the new technology immediately, NVIDIA has announced that the H100 on Dell PowerEdge servers is now available on NVIDIA LaunchPad, which offers free hands-on labs, giving businesses access to the latest NVIDIA AI hardware and software.

Customers can also start ordering NVIDIA DGX H100 systems, which include eight H100 GPUs and deliver 32 petaflops of performance at FP8 precision. NVIDIA Base Command and NVIDIA AI Enterprise software power each DGX system, enabling deployments that scale from a single node up to an NVIDIA DGX SuperPOD, supporting advanced AI development of large language models and other massive workloads.
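As a rough sanity check on that 32-petaflop figure, here is a minimal Python sketch. It assumes roughly 4 petaflops of FP8 peak throughput (with sparsity) per H100 SXM GPU, a number NVIDIA has published separately, and simply scales it to the eight GPUs in a DGX H100:

```python
# Back-of-envelope check of the DGX H100 FP8 figure.
# Assumption: ~4 PFLOPS FP8 peak per H100 SXM (with sparsity), per NVIDIA's spec sheet.
FP8_PER_H100_PFLOPS = 4.0   # assumed per-GPU FP8 peak, petaflops
GPUS_PER_DGX = 8            # eight H100 GPUs per DGX H100 system

dgx_fp8_pflops = FP8_PER_H100_PFLOPS * GPUS_PER_DGX
print(f"DGX H100 aggregate FP8: ~{dgx_fp8_pflops:.0f} PFLOPS")  # ~32 PFLOPS
```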

H100-powered systems from the world’s leading computer manufacturers are expected to ship in the coming weeks, with more than 50 server models on the market by the end of the year and dozens more in the first half of 2023. Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, and Supermicro.

Additionally, some of the world’s largest higher education and research institutions will use the H100 to power their next-generation supercomputers. Among them are the Barcelona Supercomputing Center, the Los Alamos National Lab, the Swiss National Supercomputing Center (CSCS), the Texas Advanced Computing Center, and the University of Tsukuba.


The NVIDIA L40, powered by the Ada Lovelace architecture

The second major announcement relates to the L40 GPU, a product aimed at the data center segment and based on the recently announced Ada Lovelace GPU architecture. The full specifications of the L40 GPU are not yet known, but it comes with 48 GB of GDDR6 (ECC) memory, four DisplayPort 1.4a outputs, a 300W TBP, and a dual-slot passive cooler measuring 4.4″ x 10.5″. The board is powered by a single 16-pin CEM5 connector.

The NVIDIA L40 GPU supports all major vGPU software such as NVIDIA vPC/vApps and NVIDIA RTX Virtual Workstation (vWS), and comes with NEBS Level 3 support as well as Secure Boot with root of trust. The most notable aspect of this product is that it features three AV1 encoding units and three decoding units, an advantage over the RTX 6000 and the GeForce RTX 40 series graphics cards, which carry dual AV1 encoding engines.

GPU architecture: NVIDIA Ada Lovelace architecture
GPU memory: 48 GB GDDR6 with ECC
Display connectors: 4x DP 1.4a
Max power consumption: 300W
Form factor: 4.4″ (H) x 10.5″ (W), dual slot
Thermal: Passive
vGPU software support: NVIDIA vPC/vApps, NVIDIA RTX Virtual Workstation (vWS)
NVENC | NVDEC: 3x | 3x (includes AV1 encode and decode)
Secure Boot with root of trust: Yes
NEBS ready: Yes / Level 3
Power connector: 1x PCIe CEM5 16-pin

Grace Hopper Superchip is ideal for next-generation recommender systems

NVIDIA also shared more details on its Grace Hopper Superchip, which it says is ideal for recommender systems.

NVLink Accelerates Grace Hopper

Grace Hopper achieves this because it is a superchip – two chips in one unit, sharing a super-fast chip-to-chip interconnect. It’s an Arm-based NVIDIA Grace CPU and Hopper GPU that communicate via NVIDIA NVLink-C2C. Additionally, NVLink also connects many superchips into a supersystem, a computing cluster designed to run terabyte-class recommender systems.

NVLink-C2C transports data at a whopping 900 gigabytes per second, 7 times the bandwidth of PCIe Gen 5, the interconnect most upcoming systems will use. This means Grace Hopper can feed recommenders 7x more of the embeddings (context-rich data tables) they need to personalize results for users.
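As a quick back-of-envelope check, the short Python sketch below reproduces the roughly 7x ratio. It assumes about 128 GB/s of bidirectional bandwidth for a PCIe Gen 5 x16 link, since NVIDIA does not state which link width it compares against:

```python
# Rough comparison of NVLink-C2C and PCIe Gen 5 x16 bandwidth.
NVLINK_C2C_GBPS = 900        # NVIDIA's quoted total bandwidth, GB/s
PCIE_GEN5_X16_GBPS = 128     # assumed: ~128 GB/s bidirectional for a Gen 5 x16 link

ratio = NVLINK_C2C_GBPS / PCIE_GEN5_X16_GBPS
print(f"NVLink-C2C vs PCIe Gen 5 x16: ~{ratio:.1f}x")  # ~7x
```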

More memory, more efficiency

The Grace processor uses LPDDR5X, a type of memory that achieves the optimal balance of bandwidth, power efficiency, capacity, and cost for recommender systems and other demanding workloads. It provides 50% more bandwidth while using one-eighth the power per gigabyte of traditional DDR5 memory subsystems.
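Taken at face value, those two ratios imply a large jump in bandwidth per watt. The Python sketch below is a back-of-envelope illustration only, under the assumption that the 50% bandwidth gain and the one-eighth power-per-gigabyte figure can simply be combined, which NVIDIA does not state explicitly:

```python
# Illustrative only: combine NVIDIA's two stated ratios for LPDDR5X vs DDR5.
# Assumption: the ratios are independent and can be multiplied.
bandwidth_ratio = 1.5   # "50% more bandwidth"
power_ratio = 1 / 8     # "one-eighth the power per gigabyte"

bandwidth_per_watt_gain = bandwidth_ratio / power_ratio
print(f"Implied bandwidth-per-watt gain: ~{bandwidth_per_watt_gain:.0f}x")  # ~12x
```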

Any Hopper GPU in a cluster can access Grace’s memory over NVLink, a Grace Hopper feature that provides some of the largest GPU memory pools ever. Additionally, NVLink-C2C requires only 1.3 picojoules per bit transferred, giving it more than 5 times the power efficiency of PCIe Gen 5.
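To put the 1.3 picojoules per bit in perspective, the sketch below (Python, treating the quoted figure as exact and ignoring protocol and controller overhead) estimates the interconnect energy needed to move one terabyte over NVLink-C2C:

```python
# Rough estimate of NVLink-C2C transfer energy from the quoted 1.3 pJ/bit.
# Assumption: figure applied directly, no protocol or controller overhead.
PJ_PER_BIT = 1.3
BITS_PER_TERABYTE = 8 * 10**12

joules_per_tb = PJ_PER_BIT * 1e-12 * BITS_PER_TERABYTE
print(f"Energy to move 1 TB over NVLink-C2C: ~{joules_per_tb:.1f} J")  # ~10.4 J
```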

The overall result, according to NVIDIA, is that recommenders get up to 4x more performance and greater efficiency using Grace Hopper than using Hopper with traditional CPUs.


NVIDIA Announces OVX Computer Systems

NVIDIA has also revealed its all-new OVX systems, which use up to eight of the Ada Lovelace-based L40 GPUs mentioned above along with enhanced networking technology to deliver breakthrough real-time graphics, AI, and digital twin simulation capabilities. OVX systems with L40 GPUs are expected to hit the market in early 2023 from leading partners such as Inspur, Lenovo, and Supermicro.

NVIDIA also introduced its IGX platform, an advanced AI computing platform designed specifically for industrial and medical environments.

Powering the new OVX systems are NVIDIA L40 GPUs, also based on the Ada Lovelace GPU architecture, which deliver the highest levels of power and performance for building complex industrial digital twins.

The L40 GPU’s third-generation RT Cores and fourth-generation Tensor Cores will deliver powerful capabilities to Omniverse workloads running on OVX, including accelerated ray-traced and path-traced rendering of materials, physically accurate simulations, and the generation of photorealistic 3D synthetic data. The L40 will also be available in NVIDIA-Certified Systems servers from major OEM vendors to power RTX workloads from the data center.

In addition to the L40 GPU, the new NVIDIA OVX includes the ConnectX-7 SmartNIC, delivering improved network and storage performance and the precision time synchronization required for realistic digital twins. ConnectX-7 includes support for 200G networking on every port and fast inline data encryption to speed up data movement and increase security for digital twins.

