Purpose-built to solve the world’s largest computing problems.

Accelerate the Largest AI, HPC, Cloud, and Hyperscale Workloads

AI models are exploding in complexity and size as they enhance deep recommender systems containing tens of terabytes of data, improve conversational AI with hundreds of billions of parameters, and enable scientific discoveries. Scaling these massive models requires new architectures with fast access to a large pool of memory and a tight coupling of the CPU and GPU. The NVIDIA Grace™ CPU delivers high performance, power efficiency, and high-bandwidth connectivity that can be used in diverse configurations for different data center needs.

Learn how the NVIDIA Grace CPU is paving the fast lane to energy-efficient computing for every data center.

Get the Latest News on NVIDIA Superchips

Learn how NVIDIA Grace CPUs are powering the latest large-memory supercomputers.

GH200 Grace Hopper Superchip

NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip Platform

World’s first HBM3e processor offers groundbreaking memory and bandwidth for the era of accelerated computing and generative AI.

NVIDIA and SoftBank

NVIDIA and SoftBank Reinvent 5G Data Centers With Generative AI

Arm-based NVIDIA Grace Hopper™ Superchip, BlueField®-3 DPU, and Aerial™ SDK power a revolutionary architecture for generative AI and 5G/6G communications.

UK Research Alliance, GW4, Arm, HPE

New Wave of Energy-Efficient Supercomputers

Take a look at the latest energy-efficient Arm supercomputers for climate science, medical research, and more, powered by the NVIDIA Grace CPU.

NVIDIA GH200 Grace Hopper Superchips Are in Full Production

GH200-powered systems join 400+ system configurations that global systems makers are rolling out to meet the surging demand for generative AI.

Creating Accelerated Data Centers Faster With NVIDIA MGX

Learn how QCT and Supermicro are adopting modular designs to quickly and cost-effectively build multiple data center configurations for a wide range of AI, high-performance computing (HPC) and 5G applications.

Massive Shared GPU Memory for Giant AI Models

Learn how NVIDIA Grace Hopper Superchips are powering a new class of large-memory supercomputers for emerging AI.

Take a Look at the Grace Lineup of Superchips

NVIDIA Grace Hopper Superchip

The NVIDIA Grace Hopper™ Superchip combines the Grace and Hopper architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and high-performance computing (HPC) applications.

NVIDIA Grace CPU Superchip

The NVIDIA Grace CPU Superchip uses the NVLink-C2C technology to deliver 144 Arm® Neoverse V2 cores and 1 terabyte per second (TB/s) of memory bandwidth.

Explore Grace Reference Designs for Modern Data Center Workloads

System designs for digital twins, AI, and high-performance computing.

OVX-Digital Twins & Omniverse

System design for digital twins and NVIDIA Omniverse™.

NVIDIA Grace CPU Superchip
NVIDIA BlueField®-3

HGX-HPC

System design for HPC.

NVIDIA Grace CPU Superchip
NVIDIA BlueField-3
OEM-defined input/output (IO)

HGX-AI Training, Inference & HPC

System design for AI training, inference, and HPC.

NVIDIA Grace Hopper Superchip
NVIDIA BlueField-3
OEM-defined IO / fourth-generation NVLink

Learn More About the Latest Technical Innovations

Accelerate CPU-to-GPU Connections With NVLink-C2C

Solving the largest AI and HPC problems requires high-capacity, high-bandwidth memory (HBM). NVIDIA NVLink-C2C delivers 900 gigabytes per second (GB/s) of bidirectional bandwidth between the NVIDIA Grace CPU and NVIDIA GPUs. The connection provides a unified, cache-coherent memory address space that combines system and HBM GPU memory for simplified programmability. This coherent, high-bandwidth connection between the CPU and GPU is key to accelerating tomorrow’s most complex problems.
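The "simplified programmability" of a cache-coherent address space can be illustrated with a minimal CUDA sketch (a hypothetical example, not taken from NVIDIA's materials). On a coherent CPU+GPU platform such as Grace Hopper, memory obtained from an ordinary malloc can be passed directly to a GPU kernel, with no separate device allocation or explicit staging copies; on a conventional PCIe system without such coherency, the same code would instead need cudaMalloc and cudaMemcpy.

```cuda
#include <cstdio>
#include <cstdlib>

// Kernel that increments each element of a plain C array in place.
__global__ void inc(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;

    // On a coherent system (e.g., Grace Hopper over NVLink-C2C), memory from
    // ordinary malloc is visible to the GPU through the unified address
    // space -- no cudaMalloc or explicit cudaMemcpy staging is required.
    int *data = (int *)malloc(n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;

    // Launch the kernel directly on the CPU-allocated buffer.
    inc<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[42] = %d\n", data[42]);  // expect 43 on a coherent system

    free(data);
    return 0;
}
```

The design point this sketch highlights is that coherency moves data placement decisions from the programmer to the hardware: the CPU and GPU read and write one pool of memory, and NVLink-C2C keeps their caches consistent.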

Harness High-Bandwidth CPU Memory With LPDDR5X

NVIDIA Grace is the first server CPU to harness LPDDR5X memory with server-class reliability through mechanisms such as error-correcting code (ECC). It meets the demands of the data center while delivering 2X the memory bandwidth and up to 10X better energy efficiency than today’s server memory. Coupled with NVIDIA Grace’s large, high-performance last-level cache, the LPDDR5X solution delivers the bandwidth necessary for large models while reducing system power to maximize performance for next-generation workloads.

Boost Performance and Efficiency With Arm Neoverse V2 Cores

As the parallel compute capabilities of GPUs continue to advance, workloads can still be gated by serial tasks run on the CPU. A fast and efficient CPU is a critical component of system design to enable maximum workload acceleration. The NVIDIA Grace CPU integrates Arm Neoverse V2 cores with the NVIDIA Scalable Coherency Fabric to deliver high performance in a power-efficient design, making it easier for scientists and researchers to do their life’s work.

Supercharge Generative AI With HBM3 and HBM3e GPU Memory

Generative AI is memory- and compute-intensive. The NVIDIA GH200 Grace Hopper Superchip features 96GB of HBM3 memory, delivering over 2.5X the GPU memory bandwidth of the NVIDIA A100 Tensor Core GPU. The HBM3e version of GH200 provides 141GB of memory and over 3X the bandwidth of the A100. The high-bandwidth GPU memory in Grace Hopper is connected to the CPU memory over NVLink-C2C, giving the GPU fast access to over 600GB of memory and delivering the capacity and bandwidth needed to handle the world’s most complex accelerated computing and generative AI workloads.

Meet Our Partners

Atos
Dell
Fii
GIGABYTE
H3C
Hewlett Packard Enterprise
Inspur
Lenovo
Nettrix

Explore More Resources

Grace Featured Content

Take a look at the newest wave of energy-efficient Arm supercomputers powered by the NVIDIA Grace CPU.