NVIDIA ConnectX InfiniBand Adapters

Enhancing Top Supercomputers and Clouds

Leveraging higher data rates and innovative In-Network Computing, NVIDIA ConnectX InfiniBand smart adapters achieve extreme performance and scale. NVIDIA ConnectX lowers cost per operation, increasing ROI for high-performance computing (HPC), machine learning, advanced storage, clustered databases, low-latency embedded I/O applications, and more.


NVIDIA ConnectX-7 400Gb/s InfiniBand


The ConnectX-7 smart host channel adapter (HCA), featuring the NVIDIA Quantum-2 InfiniBand architecture, provides the highest networking performance available to take on the world’s most challenging workloads. ConnectX-7 combines ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing acceleration engines, delivering the scalability and feature-rich technology needed for supercomputers, artificial intelligence, and hyperscale cloud data centers.
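As a rough illustration of checking what adapters and link rates a deployed host actually has, the sketch below reads the standard Linux sysfs entries for the InfiniBand device class. It assumes a Linux host with the rdma-core drivers loaded; device names such as mlx5_0 vary per system, and the loop simply reports zero devices on hosts without InfiniBand hardware.

```shell
#!/bin/sh
# Sketch: enumerate InfiniBand HCAs and their active link rate via Linux sysfs.
# Assumes rdma-core drivers are loaded; finds nothing (count stays 0) on hosts
# without InfiniBand hardware.
count=0
for dev in /sys/class/infiniband/*; do
    [ -d "$dev" ] || continue           # glob did not match: no HCAs present
    count=$((count + 1))
    name=$(basename "$dev")
    for port in "$dev"/ports/*; do
        [ -d "$port" ] || continue
        # "rate" reports the negotiated speed, e.g. "400 Gb/sec (4X NDR)"
        echo "$name port $(basename "$port"): $(cat "$port/rate" 2>/dev/null)"
    done
done
echo "InfiniBand devices found: $count"
```

The same information is also exposed by the `ibstat` and `ibv_devinfo` utilities from the infiniband-diags and rdma-core packages.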

NVIDIA ConnectX-6 200Gb/s InfiniBand


The ConnectX-6 smart host channel adapter (HCA), featuring the NVIDIA Quantum InfiniBand architecture, delivers high performance and NVIDIA In-Network Computing acceleration engines for maximizing efficiency in HPC, artificial intelligence, cloud, hyperscale, and storage platforms.



NVIDIA ConnectX-5 100Gb/s InfiniBand

The ConnectX-5 smart host channel adapter (HCA) with intelligent acceleration engines enhances HPC, machine learning, data analytics, cloud, and storage platforms. With support for two ports of 100Gb/s InfiniBand and Ethernet network connectivity, PCIe Gen3 and Gen4 server connectivity, a very high message rate, an integrated PCIe switch, and NVMe over Fabrics offloads, ConnectX-5 is a high-performance and cost-effective solution for a wide range of applications and markets.

NVIDIA Mellanox ConnectX-4 VPI EDR/100GbE

ConnectX-4 Virtual Protocol Interconnect (VPI) smart adapters support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity. Providing data centers with high-performance, flexible solutions for HPC, cloud, database, and storage platforms, ConnectX-4 smart adapters combine 100Gb/s bandwidth in a single port with the lowest available latency, a rate of 150 million messages per second, and application hardware offloads.

NVIDIA Mellanox ConnectX-3 Pro VPI FDR and 40/56GbE

ConnectX-3 Pro smart adapters with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity with hardware offload engines for overlay networks ("tunneling"). ConnectX-3 Pro delivers high performance and flexibility for PCI Express Gen3 servers deployed in public and private clouds, enterprise data centers, and high-performance computing.


OCP Adapters

The Open Compute Project (OCP) defines a mezzanine form factor that features best-in-class efficiency to enable the highest data center performance.

NVIDIA Multi-Host Solutions

The innovative NVIDIA Multi-Host® technology allows multiple compute or storage hosts to connect to a single network adapter.

NVIDIA Socket Direct Adapters

NVIDIA Socket Direct® technology enables direct PCIe access to multiple CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus.
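To see why PCIe locality matters, the hedged sketch below reports which NUMA node (CPU socket) each RDMA device is attached to on a Linux host; traffic from CPUs on the other socket would otherwise cross the inter-processor bus. The sysfs paths are standard, but this is illustrative only and prints a notice on hosts without InfiniBand hardware.

```shell
#!/bin/sh
# Sketch: report the NUMA node each RDMA device is attached to via Linux sysfs.
# A value of -1 means the platform did not report locality; absence of matches
# means no InfiniBand hardware is present.
found=0
for node_file in /sys/class/infiniband/*/device/numa_node; do
    [ -f "$node_file" ] || continue
    found=1
    # Path is .../infiniband/<hca>/device/numa_node; strip two components.
    hca=$(basename "$(dirname "$(dirname "$node_file")")")
    echo "$hca is local to NUMA node $(cat "$node_file")"
done
if [ "$found" -eq 0 ]; then
    echo "no InfiniBand devices present"
fi
```

Pinning applications to the adapter-local socket (for example with `numactl`) is the usual way to exploit this locality when Socket Direct hardware is not available.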


See how you can build the most efficient, high-performance network.

Configure Your Cluster

Take Networking Courses

Ready to Purchase?