QSFP56 · 200G · since 2018 (IEEE 802.3cd)

QSFP56, 200G PAM4, four lanes at 50G each, mostly seen in HPC and storage.

Quad SFP56: the same mechanical envelope as QSFP28 (SFF-8665), but with four lanes of 50G PAM4 in place of 25G NRZ. Most production 200G deployments are NVIDIA Spectrum + Mellanox ConnectX-6 RoCE ("InfiniBand on Ethernet") clusters, plus storage fabrics where 100G is too slow but 400G is overkill.
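
The lane arithmetic is worth making explicit. Here is a quick sanity check in plain Python (no external dependencies) of the standard 200GBASE-R rate chain: the 200 Gb/s MAC rate picks up 256b/257b transcoding and RS(544,514) FEC overhead, and the resulting 212.5 Gb/s line rate divides across four PAM4 lanes.

```python
# 200GBASE-R rate chain (IEEE 802.3bs/cd), from MAC rate to lane baud rate.
mac_rate = 200.0                         # Gb/s at the MAC

after_transcode = mac_rate * 257 / 256   # 256b/257b transcoding -> 200.78125 Gb/s
line_rate = after_transcode * 544 / 514  # RS(544,514) "KP4" FEC -> 212.5 Gb/s

per_lane = line_rate / 4                 # four lanes -> 53.125 Gb/s each
baud = per_lane / 2                      # PAM4 carries 2 bits per symbol

print(f"{line_rate} Gb/s on the wire, {per_lane} Gb/s per lane, {baud} GBd")
# -> 212.5 Gb/s on the wire, 53.125 Gb/s per lane, 26.5625 GBd
```

The output matches the 26.5625 Gbaud figure in the specifications below.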

Specifications

MSA references: SFF-8665 (mechanical) · CMIS 4.0 / 5.0 (management)
Electrical interface: 4 lanes × 26.5625 Gbaud PAM4 (50G per lane)
Power consumption: 4.5 W – 7 W per module
Signalling: PAM4
FEC: RS-FEC (544,514), a.k.a. KP4, required
Connector types: MPO-12 (SR4, DR4) · LC duplex (FR4, LR4)
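
Since management is CMIS 4.0/5.0, module monitors live in the lower memory page. The sketch below polls a few of them over I2C. It is a minimal sketch, assuming a Linux host with the `smbus2` package, the module behind i2c bus 1 (hypothetical; find the real bus via your NIC/switch documentation) at the conventional two-wire address 0x50, and the CMIS lower-page offsets (byte 0 identifier, bytes 14–15 temperature, bytes 16–17 Vcc).

```python
# Hedged sketch: poll CMIS lower-page monitors on a QSFP56 module.
# Assumes Linux i2c-dev, smbus2 installed, module at two-wire address 0x50.
from smbus2 import SMBus

I2C_ADDR = 0x50   # conventional module management address
BUS_NUM = 1       # hypothetical: depends on your host platform

with SMBus(BUS_NUM) as bus:
    # Byte 0: identifier (module type code).
    identifier = bus.read_byte_data(I2C_ADDR, 0)

    # Bytes 14-15: module temperature, signed 16-bit, 1/256 degC per LSB.
    raw = (bus.read_byte_data(I2C_ADDR, 14) << 8) | bus.read_byte_data(I2C_ADDR, 15)
    if raw >= 0x8000:          # sign-extend two's complement
        raw -= 0x10000
    temp_c = raw / 256.0

    # Bytes 16-17: supply voltage, unsigned 16-bit, 100 uV per LSB.
    vcc_raw = (bus.read_byte_data(I2C_ADDR, 16) << 8) | bus.read_byte_data(I2C_ADDR, 17)
    vcc = vcc_raw * 100e-6

    print(f"identifier 0x{identifier:02x}, temp {temp_c:.1f} degC, Vcc {vcc:.3f} V")
```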

Available SKUs

Part number     | Spec         | Wavelength | Media       | Reach  | Notes
NAP-QSFP56-SR4  | 200GBASE-SR4 | 850 nm     | MM OM4      | 100 m  | MPO-12, PAM4
NAP-QSFP56-DR4  | 200GBASE-DR4 | 1310 nm    | SM parallel | 500 m  | MPO-12
NAP-QSFP56-FR4  | 200GBASE-FR4 | LAN-WDM    | SM          | 2 km   | LC duplex
NAP-QSFP56-LR4  | 200GBASE-LR4 | LAN-WDM    | SM          | 10 km  | LC duplex
NAP-QSFP56-AOC  | 200G AOC     | 850 nm     | —           | 3–30 m | Common in HPC racks
NAP-QSFP56-DAC  | 200G DAC     | —          | Twinax      | 1–3 m  | Passive
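
As an illustration only, the reach column above can drive a first-pass SKU choice. The helper below is hypothetical, with part data mirrored from the table; nominal reach is not a link budget, so treat the answer as a starting point.

```python
# Illustrative helper only: pick a SKU from the reach tiers above.
SKUS = [
    # (part number,    media,    nominal reach in metres)
    ("NAP-QSFP56-DAC", "twinax",      3),
    ("NAP-QSFP56-AOC", "aoc",        30),
    ("NAP-QSFP56-SR4", "mm",        100),
    ("NAP-QSFP56-DR4", "sm",        500),
    ("NAP-QSFP56-FR4", "sm",      2_000),
    ("NAP-QSFP56-LR4", "sm",     10_000),
]

def pick_sku(distance_m: float, media: str) -> str:
    """Return the shortest-reach SKU of the given media that covers the link."""
    for part, sku_media, reach_m in SKUS:
        if sku_media == media and distance_m <= reach_m:
            return part
    raise ValueError(f"no {media} SKU reaches {distance_m} m")

print(pick_sku(70, "mm"))     # NAP-QSFP56-SR4
print(pick_sku(1_500, "sm"))  # NAP-QSFP56-FR4
```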

In production with

QSFP56 modules are deployed across these platforms today. Full coding details are on the compatibility matrix.

  • NVIDIA Spectrum-3 (SN4000 series)
  • Mellanox ConnectX-6 (HDR InfiniBand, 200G Ethernet)
  • Arista 7060X4, 7280R3
  • Cisco Nexus 9300-GX (200G via 100G splits + aggregation)

Operational notes

  • PAM4 signalling at 50G/lane has roughly 9 dB less SNR margin than NRZ at equivalent symbol rates, so clean fiber and low-loss connectors matter more here (see the sketch after these notes).
  • 200G AOC is the dominant interconnect in HDR InfiniBand racks for NVIDIA SuperPOD-class deployments.
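
The "roughly 9 dB" figure follows from eye closure: at equal peak-to-peak swing, PAM4 stacks three eyes where NRZ has one, so each eye shrinks by a factor of three, i.e. 20·log10(3) ≈ 9.5 dB. A one-line check:

```python
import math

# SNR penalty of L-level PAM vs NRZ at equal peak-to-peak swing:
# the eye amplitude shrinks by (L - 1), i.e. 20 * log10(L - 1) dB.
def pam_snr_penalty_db(levels: int) -> float:
    return 20 * math.log10(levels - 1)

print(f"PAM4 vs NRZ: {pam_snr_penalty_db(4):.2f} dB")  # -> 9.54 dB
```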

Recommended for

  • InfiniBand HDR (200 Gbps) on NVIDIA fabric
  • 200G RoCE storage interconnect
  • High-performance computing clusters bridging the 100G and 400G generations