QSFP56 · 200G · since 2018 (IEEE 802.3cd)
QSFP56, 200G PAM4, four lanes at 50G each, mostly seen in HPC and storage.
Quad SFP56 uses the same mechanical envelope as QSFP28 (SFF-8665), but carries four lanes of 50G PAM4 instead of 25G NRZ. Most production 200G deployments are NVIDIA Spectrum + Mellanox ConnectX-6 clusters running RoCE (RDMA over Converged Ethernet) or HDR InfiniBand, plus storage fabrics where 100G is too slow but 400G is overkill.
Specifications
| Parameter | Value |
|---|---|
| MSA references | SFF-8665 (mechanical) · CMIS 4.0 / 5.0 (management) |
| Electrical interface | 4 lanes × 26.5625 Gbaud PAM4 (50G per lane) |
| Power consumption | 4.5 W – 7 W per module |
| Signalling | PAM4 with KP4 RS-FEC |
| FEC | RS(544,514), required |
| Connector types | MPO-12 (SR4, DR4) · LC duplex (FR4, LR4) |
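As a quick sanity check on the numbers above, the per-lane baud rate follows from the 200 Gb/s MAC rate plus the 256B/257B transcoding and RS(544,514) FEC overhead defined in IEEE 802.3cd. The short Python sketch below just reproduces that arithmetic; nothing in it is measured.

```python
# Sanity-check the QSFP56 lane rate from the IEEE 802.3cd rate stack.
MAC_RATE_GBPS = 200.0          # 200GBASE-R payload rate
TRANSCODE = 257 / 256          # 256B/257B transcoding overhead
FEC = 544 / 514                # RS(544,514) "KP4" FEC overhead
LANES = 4
BITS_PER_PAM4_SYMBOL = 2

line_rate = MAC_RATE_GBPS * TRANSCODE * FEC       # total bits on the wire
per_lane_gbps = line_rate / LANES                 # per electrical/optical lane
baud_per_lane = per_lane_gbps / BITS_PER_PAM4_SYMBOL

print(f"aggregate line rate : {line_rate:.3f} Gb/s")      # ~212.5 Gb/s
print(f"per-lane bit rate   : {per_lane_gbps:.3f} Gb/s")  # ~53.125 Gb/s
print(f"per-lane baud rate  : {baud_per_lane:.4f} GBd")   # 26.5625 GBd, as in the table
```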
Available SKUs
| Part number | Spec | Wavelength | Fiber | Reach | Notes |
|---|---|---|---|---|---|
| NAP-QSFP56-SR4 | 200GBASE-SR4 | 850 nm | MM OM4 | 100 m | MPO-12, PAM4 |
| NAP-QSFP56-DR4 | 200GBASE-DR4 | 1310 nm | SM parallel | 500 m | MPO-12 |
| NAP-QSFP56-FR4 | 200GBASE-FR4 | LAN-WDM | SM | 2 km | LC duplex |
| NAP-QSFP56-LR4 | 200GBASE-LR4 | LAN-WDM | SM | 10 km | LC duplex |
| NAP-QSFP56-AOC | 200G AOC | 850 nm | — | 3–30 m | Common in HPC racks |
| NAP-QSFP56-DAC | 200G DAC | — | Twinax | 1–3 m | Passive |
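The reach and media columns above map onto a simple selection rule. The sketch below is purely illustrative: the part numbers are the ones in the table, the helper itself is hypothetical, and it ignores real-world factors such as existing fiber plant or patch-panel loss.

```python
# Illustrative SKU picker based on the reach/media columns above.
def pick_qsfp56_sku(reach_m: float, media: str) -> str:
    """media: 'mmf', 'smf', 'aoc', or 'dac' (in-rack copper)."""
    if media == "dac" and reach_m <= 3:
        return "NAP-QSFP56-DAC"          # passive twinax, 1-3 m
    if media == "aoc" and reach_m <= 30:
        return "NAP-QSFP56-AOC"          # active optical cable, 3-30 m
    if media == "mmf" and reach_m <= 100:
        return "NAP-QSFP56-SR4"          # 850 nm over OM4, MPO-12
    if media == "smf":
        if reach_m <= 500:
            return "NAP-QSFP56-DR4"      # parallel SMF, MPO-12
        if reach_m <= 2_000:
            return "NAP-QSFP56-FR4"      # LAN-WDM, LC duplex
        if reach_m <= 10_000:
            return "NAP-QSFP56-LR4"      # LAN-WDM, LC duplex
    raise ValueError("no 200G QSFP56 SKU covers this reach/media combination")

print(pick_qsfp56_sku(400, "smf"))   # -> NAP-QSFP56-DR4
```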
In production with
QSFP56 modules are deployed on the platforms below; a minimal vendor-coding read-out sketch follows the list, and full coding details are on the compatibility matrix.
- NVIDIA Spectrum-3 (SN4000 series)
- Mellanox ConnectX-6 (HDR InfiniBand, 200G Ethernet)
- Arista 7060X4, 7280R3
- Cisco Nexus 9300-GX (200G via 100G splits + aggregation)
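A minimal sketch for checking how a module is coded in the field, assuming a Linux host with root access and a CMIS-managed QSFP56 module behind the interface. The byte offsets follow my reading of the CMIS lower/upper page 00h layout (identifier at byte 0, vendor name at 129-144, vendor part number at 148-163) and should be verified against the CMIS revision your modules report; the interface name is illustrative.

```python
# Dump a module's EEPROM with ethtool and pull out the vendor-coding fields.
import subprocess

def read_module_eeprom(iface: str, length: int = 256) -> bytes:
    # `ethtool -m <iface> raw on` writes the raw EEPROM bytes to stdout
    out = subprocess.run(
        ["ethtool", "-m", iface, "raw", "on", "length", str(length)],
        check=True, capture_output=True,
    )
    return out.stdout

def vendor_coding(eeprom: bytes) -> dict:
    # Offsets per the CMIS 4.0/5.0 memory map (assumption, see lead-in above).
    return {
        "identifier": hex(eeprom[0]),                                   # SFF-8024 code
        "vendor_name": eeprom[129:145].decode("ascii", "replace").strip(),
        "vendor_pn": eeprom[148:164].decode("ascii", "replace").strip(),
    }

if __name__ == "__main__":
    print(vendor_coding(read_module_eeprom("eth0")))   # interface name is illustrative
```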
Operational notes
PAM4 signalling at 50G/lane has roughly 9 dB less SNR margin than NRZ at equivalent symbol rates, so clean fiber and low-loss connectors matter more here; the arithmetic behind that figure is sketched below.
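The penalty comes from PAM4 packing four levels into the same voltage swing NRZ uses for two, which cuts each eye to roughly one third of the NRZ eye height. The sketch below evaluates 20·log10(3) and a toy optical loss budget; the budget, per-connector, and per-metre loss figures are illustrative placeholders, not values from the IEEE channel specifications.

```python
# Why PAM4 gives up SNR margin: with the same peak-to-peak swing, PAM4's three
# stacked eyes are each ~1/3 the height of the single NRZ eye.
import math

pam4_penalty_db = 20 * math.log10(3)   # ~9.54 dB, the "roughly 9 dB" quoted above
print(f"PAM4 vs NRZ eye-height penalty: {pam4_penalty_db:.2f} dB")

# Toy loss-budget check for a short MMF link (all numbers are placeholders).
budget_db = 1.9                 # assumed total channel insertion-loss allowance
fiber_loss_db_per_m = 0.003     # assumed MMF attenuation at 850 nm
connectors = 2
loss_per_connector_db = 0.5     # assumed per mated MPO pair
link_m = 100

total_loss = link_m * fiber_loss_db_per_m + connectors * loss_per_connector_db
status = "within" if total_loss <= budget_db else "over"
print(f"estimated channel loss: {total_loss:.2f} dB "
      f"({status} the assumed {budget_db} dB budget)")
```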
200G AOC is the dominant interconnect in HDR InfiniBand racks for NVIDIA SuperPOD-class deployments.
Recommended for
- InfiniBand HDR (200 Gbps) on NVIDIA fabric
- 200G RoCE storage interconnect
- High-performance computing clusters bridging the 100G and 400G generations