If you need MCX755106AC-HEAT products in volume, contact us via our toll-free line / WhatsApp: (+86) 151-0113-5020, or request a quote in the online chat, and our sales manager will get back to you shortly.
Title
NVIDIA ConnectX-7 MCX755106AC-HEAT Dual-Port 200GbE / NDR200 InfiniBand Network Adapter
Keywords
NVIDIA ConnectX-7, MCX755106AC-HEAT, 200Gb Ethernet, NDR200 InfiniBand, dual-port QSFP112, PCIe5 x16 NIC, RoCE/RDMA networking, data center NIC
Description
The MCX755106AC-HEAT is a high-performance network interface card from NVIDIA’s ConnectX-7 family, combining both 200Gb Ethernet and NDR200 InfiniBand capabilities in one dual-port QSFP112 adapter. It’s designed for data center, HPC, AI, cloud infrastructure, and high-throughput RDMA / RoCE traffic patterns.
This adapter uses a PCIe Gen5 x16 host interface, enabling very high bandwidth between the host server and the network fabric, especially under heavy load. It supports Secure Boot and Crypto Enabled features, making it well suited to environments where network security and tamper resistance matter.
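To confirm that the card has actually trained at Gen5 x16 in a given slot, you can read the negotiated link speed and width from sysfs on Linux. A minimal sketch; the PCI address below is a placeholder you would replace with your adapter's address from lspci.

```python
from pathlib import Path

PCI_ADDR = "0000:3b:00.0"  # placeholder: find yours with `lspci -D | grep -i connectx`

dev = Path("/sys/bus/pci/devices") / PCI_ADDR
speed = (dev / "current_link_speed").read_text().strip()  # e.g. "32.0 GT/s PCIe" at Gen5
width = (dev / "current_link_width").read_text().strip()  # e.g. "16" for x16
print(f"{PCI_ADDR}: {speed}, x{width}")
if "32.0" not in speed or width != "16":
    print("note: link trained below Gen5 x16; dual-port line rate may be capped")
```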
Form factor is half-height, half-length (HHHL), dimensions ~68.90 mm × 167.65 mm, meaning it fits a wide range of server chassis. It also has an auxiliary PCIe passive card option plus Cabline SA-II Plus harnesses, which expand compatibility for socket-direct or other deployment-specific cabling topologies.
The operating temperature range is 0 to 55 °C, with a storage range of -40 to 70 °C, which is typical for enterprise NICs. Power draw is about 25.9 W for the "AC" variant (the "AS" variant is slightly lower) with passive cables in PCIe Gen5 x16 mode.
Because it supports both Ethernet and InfiniBand modes, this NIC is flexible: you can use it for standard TCP/IP 200GbE networking or for high-performance RDMA over InfiniBand (NDR200), depending on your network infrastructure. This dual-mode capability is very useful in mixed-environment data centers.
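Port protocol is selected in firmware, per port. Below is a minimal sketch of switching both ports between modes, assuming NVIDIA's MFT tools (mst, mlxconfig) are installed on Linux; the device node is a placeholder (list yours with `mst status`), and the change takes effect only after a reboot or mlxfwreset.

```python
import subprocess

DEVICE = "/dev/mst/mt4129_pciconf0"  # placeholder; list devices with `mst status`

def set_link_type(port: int, link_type: int) -> None:
    # LINK_TYPE_Pn = 1 selects InfiniBand, 2 selects Ethernet; -y skips the prompt
    subprocess.run(
        ["mlxconfig", "-y", "-d", DEVICE, "set", f"LINK_TYPE_P{port}={link_type}"],
        check=True,
    )

# Put both ports in Ethernet mode; use 1 instead for InfiniBand/NDR200.
set_link_type(1, 2)
set_link_type(2, 2)

# Read back the configured values to confirm.
result = subprocess.run(
    ["mlxconfig", "-d", DEVICE, "query", "LINK_TYPE_P1", "LINK_TYPE_P2"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```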
Key Features
- Dual QSFP112 ports supporting both 200GbE and NDR200 InfiniBand modes.
- High-speed PCIe Gen5 x16 host interface for maximum throughput, backward compatible with PCIe4/3.
- Secure Boot & Crypto Enabled (AC variant) for enhanced security in networking.
- RoCE/RDMA support for low latency and high throughput in data center / HPC workloads (see the verification sketch after this list).
- Auxiliary PCIe extension option via passive card + Cabline SA-II Plus harnesses allows socket-direct style deployment.
- Compact HHHL form factor with heatsink (HEAT version), suitable for most standard server chassis.
- Low power for its class (~25.9 W typical under load), reducing thermal and power overhead per port.
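As a quick check that the driver has exposed the ports as RDMA devices (per the RoCE/RDMA feature above), you can walk /sys/class/infiniband on Linux. A minimal sketch; device names such as mlx5_0 and the exact rate strings vary by system.

```python
from pathlib import Path

# Each RDMA-capable port shows up under /sys/class/infiniband/<device>/ports/<n>
for ibdev in sorted(Path("/sys/class/infiniband").iterdir()):
    for port in sorted((ibdev / "ports").iterdir()):
        state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
        rate = (port / "rate").read_text().strip()    # e.g. "200 Gb/sec (2X NDR)"
        print(f"{ibdev.name} port {port.name}: {state}, {rate}")
```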
Configuration
| Component | Specification / Detail |
|---|---|
| Part Number / Model | MCX755106AC-HEAT (NVIDIA ConnectX-7) |
| Ports | 2 × QSFP112 (dual-port) |
| Network Types | Ethernet (200/100/50/25/10GbE), InfiniBand (NDR200/HDR/HDR100, etc.) |
| Host Interface | PCIe Gen5 x16 (32 GT/s per lane), backward compatible with Gen4 (16 GT/s) and Gen3 (8 GT/s) |
| Power Consumption | ~25.9 W (AC variant), ~24.9 W (AS variant), with passive cables under typical load |
| Physical Size | Half-height, half-length (HHHL), ~68.90 mm × 167.65 mm |
| Operating Temp | 0 to 55 °C (operating) / -40 to 70 °C (storage) |
| Special Features | Secure Boot, Crypto Enabled, Socket Direct passive-card option |
| Compatibility Modes | RoCE, RDMA, Ethernet, InfiniBand |
Compatibility
The MCX755106AC-HEAT works in any server with a free PCIe x16 slot, ideally PCIe Gen5 for full speed. Servers with PCIe Gen4 are compatible but will run at lower host bandwidth. Ensure sufficient cooling and the proper bracket height (HHHL) to fit the heatsink.
For network connectivity, your switches must provide compatible QSFP112 ports, and you need DAC copper cables or optical transceivers/cables matched to the port speed and reach. For InfiniBand mode, the switch must support NDR200 (or HDR, etc.); for Ethernet mode, switches must support 200GbE or a lower fallback speed.
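On Linux, ethtool can dump the EEPROM of whatever module or DAC is plugged into a port, which helps verify vendor, part number, and cable type before debugging link issues. A sketch; the interface name is a placeholder, and some passive cables expose only partial data.

```python
import subprocess

IFACE = "enp59s0f0np0"  # placeholder: list interfaces with `ip link`

# Dump the plugged module/cable EEPROM (vendor, part number, cable length/type)
out = subprocess.run(
    ["ethtool", "-m", IFACE], capture_output=True, text=True, check=True
).stdout
print(out)
```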
Drivers and firmware should be current: NVIDIA's ConnectX-7 driver stack for your OS (Linux, Windows, etc.) plus firmware that supports the Secure Boot / Crypto features. Outdated software may limit performance or disable features (e.g. the passive adapter harness option).
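A quick way to verify the loaded driver and firmware versions on Linux is ethtool -i, as in this sketch (the interface name is again a placeholder):

```python
import subprocess

IFACE = "enp59s0f0np0"  # placeholder

out = subprocess.run(
    ["ethtool", "-i", IFACE], capture_output=True, text=True, check=True
).stdout
# Print just the driver, driver version, and firmware version lines
for line in out.splitlines():
    if line.startswith(("driver:", "version:", "firmware-version:")):
        print(line)
```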
Power budget: because the NIC consumes ~25-26 W under typical loads with passive cables, ensure the server PSU and airflow can handle this plus any adjacent NICs and cards. The heatsink (HEAT) version adds thermal load, so adequate airflow is necessary.
Usage Scenarios
1) High-Performance Compute / AI Cluster Fabrics: Great for interconnect in GPU/CPU clusters needing very high bandwidth and low latency, especially where RDMA or InfiniBand fabrics are used (NDR200) to avoid network bottlenecks.
2) Data Center Ethernet Backbone: Use it for 200GbE switch/router interconnects, server uplinks, or leaf-spine architectures where high throughput is required.
3) RoCE / RDMA for Storage Systems: For NVMe-over-Fabrics, distributed storage systems such as Ceph or Lustre, or other RDMA-based systems, this NIC accelerates storage access by reducing latency and CPU overhead (a minimal connect sketch follows this list).
4) AI/ML / HPC Inference & Training: Use in inference-serving or training nodes where large model weights are shared, reducing data-movement overhead across nodes. Also helpful when virtualizing GPU workloads.
5) Secure / Compliance Environments: Because Secure Boot and Crypto Enabled are supported, this card is appropriate where network encryption, trusted boot, and compliance (e.g. financial, government, or healthcare systems) are high-priority requirements.
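For scenario 3, this is roughly what an NVMe-over-Fabrics attach over RDMA looks like with the standard nvme-cli tool. A sketch only: the target address and NQN are placeholders, and it assumes RoCE is already configured on the interface.

```python
import subprocess

TARGET_IP = "192.168.10.5"                    # placeholder target address
TARGET_NQN = "nqn.2024-01.io.example:nvme1"   # placeholder subsystem NQN

# Attach the remote NVMe subsystem over RDMA (4420 is the standard NVMe-oF port)
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-a", TARGET_IP, "-s", "4420", "-n", TARGET_NQN],
    check=True,
)

# The remote namespaces should now appear as local NVMe block devices
print(subprocess.run(["nvme", "list"], capture_output=True, text=True).stdout)
```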
Frequently Asked Questions
Q: Does MCX755106AC-HEAT require PCIe Gen5 to get full 200Gbps performance, or will it work with Gen4 as well?
A: It will work with PCIe Gen4 and even Gen3 (depending on the host), but to reach full advertised throughput, especially with both ports active, PCIe Gen5 x16 is preferred. A Gen4 x16 link provides roughly 252 Gb/s of usable bandwidth per direction, below the 400 Gb/s two 200G ports can demand, so Gen4 imposes a bandwidth ceiling.
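The arithmetic behind that answer, as a small sketch (raw per-direction lane rate times 128b/130b line coding, ignoring further PCIe protocol overhead):

```python
# Usable per-direction PCIe bandwidth vs. the 400 Gb/s two 200G ports can demand
LANES = 16
ENCODING = 128 / 130  # PCIe Gen3+ line-coding overhead
for gen, gts_per_lane in {"Gen3": 8.0, "Gen4": 16.0, "Gen5": 32.0}.items():
    usable_gbps = gts_per_lane * LANES * ENCODING
    verdict = "OK" if usable_gbps >= 400 else "bandwidth ceiling"
    print(f"{gen} x16: ~{usable_gbps:.0f} Gb/s -> {verdict}")
```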
Q: Can this adapter operate in both Ethernet 200GbE mode and InfiniBand NDR200 mode interchangeably?
A: Yes. The MCX755106AC-HEAT supports both Ethernet and InfiniBand (NDR200) modes; mode selection depends on switch/fabric configuration and firmware settings. It is dual-mode and highly flexible.
Q: What cabling or transceivers are needed to make full use of its QSFP112 ports?
A: You will need QSFP112 DAC copper cables or optical transceivers/cables rated for 200Gb/s or the fallback speeds (100/50/25/10Gb/s). Passive cables have limited reach, so check the cable specification, and confirm that both the NIC and the switch port support the chosen cable type.
Q: What are the thermal and mechanical requirements for the HEAT version?
A: The HEAT version includes a heatsink, so it needs good airflow within the server enclosure. Confirm that the server's PCIe slot and bracket height accommodate the heatsink. Also verify the power budget: ~25.9 W under load, plus any auxiliary cabling/harness.
Related Products
- NVIDIA CONNECTX-7 MCX755106AC-HEAT DUAL-PORT 200GB... - Part Number: MCX755106AC-HEAT...
- Availability: In Stock
- Condition: New
- Original Price: $1,999.00
- Your Price: $1,650.00
- You Save: $349.00
- NVIDIA H200 NVL 141GB PCIe GPU Accelerator (Part No. 900-21010-004... - Part Number: NVIDIA H200 NVL 141G...
- Availability: In Stock
- Condition: New
- Original Price: $39,999.00
- Your Price: $30,715.00
- You Save: $9,284.00
- HPE Smart Memory Kit P06035-B21 - 64 GB DDR4-3200 Dual-Rank Registered Module for Pro... - Part Number: P06035-B21...
- Availability: In Stock
- Condition: New
- Original Price: $599.00
- Your Price: $443.00
- You Save: $156.00
- Lenovo ThinkSystem SR665 Server Motherboard - Compatible Models: 03GX157, 03GX293, 03... - Part Number: 03GX789...
- Availability: In Stock
- Condition: New
- Original Price: $1,899.00
- Your Price: $1,638.00
- You Save: $261.00
- Lenovo 01PF160 - ThinkSystem SR850 System Board Gen2 for Lenovo SR85... - Part Number: 01PF160...
- Availability: In Stock
- Condition: New
- Original Price: $3,199.00
- Your Price: $3,099.00
- You Save: $100.00
- Lenovo 01PF402 - ThinkSystem 450W (230V/115V) Platinum Hot-Swap Power Supply... - Part Number: 01PF402...
- Availability: In Stock
- Condition: New
- Original Price: $399.00
- Your Price: $299.00
- You Save: $100.00
- Lenovo 4Y37A09724 - ThinkSystem 440-16e SAS/SATA PCIe Gen4 12Gb H... - Part Number: 4Y37A09724...
- Availability: In Stock
- Condition: New
- Original Price: $799.00
- Your Price: $599.00
- You Save: $200.00
- FusionServer 2488H V7 PCIe SAS/RAID Card 0231Y129-9560-... - Part Number: 0231Y129...
- Availability: In Stock
- Condition: New
- Original Price: $1,199.00
- Your Price: $899.00
- You Save: $300.00