NVIDIA ConnectX-7 MCX755106AC-HEAT Dual-Port 200GbE / NDR200 InfiniBand Network Adapter
  • Product Category: Other
  • Part Number: MCX755106AC-HEAT
  • Availability: In Stock
  • Condition: New
  • Product Highlight: Ready to Ship
  • Minimum Order: 1 unit
  • List Price: $1,999.00
  • Your Price: $1,650.00 (You Save $349.00)

Shop with confidence. Returns accepted.

Shipping: Items shipped internationally may be subject to customs processing and additional charges.

Delivery: Please allow extra time if your international delivery requires customs processing.

Returns: Returns accepted within 14 days; the seller pays return shipping.

Free shipping. We accept purchase orders on net-30 payment terms. Get a decision in seconds with no impact on your credit.

If you need the MCX755106AC-HEAT in volume, contact us via WhatsApp at (+86) 151-0113-5020 or request a quote in the online chat, and one of our sales managers will get back to you shortly.

Title

NVIDIA ConnectX-7 MCX755106AC-HEAT Dual-Port 200GbE / NDR200 InfiniBand Network Adapter

Keywords

NVIDIA ConnectX-7, MCX755106AC-HEAT, 200Gb Ethernet, NDR200 InfiniBand, dual-port QSFP112, PCIe5 x16 NIC, RoCE/RDMA networking, data center NIC

Description

The MCX755106AC-HEAT is a high-performance network interface card from NVIDIA’s ConnectX-7 family, combining both 200Gb Ethernet and NDR200 InfiniBand capabilities in one dual-port QSFP112 adapter. It’s designed for data center, HPC, AI, cloud infrastructure, and high-throughput RDMA / RoCE traffic patterns.

This adapter uses a PCIe Gen5 x16 host interface, enabling very high bandwidth between the host server and the network fabric, especially under heavy load. It supports Secure Boot and crypto-enabled features, making it well suited to environments where network security and tamper resistance matter.

Form factor is half-height, half-length (HHHL), dimensions ~68.90 mm × 167.65 mm, meaning it fits a wide range of server chassis. It also has an auxiliary PCIe passive card option plus Cabline SA-II Plus harnesses, which expand compatibility for socket-direct or other deployment-specific cabling topologies.

Operating temperature range is 0-55 °C, with storage set from -40 to 70 °C, which is typical for enterprise NICs. Power draw is about 25.9 W (for the "AC" variant; "AS" variant slightly lower) when used with passive cables under PCIe Gen5 x16 mode.

Because it supports both Ethernet and InfiniBand modes, this NIC is flexible. You can use it for standard TCP/IP 200GbE networking or for high-performance RDMA over InfiniBand (NDR200), depending on network infrastructure. This duality is very useful in mixed-environment data centers.

Key Features

  • Dual ports of QSFP112 supporting both 200GbE and NDR200 InfiniBand modes.
  • High-speed PCIe Gen5 x16 host interface for maximum throughput, backward compatible with PCIe4/3.
  • Secure Boot & Crypto Enabled (AC variant) for enhanced security in networking.
  • RoCE/RDMA support for low latency, high throughput in data center / HPC workloads.
  • Auxiliary PCIe extension option via passive card + Cabline SA-II Plus harnesses allows socket-direct style deployment.
  • Compact HHHL form factor with heatsink (HEAT version), suitable for most standard server racks.
  • Low power draw (~25.9W typical under load) for its class, reducing thermal and power overhead per port.

Configuration

Part Number / Model: MCX755106AC-HEAT (NVIDIA ConnectX-7)
Ports: 2 × QSFP112 (dual-port)
Network Types: Ethernet (200/100/50/25/10GbE), InfiniBand (NDR200/HDR/HDR100, etc.)
Host Interface: PCIe Gen5 x16 (backward compatible with PCIe Gen4/Gen3; SerDes at 16/32 GT/s)
Power Consumption: ~25.9W (AC variant), ~24.9W (AS variant), with passive cables under typical load
Physical Size: Half-height, half-length (HHHL), ~68.90 mm × 167.65 mm
Operating Temp: 0 to 55 °C (operational) / -40 to 70 °C (storage)
Special Features: Secure Boot, Crypto Enabled, Socket Direct passive cable option
Compatibility Modes: RoCE, RDMA, Ethernet, InfiniBand

Compatibility

The MCX755106AC-HEAT works in any server with a free PCIe x16 slot, preferably PCIe Gen5 to achieve full speed; servers with PCIe Gen4 remain compatible but run at reduced bandwidth. Ensure sufficient cooling and confirm the chassis accepts the HHHL bracket and heatsink.

For network connectivity, you need compatible QSFP112 ports on your switches, or direct-attach copper (DAC) cables or optical transceivers that match your cable plant. For InfiniBand mode, the switch must support NDR200 (or HDR/HDR100); for Ethernet mode, switches must support 200GbE or a lower fallback speed.

Drivers and firmware should be current: NVIDIA’s ConnectX-7 driver stack (for your OS — Linux, Windows, etc.) plus firmware that supports Secure Boot / Crypto features. Missing updates may limit performance or features (e.g. passive adapter harness option).
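
As a quick sanity check after installing the driver stack, the commands below confirm the adapter is enumerated and report its driver and firmware versions. This is a minimal Linux sketch: the interface name enp1s0f0np0 is a placeholder, and each command degrades to a message if the tool or device is absent.

```shell
# Placeholder interface name; find yours with `ip link` (ConnectX-7 ports use the mlx5_core driver).
IFACE="${IFACE:-enp1s0f0np0}"

# Confirm the adapter is enumerated on the PCIe bus.
lspci -nn | grep -i 'ConnectX-7' || echo "No ConnectX-7 visible on this host"

# Driver and firmware versions as reported by mlx5_core.
ethtool -i "$IFACE" 2>/dev/null || echo "Interface $IFACE not present"

# Negotiated link speed (expect 200000Mb/s at full 200GbE).
ethtool "$IFACE" 2>/dev/null | grep -i 'Speed' || true
```

If the firmware shown by `ethtool -i` is older than the release notes require for a given feature, update it before troubleshooting further.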

Power budget: because the NIC draws ~25-26W under typical load with passive cables, ensure the server PSU and airflow can handle this in addition to any adjacent NICs and cards. The heatsink (HEAT) version adds thermal load, so adequate airflow is essential.

Usage Scenarios

1) High-Performance Compute / AI Cluster Fabrics: Great for interconnect in GPU/CPU clusters needing very high bandwidth and low latency, especially where RDMA or InfiniBand fabrics are used (NDR200) to avoid network bottlenecks.

2) Data Center Ethernet Backbone: Use it for 200GbE switches/routers interconnect, server uplinks, or leaf/spine architecture where high throughput is required.

3) RoCE / RDMA for Storage Systems: For NVMe-over-Fabric, or distributed storage systems like Ceph or Lustre, or other systems using RDMA, this NIC accelerates storage access by reducing latency and CPU overhead.

4) AI/ML / HPC Inference & Training: Use in inference-serving or training nodes where large model weights are shared across nodes, reducing data-movement overhead. Also helpful when virtualizing GPU workloads.

5) Secure / Compliance Environments: Because Secure Boot and Crypto Enabled are supported, this card is appropriate where network encryption, trusted boot, and compliance (e.g. for financial, government, or healthcare systems) rank high in requirement.
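
For the RDMA-based scenarios above (3 and 4), a quick way to confirm the fabric side is working is to list the RDMA devices the kernel has registered. A minimal Linux sketch using the standard iproute2 and rdma-core tools; device names such as mlx5_0 are examples, and each step degrades gracefully if a tool is missing:

```shell
# List RDMA-capable devices and their link state (covers both InfiniBand and RoCE ports).
if command -v rdma >/dev/null 2>&1; then
    rdma link show
else
    echo "rdma tool not installed (iproute2 package)"
fi

# Detailed port attributes (GUIDs, MTU, rate) via rdma-core, if present.
command -v ibv_devinfo >/dev/null 2>&1 && ibv_devinfo || echo "ibv_devinfo not installed (rdma-core package)"
```

A port reported as ACTIVE here is ready for RDMA traffic; a DOWN port usually points to cabling or switch-side configuration.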

Frequently Asked Questions

  1. Q: Does MCX755106AC-HEAT require PCIe Gen5 to get full 200Gbps performance, or will it work with Gen4 as well?
    A: It will work with PCIe Gen4 and even Gen3 (depending on host), but to reach full advertised throughput (especially for both ports and maximum lane count), PCIe5 x16 is preferred. Gen4 will likely impose some bandwidth ceiling.
  2. Q: Can this adapter operate in both Ethernet 200GbE mode and InfiniBand NDR200 mode interchangeably?
    A: Yes — the MCX755106AC-HEAT supports both Ethernet and InfiniBand (NDR200) modes. Mode selection depends on switch/fabric configuration and firmware. It is dual-mode and highly flexible.
  3. Q: What cabling or transceivers are needed to make full use of its QSFP112 ports?
    A: You’ll need QSFP112 copper (DAC) or optical cables/transceivers rated for 200Gb/s or the fallback speeds (100/50/25/10Gb/s). Passive copper cables have limited reach; check the cable spec. If using passive cabling, make sure both the switch port and the NIC support that mode.
  4. Q: What are the thermal and mechanical requirements for the HEAT version?
    A: HEAT version includes heatsink, so it needs good airflow within the server enclosure. Confirm that the server’s PCIe slot bracket height accommodates tall bracket / heatsink. Also verify power budget—~25.9W under load plus any auxiliary cabling/harness.
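
The Ethernet/InfiniBand mode selection mentioned in Q2 is persisted in firmware and is normally changed with NVIDIA's mlxconfig tool (part of the MFT package). A hedged sketch: the MST device path below is only an example and must match your host (list real paths with `mst status`), and a reboot or driver reload is required for the change to take effect.

```shell
# Example MST device path for a ConnectX-7 (MT4129); list real paths with `mst status`.
DEV="${DEV:-/dev/mst/mt4129_pciconf0}"

if command -v mlxconfig >/dev/null 2>&1; then
    # Query the current port protocol (LINK_TYPE_P1/P2: 1 = InfiniBand, 2 = Ethernet).
    mlxconfig -d "$DEV" query 2>/dev/null | grep LINK_TYPE || echo "Device $DEV not found; run 'mst start' first"

    # To switch both ports to Ethernet mode (use 1 for InfiniBand), uncomment the line
    # below; the new setting takes effect after a reboot.
    # mlxconfig -d "$DEV" set LINK_TYPE_P1=2 LINK_TYPE_P2=2
else
    echo "mlxconfig not installed (NVIDIA MFT package required)"
fi
```
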
Related Products
NVIDIA ConnectX-7 MCX755106AC-HEAT Dual-Port 200GbE / NDR200 InfiniBand Network Adapter (Recommended)
NVIDIA H200 NVL 141GB PCIe GPU Accelerator (Part No. 900-21010-0040-000) for Generative AI & HPC (Recommended)
HPE Smart Memory Kit P06035-B21 - 64 GB DDR4-3200 Dual-Rank Registered Module for ProLiant Servers (Recommended)
Lenovo ThinkSystem SR665 Server Motherboard - Compatible Models: 03GX157, 03GX293, 03GX789 (Recommended)
Lenovo 01PF160 - ThinkSystem SR850 Gen2 System Board for Lenovo SR850 Servers - High-Performance Server Motherboard (Recommended)
Lenovo 01PF402 - ThinkSystem 450W (230V/115V) Platinum Hot-Swap Power Supply (Recommended)
Lenovo 4Y37A09724 - ThinkSystem 440-16e SAS/SATA PCIe Gen4 12Gb HBA for High-Performance Storage Connectivity (Recommended)
FusionServer 2488H V7 PCIe SAS/RAID Card 0231Y129 - 9560-8i PCIe RAID Controller, 4GB Cache, PCIe 4.0 x8, with SuperCap and High-Speed Cable (Recommended)