Mellanox QM8700-F / MQM8700-HS2F, 1U high-density HDR InfiniBand switch, used for HPC/AI clusters
  • Category: Switches
  • Part Number: QM8700-F / MQM8700-HS2F
  • Availability: In Stock
  • Condition: Used
  • Highlights: Ready to Ship
  • Minimum Order: 1 unit
  • List Price: $15,862.00
  • Your Price: $14,705.00 (You Save $1,157.00)

Shop with confidence. Returns accepted.

Shipping: Items shipped internationally may be subject to customs processing and additional charges.

Delivery: Please allow additional time if international delivery is subject to customs processing.

Returns: 14-day returns. Seller pays for return shipping.

Free shipping. We accept purchase orders on Net 30 terms. Get a decision in seconds, with no impact on your credit.

If you need the QM8700-F / MQM8700-HS2F in volume, contact us via WhatsApp at (+86) 151-0113-5020 or request a quote in the live chat, and one of our sales managers will get back to you shortly.

Mellanox QM8700-F / MQM8700-HS2F 1U High-Density HDR InfiniBand Switch - Ultimate Network Hardware for HPC & AI Clusters

Title

Mellanox QM8700-F / MQM8700-HS2F 1U 40-Port HDR 200Gb/s InfiniBand Switch with 16Tb/s Bandwidth & SHARP In-Network Computing - Ideal Network Hardware for HPC Clusters, AI Training & High-Density Data Centers

Keywords

mellanox switch,qm8700-f,mqm8700-hs2f,infiniband switch,hdr infiniband,1u switch,high-density switch,hpc cluster,ai cluster,network hardware,buy infiniband switch,server network switch,200gb switch,quantum switch

Description

In the fast-evolving world of high-performance computing and artificial intelligence, having robust, low-latency network hardware is non-negotiable. The Mellanox QM8700-F and MQM8700-HS2F stand as flagship InfiniBand switch models, purpose-built for organizations looking to buy InfiniBand switch solutions that power the most demanding HPC cluster and AI cluster environments. As a 1U high-density platform, this HDR InfiniBand switch redefines what’s possible in data center networking.

At the core of the QM8700-F / MQM8700-HS2F lies the NVIDIA Quantum chip, delivering an astonishing 16Tb/s of non-blocking bandwidth and sub-130ns port-to-port latency. This 1U switch supports 40 ports of HDR 200Gb/s InfiniBand, or 80 ports of HDR100 100Gb/s with ConnectX-6 adapters, making it the ultimate high-density switch for modern data centers. For AI model training and large-scale HPC simulations, this 200Gb switch eliminates network bottlenecks, ensuring data flows at wire speed.
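The headline figures are internally consistent, as this quick back-of-the-envelope check shows (a minimal Python sketch, assuming the 16Tb/s figure counts full-duplex aggregate throughput, which is how switch capacity is conventionally quoted):

```python
# Sanity-check the quoted QM8700 figures.
HDR_PORTS = 40        # QSFP56 ports on the switch
HDR_GBPS = 200        # HDR InfiniBand rate per port

# x2 for full duplex (both directions counted in aggregate capacity)
aggregate_tbps = HDR_PORTS * HDR_GBPS * 2 / 1000
print(aggregate_tbps)          # 16.0 -> matches the 16Tb/s claim

# Each QSFP56 port can be split into two HDR100 (100Gb/s) links
hdr100_ports = HDR_PORTS * 2
print(hdr100_ports)            # 80 -> matches the 80-port HDR100 mode
```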

What sets this Mellanox switch apart is its integrated SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) technology, enabling advanced in-network computing. By offloading collective communication tasks from CPUs to the switch fabric, it accelerates AI and HPC applications by orders of magnitude—a critical advantage for AI cluster workloads like distributed deep learning. Whether deployed as a leaf or spine switch, it optimizes traffic routing for SlimFly and Dragonfly+ topologies.

Designed for reliability and ease of management, the MQM8700-HS2F features hot-swappable redundant power supplies and fans, plus P2C airflow for standard data center cooling. Managed via Mellanox’s UFM (Unified Fabric Management) platform, it provides real-time telemetry, AI-driven analytics, and self-healing capabilities that recover from link failures 5,000x faster than software solutions. This ensures maximum uptime for mission-critical HPC cluster and AI cluster operations.
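For illustration, telemetry like this can be pulled programmatically from UFM's REST interface. The sketch below is hypothetical: the host name, credentials, endpoint path, and response field names are placeholders and assumptions that may differ by UFM release, so treat it as a starting point rather than a reference.

```python
import requests

# Placeholder host and credentials -- substitute your own UFM deployment.
UFM_HOST = "https://ufm.example.local"
AUTH = ("admin", "change-me")

# Assumed endpoint for fabric system inventory; verify against your
# UFM version's REST API documentation before relying on it.
resp = requests.get(f"{UFM_HOST}/ufmRest/resources/systems",
                    auth=AUTH, verify=False)  # self-signed certs are common in labs
resp.raise_for_status()

for system in resp.json():
    # Field names are illustrative; inspect the actual payload.
    print(system.get("system_name"), system.get("state"))
```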

Key Features

  • 1U rack-mount form factor with high port density for space-optimized data centers
  • 40 QSFP56 ports supporting HDR 200Gb/s InfiniBand; up to 80 HDR100 100Gb/s ports with ConnectX-6
  • 16Tb/s non-blocking switching capacity with sub-130ns ultra-low latency
  • Integrated SHARP in-network computing to accelerate AI/HPC collective operations
  • Adaptive routing for SlimFly, Dragonfly+, and 6DT advanced network topologies
  • Hot-swappable 1+1 redundant AC power supplies (100-240V) and 5+1 redundant fans
  • P2C (power-side intake to connector-side exhaust) airflow for standard data center cooling environments
  • x86 dual-core management CPU with 8GB system memory for intelligent fabric control
  • UFM management platform with real-time telemetry, AI analytics, and self-healing networking
  • Optimized for HPC clusters, AI training, big data, and hyperscale cloud infrastructures

Configuration

| Component | Specification |
| --- | --- |
| Part Number | QM8700-F, MQM8700-HS2F |
| Form Factor | 1U rack-mount InfiniBand switch |
| Switch Chip | NVIDIA Quantum (HDR InfiniBand) |
| Port Configuration | 40 x QSFP56 (200Gb/s HDR) / 80 x HDR100 (100Gb/s) |
| Switching Capacity | 16 Tb/s (non-blocking) |
| Latency | Sub-130ns (port-to-port) |
| Management CPU | x86 ComEx Broadwell D-1508 (dual-core) |
| System Memory | 8 GB DDR4 |
| Power Supplies | 1+1 hot-swappable AC (100-240V, 50/60Hz) |
| Fans | 5+1 hot-swappable (N+1 redundancy) |
| Airflow | P2C (power side to connector side), standard depth |
| Typical Power | 253W (max: 784W) |
| Operating Temp | 0°C to 40°C (32°F to 104°F) |

Compatibility

The Mellanox QM8700-F / MQM8700-HS2F is fully compatible with standard 19-inch server rack enclosures and rail kits, making integration into existing data centers seamless. For HDR 200Gb/s and HDR100 100Gb/s operation it pairs with Mellanox (NVIDIA) ConnectX-6 InfiniBand adapters, giving maximum flexibility in link speeds.

This InfiniBand switch is validated for HPC and AI cluster environments, supporting popular topologies like SlimFly, Dragonfly+, and 6DT. It works with leading HPC middleware and AI frameworks, including MPI, PyTorch, TensorFlow, and Horovod, ensuring compatibility with your existing software stack. The switch runs on MLNX-OS, Mellanox’s purpose-built operating system for InfiniBand fabrics.
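As a concrete illustration of that software-stack compatibility, the sketch below runs a standard MPI all-reduce via mpi4py; over an InfiniBand fabric this is exactly the kind of collective the switch carries at wire speed. It assumes mpi4py, NumPy, and an MPI library built with InfiniBand support (e.g. through UCX).

```python
# Minimal MPI all-reduce across ranks; launch with e.g.:
#   mpirun -np 4 python allreduce_check.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(4, rank, dtype=np.float64)   # each rank contributes its rank id
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)     # sum the buffers across all ranks

if rank == 0:
    print(total)   # with 4 ranks: [6. 6. 6. 6.] (0+1+2+3)
```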

Management compatibility includes full support for Mellanox UFM (Unified Fabric Management) software, enabling centralized control of multiple switches and end hosts. It also supports out-of-band management via Ethernet and in-band management over InfiniBand, providing flexible options for network administrators. The Mellanox switch is RoHS-compliant and certified for safety/EMC standards (CE/FCC), ensuring global deployment readiness.

Usage Scenarios

The Mellanox QM8700-F / MQM8700-HS2F is the gold standard for HPC cluster deployments, powering scientific computing, weather modeling, and finite element analysis workloads. Its 16Tb/s bandwidth and sub-130ns latency eliminate network bottlenecks, ensuring fast data exchange between compute nodes. Organizations can buy InfiniBand switch hardware to build clusters that scale to thousands of nodes with consistent performance.

For AI cluster environments, this 1U switch is indispensable for distributed deep learning training. The integrated SHARP technology optimizes collective communication operations like all-reduce, which are critical for training large AI models. Whether training computer vision, natural language processing, or generative AI models, the switch accelerates training times by reducing communication overhead between GPU servers.
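The all-reduce pattern referenced above looks like this in PyTorch's distributed package (a minimal sketch; on GPU nodes you would use the NCCL backend, which on supported setups can hand the reduction off to the fabric via NVIDIA's SHARP plugin):

```python
# Gradient-averaging all-reduce, the core collective of data-parallel training.
# Launch with e.g.: torchrun --nproc-per-node=4 allreduce_demo.py
import torch
import torch.distributed as dist

def main():
    # Assumes the usual torchrun-provided env vars (RANK, WORLD_SIZE, ...).
    dist.init_process_group(backend="gloo")   # use "nccl" on GPU nodes
    rank = dist.get_rank()

    grad = torch.ones(8) * rank               # stand-in for a local gradient tensor
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= dist.get_world_size()             # averaged gradient, as in data parallelism

    if rank == 0:
        print(grad)
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```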

Hyperscale cloud and big data infrastructures benefit greatly from the high-density switch design. The 40-port HDR configuration enables efficient leaf-spine architectures, connecting thousands of servers with high-speed 200Gb/s links. It’s ideal for big data processing frameworks like Spark and Hadoop, where fast data movement between nodes is essential for performance. The switch’s low power consumption (253W typical) also reduces operational costs in large-scale deployments.
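To make the leaf-spine claim concrete, here is a rough sizing sketch for a non-blocking two-tier fat tree built from 40-port switches (an illustration only; real deployments often oversubscribe the uplinks to connect more servers per leaf):

```python
# Non-blocking two-tier fat tree with radix-40 switches.
LEAF_PORTS = 40
leaf_down = leaf_up = LEAF_PORTS // 2   # 20 server-facing + 20 spine-facing per leaf
num_spines = leaf_up                    # each leaf sends one uplink to each spine -> 20
num_leaves = LEAF_PORTS                 # each 40-port spine reaches 40 leaves

servers = num_leaves * leaf_down
print(servers)                          # 800 servers at full HDR 200Gb/s, non-blocking
```

This matches the general radix-k fat-tree bound of k²/2 hosts for two tiers; HDR100 splitting or oversubscribed uplinks push the number higher at reduced per-server bandwidth.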

As a high-performance server network switch, it excels in storage-intensive environments, connecting NVMe storage arrays to compute nodes with 200Gb/s InfiniBand. This enables low-latency, high-bandwidth access to storage, critical for database, data warehousing, and content delivery workloads. The switch’s reliability and redundancy features ensure data integrity and availability for mission-critical applications.

Frequently Asked Questions

Q1: What is the main difference between QM8700-F and MQM8700-HS2F?
A: The MQM8700-HS2F is a pre-configured model with 2 AC power supplies, P2C airflow, and a rail kit, while the QM8700-F is the base model (same hardware, different bundle).

Q2: What makes this switch ideal for AI clusters?
A: It features **SHARP in-network computing**, which offloads collective communication tasks (e.g., all-reduce) from CPUs/GPUs to the switch, accelerating AI training by up to 10x compared to traditional Ethernet switches.

Q3: Can it be used in both leaf and spine roles in a data center?
A: Absolutely. The InfiniBand switch supports advanced adaptive routing for SlimFly and Dragonfly+ topologies, making it suitable for both leaf (server-facing) and spine (inter-leaf) roles in high-density fabrics.

Q4: What is the maximum number of servers it can connect in an HPC cluster?
A: With 40 HDR 200Gb/s ports, it can connect up to 40 GPU servers directly as a leaf switch. Using HDR100 mode (80 ports), it doubles capacity, supporting up to 80 servers per switch.

Related Products
Mellanox QM8700-F / MQM8700-HS2F, 1U high-density HDR InfiniBand switch, used for HPC/AI clusters (Recommended)
DELL DS-7730B high-density 128-port Fibre Channel switch with 32G SFP+ and 64G SFP-DD enterprise bundle (Recommended)
Brocade BR-G730-128 high-density 128-port Fibre Channel switch with 32G SFP+ and 64G SFP-DD enterprise bundle (Recommended)
Cisco Catalyst C9500-48Y4C 48-port 10/25G SFP-based core switch with SFP+ GLC-TE modules (Recommended)
G630-96-32G-R 72-port 32G SAN switch with enterprise license and SFPs (Recommended)
Brocade BR-G720-56-64G-R-64GBPS 56-port Fibre Channel switch (Recommended)
LIS-MSRB-ips-3Y MSR3610-IE-DP + DDR4-32GB with 3-year licensed service (Recommended)