Inspur KR6288X2-A0 AI Server | 8x NVIDIA HGX H200 | Dual Intel Xeon 8558P | 2TB DDR5
  • Product category: Server
  • Part number: Inspur KR6288X2-A0
  • Availability: In Stock
  • Condition: New
  • Highlights: Ready to ship
  • Minimum order: 1 unit
  • Original price: $467,869.00
  • Your price: $411,765.00 (you save $56,104.00)

Shop with confidence. Returns accepted.

Shipping: Items shipped internationally may be subject to customs processing and additional charges.

Delivery: Please allow additional time if international delivery is subject to customs processing.

Returns: 14-day returns. The seller pays for return shipping.

Free shipping. We accept purchase orders with 30-day payment terms. Get a decision in seconds with no impact on your credit.

If you need Inspur KR6288X2-A0 units in volume, contact us via WhatsApp: (+86) 151-0113-5020 or request a quote in the live chat, and one of our sales managers will get back to you shortly.


Keywords

Inspur KR6288X2-A0, NVIDIA HGX H200, Intel Xeon 8558P, 2TB DDR5 RAM, AI Training Server, Generative AI, HPC Server, Buy Inspur Server

Description

Step into the future of hyperscale artificial intelligence with the Inspur KR6288X2-A0. This flagship AI server is engineered to train the world's most complex Large Language Models (LLMs), featuring the new NVIDIA HGX H200 8-GPU architecture. With a combined 1128GB of HBM3e memory across the HGX baseboard, this system removes previous memory bottlenecks, allowing data scientists to run massive-parameter models efficiently without requiring as many interconnected nodes.

At the heart of this compute giant are dual Intel Xeon 8558P processors. Each CPU boasts 48 cores, 260MB of cache, and a 2.7GHz base clock at a 350W TDP. This provides 96 physical cores of premium x86 orchestration power to prepare data and manage the immense GPU workload. To keep the processing pipeline saturated, the system is populated with 32x 64GB DDR5-5600 ECC-RDIMMs, for a total of 2TB of ultra-fast system memory.
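
As a sanity check, the headline capacities quoted above follow directly from the per-module and per-GPU figures (the constant names below are illustrative, not product identifiers):

```python
# Sanity-check the memory totals quoted in the listing.

DIMM_CAPACITY_GB = 64      # 64GB DDR5-5600 ECC-RDIMM
DIMM_COUNT = 32            # fully populated configuration

GPU_HBM3E_GB = 141         # approx. HBM3e per H200 GPU
GPU_COUNT = 8              # 8-GPU HGX baseboard

system_ram_gb = DIMM_CAPACITY_GB * DIMM_COUNT   # total system RAM
hgx_vram_gb = GPU_HBM3E_GB * GPU_COUNT          # total GPU memory

print(f"System RAM: {system_ram_gb} GB ({system_ram_gb / 1024:.0f} TB)")
print(f"HGX H200 VRAM: {hgx_vram_gb} GB")
```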

Storage is tiered for both reliability and extreme speed. The host operating system is secured on two 480GB SATA 6Gbps 2.5-inch Read Intensive SSDs. Meanwhile, training data and checkpoints are handled by two 3.84TB U.2 (16 GT/s) 2.5-inch NVMe solid-state drives, ensuring rapid data ingestion directly to the GPUs. Powering this hardware is a redundant power array of Titanium-rated high-efficiency PSUs (supporting 220VAC or 240VDC). With a comprehensive 3-year warranty, this server is a secure investment for enterprise data centers pushing the boundaries of AI Training Server capabilities.

Key Features

  • Next-Gen AI Acceleration: 1x NVIDIA HGX H200 8-GPU baseboard delivering an unprecedented 1128GB of HBM3e VRAM.
  • Elite Processing: 2x Intel Xeon 8558P processors (48 cores, 2.7GHz, 260MB cache, 350W).
  • Massive Memory Bandwidth: 2TB total system RAM via 32x 64GB DDR5-5600 ECC-RDIMMs.
  • High-Speed Data Tier: 2x 3.84TB U.2 NVMe SSDs (16 GT/s) for rapid checkpointing and data ingestion.
  • Reliable OS Boot: 2x 480GB SATA 6Gbps 2.5" SSDs.
  • Titanium Efficiency: Ultra-efficient Titanium power supplies (3200W / 2700W, 220VAC or 240VDC).
  • Enterprise Guarantee: Backed by a 3-year warranty.

Configuration

Component         Specification                                      Quantity
Brand / Model     Inspur KR6288X2-A0 (H200 complete system)          1
Processor (CPU)   Intel Xeon 8558P, 2.7GHz, 48 cores, 260MB, 350W    2
Memory (RAM)      64GB DDR5-5600 ECC-RDIMM                           32
System Disk       480GB SATA 6Gbps 2.5in Read Intensive SSD          2
Data Disk         3.84TB U.2 NVMe, 16 GT/s, 2.5in                    2
GPU Baseboard     NVIDIA HGX H200 8-GPU, 1128GB HBM3e                1
Power Supply      3200W / 2700W Titanium, 220VAC or 240VDC
Warranty          3 Years                                            1

Compatibility

The Inspur KR6288X2-A0 is a premier platform designed for the NVIDIA AI Enterprise software stack. It natively supports the latest deep learning frameworks such as PyTorch, TensorFlow, and JAX. Operating system compatibility includes enterprise standards such as Ubuntu Server 22.04 LTS and Red Hat Enterprise Linux (RHEL) 9. The HGX H200 architecture utilizes NVLink interconnects internally and is designed to interface with high-speed NDR InfiniBand networking cards for massive cluster scaling.
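
The software stack described above can be probed on a freshly provisioned node with a short, dependency-free check. The module names are the standard import names for each framework; this sketch only reports what is installed and does not validate GPU access:

```python
import importlib.util

# Standard import names for the frameworks the platform supports.
FRAMEWORKS = ["torch", "tensorflow", "jax"]

def installed_frameworks(names):
    """Return a mapping of framework name -> whether it is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

status = installed_frameworks(FRAMEWORKS)
for name, present in status.items():
    print(f"{name}: {'installed' if present else 'missing'}")
```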

Usage Scenarios

This server is specifically architected for Foundation Model Training. The 1128GB of total VRAM across the 8-GPU baseboard allows data scientists to load incredibly large LLMs directly into memory, enabling massive batch sizes and significantly cutting down training times for generative AI models.
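
As a rough rule of thumb (weights only, ignoring optimizer state, activations, and KV cache), a model's fp16/bf16 footprint is parameters x 2 bytes, which is how the 1128GB figure translates into model sizes. The parameter counts below are illustrative examples, not benchmarks of this machine:

```python
def weights_gb(params_billion, bytes_per_param=2):
    """Approximate weight memory in GB for fp16/bf16 parameters."""
    return params_billion * bytes_per_param  # 1e9 params x N bytes = N GB

NODE_VRAM_GB = 1128  # total HBM3e on the HGX H200 baseboard

for params_b in (70, 180, 405):
    gb = weights_gb(params_b)
    fits = "fits on one node" if gb <= NODE_VRAM_GB else "needs multiple nodes"
    print(f"{params_b}B params -> ~{gb} GB of weights ({fits})")
```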

It also serves as a dominant High-Throughput Inference Node. For customer-facing Generative AI applications requiring real-time text, image, or video generation, the sheer memory bandwidth of the H200 GPUs ensures multiple concurrent user requests are served with minimal latency.

Frequently Asked Questions

Q: What is the primary difference between an HGX H100 and this HGX H200 system?
A: The primary upgrade is memory capacity and bandwidth. While a standard 8-GPU H100 system provides 640GB of GPU memory, the NVIDIA HGX H200 8-GPU baseboard featured here includes 1128GB of faster HBM3e memory (approx. 141GB per GPU). This allows significantly larger models to run on a single node without encountering memory bottlenecks.
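
The comparison in the answer above can be made concrete with a little arithmetic, using the publicly quoted per-board totals:

```python
# Per-node GPU memory: HGX H100 vs. HGX H200 (8-GPU baseboards).
H100_TOTAL_GB = 640    # 8 x 80 GB HBM3
H200_TOTAL_GB = 1128   # 8 x 141 GB HBM3e

per_gpu_h100 = H100_TOTAL_GB / 8
per_gpu_h200 = H200_TOTAL_GB / 8
uplift = (H200_TOTAL_GB - H100_TOTAL_GB) / H100_TOTAL_GB

print(f"H100: {per_gpu_h100:.0f} GB/GPU, H200: {per_gpu_h200:.0f} GB/GPU")
print(f"Capacity uplift per node: {uplift:.0%}")
```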

Q: Are the NVMe drives configured for redundancy?
A: The system includes two 3.84TB U.2 NVMe drives. In AI training environments these are typically configured as a RAID 0 stripe for maximum read/write throughput, to feed data to the GPUs as quickly as possible, though they can be configured as RAID 1 if data redundancy is prioritized over speed.
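
The trade-off described above can be summarized numerically. The per-drive read rate below is a hypothetical figure used only to illustrate how RAID 0 scales reads across members; it is not a measured spec of these drives:

```python
# Illustrative RAID 0 vs. RAID 1 trade-off for 2x 3.84TB U.2 NVMe drives.
DRIVE_TB = 3.84
DRIVE_COUNT = 2
PER_DRIVE_READ_GBPS = 6.0   # hypothetical sequential read per drive

raid0_capacity = DRIVE_TB * DRIVE_COUNT          # striped: all capacity usable
raid1_capacity = DRIVE_TB                        # mirrored: one drive's worth
raid0_read = PER_DRIVE_READ_GBPS * DRIVE_COUNT   # reads striped across both

print(f"RAID 0: {raid0_capacity:.2f} TB usable, ~{raid0_read:.0f} GB/s reads")
print(f"RAID 1: {raid1_capacity:.2f} TB usable, redundancy for checkpoints")
```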

Related Products
Inspur NF5280M6 AI-Ready Dual-Xeon Server with Tesla L2 GPU for Enterprise Workloads (Recommended)
Inspur NF5466M6 Dual Intel Xeon 4314 Enterprise Storage and Compute Server (Recommended)
Dell PowerEdge R760xs - Dual Xeon Silver 4410Y Enterprise Configuration (Recommended)
Dell PowerEdge R760xs - Xeon Gold 6507P High-Performance Server (Recommended)
Dell PowerEdge R660 1U Rack Server - Dual Xeon Gold 6430, 1TB RAM, 25GbE, and Fibre Channel HBA (Recommended)
HPE ProLiant DL380 Gen11 2U Rack Server | Dual Intel Xeon Gold 6542Y 48-Core | 1TB DDR5 RAM | 2x 300GB SAS HDDs (Recommended)
Inspur NF8480M5 4U Enterprise Storage Server - 24x LFF SAS, Quad Xeon Gold 6248R High-Density Platform (Recommended)
Lenovo ThinkSystem SR850 V3 High-Performance 4-CPU Server with Xeon Gold 6448H and Enterprise Networking (Recommended)