
COMING SOON!

Redefining Performance: The Ultimate Guide to COTS Rugged NVMe 2U Server/Storage Technology
Rugged 2U HA Server/Storage ALL-IN-ONE

CHEETAH is once again releasing a game changer!

A 2U server with 24 NVMe drives in front!

Specifications

Requirements:

  • 2U platform with 2x independent UP AMD EPYC nodes and 24x PCIe X4 2.5”-15mm NVMe U.2 bays
  • Option to use two PCIe X2 2280 NVMe M.2 SSDs per U.2 bay via a carrier
  • Support for the same Cavium add-in cards used in the Gen 6.1 platform: LiquidIO, Nitrox, and the Arrowhead OCP 25G IOM
  • Reduce power consumption per node by 200W to allow more nodes per rack within the 8kW power budget (a rough estimate follows this list)
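
As a rough illustration of this power requirement, the sketch below estimates how many nodes fit the 8kW rack budget before and after the 200W per-node reduction. The 500W baseline per node is an assumed placeholder for illustration only, not a measured or specified value.

```python
# Rough rack-density estimate for the 8kW-per-rack requirement above.
# The 500W baseline per node is an assumed figure for illustration only.
RACK_BUDGET_W = 8000
BASELINE_NODE_W = 500                    # assumed per-node draw before the reduction
REDUCED_NODE_W = BASELINE_NODE_W - 200   # requirement: cut 200W per node

for label, node_w in (("baseline", BASELINE_NODE_W), ("reduced", REDUCED_NODE_W)):
    nodes = RACK_BUDGET_W // node_w      # whole nodes that fit the budget
    systems = nodes // 2                 # two nodes per 2U enclosure
    print(f"{label}: {node_w}W/node -> {nodes} nodes ({systems}x 2U systems) per 8kW rack")
```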

Proposed Configuration

  • Custom 2U enclosure with a 1U node stacked on top of a second motherboard mounted on the bottom chassis, for a total of 2x nodes per 2U
  • 2x Tyan S8026 UP EPYC server boards, each supporting:
  1. 2x PCIe X2 M.2 modules
  2. 2x PCIe X16 slots plus 1x PCIe X16 OCP 2.0 mezzanine slot
    • Left slot for the extended LiquidIO card
    • Right slot for the Nitrox card
    • OCP 2.0 mezzanine slot for the Arrowhead dual-25G NIC IOM
  3. Up to 18x SATA ports: 16x from 2x OcuLink X8 flex-ports plus 2x SATA DOM ports
  4. Up to 64x PCIe Gen3 lanes from 8x OcuLink connectors (16x shared with SATA ports)
  5. 16x DIMM slots for up to 1024GB of DDR4-2400 RAM (16x 64GB RDIMM/LRDIMM)
  • Storage configuration (see the sketches after this configuration list):
  1. 24x 2.5”-15mm tool-less NVMe SSD bays shared between the two nodes (12x 2-bay U.2 backplanes, each bay on a PCIe X4 bus)
  2. Each PCIe X4 U.2 slot can be bifurcated into two PCIe X2 buses for use with a U.2 carrier that holds two 2280 M.2 SSDs, to leverage smaller-capacity, lower-cost M.2 SSDs
  3. All disk bays are independent (not shared by the two nodes) and point-to-point connected to each host via OcuLink X8 cables

  • Dual-output 800 or 1000W (@110V AC) redundant PSU, depending on the actual system power budget required
  • 2x sets of on/off switches and LEDs on the two front ears
  • 4x removable/field-serviceable 11500RPM 80x38mm high-speed cooling fans
  • Support for up to 1x 180W TDP AMD EPYC 7551P 32-core processor per node at 35°C ambient
  • Ball-bearing slide rails to allow easier service
  • The top node is NOT hot-swappable (cables attached), but it can be removed to allow integration of the bottom node
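
To make the bay and lane allocation concrete, the following sketch works through the per-node arithmetic implied by the configuration above (24 bays split across two nodes, X4 per bay, up to 64 OcuLink lanes per board). It only restates figures already listed rather than adding new specification.

```python
# Per-node bay and lane arithmetic, using only figures stated above.
TOTAL_BAYS = 24        # front 2.5"-15mm U.2 bays in the 2U enclosure
NODES = 2              # two independent UP EPYC nodes
LANES_PER_BAY = 4      # PCIe X4 per U.2 bay
OCULINK_LANES = 64     # up to 64x PCIe Gen3 lanes from 8x OcuLink connectors per board

bays_per_node = TOTAL_BAYS // NODES          # 12 bays per node
u2_lanes = bays_per_node * LANES_PER_BAY     # 48 lanes consumed by U.2 storage
spare_lanes = OCULINK_LANES - u2_lanes       # 16 lanes left over (shared with SATA)
m2_per_node = bays_per_node * 2              # X4 bifurcated to 2x X2 M.2 per bay

print(f"{bays_per_node} U.2 bays per node -> {u2_lanes} lanes, {spare_lanes} OcuLink lanes spare")
print(f"with X2 bifurcation: up to {m2_per_node} M.2 SSDs per node")
```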
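
A U.2 bay bifurcated from X4 into two X2 buses should enumerate on the host as two independent PCIe links. As a minimal verification sketch, assuming a Linux host and the standard sysfs PCI attributes, the script below prints the negotiated versus maximum link width of each NVMe controller; it is a generic check, not part of the platform specification.

```python
# Minimal sketch: list negotiated vs. maximum PCIe link width for NVMe
# controllers on a Linux host, e.g. to confirm that a U.2 bay bifurcated
# from X4 into two X2 links enumerates as expected.
# Assumes standard sysfs PCI attributes; run on the node itself.
import glob
import os

NVME_CLASS_PREFIX = "0x0108"  # PCI class 01h/08h: non-volatile memory controller

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "class")) as f:
            if not f.read().strip().startswith(NVME_CLASS_PREFIX):
                continue
        with open(os.path.join(dev, "current_link_width")) as f:
            cur = f.read().strip()
        with open(os.path.join(dev, "max_link_width")) as f:
            maxw = f.read().strip()
    except OSError:
        continue  # attribute not exposed for this device; skip it
    print(f"{os.path.basename(dev)}: negotiated x{cur}, device max x{maxw}")
```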

RELATED DOCUMENTS