ATOM™-Max

Boosted Performance for Large-Scale Inference

ATOM™-Max is a best-in-class AI accelerator engineered for data centers and enterprises, delivering a significant performance boost for large-scale AI inference workloads. It achieves 128 TFLOPS (FP16) and up to 512 TOPS (INT8) / 1,024 TOPS (INT4), with 1,024 GB/s of memory bandwidth, ensuring maximum throughput for demanding applications.

Massive Performance Boost

Exceptional AI compute with a substantial bandwidth boost

Direct Card-to-Card Communication

Seamless, fast data exchange over PCIe Gen5 x16, reducing latency and enhancing scalability

Optimized for Large-Scale Inference

Specifically designed to meet the needs of large-scale enterprises and AI-driven infrastructures

Specifications
FP16
128 TFLOPS
INT8 / INT4
512 TOPS / 1,024 TOPS
Input Power
DC 12 V (CPU 8-pin power connector)
Max Power Consumption
350 W
Thermal
Air Cooling (passive)
Memory
GDDR6 64 GB, 1,024 GB/s
Host Interface
PCIe Gen5 x16, 64 GB/s
Form Factor
FHFL Dual Slot
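As a rough aid to interpreting the specifications above, the ratio of peak compute to memory bandwidth gives the roofline "ridge point": the arithmetic intensity (operations per byte moved from memory) above which a kernel becomes compute-bound rather than bandwidth-bound. The sketch below is illustrative only; it uses the published peak figures and assumes GDDR6 bandwidth is the limiting data path, which may not hold for every workload.

```python
# Illustrative roofline ridge-point estimate from the ATOM-Max datasheet figures.
# Assumption: memory bandwidth (GDDR6, 1,024 GB/s) is the binding data path.

def ridge_point(peak_ops_per_s: float, bandwidth_bytes_per_s: float) -> float:
    """Arithmetic intensity (ops/byte) above which a kernel is compute-bound."""
    return peak_ops_per_s / bandwidth_bytes_per_s

FP16_FLOPS = 128e12   # 128 TFLOPS (FP16), from the spec table
INT8_OPS   = 512e12   # 512 TOPS (INT8), from the spec table
BANDWIDTH  = 1024e9   # 1,024 GB/s GDDR6 memory bandwidth

print(ridge_point(FP16_FLOPS, BANDWIDTH))  # 125.0 FLOP/byte
print(ridge_point(INT8_OPS, BANDWIDTH))    # 500.0 op/byte
```

Under these assumptions, FP16 kernels need roughly 125 FLOP per byte of memory traffic to saturate the compute units, and INT8 kernels roughly 500 ops/byte; kernels below those intensities are limited by the 1,024 GB/s memory bandwidth instead.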