ATOM™-Max

Boosted Performance for Large-Scale Inference

ATOM™-Max is a best-in-class AI accelerator engineered for data centers and enterprises, delivering a major performance boost for large-scale AI inference workloads. It achieves 128 TFLOPS (FP16) and up to 512 TOPS (INT8) / 1024 TOPS (INT4), backed by 1024 GB/s of memory bandwidth for maximum throughput on demanding applications.

Peak Performance (FP16): 128 TFLOPS

Peak Performance (INT8 / INT4): 512 TOPS / 1024 TOPS

Input Power: DC 12V (CPU 8-pin power connector)

Max Power Consumption: 350W

Thermal: Air Cooling (passive)

Memory: GDDR6 64GB, 1024 GB/s

Host Interface: PCIe Gen5 x16, 64 GB/s

Form Factor: FHFL Dual Slot
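To put the compute and bandwidth figures above in perspective, here is a quick back-of-the-envelope sketch using the standard roofline model. This is an illustrative calculation based only on the headline numbers (128 TFLOPS FP16, 1024 GB/s memory bandwidth), not vendor-provided tuning guidance; the `attainable_flops` helper is a hypothetical name introduced for this example.

```python
# Roofline sketch from the ATOM™-Max headline figures above.
# Assumption: peak FP16 compute and memory bandwidth as listed in the specs.

PEAK_FP16_FLOPS = 128e12    # 128 TFLOPS (FP16)
MEM_BANDWIDTH_BPS = 1024e9  # 1024 GB/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Roofline model: throughput is capped by compute or by memory traffic.

    arithmetic_intensity is the kernel's FLOPs performed per byte moved.
    """
    return min(PEAK_FP16_FLOPS, MEM_BANDWIDTH_BPS * arithmetic_intensity)

# Crossover point: FLOPs per byte a kernel needs before it stops being
# bandwidth-bound and starts being compute-bound.
crossover = PEAK_FP16_FLOPS / MEM_BANDWIDTH_BPS
print(f"crossover intensity: {crossover:.0f} FLOP/byte")  # 125 FLOP/byte

# A bandwidth-bound example: an FP16 matrix-vector kernel moves roughly
# 2 bytes per multiply-add (~1 FLOP/byte), so it is memory-limited here.
print(f"1 FLOP/byte kernel: {attainable_flops(1.0) / 1e12:.2f} TFLOPS")
```

The ~125 FLOP/byte crossover is why large-batch inference (high arithmetic intensity) benefits most from this class of accelerator, while small-batch decoding tends to be bounded by the 1024 GB/s memory system.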

Massive Performance Boost

Exceptional AI compute paired with substantially higher memory bandwidth

Direct Card-to-Card Communication

Seamless, fast data exchange over PCIe Gen5 x16, reducing latency and improving scalability

Optimized for Large-Scale Inference

Purpose-built to meet the demands of large-scale enterprises and AI-driven infrastructure