Description
Boosted Performance for Large-Scale Inference
ATOM™-Max is a best-in-class AI accelerator engineered for data centers and enterprises, delivering a substantial performance boost for large-scale AI inference workloads. ATOM™-Max achieves 128 TFLOPS (FP16) and up to 512 TOPS (INT8) / 1024 TOPS (INT4), with 1024 GB/s of bandwidth, ensuring maximum throughput for demanding applications.
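As a rough illustration (not part of the official specification sheet), the quoted peak figures can be related with simple arithmetic: the INT8 and INT4 peaks are 4x and 8x the FP16 peak, and peak FP16 compute divided by bandwidth works out to 125 operations per byte transferred. The short Python sketch below reproduces this arithmetic; all constants are taken from the figures above, and the variable names are illustrative only.

    # Illustrative arithmetic only; constants are the ATOM-Max peak figures quoted above.
    fp16_tflops = 128      # stated FP16 peak compute (TFLOPS)
    int8_tops = 512        # stated INT8 peak compute (TOPS)
    int4_tops = 1024       # stated INT4 peak compute (TOPS)
    bandwidth_gb_s = 1024  # stated bandwidth (GB/s)

    print(int8_tops / fp16_tflops)                      # 4.0  -> INT8 peak is 4x the FP16 peak
    print(int4_tops / fp16_tflops)                      # 8.0  -> INT4 peak is 8x the FP16 peak
    print(fp16_tflops * 1e12 / (bandwidth_gb_s * 1e9))  # 125.0 -> peak FP16 ops per byte moved

In practice, achievable throughput depends on the workload; the last ratio only indicates how much arithmetic the card can perform per byte moved at the quoted peaks.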
Key features
Massive Performance Boost
Exceptional AI compute (128 TFLOPS FP16) backed by 1024 GB/s of bandwidth
Direct Card-to-Card Communication
Fast, seamless card-to-card data exchange over PCIe Gen5 x16, reducing latency and improving scalability
Optimized for Large-Scale Inference
Specifically designed to meet the needs of large-scale enterprises and AI-driven infrastructures
ATOM™ Series

ATOM™
Cost-Efficient, Powerful AI Acceleration for Small-Sized Data Centers

ATOM™-Lite
Low-Power yet Powerful AI Inference at the Edge


ATOM™ SoC
AI Inference Accelerator for Data Centers
Internationally recognized through the industry-standard MLPerf™ v3.0 benchmark, ATOM™ scales to accelerate state-of-the-art AI models of various sizes.