ATOM™

Cost-efficient, Powerful AI Acceleration for Small-sized Data Centers

Description


ATOM™ is a cost-efficient AI accelerator designed to deliver maximum value for small-sized data centers and enterprise applications. Validated by MLPerf benchmark results, ATOM™ is engineered for outstanding hardware efficiency, enabling exceptional performance at a competitive price.

Equipped with 16 GB of GDDR6 memory and 256 GB/s of bandwidth for ultra-low latency, ATOM™ delivers an unbeatable performance-per-dollar ratio, particularly for small language model (SLM) applications. Its optimized design ensures efficient AI processing for organizations looking to maximize their data center investments.
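The SLM claim can be made concrete with a common back-of-envelope estimate. This sketch assumes (the spec sheet does not state this) that autoregressive decoding is memory-bandwidth-bound, so the ceiling on tokens per second is roughly memory bandwidth divided by the bytes streamed per token; the 3B-parameter model below is a hypothetical example, not a supported-model claim.

```python
# Back-of-envelope decode-throughput bound for an SLM on ATOM(TM).
# Assumption: decoding is bandwidth-bound, reading roughly the full
# weight set once per generated token.

BANDWIDTH_GBPS = 256  # GDDR6 bandwidth from the spec table, in GB/s

def max_decode_tokens_per_s(params_billions: float, bytes_per_param: float) -> float:
    """Rough upper bound on decode tokens/s for a weight-streaming-bound model."""
    model_gb = params_billions * bytes_per_param
    return BANDWIDTH_GBPS / model_gb

# e.g. a hypothetical 3B-parameter model quantized to INT8 (1 byte/param):
print(round(max_decode_tokens_per_s(3, 1)))  # about 85 tokens/s ceiling
```

Real throughput will be lower once compute, KV-cache traffic, and batching are accounted for; the point is that a 256 GB/s memory system leaves useful headroom for models in the SLM size class.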

Key features

Low Latency

Built on an efficient architecture for low latency

Best Value for Money

Low power usage coupled with exceptional performance

Applications

Optimized for SLMs and small-to-medium-sized enterprise applications

Product specs

FP16
32 TFLOPS
INT8 / INT4
128 TOPS / 256 TOPS
Input Power
DC 12V (CPU 8-pin power connector)
Max Power Consumption
85 W
Thermal
Air Cooling (passive)
Memory
GDDR6 16 GB, 256 GB/s
Host Interface
PCIe Gen5 x16, 64 GB/s
Form Factor
FHFL Single Slot
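The efficiency story behind "Best Value for Money" follows directly from the spec table. A minimal sketch, using only the published figures: peak throughput divided by maximum board power gives a nominal TOPS/W number (a datasheet ratio, not a measured workload result).

```python
# Nominal efficiency figures derived from the ATOM(TM) spec table above.
# "TOPS/W" here is peak throughput divided by max power consumption,
# not a benchmarked workload measurement.

FP16_TFLOPS = 32   # peak FP16 throughput
INT8_TOPS = 128    # peak INT8 throughput
INT4_TOPS = 256    # peak INT4 throughput
MAX_POWER_W = 85   # maximum power consumption

def peak_efficiency(tops: float, watts: float) -> float:
    """Peak throughput per watt (TOPS/W or TFLOPS/W)."""
    return tops / watts

print(f"INT8: {peak_efficiency(INT8_TOPS, MAX_POWER_W):.2f} TOPS/W")  # 1.51
print(f"INT4: {peak_efficiency(INT4_TOPS, MAX_POWER_W):.2f} TOPS/W")  # 3.01
```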

ATOM™ Series

ATOM™-Max

Boosted Performance for Large-Scale Inference

Explore ATOM™-Max

ATOM™-Lite

Low-power, High-performance AI Inference at the Edge

Explore ATOM™-Lite

ATOM™ SoC

Inference AI Accelerator for Data Centers

ATOM™ is a fast, power-efficient System-on-Chip for AI inference with remarkably low latency, designed for deployment in data centers and by cloud service providers.
With international recognition through the industry-standard MLPerf™ v3.0 benchmark, ATOM™ can be scaled to accelerate state-of-the-art AI models of various sizes.
Explore ATOM™ SoC