ATOM™-Lite

Low-power, Yet Highly Powerful AI Inference at the Edge

Description

ATOM™-Lite is a compact yet powerful AI accelerator tailored for edge inference applications. Designed with efficiency in mind, this AI accelerator operates in a Half Height Half Length (HHHL) form factor, making it an ideal fit for workstations, edge devices, and laboratory environments.

With a power consumption of only up to 65 W, ATOM™-Lite delivers impressive performance, reaching 32 TFLOPS (FP16) and 128 TOPS (INT8) / 256 TOPS (INT4): the same performance as its sibling, the ATOM™, in a smaller, more power-efficient package.

Key features

Low Power Consumption

Maximum power draw of just 65 W

Compact Design

HHHL form factor fits seamlessly into systems such as edge boxes and workstations

Applications

Optimized for use in workstations, edge environments, or laboratory setups

Product specs

FP16: 32 TFLOPS
INT8 / INT4: 128 TOPS / 256 TOPS
Power Connector: Gold Finger
Max Power Consumption: 65 W
Thermal: Air Cooling (Passive)
Memory: GDDR6 16 GB, 256 GB/s
Host Interface: PCIe Gen5 x16, 64 GB/s
Form Factor: HHHL, Single Slot
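As a rough illustration of how the spec figures relate, the standard roofline-model "ridge point" (peak compute divided by peak memory bandwidth) indicates how many operations a kernel must perform per byte of GDDR6 traffic before it becomes compute-bound rather than memory-bound. The sketch below uses only the numbers from the table above; the roofline ratio itself is a general modeling convention, not a vendor-published metric.

```python
# Back-of-envelope roofline estimate from the ATOM™-Lite spec table above.
# Ridge point = peak compute / peak memory bandwidth (ops per byte).

PEAK_INT8_TOPS = 128    # 128 TOPS (INT8), from the spec table
PEAK_FP16_TFLOPS = 32   # 32 TFLOPS (FP16), from the spec table
MEM_BW_GBPS = 256       # GDDR6 bandwidth, 256 GB/s

# Arithmetic intensity above which a kernel is compute-bound on this card.
ridge_int8 = PEAK_INT8_TOPS * 1e12 / (MEM_BW_GBPS * 1e9)
ridge_fp16 = PEAK_FP16_TFLOPS * 1e12 / (MEM_BW_GBPS * 1e9)

print(f"INT8 ridge point: {ridge_int8:.0f} ops/byte")  # 500 ops/byte
print(f"FP16 ridge point: {ridge_fp16:.0f} ops/byte")  # 125 ops/byte
```

High ridge points like these are typical of inference accelerators: workloads with low arithmetic intensity (e.g. memory-bound decoding) are limited by the 256 GB/s memory bandwidth, while dense matrix-heavy layers can approach the peak TOPS figures.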

ATOM™ Series

ATOM™-Max

Boosted Performance for Large-Scale Inference

Explore ATOM™-Max

ATOM™

Cost-Efficient, Powerful AI Acceleration for Small-Sized Data Centers

Explore ATOM™

ATOM™ SoC

Inference AI Accelerator for Data Centers

ATOM™ is a fast, power-efficient System-on-Chip for AI inference with remarkably low latency, designed for deployment in data centers and by cloud service providers.
With international recognition in the industry-standard MLPerf™ v3.0 benchmark, ATOM™ can be scaled to accelerate state-of-the-art AI models of various sizes.
Explore ATOM™ SoC