ATOM™ (RBLN-CA22)

Cost-efficient, Powerful AI Acceleration for Small-sized Data Centers

Description

RBLN-CA22 (ATOM™) is a cost-efficient AI accelerator designed to deliver maximum value for small-sized data centers and enterprise applications. Validated by MLPerf benchmark results, ATOM™ is engineered for outstanding hardware efficiency, enabling exceptional performance at a competitive price.

Equipped with 16GB of GDDR6 memory and 256GB/s of memory bandwidth for ultra-low latency, ATOM™ delivers an unbeatable performance-per-dollar ratio, particularly for small language model (SLM) applications. Its optimized design ensures efficient AI processing for organizations looking to maximize their data center investments.
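A back-of-the-envelope sketch of what the 16GB capacity means for SLM deployment. This is an illustrative calculation, not vendor guidance; the 2 GB reserved for activations and KV cache is an assumption.

```python
# Rough check (illustrative, not vendor guidance): how large a small language
# model fits in ATOM's 16 GB of GDDR6 at common weight precisions.

MEMORY_GB = 16  # on-card GDDR6 capacity from the spec sheet

BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

def max_params_billions(precision: str, reserve_gb: float = 2.0) -> float:
    """Upper bound on model size in billions of parameters, reserving
    some memory for activations and KV cache (reserve_gb is an assumption)."""
    usable_bytes = (MEMORY_GB - reserve_gb) * 1e9
    return usable_bytes / BYTES_PER_PARAM[precision] / 1e9

for p in BYTES_PER_PARAM:
    print(f"{p}: ~{max_params_billions(p):.0f}B parameters")
# FP16: ~7B, INT8: ~14B, INT4: ~28B
```

Under these assumptions, typical 7B-class SLMs fit comfortably at FP16, and larger models become feasible with INT8/INT4 quantization, which the chip's INT8/INT4 throughput supports.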

Key features

Low Latency

Built with efficient architecture for low latency

Best Value for Money

Low power usage coupled with exceptional performance

Applications

Optimized for SLMs and small- and medium-sized enterprise applications

System specs

Single Chip
Product SKU: CA12E100B
FP16: 32 TFLOPS
INT8 / INT4: 128 TOPS / 256 TOPS
Multi-Instance: HW isolation up to 16 independent tasks
Input Power: DC 12V (CPU 8-pin power connector)
Max. TDP: 75-90W
Thermal: Air Cooling (passive)
Memory: GDDR6 16GB, 256GB/s
Host & CIC I/F: PCIe Gen5 x16, 64GB/s
Form Factor: 266.5 mm x 111 mm x 19 mm
Weight: 615 g
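The compute and bandwidth figures above imply a simple roofline-style ratio: the number of operations a kernel must perform per byte fetched from memory before it becomes compute-bound rather than bandwidth-bound. A minimal sketch of that arithmetic, using only numbers from the spec table:

```python
# Illustrative roofline arithmetic from the spec table: peak compute divided
# by memory bandwidth gives the "ridge point" in operations per byte.

PEAK_FP16_TFLOPS = 32      # FP16 peak from the spec table
PEAK_INT8_TOPS = 128       # INT8 peak from the spec table
MEM_BANDWIDTH_GBPS = 256   # GDDR6 bandwidth from the spec table

def ridge_point(peak_tops: float, bw_gbps: float) -> float:
    """Ops/byte at which a kernel shifts from memory- to compute-bound."""
    return (peak_tops * 1e12) / (bw_gbps * 1e9)

print(f"FP16 ridge point: {ridge_point(PEAK_FP16_TFLOPS, MEM_BANDWIDTH_GBPS):.0f} FLOPs/byte")
print(f"INT8 ridge point: {ridge_point(PEAK_INT8_TOPS, MEM_BANDWIDTH_GBPS):.0f} ops/byte")
# FP16 ridge point: 125 FLOPs/byte
# INT8 ridge point: 500 ops/byte
```

Batch-1 LLM decoding is typically memory-bandwidth-bound (only a few operations per weight byte), which is consistent with the card's positioning around low-latency SLM inference rather than raw peak throughput.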

RBLN-CA Series

RBLN-CA25

Boosted Performance for Hyperscalers

Explore RBLN-CA25

RBLN-CA21

Low-power, Yet Highly Powerful AI Inference at the Edge

Explore RBLN-CA21

Explore Chip

ATOM™

Inference AI Accelerator for Data Centers

ATOM™ is a fast, power-efficient System-on-Chip for AI inference with remarkably low latency, designed for deployment in data centers and by cloud service providers.
With international recognition in the industry-standard MLPerf™ v3.0 benchmark, ATOM™ can be scaled to accelerate state-of-the-art AI models of various sizes.
Explore ATOM™