ATOM™ Max (RBLN-CA25)

Boosted Performance for Hyperscalers

Description

ATOM™ Max (RBLN-CA25) is a best-in-class AI accelerator engineered for hyperscale data centers and enterprises, delivering a substantial performance boost for large-scale AI inference workloads. RBLN-CA25 achieves 128 TFLOPS (FP16) and up to 512 TOPS (INT8) / 1024 TOPS (INT4), with 1024 GB/s of memory bandwidth, ensuring maximum throughput for demanding applications.
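The peak figures above imply a compute-to-bandwidth ratio that is useful for back-of-envelope sizing. The sketch below is illustrative arithmetic only (not vendor software); it uses just the numbers quoted here to estimate the arithmetic intensity (operations per byte of memory traffic) a workload must exceed before the chip is compute-bound rather than bandwidth-bound:

```python
# Back-of-envelope arithmetic from the quoted datasheet peaks.
PEAK_FP16_TFLOPS = 128    # FP16 peak compute
PEAK_INT8_TOPS = 512      # INT8 peak compute
MEM_BANDWIDTH_GBS = 1024  # memory bandwidth

def ops_per_byte(peak_tera_ops: float, bandwidth_gbs: float) -> float:
    """Peak arithmetic intensity the memory system can feed (ops/byte)."""
    return (peak_tera_ops * 1e12) / (bandwidth_gbs * 1e9)

print(ops_per_byte(PEAK_FP16_TFLOPS, MEM_BANDWIDTH_GBS))  # 125.0 FLOPs/byte (FP16)
print(ops_per_byte(PEAK_INT8_TOPS, MEM_BANDWIDTH_GBS))    # 500.0 ops/byte (INT8)
```

Workloads with lower arithmetic intensity than these break-even points (e.g. memory-bound decode steps of large language models) are limited by the 1024 GB/s bandwidth, which is why the bandwidth figure matters as much as the TFLOPS figure for inference.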

Key features

Massive Performance Boost

Exceptional AI compute paired with a substantial memory-bandwidth boost.

Direct Card-to-Card Communication

Seamless and fast data exchange using PCIe Gen5 x16, reducing latency and enhancing scalability.

Optimized for Hyperscalers

Specifically designed to meet the needs of large-scale enterprises and AI-driven infrastructures.

System Specs

Single Chip

Product SKU: CA25M200x
FP16: 128 TFLOPS
INT8 / INT4: 512 TOPS / 1024 TOPS
Multi-Instance: HW isolation, up to 64 independent tasks
Input Power: DC 12 V (CPU 8-pin power connector)
Max. TDP: 350 W
Thermal: Air cooling (passive)
Memory: GDDR 64 GB, 1024 GB/s
Host & C2C I/F: PCIe Gen5 x16, 64 GB/s
Form Factor: 266.5 mm x 111 mm x 19 mm

Energy-Efficient AI Accelerator with Flexible Interconnect Topology

RBLN-CA Series

RBLN-CA22

Cost-efficient, Powerful AI Acceleration for Small-sized Data Centers

RBLN-CA21

Low-power, Yet Highly Powerful AI Inference at the Edge

ATOM™

Inference AI Accelerator for Data Centers

ATOM™ is a fast, power-efficient System-on-Chip for AI inference with remarkably low latency, designed for deployment in data centers and by cloud service providers.
With international recognition in the industry-standard MLPerf™ v3.0 benchmark, ATOM™ can be scaled to accelerate state-of-the-art AI models of various sizes.