Description
Boosted Performance for Hyperscalers
ATOM™ Max (RBLN-CA25) is a best-in-class AI accelerator engineered for hyperscale data centers and enterprises, delivering a significant performance boost for large-scale AI inference workloads. RBLN-CA25 achieves 128 TFLOPS (FP16) and up to 512 TOPS (INT8) / 1024 TOPS (INT4), with 1024 GB/s of memory bandwidth, ensuring maximum throughput for demanding applications.
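As an illustrative back-of-the-envelope check (not vendor guidance), the quoted peak figures imply a roofline "ridge point": the arithmetic intensity at which a workload shifts from bandwidth-bound to compute-bound. The sketch below uses only the numbers stated above; the calculation is a generic roofline estimate, not an official Rebellions figure.

```python
# Roofline ridge-point sketch using the quoted RBLN-CA25 specs.
# All constants come from the text above; the calculation itself is
# a generic estimate, not an official Rebellions figure.

PEAK_FP16_FLOPS = 128e12   # 128 TFLOPS (FP16)
PEAK_INT8_OPS   = 512e12   # 512 TOPS (INT8)
BANDWIDTH_BPS   = 1024e9   # 1024 GB/s memory bandwidth

# Ridge point: minimum arithmetic intensity (operations per byte moved)
# at which a kernel becomes compute-bound rather than bandwidth-bound.
ridge_fp16 = PEAK_FP16_FLOPS / BANDWIDTH_BPS  # 125 FLOP/byte
ridge_int8 = PEAK_INT8_OPS / BANDWIDTH_BPS    # 500 op/byte

print(f"FP16 ridge point: {ridge_fp16:.0f} FLOP/byte")
print(f"INT8 ridge point: {ridge_int8:.0f} op/byte")
```

Workloads below these intensities (e.g. memory-bound LLM decoding) are limited by the 1024 GB/s bandwidth rather than raw TOPS, which is why the bandwidth figure matters alongside peak compute.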
Key features
Massive Performance Boost
Exceptional AI compute paired with 1024 GB/s of memory bandwidth
Direct Card-to-Card Communication
Seamless and fast data exchange using PCIe Gen5 x16, reducing latency and enhancing scalability
Optimized for Hyperscalers
Specifically designed to meet the needs of large-scale enterprises and AI-driven infrastructures
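To put the card-to-card interconnect in context, the theoretical bandwidth of a PCIe Gen5 x16 link can be sketched from the standard PCIe 5.0 figures (32 GT/s per lane, 128b/130b encoding). These are generic PCIe numbers, not Rebellions-specific measurements.

```python
# Theoretical PCIe Gen5 x16 link bandwidth, the interconnect used here
# for direct card-to-card communication. Standard PCIe 5.0 figures,
# not Rebellions-specific measurements.

GT_PER_S_PER_LANE = 32           # PCIe Gen5 signaling rate: 32 GT/s per lane
LANES = 16                       # x16 link
ENCODING_EFFICIENCY = 128 / 130  # PCIe Gen3+ uses 128b/130b line encoding

raw_gbps = GT_PER_S_PER_LANE * LANES                   # 512 Gb/s raw per direction
effective_gbytes = raw_gbps * ENCODING_EFFICIENCY / 8  # ~63 GB/s per direction

print(f"PCIe Gen5 x16: ~{effective_gbytes:.1f} GB/s per direction")
```

Bypassing host memory and exchanging data directly over this link is what reduces inter-card latency in multi-accelerator deployments.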
System specs
RBLN-CA Series

RBLN-CA22
Cost-efficient, Powerful AI Acceleration for Small-sized Data Centers

RBLN-CA21
Low-power, Yet Highly Powerful AI Inference at the Edge

ATOM™
Inference AI Accelerator for Data Centers
Internationally recognized in the industry-standard MLPerf™ v3.0 benchmark, ATOM™ can be scaled to accelerate state-of-the-art AI models of various sizes.