Product Highlights

Discover Energy-Optimized Gen AI, Powered by Rebellions

ATOM™-Max (RBLN-CA25)

GDDR6 SoC for Versatile AI Inference at High TPS/Watt

Optimized for Powerful, Low-Latency AI Inference

ATOM™-Max (RBLN-CA25) is an energy-efficient AI accelerator that leads its class in TPS per watt. Its direct intercard communication over PCIe Gen5, bypassing the CPU, enables low-latency inference with exceptional energy efficiency. Designed for linear scalability, RBLN-CA25 supports adaptable topologies across up to eight RBLN-CA25 cards, seamlessly expanding from single-chip to full-rack deployments.

RBLN-CA25 PDF Brief Download

REBEL (RBLN-CR13)

HBM3e Chiplet-based Scalable AI Accelerator for Hyperscale Workloads

High-grade Chip with Formidable Compute Density and Memory Bandwidth

REBEL (RBLN-CR13) is Rebellions’ high-grade chiplet-based AI accelerator engineered specifically for data center-scale workloads. Designed from the ground up for exceptional efficiency, high hardware utilization, low latency, and seamless scalability, RBLN-CR13 sets a new standard in AI performance. Equipped with 144GB of HBM3e memory, it delivers an impressive 1 PFLOPS of FP16 compute power within a 350W power envelope. RBLN-CR13 also leverages the UCIe-Advanced specification for ultra-high bandwidth density, near-zero latency, and remarkable energy efficiency.
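As a quick back-of-the-envelope check on the figures above (a simple sketch; the variable names and the per-watt framing are illustrative, not taken from Rebellions' documentation):

```python
# Compute density per watt for REBEL (RBLN-CR13), using the stated
# specs: 1 PFLOPS of FP16 compute in a 350 W power envelope.
fp16_tflops = 1000.0   # 1 PFLOPS = 1000 TFLOPS
power_watts = 350.0    # stated power envelope

tflops_per_watt = fp16_tflops / power_watts
print(f"{tflops_per_watt:.2f} FP16 TFLOPS per watt")  # ≈ 2.86
```

This kind of FLOPS-per-watt ratio is one common way to compare inference accelerators, though real-world efficiency also depends on utilization and workload.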

RBLN-CR13 PDF Brief Download

System Solutions

Rebellions Scalable Design (RSD)

Modular Framework for Scalable PODs

Rebellions Scalable Design (RSD) is the architectural foundation that enables seamless, flexible support for LLMs across a wide range of sizes. From inter-chip connections to multi-card configurations, servers, and rack-level datacenter deployments, RSD provides a scalable, energy-efficient, and highly capable solution for every requirement.

RSD PDF Brief Download

Certified Hardware Partnerships

Rebellions powers your AI systems through a network of certified hardware partners, delivering solutions that are stable and ready to deploy.

ATOM™

GDDR6 SoC for Versatile AI Inference at High TPS/Watt

Optimized for Powerful, Low-latency AI Inference

Our Strong Ecosystem Partners

World's Most Efficient AI Inference