ATOM™ is built around ION™, a small yet powerful, silicon-proven AI core whose Coarse-Grained Reconfigurable Architecture (CGRA) makes it flexible, programmable, and scalable. This design allows ATOM™ to handle networks of varying depth and complexity, from small applications to hyperscale services, while maximizing energy efficiency.
ATOM™ is a multi-core System-on-Chip that consolidates all essential components onto a single substrate. Communication overhead within the chip is minimized through on-chip data dependency handling.
ATOM™ can accelerate different types of neural networks, including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Bidirectional Encoder Representations from Transformers (BERT), and more recent transformer-based models such as T5 and GPT.
ATOM™ can be partitioned to run up to 16 separate jobs simultaneously, with each job isolated at both the hardware and software levels. This Multi-Instance feature adds deployment flexibility and keeps resources fully utilized, improving overall throughput.
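As an illustration only, the partitioning model above can be sketched as a host-side manager that hands out isolated hardware instances to jobs. All names and the API below are hypothetical, not the actual ATOM™ SDK; the sketch only shows the bookkeeping implied by a 16-instance limit with per-job isolation.

```python
# Hypothetical sketch of Multi-Instance partitioning (illustrative only;
# not the real ATOM(TM) SDK). Each job gets its own disjoint slice of
# hardware instances, mirroring per-job isolation.

MAX_INSTANCES = 16  # ATOM(TM) supports up to 16 simultaneous isolated jobs

class InstanceManager:
    def __init__(self, total=MAX_INSTANCES):
        self.free = list(range(total))   # instance IDs not yet assigned
        self.jobs = {}                   # job name -> list of instance IDs

    def allocate(self, job, n=1):
        """Reserve n isolated instances for a job; fail if too few are free."""
        if n > len(self.free):
            raise RuntimeError(f"only {len(self.free)} instances free")
        ids = [self.free.pop() for _ in range(n)]
        self.jobs[job] = ids
        return ids

    def release(self, job):
        """Return a finished job's instances to the free pool."""
        self.free.extend(self.jobs.pop(job))

mgr = InstanceManager()
mgr.allocate("vision-cnn", 4)    # e.g. a CNN pipeline on 4 instances
mgr.allocate("llm-decoder", 8)   # a transformer decoder on 8 instances
# 4 instances remain free; each job sees only its own slice
```

In hardware, the isolation is enforced below this level; the sketch captures only the resource-accounting view a scheduler would take.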