Description
World-Class Inference for Edge and Cloud Computing
Standing on par with the industry’s leading competitors, the ATOM™ SoC delivers top performance across a wide range of AI tasks, including computer vision, natural language processing, and recommendation models. Built around the silicon-proven ION™ neural core as a compute granule that scales up with perfect linearity, ATOM™ is the optimal chip for the large-scale inference required in edge computing and data centers.
Bringing Latency-Critical AI Tasks to Hyperscale
Scalable, energy-efficient, and fast: ATOM™ is poised to be a pivotal enabler of server-level AI services, optimizing the AI-as-a-Service (AIaaS) stack. A multi-core System-on-Chip (SoC) built on a dataflow architecture, ATOM™ delivers superior inference performance and achieves high system utilization through the seamless interplay of our hardware, software, and firmware.
Advanced Components
Samsung Foundry’s advanced EUV process node enables ATOM™ to deliver improved performance, reduced power consumption, and enhanced energy efficiency. Equipped with PCIe Gen5, GDDR6, and high-speed I/O, ATOM™ is the optimal AI accelerator for markets spanning from edge computing to data centers.