Once again, Ampere Computing pushes the envelope in CPU technology with its new range of in-house-designed Arm-based processors, the AmpereOne Family, whose core counts go beyond the prior generation's top of 128 cores. As a result, there is no overlap with the existing Ampere Altra and Ampere Altra Max product lines. The AmpereOne Family starts at 96 cores and scales up to 192 cores, rounding out the product line to cover every kind of application. While these processors lack the simultaneous multithreading (SMT) found on x86 processors, their 1:1 ratio of cores to threads is better suited to cloud-native workloads, where predictable per-thread performance matters. After all, the AmpereOne Family is squarely focused on cloud service providers.
“AmpereOne Family is about more. More cores, more I/O, more memory, more performance, more cloud features,” said Jeff Wittich, Chief Product Officer at Ampere. “With our Ampere Custom Cloud Native Cores, this is the next step in the break from the constraints of legacy compute. No other CPU comes close. It is about cloud scale with the maximum performance per rack.”
Ampere Computing designed an in-house Arm processor using chiplets
Dedicated custom cores complete more tasks simultaneously, and faster
Increased I/O throughput, with up to 128 GB/s of bandwidth across x16 lanes
Doubled private L2 cache compared to the prior generation of Ampere CPUs
Increased memory throughput and higher DDR5 capacity per DIMM
Expanded support for data formats, including FP16, Int8, and Int16
AI inference is becoming part of every aspect of our lives, from heavy-duty computing in data centers to everyday smartphone use. While inference can run on traditional server setups with GPUs installed, its hardware requirements are much lower than those of AI training. With AmpereOne Family processors, which are based on the Armv8.6-A ISA and feature vector engines, systems can perform AI inference in a GPU-less environment, leading to a significant reduction in costs.
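To make the reduced-precision angle concrete, the sketch below shows why formats like Int8 work well for CPU-based inference: weights and activations are quantized to 8-bit integers, the matrix multiply accumulates in int32, and the result is scaled back to float. This is a generic, illustrative example using NumPy as a stand-in for a CPU's vector units; it is not Ampere's implementation, and the function names are hypothetical.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map [-max|x|, +max|x|] onto [-127, 127].
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_q, a_scale, w_q, w_scale):
    # Multiply in int8, accumulate in int32 (avoids overflow), then dequantize.
    acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * w_scale)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 64)).astype(np.float32)   # activations
w = rng.standard_normal((64, 8)).astype(np.float32)   # layer weights

a_q, a_s = quantize_int8(a)
w_q, w_s = quantize_int8(w)

ref = a @ w                                # full-precision reference
approx = int8_matmul(a_q, a_s, w_q, w_s)   # quantized inference path
err = np.max(np.abs(ref - approx))
```

The quantized result stays close to the float32 reference while the inner loop moves a quarter of the data and uses integer arithmetic, which is exactly the kind of work a wide vector engine handles efficiently.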
View Servers with AmpereOne