"How do these 'snort your coffee' numbers arise?": Expert questions the validity of Zettascale and Exascale-class AI supercomputers, and presents a simple compelling car analogy to explain not all FLOPs are the same

A leading expert has raised critical questions about the validity of claims surrounding "Zettascale" and "Exascale-class" AI supercomputers.

In an article that delves deep into the technical intricacies of these terms, Doug Eadline from HPCWire explains how terms like exascale, which traditionally denotes computers achieving 1 quintillion floating-point operations per second (FLOPS), are often misused or misrepresented, particularly in the context of AI workloads.

Eadline points out that many of the recent announcements touting "exascale" or even "zettascale" performance are based on speculative metrics, rather than tested results. He writes, "How do these 'snort your coffee' numbers arise from unbuilt systems?" - a question that highlights the gap between theoretical peak performance and actual measured results in the field of high-performance computing. The term exascale has historically been reserved for systems that execute at least 10^18 FLOPS in sustained, double-precision (64-bit) calculations, a standard verified by benchmarks such as the High-Performance LINPACK (HPLinpack).
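Where do such numbers come from before a machine is even built? A minimal sketch of the marketing arithmetic, using entirely hypothetical hardware figures rather than any vendor's published specs, shows how a headline figure can be multiplied out of spec sheets alone:

```python
# Back-of-the-envelope "paper peak" math with hypothetical numbers:
# multiply accelerator count by per-device peak throughput.
# No benchmark is ever run to produce figures like these.
accelerators = 100_000        # hypothetical GPU count for an unbuilt system
fp8_tflops_per_gpu = 4_000    # hypothetical peak FP8 TFLOPS per accelerator
fp64_tflops_per_gpu = 60      # hypothetical peak FP64 TFLOPS per accelerator

peak_fp8 = accelerators * fp8_tflops_per_gpu * 1e12    # in FLOPS
peak_fp64 = accelerators * fp64_tflops_per_gpu * 1e12

print(f"Low-precision 'AI FLOPS' peak: {peak_fp8 / 1e21:.2f} zettaFLOPS")
print(f"Double-precision (FP64) peak:  {peak_fp64 / 1e18:.2f} exaFLOPS")
```

With these made-up inputs, the same unbuilt machine is "0.40 zettaFLOPS" in FP8 marketing terms but only a 6 exaFLOPS theoretical peak in FP64, before any sustained measurement shrinks it further.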

Car comparison

As Eadline explains, the distinction between FLOPS in AI and HPC is crucial. While AI workloads often rely on lower-precision floating-point formats such as FP16, FP8, or even FP4, traditional HPC systems require higher precision for accurate results.
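The practical difference is easy to demonstrate. This short sketch (assuming Python with NumPy is available) accumulates the same sum in FP64 and FP16; the half-precision total stalls once each addition falls below the rounding granularity of the running total:

```python
import numpy as np

# Accumulate 100,000 additions of 0.01 in double and half precision.
values = np.full(100_000, 0.01)

total_fp64 = values.astype(np.float64).sum()

# Naive sequential accumulation in FP16; np.sum would pairwise-sum
# and partially hide the rounding error we want to expose.
total_fp16 = np.float16(0.0)
for v in values.astype(np.float16):
    total_fp16 = np.float16(total_fp16 + v)

print(total_fp64)   # ~1000.0
print(total_fp16)   # stalls near 32: once the FP16 total's spacing
                    # exceeds 2 * 0.01, each addition rounds away to nothing
```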

The use of these lower-precision numbers is what leads to inflated claims of exaFLOP or even zettaFLOP performance. According to Eadline, "calling it 'AI zetaFLOPS' is silly because no AI was run on this unfinished machine."

He further emphasizes the importance of using verified benchmarks like HPLinpack, which has been the standard for measuring HPC performance since 1993, and how using theoretical peak numbers can be misleading.
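HPLinpack makes that distinction concrete: it reports a measured sustained rate (Rmax) that can be set against the machine's theoretical peak (Rpeak). A quick illustration with made-up figures, not any real TOP500 submission:

```python
# Hypothetical figures, not a real TOP500 entry.
rpeak = 1.7e18   # theoretical FP64 peak: every unit busy, every cycle
rmax = 1.2e18    # sustained FP64 rate actually measured by HPLinpack

print(f"HPL efficiency: {rmax / rpeak:.0%}")   # ~71%
```

Even a well-tuned system leaves a sizeable gap between Rmax and Rpeak, and HPL is a comparatively friendly workload; real applications typically achieve far less of the theoretical peak.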

The two supercomputers that are currently part of the exascale club - Frontier at Oak Ridge National Laboratory and Aurora at Argonne National Laboratory - have been tested with real applications, unlike many of the AI systems making exascale claims.

To explain the difference between the various floating-point formats, Eadline offers a car analogy: "The average double precision FP64 car weighs about 4,000 pounds (1814 Kilos). It is great at navigating terrain, holds four people comfortably, and gets 30 MPG. Now, consider the FP4 car, which has been stripped down to 250 pounds (113 Kilos) and gets an astounding 480 MPG. Great news. You have the best gas mileage ever! Except, you don't mention a few features of your great FP4 car. First, the car has been stripped of everything except a small motor and maybe a seat. What's more, the wheels are 16-sided (2^4) and provide a bumpy ride as compared to the smooth FP64 sedan ride with wheels that have somewhere around 2^64 sides. There may be places where your FP4 car works just fine, like cruising down Inference Lane, but it will not do well heading down the FP64 HPC highway."
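The wheel arithmetic in the analogy is simply the count of bit patterns each format can encode, as this quick illustration shows:

```python
# "Sides of the wheel" per format: raw bit patterns, an upper bound on
# distinct representable values (some patterns encode NaN or infinity).
for name, bits in [("FP4", 4), ("FP8", 8), ("FP16", 16), ("FP64", 64)]:
    print(f"{name}: 2**{bits} = {2**bits:,} possible encodings")
```

Sixteen possible values per number versus roughly 1.8 × 10^19 is why an FP4 operation and an FP64 operation both count as "a FLOP" yet carry vastly different amounts of information.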

Eadline's article serves as a reminder that while AI and HPC are converging, the standards for measuring performance in these fields remain distinct. As he puts it, "Fuzzing things up with 'AI FLOPS' will not help either," pointing out that only verified systems that meet the stringent requirements for double-precision calculations should be considered true exascale or zettascale systems.

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK's PC magazines, and launched, edited and published a number of them too.
