It is also bringing about the increasing use of local and mobile
artificial intelligence (AI), implemented on your phone or in your
car; wireless networking, including 5G, Wi-Fi, and IoT devices;
edge AI, implemented on edge-computing devices; and a fiber
backbone that connects to a server that performs HPC to
implement high-end machine learning (ML) and AI. The key
takeaway is that there is no single application driving today’s
wave of semiconductor purchases—rather, the combination and
convergence of all applications is driving the wave.
Driving the need for exascale computing
The massive amounts of data that the converged technology
ecosystem generates must be processed, thereby driving the
advent of exascale computing. The term exascale refers to a
supercomputer capable of calculating at least one exaflop, or
10¹⁸ floating-point operations per second, a thousand-fold
increase in compute power vs. the first petascale computer,
which began operating in 2008, according to the Los Alamos
National Laboratory [2]. Currently, no single exascale computer
exists, although the first, called Frontier, may debut at Oak Ridge
National Laboratory (ORNL) later this year [3]. Nevertheless,
the combined compute power currently available certainly
exceeds the exaflop threshold. This aggregate exascale compute
power is spread across everything from high-end servers to
handheld smartphones, whose capabilities exceed those of a
high-end PC of just a few years ago.
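Spelled out as a simple unit conversion, the thousand-fold jump is:

1 exaflop = 10¹⁸ FLOPS = 1,000 × 10¹⁵ FLOPS = 1,000 petaflops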
The industry is exploring new approaches to extracting even
more compute power and to overcoming the challenges those
approaches entail. For example, the mobile computing
industry is moving to new processing nodes and will have to
address the tradeoffs between power and performance while
accommodating new failure models. Companies serving high-
performance compute and graphics applications will deploy
chiplet technology—a building-block approach that will allow
them to precisely match their speed requirements to their
intended applications and to improve yield. These companies will
also rely on advanced packaging techniques while contending
with power and thermal challenges. Furthermore, in their drive
toward exascale computing, makers of datacenter infrastructure
will deploy millions of heterogeneous cores to achieve the scalability
and parallelism they need while maintaining resilience to failures
and finding innovative ways to manage power.
Implications for test
Convergence and exascale computing combine to present unique
testing requirements as chipmakers move toward more advanced
process nodes. Because transistor density increases at smaller nodes,
scan-test data volume is exploding, creating the need for more
memory, faster scan techniques, and new methodologies. Advantest
estimates that scan data volume has increased 250% since 2018 and
will reach 450% of 2018’s level by 2023.
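Those figures imply a steep compound growth rate. The short Python
sketch below is our own back-of-the-envelope calculation, assuming the
percentages refer to total volume relative to the 2018 baseline (i.e.,
2.5x by 2021 and 4.5x by 2023); the calculation is ours, not Advantest's:

# Implied compound annual growth rate (CAGR) of scan-test data volume.
# Assumption: "250% since 2018" means 2.5x the 2018 baseline by 2021.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

growth_2018_2021 = cagr(1.0, 2.5, 3)  # roughly 36% per year
growth_2021_2023 = cagr(2.5, 4.5, 2)  # roughly 34% per year

print(f"2018-2021: {growth_2018_2021:.1%}/yr, 2021-2023: {growth_2021_2023:.1%}/yr")

Sustained growth in the mid-30-percent-per-year range is what drives
the need for the deeper vector memory and faster scan methodologies
discussed below.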
Handling this increasing scan data volume will require automatic
test equipment (ATE) with deeper vector memory and, to keep test
times under control, faster scan-test methodologies such as scan
over high-speed input/output (HSIO), which can employ a SERDES
interface or the IEEE 1149.10 high-speed test access port and on-chip
distribution architecture. We estimate that in 2020 the classic scan-
test technique provided scan access to 90% of all digital devices,
with muxed scan taking up the remainder. By 2025, classic scan’s
share could drop to 40%, with muxed, SERDES, and 1149.10 scan
dividing the remainder.
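To illustrate why scan over HSIO keeps test times under control, the
sketch below models test time as scan data volume divided by aggregate
scan bandwidth; the data volume, channel counts, and per-lane rates are
illustrative assumptions of ours, not figures from the article:

# Rough scan test-time model: time = scan data volume / aggregate bandwidth.
# All volumes, lane counts, and data rates are illustrative assumptions.
GBIT = 1e9

def scan_time_s(volume_bits, lanes, bits_per_s_per_lane):
    return volume_bits / (lanes * bits_per_s_per_lane)

volume = 400 * GBIT  # assumed scan data volume for a large digital die

# Classic scan: 8 GPIO scan channels at 100 Mbps each
classic = scan_time_s(volume, lanes=8, bits_per_s_per_lane=100e6)

# Scan over HSIO: 4 SERDES lanes at 8 Gbps each
hsio = scan_time_s(volume, lanes=4, bits_per_s_per_lane=8 * GBIT)

print(f"classic scan: {classic:.0f} s; scan over HSIO: {hsio:.1f} s")

Even with half as many lanes, the raw bandwidth of the SERDES interface
cuts scan time by more than an order of magnitude in this example, which
is why the share of SERDES and 1149.10 scan is expected to grow.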