Efficient Computation through Tuned Approximation
Prof. David Keyes
Extreme Computing Research Center
King Abdullah University of Science and Technology, Saudi Arabia
Wednesday, November 2, 5 pm - 6 pm, Arabia Standard Time (AST)
7 am PDT / 10 am EDT / 2 pm UTC / 3 pm CET / 5 pm AST / 11 pm JST
Supercomputing Spotlights is a new webinar series, presented by the SIAM Activity Group on Supercomputing (SIAG/SC, https://siag-sc.org) as part of new initiatives that focus on raising awareness of high-performance computing (HPC) opportunities and growing the community. The webinars feature short TED-style presentations that highlight the impact and successes of HPC throughout our world. Presentations, emphasizing achievements and opportunities in HPC, are intended for the broad international community, especially students and newcomers to the field. Join us!
Abstract: Numerical linear algebra software is being reinvented to provide opportunities to dynamically tune the accuracy of computation to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of “oversolving” relative to the expectations of many models. Real datatypes are so often defaulted to double precision that GPUs did not gain wide acceptance until they provided double-precision operations in hardware, operations not required in their original domain of graphics. Indeed, the condition number of discretizations of the Laplacian reaches the reciprocal of unit roundoff for single precision with just a thousand uniformly spaced points per dimension. However, many operations considered at a blockwise level tolerate lower precision, and many blocks can be approximated by low-rank near-equivalents. This leads to a smaller memory footprint, which permits higher residency in the memory hierarchy, which in turn reduces the time and energy spent on data motion; these savings may even dwarf those from fewer and cheaper flops. We provide examples from several application domains, including a preview of a 2022 Gordon Bell finalist computation that benefits from both blockwise lower precisions and lower ranks.
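The blockwise idea in the abstract can be made concrete with a small sketch (an illustration of the general technique, not the speaker's code). An off-diagonal block of a kernel matrix between two well-separated point clusters is numerically low rank, so a truncated SVD with factors stored in single precision reproduces the block to a tolerance at a fraction of the memory. The kernel 1/(1+|x−y|), the cluster geometry, and the tolerance are illustrative assumptions.

```python
import numpy as np

# An off-diagonal kernel block between two well-separated point clusters;
# such blocks of smooth kernels are numerically low rank.
n = 256
x = np.linspace(0.0, 1.0, n)
y = np.linspace(10.0, 11.0, n)  # well separated from x
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

# Truncate the SVD at a relative tolerance and keep the factors
# in single precision (blockwise lower precision + lower rank).
tol = 1e-6
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s >= tol * s[0]))          # numerical rank at tolerance tol
Uk = (U[:, :k] * s[:k]).astype(np.float32)
Vk = Vt[:k, :].astype(np.float32)

rel_err = np.linalg.norm(A - Uk @ Vk) / np.linalg.norm(A)
bytes_full = A.astype(np.float32).nbytes  # dense block, single precision
bytes_lr = Uk.nbytes + Vk.nbytes          # compressed factors
print(f"rank {k} of {n}, relative error {rel_err:.1e}, "
      f"compression {bytes_full / bytes_lr:.1f}x")
```

The smaller footprint is the point: the compressed factors fit higher in the memory hierarchy, so the savings in data motion can exceed the savings in arithmetic.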
Bio: David Keyes is a professor of applied mathematics, computer science, and mechanical engineering at the King Abdullah University of Science and Technology (KAUST), where he directs the Extreme Computing Research Center and where he was a founding dean in 2009. He is also an adjunct professor of applied mathematics at Columbia University and an affiliate of several US national laboratories. He earned a BSE in aerospace and mechanical sciences from Princeton in 1978 and a PhD in applied mathematics from Harvard in 1984. He works at the interfaces between parallel computing and the numerical analysis of PDEs and spatial statistics, with a focus on scalable implicit solvers and the exploitation of data sparsity. He helped develop and popularize the Newton-Krylov-Schwarz (NKS) and Additive Schwarz Preconditioned Inexact Newton (ASPIN) methods. He has been awarded the ACM Gordon Bell Prize and the IEEE Sidney Fernbach Award, and he is a fellow of SIAM, the AMS, and the AAAS.