Thursday, April 15, 2021
Sparse problems, computer programs in which data lack spatial locality in memory, are central to several application domains, including recommendation systems, computer vision, robotics, graph analytics, and scientific computing. Today, computers and supercomputers containing millions of CPUs and GPUs spend much of their time executing sparse problems. Yet although sparse problems dominate, we have long designed our machines primarily for dense problems. Because of this mismatch between the capabilities of the hardware and the nature of the problems, even modern high-performance CPUs and GPUs, and even state-of-the-art domain-specific architectures, are poorly suited to sparse problems, achieving only a tiny fraction of their peak performance.
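A small sketch (my own illustration, not material from the talk) shows why sparse kernels underutilize hardware: in a sparse matrix-vector multiply using the standard compressed sparse row (CSR) layout, the dense vector is accessed through an index array, so every load is indirect and data-dependent, defeating caches and prefetchers that are tuned for regular, dense access patterns.

```python
import numpy as np

# A toy 4x3 sparse matrix in CSR form (values chosen for illustration):
# row_ptr delimits each row's slice of col_idx/vals.
vals = np.array([5.0, 8.0, 3.0, 6.0])
col_idx = np.array([0, 1, 2, 1])     # column indices are scattered
row_ptr = np.array([0, 1, 2, 3, 4])  # 4 rows, one nonzero each

x = np.array([1.0, 2.0, 3.0])        # dense input vector
y = np.zeros(4)                      # dense output vector

for i in range(len(row_ptr) - 1):
    for k in range(row_ptr[i], row_ptr[i + 1]):
        # x[col_idx[k]] is an indirect, data-dependent load: the
        # access pattern is unknown until col_idx is read, so the
        # hardware cannot prefetch it effectively.
        y[i] += vals[k] * x[col_idx[k]]
# y is now [5.0, 16.0, 9.0, 12.0]
```

Dense hardware spends most of its time waiting on these scattered loads rather than computing, which is the underutilization the abstract refers to.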
In this talk, I present my research, which addresses four main challenges that prevent sparse problems from achieving high performance on today's computing platforms: irregular and inefficient memory accesses, data dependencies, computation underutilization, and slow decompression. I focus on the first two challenges. I illustrate why and how my research deals with sparsity by using an intelligent reduction tree near memory that processes data while gathering them from random locations in memory, neither where the data reside nor where dense computations occur. I also explain why and how my research converts mathematical dependencies into gate-level dependencies at the software level and exploits dynamic partial reconfiguration at the hardware level, to execute sparse scientific problems faster than conventional architectures do. Finally, I present my plans for developing a novel approach to computing that uses intelligent, dynamically reconfigurable computation platforms to anticipate the future needs of data and algorithms.
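The near-memory reduction-tree idea can be sketched functionally (this is my own simplified illustration, not the talk's actual design): instead of first gathering scattered values into a dense buffer and then reducing them at the processor, values are combined in a pairwise tree as they stream out of memory, so the reduction takes logarithmically many levels and the data never need to land in a dense intermediate.

```python
import random

def tree_reduce(values):
    """Pairwise (tree) reduction: combines adjacent pairs level by
    level, taking ~log2(n) levels instead of n serial additions."""
    while len(values) > 1:
        paired = [values[i] + values[i + 1]
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:          # carry an unpaired element up
            paired.append(values[-1])
        values = paired
    return values[0]

# Stand-in for memory and a set of scattered (random) addresses.
memory = list(range(100))
addrs = random.sample(range(100), 8)

# Combine values while gathering them, rather than gather-then-reduce.
total = tree_reduce([memory[a] for a in addrs])
assert total == sum(memory[a] for a in addrs)
```

The point of doing this near memory, as the abstract notes, is that the combining happens neither where the data reside nor where dense computations occur, so the processor never pays for the random gather itself.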
Bahar Asgari is a Ph.D. candidate in the School of Electrical and Computer Engineering at Georgia Tech. Her doctoral dissertation, advised by Professor Sudhakar Yalamanchili and Professor Hyesoon Kim, focuses on improving the execution efficiency of sparse problems. Her proposed hardware accelerators and hardware/software co-optimization solutions address essential challenges of sparse problems across application domains ranging from machine learning to high-performance scientific computing. Beyond her dissertation research, Bahar has collaborated with other research scientists and faculty at Georgia Tech, as she believes that collaboration is key to innovation. Her research and collaborative work have appeared at top-tier computer architecture conferences, including HPCA, ASPLOS, DAC, DATE, IISWC, ICCD, and DSN, as well as in high-impact journals. Bahar was selected to participate in Rising Stars 2019, an intensive academic career workshop for women in EECS. Her personal website is https://baharasg.github.io/.