With the rapid advances in computing systems, spanning billions of IoT (Internet of Things) devices to high-performance exascale supercomputers, energy-efficient design is an absolute must. Moreover, with the emergence of neural network accelerators for machine learning applications, there is a growing need for large-capacity memories. It is estimated that by 2040 around one trillion internet-connected devices will be deployed, generating millions of zettabytes (1 zetta = 10^21) of data and consuming tens of zettajoules of compute energy per year. These trends clearly indicate the paramount importance of energy-efficient memories across the compute continuum to cater to the storage needs of future workloads.
In this seminar, I will discuss circuit solutions for realizing energy-efficient memory arrays. Supply-voltage scaling is the primary driver for reducing energy consumption. The minimum operating supply voltage (Vmin) of a compute block consisting of static CMOS datapath logic and memory arrays is typically limited by process variations in the memory bitcells, which use minimum-sized transistors. I will present an overview of low-power memory design using novel bitcell topologies, Vmin-assist techniques, and adaptive and resilient design for reducing voltage/frequency (V/F) guardbands.