Processor Performance and Energy-Efficiency in Scale-Up and Scale-Out Computing

Sunday, April 03, 2011
7:00 PM
Free and open to the public




The demand for processor capability is rising in datacenters. To sustain Internet-scale workload growth for applications such as online web search and social media, we must scale up the performance of individual processors and scale out datacenters built from those processors. The challenge lies in achieving this goal without exceeding the "power wall." In this talk, I will discuss my approach to designing and deploying power-efficient processors. First, I will introduce a resilient architecture, involving both hardware and software components, that improves processor performance by reducing the operating voltage margin. Hardware-based voltage emergency prediction uses recurring architectural events to predict and avoid large voltage swings, while software-based voltage smoothing treats the root cause of the problem, transforming program activity on the fly so that the operating voltage remains stable.
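The prediction scheme above can be sketched in software. The sketch below is a simplified, hypothetical model, not the actual hardware design: it assumes a predictor that keeps a sliding window of recent microarchitectural events as a signature, remembers signatures that preceded past voltage emergencies, and flags a recurrence so the core could throttle before the swing occurs. All names, event labels, and window sizes are illustrative.

```python
from collections import deque

class EmergencyPredictor:
    """Hypothetical sketch of signature-based voltage emergency prediction.

    A signature is the recent window of architectural events; signatures
    observed just before a past emergency are stored, and a later match
    would trigger preemptive throttling."""

    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # sliding event window
        self.window = window
        self.flagged = set()                # signatures that preceded emergencies

    def observe(self, event):
        """Record an event (e.g. 'L2_MISS', 'BRANCH_FLUSH'); return True
        if the current signature previously preceded an emergency."""
        self.recent.append(event)
        return tuple(self.recent) in self.flagged

    def record_emergency(self):
        """Called when an emergency actually occurs: remember the
        signature that led up to it."""
        if len(self.recent) == self.window:
            self.flagged.add(tuple(self.recent))

# Usage sketch: train on one emergency, then predict its recurrence.
p = EmergencyPredictor(window=3)
for e in ["L2_MISS", "L2_MISS", "BRANCH_FLUSH"]:
    p.observe(e)
p.record_emergency()  # an emergency followed this event pattern
hits = [p.observe(e) for e in ["L2_MISS", "L2_MISS", "BRANCH_FLUSH"]]
# only the completed recurring pattern triggers a prediction
```

In the real architecture the signature table and throttle would be hardware structures; the point of the sketch is only that recurring event histories make emergencies predictable.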

Second, in the context of scale-out computing, I will discuss deploying power-efficient mobile processors in datacenters. Small cores can deliver better energy efficiency than big monolithic out-of-order superscalar processors, but they may compromise application quality of service, robustness, and flexibility. Emerging datacenter workloads such as web search and online gaming are latency-sensitive and invoke computationally intensive kernels that stress small-core designs. I will quantify this effect using Microsoft Bing as a case study.
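For latency-sensitive workloads of this kind, quality of service is typically judged by a high percentile of request latency rather than the mean, since occasional slow queries dominate user experience. The sketch below illustrates that view with made-up latency numbers (not Bing measurements): the small core looks acceptable on average, but its tail reveals the kernels that stall it.

```python
def percentile(samples, p):
    """Return the p-th percentile of a latency sample (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative per-query latencies in ms; values are invented for the sketch.
big_core   = [12, 13, 12, 14, 13, 15, 12, 13, 14, 16]
small_core = [15, 16, 15, 17, 16, 40, 15, 16, 18, 55]  # intensive kernels stall it

# The means are not far apart, but the 99th percentile tells the QoS story.
mean_gap = sum(small_core) / len(small_core) - sum(big_core) / len(big_core)
tail_gap = percentile(small_core, 99) - percentile(big_core, 99)
```

This mean-versus-tail distinction is why energy-efficiency comparisons between core types must be made under a latency service-level target, not under average throughput alone.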

Finally, I will put both these works into perspective, identifying new opportunities for innovation at the architecture and runtime software system layers that can enable efficient and effective scale-up and scale-out computing.




Vijay Janapa

AMD Research and Advanced Development Laboratories