With the demise of Dennard scaling and the slowing of Moore's Law, hardware specialization is one of the few remaining ways to improve energy efficiency and/or performance. The standard way to specialize is to add dedicated accelerators implemented as on-die fixed-function circuits that are orders of magnitude more computationally efficient than standard processors. These accelerators can be connected to the rest of the system through a network-on-chip or integrated into an application-specific processor (ASP) that has further data path and/or memory system optimizations. Such approaches are often referred to as application-specific integrated circuits (ASICs).
In this talk, I will discuss an alternative form of hardware specialization based on reconfigurable logic, as found in field-programmable gate arrays (FPGAs). Conventional wisdom says that FPGAs are an order of magnitude less power efficient, an order of magnitude less performant, and an order of magnitude larger than ASICs, as well as being harder to program. When analyzed in a full-system context, however, FPGAs fare significantly better than the conventional wisdom suggests and, in some cases, are more efficient than ASIC alternatives. When combined with their other advantages, such as dynamic reconfigurability and wide applicability, FPGAs are far more compelling than previously thought, even at cloud-scale volumes. I will describe why the conventional wisdom should be reexamined, give examples of novel uses and cloud deployments of FPGAs, and describe how FPGAs could be made easier to program.