Prof. Mattan Erez and Prof. Michael Orshansky of Texas ECE have received an award from Facebook Research to develop new technologies in the area of “AI System Hardware/Software Co-Design.” Their project, “Low Memory-Bandwidth DNN Accelerator for Training Sparse Models,” responds to a set of challenges defined by Facebook that focuses on the “simultaneous design and optimization of several aspects of the system, including hardware and software, to achieve a set target for a given system metric, such as throughput, latency, power, size or their combination. Deep learning has been particularly amenable to such co-design processes across various parts of the software and hardware stack.”
Profs. Erez and Orshansky’s project aims to enable substantial reductions in the cost of training and inference. Expensive memory systems and a limited supply of training accelerators are the main drivers of that cost, particularly for large neural networks and datasets. The project aims to reduce memory bandwidth requirements through novel locality optimizations, dimensionality reduction, and compression.