Glaring Gaps in Neurally-Inspired Computing

Tuesday, December 05, 2017
3:30 PM to 4:30 PM
POB 2.402
Free and open to the public

Today's emerging neural computing substrates have their origins either in biology (spiking neural networks, like IBM's TrueNorth) or in machine learning (deep convolutional networks, like Google's TPU). There are glaring gaps between these two approaches, since they differ dramatically in architecture, efficiency, usability, and practical applicability to real-world tasks.

Deep neural networks (DNNs), which are based on the nonlinear perceptron neuron model, have recently emerged as a very powerful tool for classifying spatial inputs, such as complex, real-world image data. DNNs re-evaluate the entire network at every time step, performing either convolutions or matrix-vector multiplications for each layer. This requires massive amounts of compute time and memory to train and deploy, since millions of parameters may need to be learned and retained for the networks to achieve high accuracy.

In contrast, spiking neural networks can be very energy efficient because they are fundamentally event driven, leading to an efficiency gap between the two approaches. However, spiking networks suffer from a dearth of effective approaches for configuring and training them to perform tasks of practical value. The semantic limitations of simplified hardware spiking neurons can make deploying biologically inspired applications challenging, leading to a semantic gap. Similarly, numerical applications require many ad hoc changes to map onto such substrates, leading to an algorithmic gap. Finally, emerging technology may provide a much more efficient substrate for these algorithms, creating a technology gap.
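The efficiency gap described above can be illustrated with a minimal sketch (all names here are illustrative, not from the talk): a dense layer touches every weight on every time step, while an event-driven spiking layer touches only the weights of neurons that actually fired.

```python
# Hypothetical sketch contrasting per-time-step work in a dense layer
# versus an event-driven (spiking) layer. Plain Python, no frameworks.

def dense_step(weights, activations):
    """Dense layer: every multiply-accumulate happens every time step."""
    ops = 0
    out = [0.0] * len(weights)
    for i, row in enumerate(weights):
        for j, w in enumerate(row):
            out[i] += w * activations[j]
            ops += 1
    return out, ops

def spiking_step(weights, spike_indices):
    """Event-driven layer: accumulate only the columns of neurons
    that spiked this step (typically a small, sparse set)."""
    ops = 0
    out = [0.0] * len(weights)
    for j in spike_indices:
        for i in range(len(weights)):
            out[i] += weights[i][j]
            ops += 1
    return out, ops

# Toy layer: 4 inputs, 3 outputs; only input 2 spikes this step.
W = [[0.1, 0.2, 0.3, 0.4],
     [0.5, 0.6, 0.7, 0.8],
     [0.9, 1.0, 1.1, 1.2]]
_, dense_ops = dense_step(W, [1.0, 0.0, 1.0, 0.0])
_, spike_ops = spiking_step(W, [2])
print(dense_ops, spike_ops)  # 12 vs 3: dense work scales with layer
                             # size, event-driven work scales with events
```

Even in this toy, the dense step performs all 12 multiply-accumulates while the event-driven step performs only 3; with realistic layer sizes and sparse spike activity, that ratio is the source of the efficiency gap.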

This talk will briefly survey our prior and ongoing work in bridging these gaps.  



Mikko Lipasti

Philip Dunham Reed Professor of Electrical and Computer Engineering
University of Wisconsin-Madison

Mikko Lipasti is currently the Philip Dunham Reed Professor of Electrical and Computer Engineering at the University of Wisconsin-Madison. He was named an IEEE Fellow (class of 2013) "for contributions to the microarchitecture and design of high-performance microprocessors and computer systems." In 2012, he co-founded Thalchemy Corp., a startup company that is developing novel algorithms and accelerators to enable ultra-low-power continuous sensory processing in smartphones and other battery-operated devices.

He earned his B.S. in Computer Engineering from Valparaiso University in 1991 and his M.S. in Electrical and Computer Engineering from Carnegie Mellon University in 1992, followed by his Ph.D. in 1997. Before and after his Ph.D. work, he learned to ply his craft during his years at IBM, where he helped develop software and hardware for PowerPC servers. He joined Wisconsin in Fall 1999, was granted tenure in 2005, and was promoted to Full Professor in 2009. He has consulted for Intel Corporation and Sun Microsystems.

His primary research interests include high-performance, low-power, and reliable processor cores; networks-on-chip for many-core processors; and fundamentally new, biologically-inspired models of computation.