Today's emerging neural computing substrates have their origins in either biology (spiking neural networks, such as IBM's TrueNorth) or machine learning (deep convolutional networks, such as Google's TPU). Glaring gaps separate these two approaches, which differ dramatically in architecture, efficiency, usability, and practical applicability to real-world tasks.

Deep neural networks (DNNs), built on the nonlinear perceptron neuron model, have recently emerged as a very powerful tool for classifying spatial inputs, such as complex, real-world image data. However, DNNs re-evaluate the entire network at every time step, performing convolutions or matrix-vector multiplications at each layer; because millions of parameters must be learned and retained for the networks to achieve high accuracy, they require massive amounts of compute time and memory to train and process. In contrast, spiking neural networks are fundamentally event driven and can therefore be very energy efficient, leading to an efficiency gap between the two approaches. Spiking networks, however, suffer from a dearth of effective approaches for configuring and/or training them to perform tasks of practical value. The semantic limitations of the simplified spiking neurons implemented in hardware can make deploying biologically inspired applications challenging, leading to a semantic gap. Similarly, numerical applications require many ad hoc changes to map onto such substrates, leading to an algorithmic gap. Finally, emerging device technologies may provide a much more efficient substrate for these algorithms, creating a technology gap.
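To make the efficiency gap concrete, here is a minimal NumPy sketch (illustrative only, not drawn from the talk) contrasting a dense layer, which pays for a full matrix-vector multiply every time step, with an event-driven spiking layer, whose work scales with the number of spikes actually fired:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense DNN layer: every time step costs a full matrix-vector
# multiply, regardless of how many inputs actually changed.
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)
dense_out = np.maximum(W @ x, 0.0)  # perceptron layer (ReLU): O(n^2) work per step

# Event-driven spiking layer: only the weight columns of neurons
# that spiked this step are accumulated, so the cost is
# proportional to spike activity rather than network size.
spikes = rng.random(256) < 0.05        # ~5% of presynaptic neurons fire
sparse_out = W[:, spikes].sum(axis=1)  # accumulate weights of spiking inputs only

print(dense_out.shape, sparse_out.shape, int(spikes.sum()))
```

With 5% activity, the event-driven update touches roughly 5% of the weights that the dense multiply does, which is the essence of why spiking substrates can be so energy efficient.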
This talk will briefly survey our prior and ongoing work in bridging these gaps.