If we hope to build an intelligent agent, we must solve (at least!) the following problem: by watching an incoming stream of sensor data, hypothesize a model of the external world that explains the data. An appealing representation for such a model is a dynamical system: a recursive rule for updating a state, i.e., a concise summary of past experience that we can use to predict future observations. Unfortunately, to discover the right dynamical system, we must solve difficult temporal and structural credit assignment problems; often, the result is a search space with a host of bad local optima, and disappointing performance in practice. However, recent work on spectral learning algorithms may provide an end run around these problems: these new spectral algorithms are computationally efficient, simple to implement, statistically consistent, and have no local optima. Perhaps even more interestingly, they unify and generalize a number of other state-of-the-art learning methods, including Tomasi-Kanade structure from motion, subspace identification for Kalman filters and hidden Markov models, a novel approach to range-only SLAM, and Laplacian Eigenmaps for manifold learning.
Wednesday, February 22, 2012
Free and open to the public