A Representation Learning Method for Linear Dynamical System Identification
In this talk, I will first introduce representation learning and new insights into efficiently learning data representations in a convex fashion. In particular, a globally optimal and computationally efficient algorithm will be presented for the problems of matrix completion, dictionary learning, and multi-view learning. In the second part of the talk, new techniques for the analysis of time series data will be presented. I consider maximum likelihood estimation of linear dynamical systems (LDS). Maximum likelihood is typically considered hard in this setting, since the latent states and the transition parameters must be inferred jointly.
Given that expectation-maximization does not scale and is prone to local minima, moment-matching approaches from the subspace identification literature have become the standard methods for linear dynamical system estimation, despite known issues with their statistical efficiency. In this work, I instead reconsider likelihood maximization and develop a new global estimation strategy that simultaneously recovers the latent states and the transition parameters. The key insight is a two-view reformulation of maximum likelihood estimation for linear dynamical systems that enables the use of recent boosting algorithms for matrix factorization. I show that the proposed estimation strategy outperforms N4SID, a widely used subspace identification method, in both accuracy and runtime. Part of this work was presented at the NIPS Time Series Workshop in December 2016, where it received the best poster award.
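To make the two-view idea concrete, here is a minimal, hypothetical sketch: each latent state z_t generates two views, the current observation x_t (through the observation map C) and the next observation x_{t+1} (through the composite map CA), so estimation reduces to jointly factorizing the two stacked observation matrices. The alternating-least-squares solver below is only a simple local stand-in for the boosting-based global method described in the talk; all variable names and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LDS ground truth: z_{t+1} = A z_t + w_t,  x_t = C z_t + v_t
k, d, T = 2, 5, 500                      # latent dim, obs dim, sequence length
A = np.array([[0.9, 0.1], [-0.1, 0.9]])  # stable transition matrix
C = rng.normal(size=(d, k))              # observation matrix
Z = np.zeros((k, T))
for t in range(1, T):
    Z[:, t] = A @ Z[:, t - 1] + 0.1 * rng.normal(size=k)
X = C @ Z + 0.05 * rng.normal(size=(d, T))

# Two views of the same latent state z_t:
# view 1 is x_t (generated by C), view 2 is x_{t+1} (generated by C A).
X1, X2 = X[:, :-1], X[:, 1:]

# Alternating least squares on the joint factorization
#   min_{C, D, Z}  ||X1 - C Z||^2 + ||X2 - D Z||^2,   with D standing in for C A.
# (A local solver; the talk's method uses boosting for a global solution.)
Ch = rng.normal(size=(d, k))
Dh = rng.normal(size=(d, k))
for _ in range(100):
    M = np.vstack([Ch, Dh])                          # stacked view maps
    Y = np.vstack([X1, X2])                          # stacked views
    Zh, *_ = np.linalg.lstsq(M, Y, rcond=None)       # update latent states
    G = np.linalg.inv(Zh @ Zh.T)
    Ch = (X1 @ Zh.T) @ G                             # update view-1 map
    Dh = (X2 @ Zh.T) @ G                             # update view-2 map

# Recover the transition matrix from D = C A  =>  A = pinv(C) D
A_hat = np.linalg.pinv(Ch) @ Dh
rel_err = np.linalg.norm(Y - np.vstack([Ch, Dh]) @ Zh) / np.linalg.norm(Y)
print(A_hat.shape, round(rel_err, 3))
```

The recovered factors are only identified up to an invertible change of basis on the latent space, which is why the sketch reports reconstruction error rather than comparing A_hat to A entrywise.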