As we move through the world, our senses collect huge amounts of information about the state of our environment. It is the brain's job to process this information quickly and accurately in order to make decisions, implement coordinated motor responses, and predict future events. How does the brain do this? I am interested in studying sensory processing systems in order to better understand the fundamental computations that neurons perform during the processing of external stimuli. I am also interested in how internal states such as motivation and attention influence these computations.
More broadly, I am interested in how the computational strategies employed by the brain can be translated into artificial information processing systems. Over the last decade it has become clear that powerful machine learning approaches such as deep neural networks bear striking parallels to biological information processing systems like the visual system. Further developing these parallels is likely to yield significant scientific and technological advances in the years to come.
We develop a new decoding framework for estimating stimulus identity from recorded neural population activity. Our framework exploits the low-dimensional structure of this activity, resulting in a linear estimator that is more efficient than those produced by other common linear decoding algorithms. Furthermore, this framework admits a straightforward nonlinear extension that compares favorably to other nonlinear decoding algorithms.
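To illustrate the general idea (not the framework itself), the following minimal NumPy sketch decodes a stimulus from simulated population activity by first projecting onto a low-dimensional subspace found with PCA and then fitting a linear readout in that subspace. All names and parameters here (neuron counts, noise level, number of latents) are hypothetical choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 neurons whose responses live in a 3-D subspace,
# one dimension of which carries a 1-D stimulus value.
n_neurons, n_trials, n_latents = 100, 500, 3
stimulus = rng.uniform(-1, 1, size=(n_trials, 1))
mixing = rng.normal(size=(n_latents, n_neurons))      # latent -> neuron map
latents = np.hstack([stimulus, rng.normal(size=(n_trials, n_latents - 1))])
responses = latents @ mixing + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Step 1: find the low-dimensional subspace with PCA (via SVD).
centered = responses - responses.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:n_latents]                           # top principal axes
projected = centered @ components.T                   # (n_trials, n_latents)

# Step 2: fit a linear decoder in the reduced space by least squares.
weights, *_ = np.linalg.lstsq(projected, stimulus, rcond=None)
decoded = projected @ weights

r2 = 1 - np.sum((decoded - stimulus) ** 2) / np.sum(
    (stimulus - stimulus.mean()) ** 2
)
print(f"decoding R^2 in {n_latents}-D subspace: {r2:.3f}")
```

Fitting the readout in the reduced space means estimating only a handful of weights rather than one per neuron, which is the efficiency argument for exploiting low-dimensional structure.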
Variability in neural population responses from early sensory areas often contains low-dimensional structure. Here we introduce two new classes of nonlinear latent variable models to characterize this structure. Both model classes rely on autoencoder neural networks for latent variable inference; one class models arbitrary nonlinear interactions while the other explicitly models additive and multiplicative modulations of stimulus responses.
preprint | code
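As a toy version of autoencoder-based latent variable inference (a linear simplification, not the nonlinear models described above), the sketch below trains a two-layer linear autoencoder on simulated population activity with gradient descent; the encoder maps activity to inferred latents and the decoder reconstructs the activity from them. All sizes and learning-rate choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 50 neurons whose shared variability is driven by 2 latents.
n_neurons, n_trials, n_latents = 50, 400, 2
true_latents = rng.normal(size=(n_trials, n_latents))
loadings = rng.normal(size=(n_latents, n_neurons))
activity = true_latents @ loadings + 0.1 * rng.normal(size=(n_trials, n_neurons))
activity = (activity - activity.mean(0)) / activity.std(0)  # z-score each neuron

# Minimal linear autoencoder trained by gradient descent:
# the encoder performs latent variable inference,
# the decoder reconstructs the population activity.
enc = 0.01 * rng.normal(size=(n_neurons, n_latents))
dec = 0.01 * rng.normal(size=(n_latents, n_neurons))
lr = 1e-3
for _ in range(2000):
    z = activity @ enc                        # inferred latent variables
    err = z @ dec - activity                  # reconstruction error
    grad_dec = z.T @ err / n_trials
    grad_enc = activity.T @ (err @ dec.T) / n_trials
    enc -= lr * grad_enc
    dec -= lr * grad_dec

mse = np.mean((activity @ enc @ dec - activity) ** 2)
print(f"reconstruction MSE with {n_latents} latents: {mse:.4f}")
```

Replacing the linear maps with nonlinear layers is what allows such models to capture arbitrary interactions, or structured effects like additive and multiplicative gain modulation.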
We propose the Rectified Latent Variable Model (RLVM) for analyzing neural population activity. The RLVM constrains latent variables to be both rectified and smooth. We demonstrate the advantages of these constraints using both simulated and experimental data, and show how initialization-dependent solutions can be improved by initializing model components with an autoencoder neural network.
paper | preprint | code
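The two constraints can be sketched in a small NumPy example: latents are rectified with a ReLU, and a penalty on their temporal differences encourages smoothness. This is an RLVM-flavored illustration under assumed settings (two latents, sinusoidal ground truth, hand-picked learning rate and penalty weight), not the published model or code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 30 neurons driven by 2 non-negative, slowly varying latents.
n_t, n_neurons, n_latents = 300, 30, 2
t = np.linspace(0, 4 * np.pi, n_t)
true_z = np.maximum(0.0, np.column_stack([np.sin(t), np.cos(t)]))
loadings = rng.normal(size=(n_latents, n_neurons))
X = true_z @ loadings + 0.1 * rng.normal(size=(n_t, n_neurons))

enc = 0.1 * rng.normal(size=(n_neurons, n_latents))
b = np.full(n_latents, 0.1)
dec = 0.1 * rng.normal(size=(n_latents, n_neurons))
lr, lam = 1e-3, 1.0          # learning rate and smoothness penalty weight

def forward(enc, b, dec):
    a = X @ enc + b
    z = np.maximum(0.0, a)   # constraint 1: rectified latents
    return a, z, z @ dec

_, _, recon = forward(enc, b, dec)
mse_init = np.mean((recon - X) ** 2)

for _ in range(3000):
    a, z, recon = forward(enc, b, dec)
    err = recon - X
    grad_z = 2 * err @ dec.T / n_t
    diff = np.diff(z, axis=0)
    grad_z[1:] += 2 * lam * diff / n_t    # constraint 2: penalize
    grad_z[:-1] -= 2 * lam * diff / n_t   # rough (non-smooth) latents
    grad_a = grad_z * (a > 0)             # ReLU gradient mask
    dec -= lr * 2 * z.T @ err / n_t
    enc -= lr * X.T @ grad_a
    b -= lr * grad_a.sum(axis=0)

_, _, recon = forward(enc, b, dec)
mse_final = np.mean((recon - X) ** 2)
print(f"reconstruction MSE: {mse_init:.3f} -> {mse_final:.3f}")
```

Because this loss surface has initialization-dependent optima, a practical variant of the idea above is to initialize `enc`, `b`, and `dec` from a pretrained autoencoder rather than from random weights.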