CNI Speaker Series: John Cunningham

Tuesday, March 13, 2018 - 11:30am

Barchi Library (140 John Morgan Building)

John Cunningham
Department of Statistics
Data Science Institute
Columbia University

Computational structure in large-scale neural data: how to find it, and when to believe it

One central challenge in neuroscience is to understand how neural populations represent and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding structure in these data, and (ii) methods for statistically validating that structure. First, I will review our work using factor modeling and dynamical systems to advance understanding of computational structure in the motor cortex of primates and rodents. Second, while these and related methods are promising, they are also perilous: novel analysis techniques do not always consider the possibility that their results are an expected consequence of some simpler, already-known feature of the data. I will present two works that address this growing problem. The first derives a tensor-variate maximum entropy distribution with user-specified moment constraints along each mode. This distribution forms the basis of a statistical hypothesis test, which I will use to address two active debates in the neuroscience community over whether structure in the motor and prefrontal cortices is trivial. I will then discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures, in the style of implicit generative modeling.
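For attendees curious about the surrogate-based hypothesis test the abstract describes, the following is a minimal sketch, specialized to two modes (timepoints by neurons) rather than a full tensor for brevity. It relies on the standard fact that, under a mean constraint and second-moment constraints along each mode, the maximum entropy distribution is Gaussian, so surrogates can be drawn by coloring white noise with per-mode covariance square roots. The function names, the scalar-mean simplification, and the choice of test statistic are illustrative assumptions, not the speaker's implementation.

    import numpy as np

    def _psd_sqrt(c):
        """Symmetric PSD matrix square root via eigendecomposition."""
        w, v = np.linalg.eigh(c)
        return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

    def maxent_surrogate(x, rng):
        """Draw one surrogate matching the (scalar) mean and the empirical
        second-moment matrix along each mode of x (rows = timepoints,
        columns = neurons). Under these constraints the maximum entropy
        distribution is matrix-variate Gaussian."""
        m = x.mean()                    # simplified: one global mean constraint
        xc = x - m
        sr = xc @ xc.T                  # row (time) second-moment matrix
        sc = xc.T @ xc                  # column (neuron) second-moment matrix
        s = np.trace(sr)                # equals np.trace(sc)
        a = _psd_sqrt(sr / np.sqrt(s))  # scaling makes E[X X^T] = sr
        b = _psd_sqrt(sc / np.sqrt(s))  # and E[X^T X] = sc
        z = rng.standard_normal(x.shape)
        return m + a @ z @ b.T

    def top_pc_fraction(x):
        """Example statistic: fraction of variance in the top principal
        component, a crude proxy for low-dimensional structure."""
        svals = np.linalg.svd(x - x.mean(axis=0), compute_uv=False)
        return svals[0] ** 2 / np.sum(svals ** 2)

    def surrogate_pvalue(x, statistic=top_pc_fraction, n_surr=1000, seed=0):
        """One-sided p-value: is statistic(x) larger than expected under
        the maximum entropy null preserving only the per-mode moments?"""
        rng = np.random.default_rng(seed)
        observed = statistic(x)
        null = [statistic(maxent_surrogate(x, rng)) for _ in range(n_surr)]
        return (1 + sum(t >= observed for t in null)) / (1 + n_surr)

Structure that survives such a test cannot be dismissed as an expected byproduct of the per-mode moments alone, which is exactly the kind of triviality question the abstract raises.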


A pizza lunch will be served.