Barchi Library, 140 John Morgan Building
Department of Physics
Information geometry and Bayesian priors for model selection
Science is filled with toy models: abstractions of complicated systems that ignore microscopic details even when they are known. These toy models are often as good as, and sometimes better than, more accurate models. But why? In this talk I will discuss an information-theoretic approach to answering this question. I will first show that typical models are sloppy, with a characteristic hyper-ribbon structure in their information geometry. I will argue that this structure is both necessary and sufficient for a microscopic system to be amenable to description by a simpler effective theory. I will then show how renormalizable models from physics become sloppy as their data are coarse-grained, exactly when they can be described by a simpler theory.

In the second part of the talk I will discuss our recent efforts to use the framework of uninformative priors to justify the correct scale of an effective theory. We treat a set of experiments as an information channel and choose the prior that maximizes the mutual information between model parameters and experimental outcomes. We find that for sloppy systems the optimal prior is typically restricted to sub-manifolds corresponding to reduced models.
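Maximizing the mutual information between parameters and outcomes is the classic channel-capacity problem, and the standard Blahut–Arimoto algorithm computes the optimizing prior for a discretized model. The sketch below is purely illustrative and not from the talk: the toy likelihood matrix, the discretization, and the function name are my own assumptions. Two of the three parameter values give nearly indistinguishable outcome distributions (a "sloppy" direction), and the optimal prior pushes its weight onto the distinguishable values, mimicking the collapse onto a reduced model described in the abstract.

```python
import numpy as np

def blahut_arimoto(p_y_given_theta, tol=1e-9, max_iter=10000):
    """Find the prior over theta maximizing mutual information I(theta; y).

    p_y_given_theta[k, m] = p(y_m | theta_k) for K discrete parameter values
    and M discrete outcomes. Returns (prior, mutual information in nats).
    This is the standard Blahut-Arimoto fixed-point iteration.
    """
    K, M = p_y_given_theta.shape
    prior = np.full(K, 1.0 / K)  # start from the uniform prior
    for _ in range(max_iter):
        # Marginal outcome distribution under the current prior.
        p_y = prior @ p_y_given_theta
        # KL divergence D(p(y|theta_k) || p(y)) for each parameter value.
        ratio = np.where(p_y_given_theta > 0, p_y_given_theta / p_y, 1.0)
        d = np.sum(p_y_given_theta * np.log(ratio), axis=1)
        # Reweight each parameter value by exp of its divergence, renormalize.
        new_prior = prior * np.exp(d)
        new_prior /= new_prior.sum()
        if np.max(np.abs(new_prior - prior)) < tol:
            prior = new_prior
            break
        prior = new_prior
    p_y = prior @ p_y_given_theta
    ratio = np.where(p_y_given_theta > 0, p_y_given_theta / p_y, 1.0)
    mi = np.sum(prior * np.sum(p_y_given_theta * np.log(ratio), axis=1))
    return prior, mi

# Hypothetical toy model: three parameter values, binary outcome. The second
# row is nearly indistinguishable from the first (a sloppy direction), so the
# optimal prior assigns it essentially zero weight.
likelihood = np.array([
    [0.90, 0.10],
    [0.89, 0.11],  # almost the same predictions as the first parameter value
    [0.10, 0.90],
])
prior, mi = blahut_arimoto(likelihood)
print(prior, mi)
```

The resulting prior concentrates on the two distinguishable parameter values, which is the discrete analogue of an optimal prior restricted to a sub-manifold of a sloppy model.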
A pizza lunch will be served.