SAIL Room - 111 Levin Building, 425 S. University Avenue
Department of Brain and Cognitive Sciences
Computational Neuroimaging of Human Auditory Cortex
Just by listening, humans can determine who is talking to them, whether a window in their house is open or shut, or what their child dropped on the floor in the next room. This ability to derive information from sound is enabled by a cascade of neuronal processing stages that transform the sound waveform entering the ear into cortical representations presumed to make behaviorally important sound properties explicit. Although much is known about the peripheral processing of sound, the auditory cortex remains poorly understood, with little consensus even about its coarse-scale organization in humans. This talk will describe our recent efforts to better understand the cortical representation of sound using computational neuroimaging. Our work relies on several new methods for neuroimaging experimental design and data analysis: “model-matched” stimuli, voxel decomposition of responses to natural sounds, and the use of task-optimized deep neural networks to model brain responses. We have harnessed these methods to reveal functional segregation in non-primary auditory cortex, as well as representational transformations between primary and non-primary cortex that may support the recognition of speech, music, and other real-world sound signals.
A pizza lunch will be served.