Broadly speaking, I’m interested in how the brain manages to efficiently allocate representational resources in a world where the statistics of sensory features change from situation to situation. I’m particularly interested in how the structure in this sensory non-stationarity makes it possible to adapt to such changes more efficiently. Speech perception serves as an excellent model organism, because the statistics of the speech signal depend both on what is being said and on who is saying it, each of which introduces highly structured variability that listeners are sensitive to.

My work aims to develop explicit, computational models of perception and adaptation, with a particular emphasis on speech. I think that good theories and models draw on insights from—and try to make connections between—neuroscience, behavioral data, and broader computational-level cognitive modeling.

Topics that I have worked on, am working on, or am interested in include: perceptual category learning, phonetic adaptation/recalibration, acquisition of phonetic categories, and cue combination and complex acoustic feature extraction. I’m also particularly interested in increasing awareness and appreciation of Bayesian methods for modeling (although I don’t consider myself a capital-B Bayesian) and—especially—data analysis.


Submitted and in prep

Conference proceedings