Sensation and Perception
Specific Research Areas:
Computational Models of Visual Processing; Depth Perception; Human Vision; Task-Specific Natural Scene Statistics; Visual Neuroscience
Estimating the speed of a passing object; estimating how far apart two objects in a scene are from each other; refocusing your eyes from near to far… You perform tasks like these more than 100,000 times per day. But the effortlessness with which you perform them belies enormous computational complexity. Our work concentrates on understanding how to estimate depth from natural images. We determine how best to estimate individual depth cues from natural images (e.g. defocus, disparity, motion), and we use behavioral studies to investigate how well humans estimate those same cues.
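To give a flavor of what cue estimation involves, here is a minimal sketch of one classic approach to the disparity cue: finding the horizontal shift that best aligns a patch from one eye's image with the other eye's image via normalized cross-correlation. This is an illustrative toy, not the lab's actual method; the function name and parameters are hypothetical, and real natural-image work is far more sophisticated.

```python
import numpy as np

def estimate_disparity(left, right, patch_center, patch_radius, max_disp):
    """Estimate horizontal disparity of a patch by normalized cross-correlation.

    left, right: 1-D luminance signals (e.g. one image row per eye).
    Returns the shift (in samples) that best aligns the left-eye patch
    with the right-eye signal.
    """
    lo, hi = patch_center - patch_radius, patch_center + patch_radius + 1
    patch = left[lo:hi]
    patch = (patch - patch.mean()) / (patch.std() + 1e-12)  # z-score the patch

    best_corr, best_disp = -np.inf, 0
    for d in range(-max_disp, max_disp + 1):
        cand = right[lo + d:hi + d]
        cand = (cand - cand.mean()) / (cand.std() + 1e-12)
        corr = np.mean(patch * cand)  # normalized correlation at shift d
        if corr > best_corr:
            best_corr, best_disp = corr, d
    return best_disp

# Toy example: the right-eye signal is the left-eye signal shifted by 3 samples.
rng = np.random.default_rng(0)
left = rng.standard_normal(200)
right = np.roll(left, -3)  # each feature appears 3 samples earlier in "right"
print(estimate_disparity(left, right, patch_center=100, patch_radius=10, max_disp=8))
# -> -3
```

Even this toy version hints at the hard part the research addresses: with natural images, patch statistics vary enormously from location to location, so a fixed matching rule like this one performs very differently depending on local image content.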
A fundamental goal of vision research is to understand how vision functions in natural conditions with natural stimuli. How do we see? What are the computations that optimally transform sensory information into behaviorally relevant representations of the environment? What are the computations that humans and animals actually use? Unfortunately, natural stimuli are monstrously complicated and are difficult to characterize mathematically. As a result, most vision research uses simple, artificial stimuli that are easier to characterize (e.g. bars and blobs). Most of our knowledge about visual processing derives from research with such stimuli; however, they lack many of the properties inherent to natural stimuli, the stimuli the visual system evolved to process.
The primary aim of my research is to enable the principled study of critical visual tasks with natural stimuli. Rather than attempting to develop a general model of natural stimuli, we narrow the problem by focusing on the properties of natural stimuli that are most useful for particular tasks. We develop tools that enable rigorous mathematical characterization of the task-relevant properties of natural stimuli. These tools help generate principled, quantitative hypotheses about how visual information should ideally be processed. These hypotheses are then used to make predictions about behavioral performance and neural processing, and to design experiments that test them. In some cases, we have discovered a striking correspondence between an ideal observer and the human visual system. Methods developed for the study of a given task in the human visual system can often be applied to a similar task in animal or machine vision systems.
Professor Johannes Burge will be considering new graduate students for admission for Fall 2018.
Burge J, Jaini P (2017). Accuracy Maximization Analysis for sensory-perceptual tasks: Computational improvements, filter robustness, and coding advantages for scaled additive noise. PLoS Computational Biology, 13(2): e1005281. doi:10.1371/journal.pcbi.1005281 [ html | pdf ]
Burge J, Geisler WS (2011). Optimal defocus estimation in individual natural images. Proceedings of the National Academy of Sciences, 108(40): 16849-16854. [ pdf ]
Burge J, Ernst MO, Banks MS (2008). The statistical determinants of adaptation rate in human reaching. Journal of Vision, 8(4):20, 1-19. [ pdf ]
PSYC 111: Perception
PSYC 411: Seminar in Perception