A Computational Explanation for Domain Specificity in the Human Visual System

Date Posted:  July 14, 2020
Date Recorded:  June 6, 2020
CBMM Speaker(s):  Katharina Dobs

Katharina Dobs, MIT

Many regions of the human brain perform highly specific functions, such as recognizing faces, understanding language, and thinking about other people's thoughts. Why might this domain-specific organization be a good design strategy for brains? In this talk, I will present recent work testing whether the segregation of face and object perception in primate brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition, and found that smaller networks could not perform both tasks without a cost, while larger networks performed both tasks well by spontaneously segregating them into distinct pathways. These results suggest that for face recognition, and perhaps more broadly, the domain-specific organization of the cortex may reflect a computational optimization over development and evolution for the real-world tasks humans solve.
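To make the idea of "spontaneous segregation into distinct pathways" concrete, here is a minimal toy sketch (not the authors' code) of one way such segregation could be quantified: compute, for each hidden unit of a dual-task network, a face-vs-object selectivity index from its activations. All names, numbers, and the synthetic activations below are illustrative assumptions; the actual analyses in the work may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 8
n_face_imgs, n_obj_imgs = 100, 100

# Simulated non-negative (e.g., post-ReLU) activations of hidden units.
# Units 0-3 are constructed to respond more to faces, units 4-7 to objects,
# mimicking the segregation that large networks develop spontaneously.
face_acts = np.abs(rng.normal(1.0, 0.1, (n_face_imgs, n_units)))
obj_acts = np.abs(rng.normal(1.0, 0.1, (n_obj_imgs, n_units)))
face_acts[:, :4] *= 3.0   # face-preferring units
obj_acts[:, 4:] *= 3.0    # object-preferring units

def selectivity(face_acts, obj_acts):
    """Per-unit face-vs-object selectivity in [-1, 1]:
    +1 = responds only to faces, -1 = only to objects, 0 = no preference."""
    mu_f = face_acts.mean(axis=0)
    mu_o = obj_acts.mean(axis=0)
    return (mu_f - mu_o) / (mu_f + mu_o)

sel = selectivity(face_acts, obj_acts)
print(np.round(sel, 2))
# In a segregated network, most units sit near +1 or -1 rather than 0.
print("strongly selective units:", int(np.sum(np.abs(sel) > 0.3)), "of", n_units)
```

A fully shared network would instead show selectivity values clustered around zero, since every unit would contribute to both tasks.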

Speaker Bio:

From a brief glimpse of a complex scene, we recognize people and objects, their relationships to each other, and the overall gist of the scene – all within a few hundred milliseconds and with no apparent effort. What are the computations underlying this remarkable ability, and how are they implemented in the brain? To address these questions, my research bridges recent advances in machine learning with human behavioral and neural data to provide a computationally precise account of how visual recognition works in humans. I am currently a postdoc at MIT, where I work with Nancy Kanwisher. I completed my PhD at the Max Planck Institute for Biological Cybernetics under the supervision of Isabelle Bülthoff and Johannes Schultz, investigating behavioral and neural correlates of dynamic face perception. During my first postdoc at CNRS-CerCo, working with Leila Reddy and Wei Ji Ma, I used a combination of behavioral modeling and neuroimaging to characterize the integration of facial form and motion information during face perception.