CBMM Panel Discussion: Should models of cortex be falsifiable?

Date Posted:  December 7, 2020
Date Recorded:  December 1, 2020
CBMM Speaker(s): Tomaso Poggio, Gabriel Kreiman, Josh McDermott, Leyla Isik, Martin Schrimpf, Susan Epstein, Jenelle Feather
Speaker(s): Thomas Serre, Michael Lee
Description: 

Presenters: Prof. Tomaso Poggio (MIT), Prof. Gabriel Kreiman (Harvard Medical School, BCH), and Prof. Thomas Serre (Brown U.)
Discussants: Prof. Leyla Isik (JHU), Martin Schrimpf (MIT), Michael Lee (MIT), Prof. Susan Epstein (Hunter CUNY), and Jenelle Feather (MIT)
Moderator: Prof. Josh McDermott (MIT)

Abstract: Deep learning architectures designed by engineers and optimized with stochastic gradient descent on large image databases have become de facto models of the cortex. A prominent example is vision. What sorts of insights can be derived from these models? Do their performance metrics reveal the inner workings of cortical circuits, or are they a dangerous mirage? What are the critical tests that models of cortex should pass?

We plan to discuss the promises and pitfalls of deep learning models, contrasting them with earlier models (VisNet, HMAX, …) that were developed from the ground up, following neuroscience data, to account for critical properties of primate vision: scale and position invariance together with selectivity.