%0 Conference Proceedings
%B Proceedings of the 40th International Conference on Machine Learning, PMLR
%D 2023
%T System Identification of Neural Systems: If We Got It Right, Would We Know?
%A Yena Han
%A Tomaso A. Poggio
%A Brian Cheung
%X

Artificial neural networks are being proposed as models of parts of the brain. The networks are compared to recordings of biological neurons, and good performance in reproducing neural responses is considered to support the model’s validity. A key question is how much this system identification approach tells us about brain computation. Does it validate one model architecture over another? We evaluate the most commonly used comparison techniques, such as a linear encoding model and centered kernel alignment, on their ability to correctly identify a model, by replacing brain recordings with recordings from known ground-truth models. System identification performance is quite variable; it also depends significantly on factors independent of the ground-truth architecture, such as the stimulus images. In addition, we show the limitations of using functional similarity scores to identify higher-level architectural motifs.
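One of the comparison techniques named in the abstract, centered kernel alignment, has a simple linear form. As an illustrative sketch (not the authors' code; the function name and NumPy formulation are assumptions), linear CKA between two stimulus-by-unit response matrices can be computed as:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two response matrices.

    X: (n_stimuli, n_model_units), Y: (n_stimuli, n_target_units).
    Returns a similarity score in [0, 1]; identical representations
    (up to an orthogonal transform and scaling) score 1.
    """
    # Center each unit's responses across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based CKA formula specialized to linear kernels.
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Because the score is invariant to orthogonal rotations of the unit basis, it compares representational geometry rather than individual units, which is one reason different architectures can receive similar scores.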

%B Proceedings of the 40th International Conference on Machine Learning, PMLR
%V 202
%P 12430-12444
%8 07/2023
%G eng
%U https://proceedings.mlr.press/v202/han23d.html

%0 Generic
%D 2022
%T System identification of neural systems: If we got it right, would we know?
%A Yena Han
%A Tomaso Poggio
%A Brian Cheung
%X

Various artificial neural networks developed by engineers have been evaluated as models of parts of the brain, such as the ventral stream in the primate visual cortex. After being trained on large datasets, the network outputs are compared to recordings of biological neurons. Good performance in reproducing neural responses is taken as validation of the model. This system identification approach differs from the traditional ways of testing theories and their associated models in the natural sciences, and it lacks a clear foundation in terms of theory and empirical validation. Here we begin characterizing some of these emerging approaches: what do they tell us? To address this question, we benchmark their ability to correctly identify a model by replacing the brain recordings with recordings from a known ground-truth model. We evaluate commonly used identification techniques such as neural regression (linear regression on a population of model units) and centered kernel alignment (CKA). Even when the correct model is among the candidates, we find that the performance of these approaches at system identification is quite variable; it also depends significantly on factors independent of the ground-truth architecture, such as the scoring function and the dataset.
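The neural-regression score described above can be sketched with a closed-form ridge fit in NumPy. This is a hedged illustration under assumed choices (the function name, the 75/25 train/test split, the ridge penalty, and mean Pearson-r scoring are not the paper's exact protocol):

```python
import numpy as np

def encoding_score(model_feats, target_resp, alpha=1.0, n_train=None):
    """Linear encoding model: ridge-regress target units onto model
    features, then score held-out predictions by mean Pearson r.

    model_feats: (n_stimuli, n_features); target_resp: (n_stimuli, n_units).
    Assumes roughly centered inputs (no intercept term is fit).
    """
    n = model_feats.shape[0]
    n_train = n_train or int(0.75 * n)
    Xtr, Xte = model_feats[:n_train], model_feats[n_train:]
    Ytr, Yte = target_resp[:n_train], target_resp[n_train:]
    # Closed-form ridge solution: W = (X^T X + alpha I)^(-1) X^T Y
    d = Xtr.shape[1]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ Ytr)
    pred = Xte @ W
    # Pearson correlation per target unit, averaged over units.
    rs = [np.corrcoef(pred[:, i], Yte[:, i])[0, 1] for i in range(Yte.shape[1])]
    return float(np.mean(rs))
```

Note how much freedom the scorer has (regularization strength, split, per-unit vs. pooled correlation); the abstract's point is that such choices, independent of the ground-truth architecture, can move the identification outcome.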

%8 07/2022
%2 https://hdl.handle.net/1721.1/143617

%0 Journal Article
%J Scientific Reports
%D 2020
%T Scale and translation-invariance for novel objects in human vision
%A Yena Han
%A Gemma Roig
%A Gadi Geiger
%A Tomaso Poggio
%X

Though the range of invariance in recognition of novel objects is a basic aspect of human vision, its characterization has remained surprisingly elusive. Here we report tolerance to scale and position changes in one-shot learning by measuring recognition accuracy of Korean letters presented in a flash to non-Korean subjects who had no previous experience with Korean letters. We found that humans show significant scale-invariance after only a single exposure to a novel object. The range of translation-invariance is limited, depending on the size and position of the presented objects. To understand the underlying brain computation associated with these invariance properties, we compared the experimental data with computational modeling results. Our results suggest that, to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance by encoding different scale channels, as well as eccentricity-dependent representations captured by neurons’ receptive field sizes and sampling density that change with eccentricity. Our psychophysical experiments and related simulations strongly suggest that the human visual system uses a computational strategy that differs in some key aspects from current deep learning architectures, being more data-efficient and relying more critically on eye movements.

%B Scientific Reports
%V 10
%8 01/2020
%G eng
%U http://www.nature.com/articles/s41598-019-57261-6
%N 1411
%! Sci Rep
%R 10.1038/s41598-019-57261-6

%0 Conference Paper
%B Vision Science Society
%D 2019
%T Eccentricity Dependent Neural Network with Recurrent Attention for Scale, Translation and Clutter Invariance
%A Jiaxuan Zhang
%A Yena Han
%A Tomaso Poggio
%A Gemma Roig
%C Florida, USA
%8 05/2019
%G eng

%0 Conference Paper
%B Vision Science Society
%D 2019
%T Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks
%A Yena Han
%A Gemma Roig
%A Gadi Geiger
%A Tomaso Poggio
%C St Pete Beach, FL, USA
%8 05/2019
%G eng
%U https://jov.arvojournals.org/article.aspx?articleid=2749961
%R 10.1167/19.10.28d

%0 Generic
%D 2018
%T Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results
%A Luke Arend
%A Yena Han
%A Martin Schrimpf
%A Pouya Bashivan
%A Kohitij Kar
%A Tomaso Poggio
%A James J. DiCarlo
%A Xavier Boix
%X

Deep neural networks have been shown to predict neural responses in higher visual cortex. The mapping from the model to a neuron in the brain occurs through a linear combination of many units in the model, leaving open the question of whether there also exists a correspondence at the level of individual neurons. Here we show that there exist many one-to-one mappings between single units in a deep neural network model and neurons in the brain. We show that this correspondence at the single-unit level is ubiquitous among state-of-the-art deep neural networks, and grows more pronounced for models with higher performance on a large-scale visual recognition task. Comparing matched populations in the brain and in a model, we demonstrate a further correspondence at the level of the population code: stimulus category can be partially decoded from real neural responses using a classifier trained purely on a matched population of artificial units in a model. This provides a new point of investigation for phenomena that require fine-grained mappings between deep neural networks and the brain.
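The one-to-one mapping idea above can be illustrated with a simple greedy matcher. This is a hedged sketch under the assumption that correspondence is scored by response correlation across stimuli; the helper name `match_units` and the greedy without-replacement scheme are illustrative, not the report's actual procedure:

```python
import numpy as np

def match_units(model_resp, brain_resp):
    """Greedily pair each brain neuron with the most-correlated,
    not-yet-used model unit.

    model_resp, brain_resp: (n_stimuli, n_units) response arrays.
    Returns an array m where m[j] is the model unit matched to
    brain neuron j (each model unit is used at most once).
    """
    n_model = model_resp.shape[1]
    # Joint correlation matrix; keep the model-by-brain cross block.
    full = np.corrcoef(model_resp.T, brain_resp.T)
    cross = full[:n_model, n_model:]          # (model units, brain neurons)
    matched, used = [], set()
    for j in range(cross.shape[1]):
        # Best still-unused model unit for brain neuron j.
        for i in np.argsort(-cross[:, j]):
            if i not in used:
                used.add(int(i))
                matched.append(int(i))
                break
    return np.array(matched)
```

With responses recorded from a ground-truth model whose units have been permuted and lightly perturbed, a matcher like this recovers the permutation, which is the intuition behind testing correspondence at the single-unit rather than population level.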

%8 11/2018
%2 http://hdl.handle.net/1721.1/118847

%0 Generic
%D 2017
%T On the Human Visual System Invariance to Translation and Scale
%A Yena Han
%A Gemma Roig
%A Gadi Geiger
%A Tomaso Poggio
%B Vision Sciences Society

%0 Conference Paper
%B AAAI Spring Symposium Series, Science of Intelligence
%D 2017
%T Is the Human Visual System Invariant to Translation and Scale?
%A Yena Han
%A Gemma Roig
%A Gadi Geiger
%A Tomaso Poggio
%G eng