Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs [video]
Date Posted:
February 10, 2020
Date Recorded:
October 25, 2019
CBMM Speaker(s):
Martin Schrimpf
Speaker(s):
Jonas Kubilius
Description:
Lead authors Jonas Kubilius and Martin Schrimpf discuss the challenges of measuring how closely neural networks match the brain, and present Brain-Score, a new scoring method they developed to evaluate models of the brain's ventral stream at scale, together with CORnet, a novel shallow, recurrent network.
[MUSIC PLAYING] PRESENTER: Before we started this project, research in neuroscience was typically done at the level of individual experiments. You collected data from, for instance, V4 or IT, and then you tested one V4 model and one IT model, and those were usually separate.
So what we are trying to do here is start an integrative approach that really combines experiments at multiple levels and puts more constraints on the models, making them more and more brain-like.
For the first set of benchmarks, we combined two neural benchmarks and one behavioral benchmark. The two neural benchmarks were high-quality [INAUDIBLE] recordings from V4 and IT, two high-level areas in visual processing. And the behavioral benchmark was from humans doing a match-to-sample task.
The set of these benchmarks together is what we call Brain-Score. On the model side, we also collected widely used models from machine learning, ranging from early [INAUDIBLE] all the way to the latest and greatest ResNets or [INAUDIBLE] at the time.
And then we evaluated those models on how well they could predict the neural activity in V4 and IT, and on how well they could capture human behavior at a fine-grained, image-by-image level.
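Neural predictivity of this kind is typically quantified by fitting a cross-validated linear mapping from model activations to recorded responses and correlating predictions with held-out data. The sketch below is a rough illustration only: it uses ridge regression as a stand-in (the Brain-Score paper itself uses PLS regression), and the array names `model_features` and `neural_responses` are assumptions, not the actual pipeline.

```python
# Sketch of neural predictivity, assuming `model_features` is an
# (images x units) array of flattened model activations and
# `neural_responses` is an (images x sites) array of V4 or IT responses.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def neural_predictivity(model_features, neural_responses, n_splits=10):
    """Cross-validated correlation between predicted and held-out responses."""
    scores = []
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kfold.split(model_features):
        reg = Ridge(alpha=1.0).fit(model_features[train], neural_responses[train])
        predicted = reg.predict(model_features[test])
        # correlate predicted with actual responses, per recording site
        site_scores = [pearsonr(predicted[:, i], neural_responses[test][:, i])[0]
                       for i in range(neural_responses.shape[1])]
        scores.append(np.median(site_scores))
    return float(np.mean(scores))
```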
JONAS KUBILIUS: So when we benchmarked all of these models on Brain-Score, we found that there is a very robust global correlation, such that models that perform better on ImageNet are also more predictive of brain responses. However, the state-of-the-art model on ImageNet is not the best model for predicting brain responses.
So it seems that if you are only optimizing for ImageNet, that strategy may not be sufficient anymore to get the best models of the brain.
So when you look at the best models on Brain-Score, they are doing their job: they predict neural and behavioral responses as we want. However, they have many layers, and that is quite at odds with how we tend to think about the visual system, where there is just a handful of visual areas.
The mapping between the models and the visual system becomes pretty tricky. And there is another problem: all of these models are feed-forward, while the visual system is quite recurrent, and recurrence plays an important role in how we recognize objects.
So we decided to develop a model that would be shallow and recurrent, with the recurrence compensating for the lack of depth in the model.
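To make the idea concrete, here is a simplified PyTorch sketch of a recurrent convolutional block in this spirit: the same weights are applied repeatedly across time steps, so unrolling the block in time stands in for the extra layers of a deeper feed-forward network. This is not CORnet's actual architecture (CORnet-S's areas are more elaborate, with bottleneck convolutions and per-step normalization); the names and hyperparameters here are illustrative.

```python
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """Simplified sketch of one recurrent 'area': the block's output from the
    previous time step is fed back and combined with the feed-forward input,
    so unrolling over `times` steps deepens the effective computation
    without adding parameters."""
    def __init__(self, in_channels, out_channels, times=5):
        super().__init__()
        self.times = times
        self.input_conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.recurrent_conv = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        drive = self.input_conv(x)            # feed-forward drive from the area below
        state = self.relu(self.norm(drive))
        outputs = [state]                     # keep per-time-step activations
        for _ in range(self.times - 1):
            state = self.relu(self.norm(drive + self.recurrent_conv(state)))
            outputs.append(state)
        return state, outputs

# Usage: unrolled for 5 steps, one block applies its convolution 5 times.
block = RecurrentConvBlock(in_channels=3, out_channels=64)
final_state, states_over_time = block(torch.randn(1, 3, 32, 32))
```

Unrolling like this is how a shallow network can recover some of the effective depth of much deeper feed-forward models while keeping a one-to-one mapping between blocks and visual areas.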
MARTIN SCHRIMPF: Now, testing CORnet on the ImageNet benchmark, we found that it was actually very competitive compared to other models, especially considering its shallowness.
JONAS KUBILIUS: And we also saw that it is actually doing really well on Brain-Score, which was our target goal. Now, on top of that, we thought: well, this is a recurrent model, so how about we try to predict neural responses over time? That is something the feed-forward models could not do.
Happily enough, we found a very good correlation between these measures.
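One plausible way to set up such a comparison, assuming per-time-step activations like the `outputs` list returned by the block sketched above, and neural responses binned in time (`block_outputs` and `neural_by_bin` are assumed names, not the paper's code):

```python
# Sketch: fit one mapping per model time step / neural time bin and compare.
# Assumes `block_outputs` is a list of (images x units) arrays of flattened
# activations, one per model time step, and `neural_by_bin` is a list of
# (images x sites) response arrays, one per time bin of the recording.
# `neural_predictivity` is the cross-validated routine sketched earlier.
scores_over_time = [
    neural_predictivity(features, responses)
    for features, responses in zip(block_outputs, neural_by_bin)
]
```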
MARTIN SCHRIMPF: In addition to that, we also tested how well this model could transfer to another data set, and we found that it really outperformed comparable shallow models.
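A standard way to measure this kind of transfer, sketched minimally below: freeze the pretrained features and train only a new linear readout on the target dataset. `backbone`, `feature_dim`, `num_classes`, and `new_loader` are hypothetical placeholders, not names from the paper's code.

```python
import torch
import torch.nn as nn

# Freeze the pretrained backbone; only the new readout will be trained.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

readout = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.SGD(readout.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for images, labels in new_loader:
    with torch.no_grad():
        features = backbone(images).flatten(1)  # frozen features
    loss = criterion(readout(features), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```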
Now, going forward, we are trying to expand our set of integrative benchmarks even more. So we are going to put in V1 and V2 processing, more behaviors, and so forth. And our plan is to test CORnet on all of them along with the other models. In addition, we are opening up the Brain-Score platform for new submissions. So if you think you have the best model of image processing in the brain, please send it our way.
[MUSIC PLAYING]