A neural network trained for prediction mimics diverse features of biological neurons and perception [video]
Date Posted:
May 19, 2020
Date Recorded:
May 1, 2020
CBMM Speaker(s):
Bill Lotter
Description:
Lead author Bill Lotter discusses recent work, published in Nature Machine Intelligence, demonstrating that the PredNet, a recurrent predictive neural network, can reproduce various phenomena observed in the brain.
A neural network trained for prediction mimics diverse features of biological neurons and perception - doi:10.1038/s42256-020-0170-9
[MUSIC PLAYING] BILL LOTTER: Hi, I'm Bill Lotter. I have a PhD in biophysics from Harvard, where I was jointly advised by Gabriel Kreiman and David Cox. And I was a member of CBMM.
A common goal in neuroscience is to build computational models that can reproduce and explain neural phenomena, as this can help us gain a better understanding of the computational principles at work in the brain. Here we took a deep neural network that we had previously developed, called the PredNet, and tested whether it could reproduce various phenomena observed in actual neurons.
There have been a number of recent works, for instance, showing that deep neural networks can be useful in predicting the responses of actual neurons to sets of images. Many of these networks, however, have been purely feedforward, meaning that they lack the top-down and lateral recurrence that we know is prevalent in the brain.
Additionally, these networks are often trained in a purely supervised manner, using large numbers of labeled training examples. We know that this level of supervision is also different from how humans learn. The PredNet model, on the other hand, is a neural network that has both top-down and lateral recurrent connections, and it is additionally trained in a purely unsupervised, or self-supervised, manner. The network is trained to make next-frame predictions in videos. That is, given a series of video frames, it's trained to predict the next frame in the sequence.
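To make this training objective concrete, here is a minimal sketch of next-frame prediction in PyTorch. This is not the PredNet implementation itself: `ToyFramePredictor` is a hypothetical stand-in (a single LSTM over flattened frames rather than PredNet's stacked convolutional recurrence), and the plain L1 loss only loosely mirrors PredNet's prediction-error units.

```python
import torch
import torch.nn as nn

class ToyFramePredictor(nn.Module):
    # Hypothetical stand-in for PredNet: flattens each frame, runs an LSTM
    # over the sequence, and decodes the hidden state back to pixel space.
    def __init__(self, frame_dim, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.decode = nn.Linear(hidden, frame_dim)

    def forward(self, frames):          # frames: (batch, time, frame_dim)
        h, _ = self.rnn(frames)
        return self.decode(h)           # predicted next frame at each step

def train_step(model, optimizer, frames):
    # Self-supervised target: the frame at t+1, predicted from frames up to t.
    inputs, targets = frames[:, :-1], frames[:, 1:]
    preds = model(inputs)
    loss = nn.functional.l1_loss(preds, targets)  # loose analogue of PredNet's error units
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random "video" data stands in for car-mounted camera clips.
model = ToyFramePredictor(frame_dim=64 * 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(8, 10, 64 * 64)     # 8 clips, 10 frames, 64x64 grayscale
print(train_step(model, opt, batch))
```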
Here, as in the original paper, we trained the PredNet model on car-mounted camera videos, that is, videos taken from cameras attached to cars driving around. We then tested the network with a number of artificial stimuli, similar to those commonly used in neuroscience experiments, to see if it could reproduce various phenomena. We saw that it indeed could, reproducing aspects ranging from single-unit response properties to responses to visual illusions.
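To give a flavor of this probing procedure (again a sketch, not the paper's analysis code), one can synthesize a classic neuroscience stimulus such as a drifting sinusoidal grating, pass it through a recurrent network, and record the time course of individual units. The `drifting_grating` helper and the untrained stand-in LSTM below are hypothetical; in practice one would probe the trained model.

```python
import numpy as np
import torch
import torch.nn as nn

def drifting_grating(n_frames=10, size=64, spatial_freq=0.1, speed=0.5, theta=0.0):
    # Drifting sinusoidal grating, a classic stimulus in visual neuroscience.
    ys, xs = np.mgrid[0:size, 0:size]
    ramp = xs * np.cos(theta) + ys * np.sin(theta)
    frames = [np.sin(2 * np.pi * spatial_freq * ramp - speed * t)
              for t in range(n_frames)]
    return np.stack(frames).astype(np.float32)       # (time, size, size)

# An untrained recurrent layer stands in for the trained network here.
rnn = nn.LSTM(input_size=64 * 64, hidden_size=512, batch_first=True)
stim = torch.from_numpy(drifting_grating()).reshape(1, 10, -1)
with torch.no_grad():
    hidden, _ = rnn(stim)                            # (1, time, hidden)
responses = hidden[0, :, :5]                         # time course of five "units"
print(responses.shape)                               # torch.Size([10, 5])
```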
For instance, we saw that the model exhibited temporal and spatial response properties resembling those of visual neurons. It also showed sequence-learning effects similar to those observed in primate visual cortex. Finally, it was able to reproduce aspects of visual illusions, such as the Kanizsa triangle, where the model's responses resembled those observed in actual neurons.
It additionally showed correlates of the flash-lag illusion. So a model that was inspired by neuroscience and trained on real-world stimuli could reproduce various phenomena observed in biological neurons, even though it wasn't explicitly trained to do so. These results thus suggest potentially deep connections between recurrent predictive neural networks and the brain.
[MUSIC PLAYING]