Theoretical issues in deep networks [video]
Recorded: Jul 21, 2020
Uploaded: July 21, 2020
Part of: All Captioned Videos, Publication Releases
CBMM Speaker(s): Andrzej Banburski
Publication: Theoretical issues in deep networks
Abstract: While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about...
A Computational Explanation for Domain Specificity in the Human Visual System
Recorded: Jun 6, 2020
Uploaded: July 14, 2020
Part of: All Captioned Videos, CBMM Research
CBMM Speaker(s): Katharina Dobs
Katharina Dobs, MIT. Many regions of the human brain perform highly specific functions, such as recognizing faces, understanding language, and thinking about other people’s thoughts. Why might this domain-specific organization be a good design...
Decoding Animal Behavior Through Pose Tracking
Recorded: Jul 9, 2020
Uploaded: July 10, 2020
Part of: All Captioned Videos, Computational Tutorials
Speaker(s): Talmo Pereira, Princeton University
Behavioral quantification, the problem of measuring and describing how an animal interacts with the world, has been gaining increasing attention across disciplines as new computational methods emerge to automate...
Deciphering Brain Codes to Build Smarter AI
Recorded: Jun 24, 2020
Uploaded: July 7, 2020
Part of: All Captioned Videos, CBMM Summer Lecture Series
CBMM Speaker(s): Gabriel Kreiman
Learning to see late in life
Recorded: Jun 29, 2020
Uploaded: July 7, 2020
Part of: All Captioned Videos, CBMM Summer Lecture Series
Speaker(s): Pawan Sinha
[The video is missing the first few minutes of the talk due to technical difficulties.] Pawan Sinha, MIT
AI for physics & physics for AI
Recorded: May 5, 2020
Uploaded: June 25, 2020
Part of: All Captioned Videos, CBMM Research
CBMM Speaker(s): Max Tegmark
Max Tegmark, MIT. Abstract: After briefly reviewing how machine learning is becoming ever more widely used in physics, I explore how ideas and methods from physics can help improve machine learning, focusing on automated discovery of mathematical...
Is compositionality overrated? The view from language emergence
Recorded: Jun 23, 2020
Uploaded: June 24, 2020
Part of: All Captioned Videos, Brains, Minds and Machines Seminar Series
Speaker(s): Marco Baroni, Facebook AI Research and University Pompeu Fabra, Barcelona
Abstract: Compositionality is the property whereby linguistic expressions that denote new composite meanings are derived by a rule-based combination of expressions denoting their parts. Linguists agree that compositionality plays a central role in...
Improving Generalization by Self-Training & Self-Distillation
Recorded: Jun 9, 2020
Uploaded: June 10, 2020
Part of: All Captioned Videos, CBMM Research
Speaker(s): Hossein Mobahi, Google Research
In supervised learning we often seek a model which minimizes (to epsilon optimality) a loss function over a training set, possibly subject to some (implicit or explicit) regularization. Suppose you train a model this way and read out the predictions...
A neural network trained for prediction mimics diverse features of biological neurons and perception [video]
Recorded: May 1, 2020
Uploaded: May 19, 2020
Part of: All Captioned Videos, Publication Releases
CBMM Speaker(s): Bill Lotter
Lead author Bill Lotter discusses their recent work, published in Nature Machine Intelligence, demonstrating that the PredNet, a recurrent predictive neural network, can reproduce various phenomena observed in the brain.
Sobolev Independence Criterion: Non-Linear Feature Selection with False Discovery Control
Recorded: Apr 28, 2020
Uploaded: May 5, 2020
Part of: All Captioned Videos, CBMM Research
Speaker(s): Youssef Mroueh, IBM Research and MIT-IBM Watson AI Lab
Abstract: In this talk I will show how learning gradients helps us design new non-linear algorithms for feature selection and black-box sampling, and also aids in understanding neural style transfer....