December 10, 2019 - 11:30 am
Object recognition models have improved by leaps and bounds over the past decade, but they’ve got a long way to go where accuracy is concerned. That’s the conclusion of a joint team from the Massachusetts Institute of Technology and IBM, which recently released a data set — ObjectNet — designed to illustrate the performance gap between machine learning algorithms and humans. Unlike many existing data sets, which feature photos taken from Flickr...
December 6, 2019 - 1:00 pm
The Center for Brains, Minds and Machines is well represented at the thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). Below, you will find listings of accepted papers/proceedings and accompanying coverage: "Metamers of neural networks reveal divergence from human perceptual systems," J. Feather, A. Durango, R. Gonzalez, and J. H. McDermott, NeurIPS 2019, Vancouver, Canada, 2019. [video] Metamers...
December 3, 2019 - 3:15 pm
Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI. Rob Matheson | MIT News Office Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick. Now MIT researchers...
November 26, 2019 - 4:00 pm
MIT 46-5165
Nick Watters (Tenenbaum Lab)
Title: Unsupervised Learning and Structured Representations in Neural Networks
Sample efficiency, transfer, and flexibility are hallmarks of biological intelligence and long-standing challenges for artificial learning systems. Core to these capacities is the reuse of structured...
November 22, 2019 - 11:45 am
It’s not something most Harvard faculty spend much time contemplating, but Tomer Ullman likes to think about magic. In particular, he likes to think about whether it would be harder to levitate a frog or turn it to stone. And if you’re thinking the answer is obvious (turning it to stone, right?), Ullman says that’s the point. The reason the answer seems clear, the assistant professor of psychology said, has to do with what researchers call “...
November 19, 2019 - 4:00 pm
MIT 46-5165
Shimon Ullman
Topic: Combining vision and cognition by BU-TD visual routines
November 19, 2019 - 12:45 pm
Using deductive reasoning, the bot identifies friend or foe to ensure victory over humans in certain online games. Rob Matheson | MIT News Office MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret. Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon...
November 12, 2019 - 4:00 pm
MIT 46-5165
Katharina Dobs (Kanwisher Lab)
Using task-optimized neural networks to understand why brains have specialized processing for faces
Previous research has identified multiple functionally specialized regions of the human visual cortex and has started to characterize the precise function of these regions. But why do brains have...
November 12, 2019 - 12:15 pm
isee, an autonomous driving startup for trucks, announced $15 million in Series A funding from Founders Fund on Monday. Unlike other autonomous driving and logistics startups, isee uses proprietary deep learning and cognitive AI technology to try to give its trucks "common sense," an area that other technologies struggle to replicate. The startup is Founders Fund's first investment in the red-hot logistics sector. Still, partner Scott...
November 5, 2019 - 4:00 pm
Singleton Auditorium
Thomas Serre, Cognitive, Linguistic & Psychological Sciences Department, Carney Institute for Brain...
Title: Feedforward and feedback processes in visual recognition
Abstract: Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even...
October 29, 2019 - 4:00 pm
Star Seminar Room (Stata D463)
Thomas Icard, Stanford
Abstract: How might we assess the expressive capacity of different classes of probabilistic generative models? The subject of this talk is an approach that appeals to machines of increasing strength (finite-state, recursive, etc.), or equivalently, by probabilistic grammars of increasing complexity...
October 28, 2019 - 4:00 pm
Singleton Auditorium
Mikhail Belkin, Professor, The Ohio State University - Department of Computer Science and Engineering,...
Title: Beyond Empirical Risk Minimization: the lessons of deep learning
Abstract: "A model with zero training error is overfit to the training data and will typically generalize poorly," goes statistical textbook wisdom. Yet, in modern practice, over-parametrized deep networks with near ...
October 8, 2019 - 4:00 pm
MIT 46-5165
Mengmi Zhang and Jie Zheng, Kreiman Lab
October 2, 2019 - 11:00 am
Singleton Auditorium
Jack Hidary, Alphabet X, formerly Google X
Abstract: Jack Hidary will take us through the nascent but promising field of quantum computing and his new book, Quantum Computing: An Applied Approach.
Bio: Jack D. Hidary is a research scientist in quantum computing and in AI at Alphabet X, formerly Google X. He and his group develop and...
October 1, 2019 - 4:00 pm
MIT 46-5165
Andrzej Banburski, Poggio Lab, Title: Biologically-inspired defenses against adversarial attacks. Abstract: Adversarial examples are a...