February 25, 2020 - 4:00 pm
Singleton Auditorium
Michael Douglas, Stony Brook
Title: How will we do mathematics in 2030?
Abstract:
We make the case that over the coming decade,
computer assisted reasoning will become far more widely used in the mathematical sciences.
This includes interactive and automatic theorem verification, symbolic algebra, 
and emerging technologies...
December 17, 2019 - 4:00 pm
MIT 46-5165
Jacob Andreas
Title: Language as a scaffold for learning
Abstract:
Research on constructing and evaluating machine learning models is driven
almost exclusively by examples. We specify the behavior of sentiment classifiers
with labeled documents, guide learning of robot policies by assigning scores to...
December 10, 2019 - 2:15 pm
Objects are posed in varied positions and shot at odd angles to spur new AI techniques.
Kim Martineau | MIT Quest for Intelligence
Computer vision models have learned to identify objects in photos so accurately that some can outperform humans on some datasets. But when those same object detectors are turned loose in the real world, their performance noticeably drops, creating reliability concerns for self-driving cars and other safety-...
December 10, 2019 - 11:30 am
Object recognition models have improved by leaps and bounds over the past decade, but they’ve got a long way to go where accuracy is concerned. That’s the conclusion of a joint team from the Massachusetts Institute of Technology and IBM, which recently released a data set — ObjectNet — designed to illustrate the performance gap between machine learning algorithms and humans. Unlike many existing data sets, which feature photos taken from Flickr...
December 6, 2019 - 1:00 pm
The Center for Brains, Minds and Machines is well-represented at the thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). Below, you will find listings of accepted papers/proceedings and accompanying coverage:
"Metamers of neural networks reveal divergence from human perceptual systems" Feather, J., Durango, A., Gonzalez, R., and McDermott, J. H., NeurIPS 2019. Vancouver, Canada, 2019. [video] Metamers...
December 3, 2019 - 3:15 pm
Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI.
Rob Matheson | MIT News Office
Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick. Now MIT researchers...
November 26, 2019 - 4:00 pm
MIT 46-5165
Nick Watters (Tenenbaum Lab)
Title: Unsupervised Learning and Structured Representations in Neural Networks
Abstract:
Sample efficiency, transfer, and flexibility are hallmarks of biological intelligence and long-standing challenges for artificial learning systems. Core to these capacities is the reuse of structured...
November 22, 2019 - 11:45 am
It’s not something most Harvard faculty spend much time contemplating, but Tomer Ullman likes to think about magic. In particular, he likes to think about whether it would be harder to levitate a frog or turn it to stone. And if you’re thinking the answer is obvious (turning it to stone, right?), Ullman says that’s the point. The reason the answer seems clear, the assistant professor of psychology said, has to do with what researchers call “...
November 19, 2019 - 4:00 pm
MIT 46-5165
Shimon Ullman
Topic: Combining vision and cognition by BU-TD visual routines
November 12, 2019 - 4:00 pm
MIT 46-5165
Katharina Dobs (Kanwisher Lab)
Title: Using task-optimized neural networks to understand why brains have specialized processing for faces
Previous research has identified multiple functionally specialized regions of the human visual cortex and has started to characterize the precise function of these regions. But why do brains have...
November 12, 2019 - 12:15 pm
isee, an autonomous driving startup for trucks, announced $15 million in Series A funding from Founders Fund on Monday. Unlike other autonomous driving and logistics startups, isee uses proprietary deep learning and cognitive AI technology to try to give its trucks "common sense," an area that other technologies struggle to replicate.  The startup is Founders Fund's first investment in the red-hot logistics sector. Still, partner Scott...
November 5, 2019 - 4:00 pm
Singleton Auditorium
Thomas Serre, Cognitive, Linguistic & Psychological Sciences Department, Carney Institute for Brain...
Title: Feedforward and feedback processes in visual recognition
Abstract: Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even...
October 29, 2019 - 4:00 pm
Star Seminar Room (Stata D463)
Thomas Icard, Stanford
Abstract: How might we assess the expressive capacity of different classes of probabilistic generative models? The subject of this talk is an approach that appeals to machines of increasing strength (finite-state, recursive, etc.), or equivalently, by probabilistic grammars of increasing complexity...
October 28, 2019 - 4:00 pm
Singleton Auditorium
Mikhail Belkin, Professor, The Ohio State University - Department of Computer Science and Engineering,...
Title: Beyond Empirical Risk Minimization: the lessons of deep learning
Abstract: "A model with zero training error is overfit to the training data and will typically generalize poorly" goes statistical textbook wisdom. Yet, in modern practice, over-parametrized deep networks with near...
October 8, 2019 - 4:00 pm
MIT 46-5165
Mengmi Zhang and Jie Zheng, Kreiman Lab
