April 14, 2020 - 4:00 pm
MIT 46-5165
March 16, 2020 - 4:00 pm
Singleton Auditorium
Lior Wolf, Tel Aviv University and Facebook AI Research.
Hypernetworks, also known as dynamic networks, are neural networks in which the weights of at least some of the layers vary dynamically based on the input. Such networks have composite architectures in which one network predicts the weights of another network. I will briefly describe the early days...
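The weight-prediction idea can be sketched minimally: a small "hyper" network maps a conditioning input to the weight matrix of a target layer, which then processes the data. This is an illustrative toy, not an implementation from the talk; all sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

# Hypothetical dimensions: z conditions the weights of a linear layer
# mapping in_dim -> out_dim.
z_dim, hidden, in_dim, out_dim = 4, 16, 8, 3

# Parameters of the hypernetwork itself (these are what training would update).
W1 = rng.normal(scale=0.1, size=(hidden, z_dim))
W2 = rng.normal(scale=0.1, size=(out_dim * in_dim, hidden))

def hyper_forward(z, x):
    """Predict the target layer's weights from z, then apply them to x."""
    h = relu(W1 @ z)
    W_target = (W2 @ h).reshape(out_dim, in_dim)  # dynamically generated weights
    return W_target @ x

z = rng.normal(size=z_dim)
x = rng.normal(size=in_dim)
y = hyper_forward(z, x)
print(y.shape)  # (3,)
```

Note that the target layer has no weights of its own: changing `z` changes the effective weight matrix, which is the sense in which the network is "dynamic."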
March 10, 2020 - 4:00 pm
MIT 46-5165
Michael Douglas
February 25, 2020 - 4:00 pm
Singleton Auditorium
Michael Douglas, Stony Brook
Title: How will we do mathematics in 2030?
We make the case that over the coming decade, computer-assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies...
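As a small illustration of the interactive theorem verification mentioned in the abstract (an example chosen here, not one from the talk), a proof assistant such as Lean mechanically checks every step of a proof:

```lean
-- The Lean kernel verifies that Nat.add_comm really proves this statement.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

If the term did not prove the stated proposition, the proof assistant would reject it, which is what makes such systems trustworthy as verifiers.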
February 18, 2020 - 4:00 pm
MIT 46-5165
Andrei Barbu, Katz Lab
January 14, 2020 - 8:30 am
Princeton’s Joshua Peterson and Harvard’s Arturo Deza flew earlier that week to Vancouver, British Columbia, for the Neural Information Processing Systems (NeurIPS) conference, the world’s premier machine learning venue, where they organized the Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop along with MIT-CBMM’s Ratan Murty and Princeton’s Tom Griffiths. The SVRHM workshop was sponsored in part by the Center...
December 20, 2019 - 12:15 pm
A new algorithm wins multi-player, hidden role games. Kenneth I. Blum | Center for Brains, Minds and Machines In the wilds of the schoolyard, alliances and conflicts are in flux every day, amid the screams and laughter. How do people choose friend and foe? When should they cooperate? Much past research has been done on the emergence of cooperation in a variety of competitive games, which can be treated as controlled laboratories for exploring...
December 19, 2019 - 12:45 pm
As you read this line, you’re bringing each word into clear view for a brief moment while blurring out the rest, perhaps even ignoring the roar of a leaf blower outside. It may seem like a trivial skill, but it’s actually fundamental to almost everything we do. If the brain weren’t able to pick and choose what portion of the incoming flood of sensory information should get premium processing, the world would look like utter chaos—an...
Jacob Andreas
December 17, 2019 - 4:00 pm
MIT 46-5165
Jacob Andreas
Title: Language as a scaffold for learning
Research on constructing and evaluating machine learning models is driven almost exclusively by examples. We specify the behavior of sentiment classifiers with labeled documents, guide learning of robot policies by assigning scores to...
December 12, 2019 - 1:30 pm
Stimuli that sound or look like gibberish to humans are indistinguishable from naturalistic stimuli to deep networks. Kenneth I. Blum | Center for Brains, Minds and Machines When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to...
December 10, 2019 - 2:15 pm
Objects are posed in varied positions and shot at odd angles to spur new AI techniques. Kim Martineau | MIT Quest for Intelligence Computer vision models have learned to identify objects in photos so accurately that some outperform humans on certain datasets. But when those same object detectors are turned loose in the real world, their performance drops noticeably, creating reliability concerns for self-driving cars and other safety-...
December 10, 2019 - 11:30 am
Object recognition models have improved by leaps and bounds over the past decade, but they’ve got a long way to go where accuracy is concerned. That’s the conclusion of a joint team from the Massachusetts Institute of Technology and IBM, which recently released a data set — ObjectNet — designed to illustrate the performance gap between machine learning algorithms and humans. Unlike many existing data sets, which feature photos taken from Flickr...
December 6, 2019 - 1:00 pm
The Center for Brains, Minds and Machines is well-represented at the thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). Below, you will find listings of accepted papers/proceedings and accompanying coverage: "Metamers of neural networks reveal divergence from human perceptual systems," Feather, J., Durango, A., Gonzalez, R., and McDermott, J. H., NeurIPS 2019, Vancouver, Canada, 2019. [video] Metamers...
December 3, 2019 - 3:15 pm
Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI. Rob Matheson | MIT News Office Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick. Now MIT researchers...
November 26, 2019 - 4:00 pm
MIT 46-5165
Nick Watters (Tenenbaum Lab)
Title: Unsupervised Learning and Structured Representations in Neural Networks
Sample efficiency, transfer, and flexibility are hallmarks of biological intelligence and long-standing challenges for artificial learning systems. Core to these capacities is the reuse of structured...