February 11, 2020 - 11:30 am
Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects. Kris Brewer | Center for Brains, Minds and Machines Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognize her face? “Yes, of course,” you are probably thinking. If this is true, it would mean that our visual system...
February 4, 2020 - 4:00 pm
Singleton Auditorium
Leslie Pack Kaelbling, CSAIL
Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the...
January 15, 2020 - 12:00 pm
by Sabbi Lall Visual art has found many ways of representing objects, from the ornate Baroque period to modernist simplicity. Artificial visual systems are somewhat analogous: from relatively simple beginnings inspired by key regions in the visual cortex, recent advances in performance have seen increasing complexity. “Our overall goal has been to build an accurate, engineering-level model of the visual system, to ‘reverse engineer’ visual...
January 14, 2020 - 8:30 am
Princeton’s Joshua Peterson and Harvard’s Arturo Deza flew earlier that week to Vancouver, British Columbia, for the Neural Information Processing Systems (NeurIPS) conference, the world’s premier machine learning venue, where they organized the Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop along with MIT-CBMM’s Ratan Murty and Princeton’s Tom Griffiths. The SVRHM workshop was sponsored in part by the Center...
December 20, 2019 - 12:15 pm
A new algorithm wins multi-player, hidden role games. Kenneth I. Blum | Center for Brains, Minds and Machines In the wilds of the schoolyard, alliances and conflicts are in flux every day, amid the screams and laughter. How do people choose friend and foe? When should they cooperate? Much past research has been done on the emergence of cooperation in a variety of competitive games, which can be treated as controlled laboratories for exploring...
December 19, 2019 - 12:45 pm
As you read this line, you’re bringing each word into clear view for a brief moment while blurring out the rest, perhaps even ignoring the roar of a leaf blower outside. It may seem like a trivial skill, but it’s actually fundamental to almost everything we do. If the brain weren’t able to pick and choose what portion of the incoming flood of sensory information should get premium processing, the world would look like utter chaos—an...
December 17, 2019 - 4:00 pm
MIT 46-5165
Jacob Andreas
Title: Language as a scaffold for learning
Research on constructing and evaluating machine learning models is driven
almost exclusively by examples. We specify the behavior of sentiment classifiers
with labeled documents, guide learning of robot policies by assigning scores to...
December 12, 2019 - 1:30 pm
Stimuli that sound or look like gibberish to humans are indistinguishable from naturalistic stimuli to deep networks. Kenneth I. Blum | Center for Brains, Minds and Machines When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to...
December 10, 2019 - 2:15 pm
Objects are posed in varied positions and shot at odd angles to spur new AI techniques. Kim Martineau | MIT Quest for Intelligence Computer vision models have learned to identify objects in photos so accurately that some can outperform humans on certain datasets. But when those same object detectors are turned loose in the real world, their performance noticeably drops, creating reliability concerns for self-driving cars and other safety-...
December 10, 2019 - 11:30 am
Object recognition models have improved by leaps and bounds over the past decade, but they’ve got a long way to go where accuracy is concerned. That’s the conclusion of a joint team from the Massachusetts Institute of Technology and IBM, which recently released a data set — ObjectNet — designed to illustrate the performance gap between machine learning algorithms and humans. Unlike many existing data sets, which feature photos taken from Flickr...
December 6, 2019 - 1:00 pm
The Center for Brains, Minds and Machines is well represented at the thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). Below is a list of accepted papers/proceedings and accompanying coverage: "Metamers of neural networks reveal divergence from human perceptual systems" J. Feather, Durango, A., Gonzalez, R., and McDermott, J. H., NeurIPS 2019. Vancouver, Canada, 2019. [video] Metamers...
December 3, 2019 - 3:15 pm
Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI. Rob Matheson | MIT News Office Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick. Now MIT researchers...
November 26, 2019 - 4:00 pm
MIT 46-5165
Nick Watters (Tenenbaum Lab)
Title: Unsupervised Learning and Structured Representations in Neural Networks
Sample efficiency, transfer, and flexibility are hallmarks of biological intelligence and long-standing challenges for artificial learning systems. Core to these capacities is the reuse of structured...
November 22, 2019 - 11:45 am
It’s not something most Harvard faculty spend much time contemplating, but Tomer Ullman likes to think about magic. In particular, he likes to think about whether it would be harder to levitate a frog or turn it to stone. And if you’re thinking the answer is obvious (turning it to stone, right?), Ullman says that’s the point. The reason the answer seems clear, the assistant professor of psychology said, has to do with what researchers call “...
November 19, 2019 - 4:00 pm
MIT 46-5165
Shimon Ullman
Topic: Combining vision and cognition by BU-TD visual routines