March 9, 2020 - 11:15 am
Deep-learning models can spot patterns that humans can't. But software still can't explain, say, what caused one object to collide with another. by Will Knight. Here's a troubling fact: a self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who's just learning to walk. A new experiment shows how difficult it is for even the best artificial intelligence systems...
March 4, 2020 - 10:00 am
Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly. Anne Trafton | MIT News Office. When we open our eyes, we immediately see our surroundings in great detail. How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision. Scientists who study the brain have tried to replicate this...
February 28, 2020 - 2:00 pm
Researchers discover that no magic is required to explain why deep networks generalize despite going against statistical intuition. by Kris Brewer. Introductory statistics courses teach us that, when fitting a model to some data, we should have more data than free parameters to avoid the danger of overfitting — fitting noisy data too closely, and thereby failing to fit new data. It is surprising, then, that in modern deep learning the practice...
February 25, 2020 - 4:00 pm
Singleton Auditorium
Michael Douglas, Stony Brook
Title: How will we do mathematics in 2030?
We make the case that over the coming decade, computer assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies...
February 24, 2020 - 1:30 pm
BEHIND THE PAPER: Moving away from alchemy into the age of science for deep learning, by Andrzej Banburski. Imagine you're back in elementary school and just took your first statistics course on fitting models to data. One thing you're sure about is that a good model surely should have fewer parameters than data points (think of fitting ten data points with a line, i.e. two parameters); otherwise you'll ruin the predictivity of your model by overfitting....
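The teaser's toy example (ten data points, a two-parameter line) can be made concrete in a few lines. This is a hypothetical sketch, not code from the paper: the linear ground truth, the noise level, and the use of NumPy polynomial fitting are all assumptions chosen to illustrate the textbook warning that a 10-parameter model can fit 10 noisy points perfectly yet predict fresh data worse:

```python
# A minimal sketch (not from the article): fit 10 noisy points drawn
# from a line, once with 2 parameters and once with 10 parameters.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
noise = rng.normal(scale=0.1, size=(2, 10))
y_train = 2 * x + 1 + noise[0]   # the data we fit
y_fresh = 2 * x + 1 + noise[1]   # new data from the same source

line = np.polyfit(x, y_train, deg=1)   # 2 free parameters
poly = np.polyfit(x, y_train, deg=9)   # 10 free parameters: interpolates the noise

def mse(coeffs, y):
    """Mean squared error of a fitted polynomial against targets y."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The 10-parameter model drives training error to (nearly) zero...
print("train:", mse(line, y_train), mse(poly, y_train))
# ...which is exactly the overfitting the statistics course warns about:
# the "perfect" fit has memorized the noise, not the underlying line.
print("fresh:", mse(line, y_fresh), mse(poly, y_fresh))
```

The surprise the article goes on to describe is that modern deep networks routinely live in the many-more-parameters-than-data regime of the degree-9 fit, yet still generalize — which is what makes the classical intuition above incomplete rather than simply wrong.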
February 18, 2020 - 4:00 pm
MIT 46-5165
Andrei Barbu, Katz Lab
February 14, 2020 - 12:00 pm
Professor Tomaso Poggio, Dr. Andrzej Banburski and M.Sc. Qianli Liao from the Center for Brains, Minds and Machines, located at the Massachusetts Institute of Technology, won the first edition of the international Scientific Award "Ratio et Spes", established jointly by the Nicolaus Copernicus University in Toruń and the Vatican Foundation Joseph Ratzinger-Benedict XVI. The prize will be presented in Toruń on February 19, during the University Day celebrations,...
February 11, 2020 - 4:00 pm
MIT 46-5165
Tiago Marques
Abstract: Object recognition relies on the hierarchical processing of visual information along the primate ventral stream. Artificial neural networks (ANNs) have recently achieved unprecedented accuracy in predicting neuronal responses in different cortical areas and primate behavior. In this talk, I...
February 11, 2020 - 11:30 am
Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects. Kris Brewer | Center for Brains, Minds and Machines. Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognize her face? "Yes, of course," you are probably thinking. If this is true, it would mean that our visual system...
February 4, 2020 - 4:00 pm
Singleton Auditorium
Leslie Pack Kaelbling, CSAIL
Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the...
January 15, 2020 - 12:00 pm
by Sabbi Lall. Visual art has found many ways of representing objects, from the ornate Baroque period to modernist simplicity. Artificial visual systems are somewhat analogous: from relatively simple beginnings inspired by key regions in the visual cortex, recent advances in performance have seen increasing complexity. "Our overall goal has been to build an accurate, engineering-level model of the visual system, to 'reverse engineer' visual...
January 14, 2020 - 8:30 am
Princeton’s Joshua Peterson and Harvard’s Arturo Deza flew earlier that week to Vancouver, British Columbia, for the Neural Information Processing Systems (NeurIPS) conference, the world’s premier machine learning venue, where they organized the Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop along with MIT-CBMM’s Ratan Murty and Princeton’s Tom Griffiths. The SVRHM workshop was sponsored in part by the Center...
December 20, 2019 - 12:15 pm
A new algorithm wins multi-player, hidden-role games. Kenneth I. Blum | Center for Brains, Minds and Machines. In the wilds of the schoolyard, alliances and conflicts are in flux every day, amid the screams and laughter. How do people choose friend and foe? When should they cooperate? Much past research has been done on the emergence of cooperation in a variety of competitive games, which can be treated as controlled laboratories for exploring...
December 19, 2019 - 12:45 pm
As you read this line, you’re bringing each word into clear view for a brief moment while blurring out the rest, perhaps even ignoring the roar of a leaf blower outside. It may seem like a trivial skill, but it’s actually fundamental to almost everything we do. If the brain weren’t able to pick and choose what portion of the incoming flood of sensory information should get premium processing, the world would look like utter chaos—an...
December 17, 2019 - 4:00 pm
MIT 46-5165
Jacob Andreas
Title: Language as a scaffold for learning
Research on constructing and evaluating machine learning models is driven almost exclusively by examples. We specify the behavior of sentiment classifiers with labeled documents, guide learning of robot policies by assigning scores to...