Michael Douglas
February 25, 2020 - 4:00 pm
Singleton Auditorium
Michael Douglas, Stony Brook
Title: How will we do mathematics in 2030?
We make the case that over the coming decade, computer-assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies...
February 24, 2020 - 1:30 pm
BEHIND THE PAPER: Moving away from alchemy into the age of science for deep learning, by Andrzej Banburski. Imagine you're back in elementary school and just took your first statistics course on fitting models to data. One thing you're sure about is that a good model should have fewer parameters than data points (think of fitting ten data points with a line, i.e., two parameters); otherwise you'll ruin the predictivity of your model by overfitting....
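The teaser's ten-points-versus-a-line intuition can be sketched in a few lines of NumPy. This is an illustrative example (the data and degrees are hypothetical, not from the paper): a two-parameter line fitted to ten noisy points retains some residual error, while a nine-degree polynomial (ten parameters, one per point) drives its training error to essentially zero by interpolating the noise, which is the classical overfitting worry the post starts from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy points drawn from an underlying line y = 2x + 1.
x = np.linspace(0, 1, 10)
y = 2 * x + 1 + rng.normal(scale=0.2, size=10)

# Two-parameter model: a straight line (degree 1).
line = np.polyfit(x, y, deg=1)

# Ten-parameter model: a degree-9 polynomial, one parameter per point.
poly = np.polyfit(x, y, deg=9)

# Training residuals: the line keeps some error, the polynomial
# interpolates the noise almost exactly.
line_resid = np.max(np.abs(np.polyval(line, x) - y))
poly_resid = np.max(np.abs(np.polyval(poly, x) - y))
```

Classical statistics reads the near-zero `poly_resid` as a warning sign; the surprise the post goes on to discuss is that modern deep networks, despite being heavily overparameterized in exactly this sense, often generalize well anyway.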
February 18, 2020 - 4:00 pm
MIT 46-5165
Andrei Barbu, Katz Lab
February 14, 2020 - 12:00 pm
Professor Tomaso Poggio, Dr. Andrzej Banburski and M.Sc. Qianli Liao from the Center for Brains, Minds and Machines, located at the Massachusetts Institute of Technology, won the first edition of the international Scientific Award "Ratio et Spes", established jointly by the Nicolaus Copernicus University in Toruń and the Vatican Foundation Joseph Ratzinger-Benedict XVI. The prize will be presented in Toruń on February 19, the University Day,...
February 11, 2020 - 4:00 pm
MIT 46-5165
Tiago Marques
Abstract: Object recognition relies on the hierarchical processing of visual information along the primate ventral stream. Artificial neural networks (ANNs) recently achieved unprecedented accuracy in predicting neuronal responses in different cortical areas and primate behavior. In this talk, I...
February 11, 2020 - 11:30 am
Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects. Kris Brewer | Center for Brains, Minds and Machines Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognize her face? “Yes, of course,” you probably are thinking. If this is true, it would mean that our visual system...
February 4, 2020 - 4:00 pm
Singleton Auditorium
Leslie Pack Kaelbling, CSAIL
Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the...
January 15, 2020 - 12:00 pm
by Sabbi Lall Visual art has found many ways of representing objects, from the ornate Baroque period to modernist simplicity. Artificial visual systems are somewhat analogous: from relatively simple beginnings inspired by key regions in the visual cortex, recent advances in performance have seen increasing complexity. “Our overall goal has been to build an accurate, engineering-level model of the visual system, to ‘reverse engineer’ visual...
January 14, 2020 - 8:30 am
Princeton’s Joshua Peterson and Harvard’s Arturo Deza flew earlier that week to Vancouver, British Columbia for the Neural Information Processing Systems (NeurIPS) conference, the world’s premier machine learning venue, where they organized the Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop along with MIT-CBMM’s Ratan Murty and Princeton’s Tom Griffiths. The SVRHM workshop was sponsored in part by the Center...
December 20, 2019 - 12:15 pm
A new algorithm wins multi-player, hidden role games. Kenneth I. Blum | Center for Brains, Minds and Machines In the wilds of the schoolyard, alliances and conflicts are in flux every day, amid the screams and laughter. How do people choose friend and foe? When should they cooperate? Much past research has been done on the emergence of cooperation in a variety of competitive games, which can be treated as controlled laboratories for exploring...
December 19, 2019 - 12:45 pm
As you read this line, you’re bringing each word into clear view for a brief moment while blurring out the rest, perhaps even ignoring the roar of a leaf blower outside. It may seem like a trivial skill, but it’s actually fundamental to almost everything we do. If the brain weren’t able to pick and choose what portion of the incoming flood of sensory information should get premium processing, the world would look like utter chaos—an...
December 17, 2019 - 4:00 pm
MIT 46-5165
Jacob Andreas
Title: Language as a scaffold for learning
Research on constructing and evaluating machine learning models is driven almost exclusively by examples. We specify the behavior of sentiment classifiers with labeled documents, guide learning of robot policies by assigning scores to...
December 12, 2019 - 1:30 pm
Stimuli that sound or look like gibberish to humans are indistinguishable from naturalistic stimuli to deep networks. Kenneth I. Blum | Center for Brains, Minds and Machines When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to...
December 10, 2019 - 2:15 pm
Objects are posed in varied positions and shot at odd angles to spur new AI techniques. Kim Martineau | MIT Quest for Intelligence Computer vision models have learned to identify objects in photos so accurately that some outperform humans on certain datasets. But when those same object detectors are turned loose in the real world, their performance noticeably drops, creating reliability concerns for self-driving cars and other safety-...
December 10, 2019 - 11:30 am
Object recognition models have improved by leaps and bounds over the past decade, but they’ve got a long way to go where accuracy is concerned. That’s the conclusion of a joint team from the Massachusetts Institute of Technology and IBM, which recently released a data set — ObjectNet — designed to illustrate the performance gap between machine learning algorithms and humans. Unlike many existing data sets, which feature photos taken from Flickr...