Seminars

Can we Contain Covid-19 without Locking-down the Economy?

Mar 31, 2020 - 1:00 pm
Venue: Zoom Webinar - Registration Required
Speaker/s: Profs. Amnon Shashua and Shai Shalev-Shwartz, The Hebrew University of Jerusalem, Israel

Registration is required, please see details below.

Abstract: We present an analysis of a risk-based selective quarantine model where the population is divided into low- and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd immunity. We tackle the question of whether this model is safe, in the sense that the health system can contain the number of low-risk people who require severe ICU care (such as life-support systems).
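For intuition, here is a toy two-group simulation of the kind of dynamics such a model considers. This is a back-of-the-envelope Python sketch with illustrative parameter values, not the analysis in the memo: the high-risk group is held fully quarantined while a standard SIR recursion runs on the low-risk group until the epidemic burns out.

    # Toy two-group SIR sketch of risk-based selective quarantine.
    # Parameters and structure are illustrative assumptions, not the memo's model.
    N_LOW = 8_000_000          # assumed size of the low-risk (circulating) group
    R0, GAMMA = 2.4, 1 / 14    # assumed reproduction number and daily recovery rate
    BETA = R0 * GAMMA          # implied daily transmission rate
    HERD = 1 - 1 / R0          # herd-immunity threshold for the low-risk group

    s, i, r = N_LOW - 100.0, 100.0, 0.0   # high-risk group stays fully quarantined
    days = 0
    while i > 1:               # one Euler step per day
        new_inf = BETA * s * i / N_LOW
        new_rec = GAMMA * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        days += 1

    print(f"~{days} days; immune fraction {r / N_LOW:.2f} vs threshold {HERD:.2f}")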

Link to related CBMM Memo: https://bit.ly/2UCMbdl

Prof. Amnon Shashua: https://www.cs.huji.ac.il/~shashua/  and https://www.mobileye.com/about/management/

Prof. Shai Shalev-Shwartz’s research website: https://www.cs.huji.ac.il/~shais/

Register in advance for this webinar:

https://mit.zoom.us/webinar/register/WN_OcTAwqtAQxKzqQgaX-jjgQ

After registering, you will receive a confirmation email containing information about joining the webinar.

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Doing for our robots what nature did for us

Feb 4, 2020 - 4:00 pm
Venue: Singleton Auditorium
Address: Singleton (46-3002), 43 Vassar Street, Cambridge MA 02139
Speaker/s: Leslie Pack Kaelbling, CSAIL

Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

Canceled: Brains, Minds + Machines Seminar Series: Hypernetworks and a New Feedback Model

Mar 16, 2020 - 4:00 pm
Venue: Singleton Auditorium
Address: Singleton (46-3002), 43 Vassar Street, Cambridge MA 02139
Speaker/s: Lior Wolf, Tel Aviv University and Facebook AI Research

Please note that this talk has been canceled.

We will reschedule the talk as soon as possible.

Abstract: Hypernetworks, also known as dynamic networks, are neural networks in which the weights of at least some of the layers vary dynamically based on the input. Such networks have composite architectures in which one network predicts the weights of another network. I will briefly describe the early days of dynamic layers and present recent results from diverse domains: 3D reconstruction from a single image, image retouching, electrical circuit design, decoding block codes, graph hypernetworks for bioinformatics, and action recognition in video. Finally, I will present a new hypernetwork-based model for the role of feedback in neural computations.
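For readers who have not met the construct, here is a minimal hypernetwork in PyTorch: one small network predicts, per input, the weights of a dynamic linear layer. All sizes are made up for illustration; this is a toy instance of the idea, not any of the systems from the talk.

    # Minimal hypernetwork: the weights of the final linear map are
    # predicted from the input itself rather than stored as parameters.
    import torch
    import torch.nn as nn

    class HyperLinear(nn.Module):
        def __init__(self, in_dim=16, out_dim=4, hidden=32):
            super().__init__()
            # Hypernetwork: input -> (out_dim * in_dim) weights + out_dim biases
            self.hyper = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim * in_dim + out_dim),
            )
            self.in_dim, self.out_dim = in_dim, out_dim

        def forward(self, x):                        # x: (batch, in_dim)
            p = self.hyper(x)                        # per-example parameters
            W = p[:, : self.out_dim * self.in_dim].view(-1, self.out_dim, self.in_dim)
            b = p[:, self.out_dim * self.in_dim :]
            # Dynamic layer: y = W(x) x + b(x), with weights that vary by input
            return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1) + b

    y = HyperLinear()(torch.randn(8, 16))            # -> shape (8, 4)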

Organizer: Frederico Azevedo, Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: How will we do mathematics in 2030?

Feb 25, 2020 - 4:00 pm
Venue: Singleton Auditorium
Address: Singleton (46-3002), 43 Vassar Street, Cambridge MA 02139
Speaker/s: Michael Douglas, Stony Brook

Title: How will we do mathematics in 2030?

Abstract: We make the case that over the coming decade, computer-assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies such as formal knowledge repositories, semantic search, and intelligent textbooks.

After a short review of the state of the art, we survey directions where we expect progress, such as mathematical search and formal abstracts, developments in computational mathematics, integration of computation into textbooks, and organizing and verifying large calculations and proofs. For each, we try to identify the barriers and potential solutions.
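As a concrete taste of the interactive-verification side, here is a trivial machine-checked proof in Lean 4 syntax (a generic illustration, not an example from the talk): the checker accepts a declaration only if the given term really proves the stated proposition.

    -- Accepted only if the proof terms actually establish the statements.
    theorem two_plus_two : 2 + 2 = 4 := rfl
    example (a b : Nat) : a + b = b + a := Nat.add_comm a b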

Organizer: Frederico Azevedo, Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Feedforward and feedback processes in visual recognition

Nov 5, 2019 - 4:00 pm
Venue: Singleton Auditorium
Address: 43 Vassar Street, Cambridge MA 02139
Speaker/s: Thomas Serre, Cognitive, Linguistic & Psychological Sciences Department, Carney Institute for Brain Science, Brown University

Title: Feedforward and feedback processes in visual recognition

Abstract: Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive fields that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture which addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
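To make the feedforward-versus-recurrent contrast concrete, here is a generic convolutional recurrence in PyTorch: a feedforward encoding is iteratively refined by a shared recurrent convolution. This is a schematic sketch with assumed sizes, not the speaker's circuit model.

    # Schematic recurrent refinement of a feedforward feature map.
    import torch
    import torch.nn as nn

    class RecurrentConv(nn.Module):
        def __init__(self, channels=16, steps=5):
            super().__init__()
            self.feedforward = nn.Conv2d(3, channels, 3, padding=1)
            self.recurrent = nn.Conv2d(channels, channels, 3, padding=1)
            self.steps = steps

        def forward(self, x):
            h = torch.relu(self.feedforward(x))
            for _ in range(self.steps):                 # same weights reused each step;
                h = torch.relu(h + self.recurrent(h))   # each pass widens spatial context
            return h

    out = RecurrentConv()(torch.randn(1, 3, 32, 32))    # -> (1, 16, 32, 32)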

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Beyond Empirical Risk Minimization: the lessons of deep learning

Oct 28, 2019 - 4:00 pm
Venue: Singleton Auditorium
Address: 43 Vassar Street, Cambridge MA 02139
Speaker/s: Mikhail Belkin, Professor, The Ohio State University - Department of Computer Science and Engineering, Department of Statistics, Center for Cognitive Science

Title: Beyond Empirical Risk Minimization: the lessons of deep learning

Abstract: "A model with zero training error is overfit to the training data and will typically generalize poorly" goes statistical textbook wisdom. Yet, in modern practice, over-parametrized deep networks with near-perfect fit on training data still show excellent test performance. This apparent contradiction points to troubling cracks in the conceptual foundations of machine learning. While classical analyses of Empirical Risk Minimization rely on balancing the complexity of predictors with training error, modern models are best described by interpolation. In that paradigm, a predictor is chosen by minimizing (explicitly or implicitly) a norm corresponding to a certain inductive bias over a space of functions that fit the training data exactly. I will discuss the nature of the challenge to our understanding of machine learning and point the way forward to first analyses that account for the empirically observed phenomena. Furthermore, I will show how classical and modern models can be unified within a single "double descent" risk curve, which subsumes the classical U-shaped bias-variance trade-off.

Finally, as an example of a particularly interesting inductive bias, I will show evidence that deep over-parametrized autoencoder networks, trained with SGD, implement a form of associative memory with training examples as attractor states.
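The double descent curve is easy to reproduce in a toy setting. The sketch below fits minimum-norm least squares on random nonlinear features (all sizes are assumptions for illustration, not the speaker's experiments); test error typically spikes near the interpolation threshold p ≈ n and then falls again as p grows.

    # Double descent with random-features regression and min-norm fits.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 20
    X = rng.standard_normal((n, d))
    w = rng.standard_normal(d)
    y = X @ w + 0.5 * rng.standard_normal(n)
    X_test = rng.standard_normal((1000, d))
    y_test = X_test @ w

    for p in [10, 50, 90, 100, 110, 200, 1000]:    # number of random features
        V = rng.standard_normal((d, p)) / np.sqrt(d)
        feats = lambda Z: np.tanh(Z @ V)           # random nonlinear features
        beta = np.linalg.pinv(feats(X)) @ y        # min-norm fit; interpolates once p >= n
        mse = np.mean((feats(X_test) @ beta - y_test) ** 2)
        print(f"p={p:5d}  test MSE={mse:8.3f}")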

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Quantum Computing: Current Approaches and Future Prospects - Jack Hidary

Oct 2, 2019 - 11:00 am
Venue: Singleton Auditorium
Address: MIT Bldg 46 Rm 3002, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Jack Hidary, Alphabet X, formerly Google X

Abstract: Jack Hidary will take us through the nascent but promising field of quantum computing and his new book, Quantum Computing: An Applied Approach.

Bio: Jack D. Hidary is a research scientist in quantum computing and in AI at Alphabet X, formerly Google X. He and his group develop and research algorithms for NISQ-regime quantum processors, as well as creating new software libraries for quantum computing. In the AI field, Jack and his group focus on fundamental research, such as the generalization of deep networks, as well as applied AI technologies.

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy

Oct 29, 2019 - 4:00 pm
Venue: Star Seminar Room (Stata D463)
Address: Stata D463, Building 32, 32 Vassar Street, Cambridge, MA 02139
Speaker/s: Thomas Icard, Stanford

Abstract: How might we assess the expressive capacity of different classes of probabilistic generative models? The subject of this talk is an approach that appeals to machines of increasing strength (finite-state, recursive, etc.) or, equivalently, to probabilistic grammars of increasing complexity, giving rise to a probabilistic version of the familiar Chomsky hierarchy. Many common probabilistic models — hidden Markov models, generative neural networks, probabilistic programming languages, etc. — naturally fit into the hierarchy. The aim of the talk is to give as comprehensive a picture as possible of the landscape of distributions that can be expressed at each level in the hierarchy. Of special interest is what this pattern of results might mean for cognitive modeling.
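To fix ideas, here is sampling at the context-free level of such a hierarchy, using a made-up two-rule probabilistic grammar; the strings a^n b^n it generates have a support that no finite-state machine can match.

    # Sampler for a toy probabilistic context-free grammar:
    #   S -> "a" S "b"  with prob 0.4
    #   S -> "ab"       with prob 0.6
    # Subcritical branching (0.4 < 0.5), so sampling halts with probability 1.
    import random

    def sample_S():
        if random.random() < 0.4:
            return "a" + sample_S() + "b"
        return "ab"

    random.seed(0)
    print([sample_S() for _ in range(5)])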

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: A distributional point of view on hierarchy

Sep 17, 2019 - 4:00 pm
Venue: MIT Building 46-3002 (Singleton Auditorium)
Speaker/s: Maia Fraser, Assistant Professor, University of Ottawa

Abstract: Hierarchical learning is found widely in biological organisms. There are several compelling arguments for the advantages of this structure. Modularity (reusable components) and function approximation are two where theoretical support is readily available. Other, more statistical, arguments are surely also relevant; in particular, there is a sense that "hierarchy reduces generalization error". In this talk, I will bolster this claim from a distributional point of view and show how it gives rise to deep vs. shallow regret bounds in semi-supervised learning that can also be carried over to some reinforcement learning settings. The argument in both paradigms deals with partial observation, namely partially labeled data and partially observed states, respectively, and useful representations that can be learned therefrom. Examples include manifold learning and group-invariant features.

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: The topology of representation teleportation, regularized Oja's rule, and weight symmetry

Apr 2, 2019 - 4:00 pm
Venue: MIT Building 46-3002 (Singleton Auditorium)
Address: 43 Vassar St, Cambridge, MA 02139
Speaker/s: Dr. Jon Bloom, Broad Institute

Abstract: When trained to minimize reconstruction error, a linear autoencoder (LAE) learns the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this talk, I'll explain how this observation became the focus of a project on representation learning of neurons using single-cell RNA data. I'll then share how this focus led us to a satisfying conversation between numerical analysis, algebraic topology, random matrix theory, deep learning, and computational neuroscience. We'll see that an L2-regularized LAE learns the principal directions as the left singular vectors of the decoder, providing a simple and scalable PCA algorithm related to Oja's rule. We'll use the lens of Morse theory to smoothly parameterize all LAE critical manifolds and the gradient trajectories between them; and see how algebra and probability theory provide principled foundations for ensemble learning in deep networks, while suggesting new algorithms. Finally, we'll come full circle to neuroscience via the "weight transport problem" (Grossberg 1987), proving that L2-regularized LAEs are symmetric at all critical points. This theorem provides local learning rules by which maximizing information flow and minimizing energy expenditure give rise to less-biologically-implausible analogues of backpropagation, which we are excited to explore in vivo and in silico. Joint learning with Daniel Kunin, Aleksandrina Goeva, and Cotton Seed.

Project resources: https://github.com/danielkunin/Regularized-Linear-Autoencoders
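Alongside those resources, the central claim is easy to check numerically. The sketch below trains an L2-regularized LAE by plain gradient descent (dimensions, rank, regularization strength, and step size are illustrative assumptions) and compares the decoder's left singular vectors against the data's principal directions.

    # L2-regularized linear autoencoder: the decoder's left singular
    # vectors align with the top principal directions of the data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 10)) @ np.diag([5, 3, 1] + [0.1] * 7)  # anisotropic data
    k, lam, lr = 3, 1e-2, 1e-3
    W1 = 0.01 * rng.standard_normal((k, 10))   # encoder
    W2 = 0.01 * rng.standard_normal((10, k))   # decoder

    for _ in range(20_000):
        H = X @ W1.T                            # latent codes
        R = H @ W2.T - X                        # reconstruction residual
        gW2 = R.T @ H / len(X) + lam * W2       # gradients of loss + L2 penalty
        gW1 = (R @ W2).T @ X / len(X) + lam * W1
        W2 -= lr * gW2
        W1 -= lr * gW1

    U = np.linalg.svd(W2, full_matrices=False)[0]      # decoder's left singular vectors
    Vt = np.linalg.svd(X, full_matrices=False)[2][:k]  # top principal directions
    print(np.round(np.abs(U.T @ Vt.T), 2))  # ~ identity up to sign, unlike the unregularized LAE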

Short Bio: Jon Bloom is an Institute Scientist at the Stanley Center for Psychiatric Research within the Broad Institute of MIT and Harvard. In 2015, he co-founded the Models, Inference, and Algorithms Initiative and a team (Hail) building distributed systems used throughout academia and industry to uncover the biology of disease. In his youth, Jon did useless math at Harvard and Columbia and learned useful math by rebuilding MIT’s Intro to Probability and Statistics as a Moore Instructor and NSF postdoc. These days, he is exuberantly surprised to find the useless math may be useful after all.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu
