Talks

CBMM Special Seminar: Panel Discussion on the relationship between engineering and science in CBMM and the field

Sep 29, 2020 - 4:00 pm
Speaker/s:  Profs. Jim DiCarlo, Tomaso A. Poggio, and Joshua Tenenbaum

Panel details:

Profs. Jim DiCarlo, Tomaso A. Poggio, and Joshua Tenenbaum will discuss and debate the relationship between engineering and science in CBMM and the field:

  • We all believe that if we want to understand how our brain computes intelligence, we need a synergistic combination of the science of brains and the engineering of machines.
  • We all agree that science and engineering are both equally important and should be equally deep and rigorous.
  • Beyond these shared beliefs — which are the soul of CBMM — there are of course many open questions where each one of us may hold different opinions that would be fun to discuss. 
  1. Is studying brains a top priority for AI? Do engineers need neuroscience? Current models for visual object categorization and synthetic text generation are thriving without new input from neuroscience, for example.
  2. What aspects of neuroscience are likely to improve AI?
  3. We have had difficulty developing neural network models of symbolic intelligence, intuitive physics, and intuitive psychology, for example. Are prospects better on the science side (real neurons and networks in experiments and models) or engineering (abstract formulations)?
  4. Will theoretical understanding of deep learning translate to a theoretical understanding of human intelligence?

This panel discussion will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/95884034610?pwd=d044U3ZtM0I3U3ZaM3A0UjVCQm94dz09

Passcode: 804263

Organizer:  Kenneth Blum Organizer Email:  cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: DeepONet: Learning nonlinear operators based on the universal approximation theorem of operators

Sep 15, 2020 - 4:00 pm
Venue:  Hosted via Zoom Speaker/s:  Prof. George Em Karniadakis, Brown University

Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions; a less well-known but powerful result is that a NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators suggests the potential of NNs for learning any continuous operator or complex system from scattered data. To realize this theorem, we design a new NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. In particular, we study different formulations of the input function space and their effect on the generalization error.
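The branch/trunk decomposition described in the abstract can be sketched in a few lines of numpy. This is an illustrative toy with untrained random weights, not the speaker's implementation; all layer sizes and names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # random, untrained weights, for illustrating shapes only
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 100, 32              # number of input sensors, shared latent width
branch = mlp([m, 64, p])    # encodes the input function u sampled at m sensors
trunk = mlp([1, 64, p])     # encodes query locations y in the output domain

def deeponet(u_sensors, y):
    # G(u)(y) is approximated by the inner product of branch and trunk outputs
    b = forward(branch, u_sensors[None, :])   # shape (1, p)
    t = forward(trunk, y.reshape(-1, 1))      # shape (n, p)
    return (t @ b.T).ravel()                  # shape (n,)

xs = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * xs)      # an example input function sampled at the sensors
y = np.linspace(0, 1, 10)       # query points in the output domain
out = deeponet(u, y)
print(out.shape)
```

In a trained DeepONet both subnetworks would be fit jointly so that the inner product approximates the target operator; here the forward pass only illustrates the two-network structure.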


This seminar talk will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/95815924103?pwd=Y0Zrd3hiQWdGN3k3SlVORFJFZkRwUT09

Passcode: 829729

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Doing for our robots what nature did for us

Feb 4, 2020 - 4:00 pm
Venue:  Singleton Auditorium Address:  Singleton(46-3002), 43 Vassar Street, Cambridge MA 02139 Speaker/s:  Leslie Pack Kaelbling, CSAIL

Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Organizer:  Jean Lawrence Organizer Email:  cbmm-contact@mit.edu

Canceled: Brains, Minds + Machines Seminar Series: Hypernetworks and a New Feedback Model

Mar 16, 2020 - 4:00 pm
Venue:  Singleton Auditorium Address:  Singleton(46-3002), 43 Vassar Street, Cambridge MA 02139 Speaker/s:  Lior Wolf, Tel Aviv University and Facebook AI Research.

Please note that this talk has been canceled.

We will reschedule this talk as soon as possible.


Abstract: Hypernetworks, also known as dynamic networks, are neural networks in which the weights of at least some of the layers vary dynamically based on the input. Such networks have composite architectures in which one network predicts the weights of another network. I will briefly describe the early days of dynamic layers and present recent results from diverse domains: 3D reconstruction from a single image, image retouching, electrical circuit design, decoding block codes, graph hypernetworks for bioinformatics, and action recognition in video. Finally, I will present a new hypernetwork-based model for the role of feedback in neural computations.
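The composite architecture described in the abstract (one network predicting the weights of another) can be sketched with numpy. This toy is hypothetical and untrained, not taken from the speaker's work; the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_out, d_z = 4, 3, 8   # main-layer input/output sizes, conditioning-input size

# hypernetwork: a single linear map from a conditioning input z
# to the flattened weights and bias of the main (dynamic) layer
W_h = rng.standard_normal((d_z, d_in * d_out + d_out)) * 0.1

def dynamic_layer(z, x):
    theta = z @ W_h                              # predicted parameter vector
    W = theta[: d_in * d_out].reshape(d_in, d_out)
    b = theta[d_in * d_out:]
    return np.tanh(x @ W + b)                    # main layer, weights vary with z

z = rng.standard_normal(d_z)        # e.g. an embedding of the current input or context
x = rng.standard_normal((5, d_in))  # a batch of inputs to the main layer
y = dynamic_layer(z, x)
print(y.shape)
```

The key point is that `W` and `b` are not stored parameters of the main layer: they are recomputed from `z` on every forward pass, so the effective weights vary dynamically with the input.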

Organizer:  Frederico Azevedo, Jean Lawrence Organizer Email:  cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: How will we do mathematics in 2030?

Feb 25, 2020 - 4:00 pm
Venue:  Singleton Auditorium Address:  Singleton(46-3002), 43 Vassar Street, Cambridge MA 02139 Speaker/s:  Michael Douglas, Stony Brook

Title:  How will we do mathematics in 2030?

Abstract:
We make the case that over the coming decade, computer assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies such as formal knowledge repositories, semantic search and intelligent textbooks.

After a short review of the state of the art, we survey directions where we expect progress, such as mathematical search and formal abstracts, developments in computational mathematics, integration of computation into textbooks, and organizing and verifying large calculations and proofs. For each we try to identify the barriers and potential solutions.

Organizer:  Frederico Azevedo, Jean Lawrence Organizer Email:  cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Feedforward and feedback processes in visual recognition

Nov 5, 2019 - 4:00 pm
Venue:  Singleton Auditorium Address:  43 Vassar Street, Cambridge MA 02139 Speaker/s:  Thomas Serre, Cognitive, Linguistic & Psychological Sciences Department, Carney Institute for Brain Science, Brown University

Title: Feedforward and feedback processes in visual recognition

Abstract: Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive fields that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture which addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

CBMM Special Seminar: Beyond Empirical Risk Minimization: the lessons of deep learning

Oct 28, 2019 - 4:00 pm
Venue:  Singleton Auditorium Address:  43 Vassar Street, Cambridge MA 02139 Speaker/s:  Mikhail Belkin, Professor, The Ohio State University - Department of Computer Science and Engineering, Department of Statistics, Center for Cognitive Science

Title: Beyond Empirical Risk Minimization: the lessons of deep learning

Abstract: "A model with zero training error is overfit to the training data and will typically generalize poorly" goes statistical textbook wisdom. Yet, in modern practice, over-parametrized deep networks with near perfect fit on training data still show excellent test performance. This apparent contradiction points to troubling cracks in the conceptual foundations of machine learning. While classical analyses of Empirical Risk Minimization rely on balancing the complexity of predictors with training error, modern models are best described by interpolation. In that paradigm a predictor is chosen by minimizing (explicitly or implicitly) a norm corresponding to a certain inductive bias over a space of functions that fit the training data exactly. I will discuss the nature of the challenge to our understanding of machine learning and point the way forward to first analyses that account for the empirically observed phenomena. Furthermore, I will show how classical and modern models can be unified within a single "double descent" risk curve, which subsumes the classical U-shaped bias-variance trade-off.

Finally, as an example of a particularly interesting inductive bias, I will show evidence that deep over-parametrized autoencoder networks, trained with SGD, implement a form of associative memory with training examples as attractor states.
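The interpolation paradigm the abstract describes can be illustrated with the minimum-norm least-squares fit, which achieves zero training error whenever there are more parameters than samples. A toy numpy sketch (an illustration of the idea, not an example from the talk):

```python
import numpy as np

rng = np.random.default_rng(2)

# over-parametrized linear regression: fewer samples than parameters
n, d = 10, 50
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# among all weight vectors that fit the data exactly, the pseudoinverse
# picks the one of minimum Euclidean norm, i.e. the min-norm interpolant
w = np.linalg.pinv(X) @ y

train_err = np.max(np.abs(X @ w - y))
print(train_err < 1e-8)
```

Here the implicit inductive bias is the Euclidean norm of `w`; the talk's point is that such norm-minimizing interpolants can generalize well despite fitting the training data perfectly.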

Organizer:  Jean Lawrence Organizer Email:  cbmm-contact@mit.edu

CBMM Special Seminar: Quantum Computing: Current Approaches and Future Prospects - Jack Hidary

Oct 2, 2019 - 11:00 am
Venue:  Singleton Auditorium Address:  MIT Bldg 46 Rm 3002, 43 Vassar Street, Cambridge MA 02139   Speaker/s:  Jack Hidary, Alphabet X, formerly Google X

Abstract: Jack Hidary will take us through the nascent but promising field of quantum computing and his new book, Quantum Computing: An Applied Approach.

Bio: Jack D. Hidary is a research scientist in quantum computing and in AI at Alphabet X, formerly Google X. He and his group develop and research algorithms for NISQ-regime quantum processors as well as create new software libraries for quantum computing. In the AI field, Jack and his group focus on fundamental research, such as the generalization of deep networks, as well as applied AI technologies.

Organizer:  Kathleen Sullivan Organizer Email:  cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy

Oct 29, 2019 - 4:00 pm
Venue:  Star Seminar Room (Stata D463) Address:  Stata D463, Building 32, 32 Vassar Street Cambridge, MA 02139 Speaker/s:  Thomas Icard, Stanford

Abstract: How might we assess the expressive capacity of different classes of probabilistic generative models? The subject of this talk is an approach that appeals to machines of increasing strength (finite-state, recursive, etc.) or, equivalently, to probabilistic grammars of increasing complexity, giving rise to a probabilistic version of the familiar Chomsky hierarchy. Many common probabilistic models — hidden Markov models, generative neural networks, probabilistic programming languages, etc. — naturally fit into the hierarchy. The aim of the talk is to give as comprehensive a picture as possible of the landscape of distributions that can be expressed at each level in the hierarchy. Of special interest is what this pattern of results might mean for cognitive modeling.
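As a small illustration of the separation between levels of such a hierarchy, a probabilistic context-free grammar can generate a distribution whose support is the non-regular language a^n b^n, which no probabilistic finite-state generator can match. A hypothetical Python sketch (not from the talk):

```python
import random

random.seed(0)

# probabilistic CFG: S -> "a" S "b" with probability 0.5, S -> "" otherwise.
# Every sampled string is in { a^n b^n }, a non-regular support.
def sample_S():
    if random.random() < 0.5:
        return "a" + sample_S() + "b"
    return ""

strings = [sample_S() for _ in range(1000)]

# the grammar guarantees balanced strings by construction
print(all(s.count("a") == s.count("b") for s in strings))
```

A probabilistic finite-state machine (the hierarchy's lowest level) can only place mass on a regular support, so no setting of its transition probabilities reproduces this distribution.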

Organizer:  Frederico Azevedo, Hector Penagos Organizer Email:  cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: A distributional point of view on hierarchy

Sep 17, 2019 - 4:00 pm
Venue:  MIT Building 46-3002 (Singleton Auditorium) Speaker/s:  Maia Fraser, Assistant Professor University of Ottawa

Abstract: Hierarchical learning is found widely in biological organisms, and there are several compelling arguments for the advantages of this structure. Modularity (reusable components) and function approximation are two for which theoretical support is readily available. Other, more statistical, arguments are surely also relevant; in particular, there is a sense that "hierarchy reduces generalization error." In this talk, I will bolster this claim from a distributional point of view and show how it gives rise to deep vs. shallow regret bounds in semi-supervised learning that can also be carried over to some reinforcement learning settings. The argument in both paradigms deals with partial observation, namely partially labeled data and partially observed states, respectively, and the useful representations that can be learned therefrom. Examples include manifold learning and group-invariant features.

Organizer:  Frederico Azevedo, Hector Penagos Organizer Email:  cbmm-contact@mit.edu
