Seminars

CBMM Brains, Minds, and Machines Seminar Series: Something Else About Working Memory

May 11, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Prof. Earl K. Miller, Picower Institute for Learning and Memory, BCS Dept., MIT

Host: Prof. Matt Wilson (MIT)

Abstract: Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses are sparse and interact with brain rhythms of different frequencies. Higher-frequency gamma (>35 Hz) rhythms help carry the contents of working memory, while lower-frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.

---

This seminar talk will be hosted remotely via Zoom.

Zoom link: https://mit.zoom.us/j/96121350408?pwd=ZU1seGNLSWkvS2xBTGM3SlhjaDNXQT09
Passcode: 405475

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Brains, Minds, and Machines Seminar Series: Compositional Generative Networks & Adversarial Examiners: Beyond the Limitations of Current AI

May 4, 2021 - 2:30 pm
Venue: Hosted via Zoom
Speaker/s: Prof. Alan L. Yuille (JHU)

Abstract: Current AI visual algorithms are very limited compared to the robustness and flexibility of the human visual system. These limitations, however, are often obscured by the standard performance measures (SPMs) used to evaluate vision algorithms, which favor data-driven methods. SPMs are problematic due to the combinatorial complexity of natural images and lead to unrealistic expectations about the effectiveness of current algorithms. We argue that tougher performance measures, such as out-of-distribution testing and adversarial examiners, are required to evaluate vision algorithms realistically and hence to encourage AI vision systems that can achieve human-level performance. We illustrate this by studying object classification, where the algorithms are trained on standard datasets with limited occlusion but are tested on datasets where the objects are severely occluded (out-of-distribution testing) and/or where adversarial patches are placed in the images (adversarial examiners). We show that standard Deep Nets perform badly under these types of tests, but Generative Compositional Nets, which perform approximate analysis by synthesis, are much more robust.
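The out-of-distribution occlusion testing described above can be sketched in a few lines. This is a minimal illustration, not the authors' benchmark: `occlude`, `occlusion_accuracy`, and the generic `classifier` callable are hypothetical names, and a random square occluder stands in for the structured occlusion used in the actual datasets.

```python
import numpy as np

def occlude(image, frac=0.5, fill=0.0, rng=None):
    """Zero out a random square covering roughly `frac` of the image area."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    side = int(np.sqrt(frac * h * w))
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    out = image.copy()
    out[top:top + side, left:left + side] = fill
    return out

def occlusion_accuracy(classifier, images, labels, frac=0.5, seed=0):
    """Accuracy of a classifier on occluded copies of a labelled test set."""
    rng = np.random.default_rng(seed)
    preds = [classifier(occlude(img, frac, rng=rng)) for img in images]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))
```

A model that is robust in the sense argued for above would keep `occlusion_accuracy` close to its clean-test accuracy as `frac` grows, rather than collapsing.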

Zoom link: https://mit.zoom.us/j/95505708173?pwd=cjBLVlZWYXNXcDBIanRKMWZNNXZuZz09
Passcode: 522130

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Brains, Minds, and Machines Seminar Series: Common Sense Physics and Structured Representation in the Era of Deep Learning

Mar 2, 2021 - 2:00 pm
Venue: Hosted via Zoom
Speaker/s: Prof. Murray Shanahan, Imperial College London

Host: Prof. Josh Tenenbaum (MIT)

Abstract:  The challenge of endowing computers with common sense remains one of the major obstacles to achieving the sort of general artificial intelligence envisioned by the field’s founders. A large part of human common sense pertains to the physics of the everyday world, and rests on a foundational understanding of such concepts as objects, motion, obstruction, containers, portals, support, and so on. In this talk I will discuss the challenge of common sense physics in the context of contemporary progress in deep reinforcement learning, and the question of how deep neural networks can learn representations at the required level of abstraction.

Zoom link: https://mit.zoom.us/j/92856609553

---

*Please note the change in start time: this talk will start at 2 PM EST.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Brains, Minds, and Machines Seminar Series: Computation and Learning with Assemblies of Neurons

Feb 23, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Prof. Santosh Vempala, Georgia Tech

Host: Prof. Tomaso Poggio (MIT)

Abstract: Despite great advances in ML, and in our understanding of the brain at the level of neurons, synapses, and neural circuits, we still have no satisfactory explanation for the brain's performance in perception, cognition, language, memory, behavior; as Nobel laureate Richard Axel put it, “we have no logic for translating neural activity into thought and action”. The Assembly Calculus (AC) is a framework to fill this gap, a computational model whose basic data type is the assembly, a large subset of neurons whose simultaneous excitation is tantamount to the subject's thinking of an object, idea, episode, or word. The AC provides a repertoire of operations ("project", "reciprocal-project", "associate", "pattern-complete", etc.) whose implementation relies only on Hebbian plasticity and inhibition, and encompasses a complete computational system, thereby enabling complex function. Very recently, it has been shown, rigorously and in simulation, that the AC can learn to classify samples from well-separated classes. For basic concept classes in high dimension, an assembly can be formed and recalled for each class, and these assemblies are distinguishable as long as the input classes are sufficiently separated. Viewed as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision: all attributes expected of a brain-like mechanism. The talk will highlight several fascinating questions that arise, from the convergence of assemblies to their unexpected generalization abilities.
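A toy simulation can make the "project" operation concrete: random connectivity, top-k inhibition, and multiplicative Hebbian updates, repeated until a stable assembly forms. This is a sketch of the general mechanism under assumed parameters (n neurons, cap k, connection probability p, plasticity rate beta), not the authors' code.

```python
import numpy as np

def project(n=1000, k=50, p=0.05, beta=0.1, rounds=20, seed=0):
    """One AC 'project' operation: a stimulus of k cells drives a target area
    through random synapses; k-winners-take-all inhibition picks the firing
    set each round, and Hebbian plasticity strengthens synapses into winners."""
    rng = np.random.default_rng(seed)
    W_in = (rng.random((k, n)) < p).astype(float)   # stimulus -> area synapses
    W_rec = (rng.random((n, n)) < p).astype(float)  # recurrent area synapses
    winners = np.zeros(n, dtype=bool)
    history = []
    for _ in range(rounds):
        # Total synaptic drive: stimulus input plus recurrent input from winners.
        drive = W_in.sum(axis=0) + W_rec[winners].sum(axis=0)
        new = np.zeros(n, dtype=bool)
        new[np.argsort(drive)[-k:]] = True          # inhibition: only top-k fire
        W_in[:, new] *= 1 + beta                    # Hebbian strengthening
        W_rec[np.ix_(winners, new)] *= 1 + beta
        history.append(new)
        winners = new
    return history

hist = project()
stability = int(np.sum(hist[-1] & hist[-2]))  # overlap of the last two winner sets
```

The convergence result mentioned in the abstract corresponds to `stability` approaching k: the rich-get-richer effect of plasticity locks in a stable winner set.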

This is joint work with Christos Papadimitriou, Max Dabagia, Mirabel Reid, and Dan Mitropolsky.

Zoom link: https://mit.zoom.us/j/97301534627

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Brains, Minds, and Machines Seminar Series: Representations vs Algorithms: Symbols and Geometry in Robotics

Nov 3, 2020 - 4:00 pm
Speaker/s:  Nick Roy, CSAIL, AeroAstro, MIT

Abstract: In the last few years, the ability of robots to understand and operate in the world around them has advanced considerably. Examples include the growing number of self-driving car systems, the considerable work in robot mapping, and the growing interest in home and service robots. However, one limitation is that robots most often reason and plan using very geometric models of the world, such as point features, dense occupancy grids, and action cost maps. To plan and reason over long spatial and temporal scales, and to plan more complex missions, robots need to be able to reason about abstract concepts such as landmarks, segmented objects, and tasks (among other representations). I will talk about recent work in joint reasoning about semantic and physical representations, and what these joint representations mean for planning and decision making.

This seminar series talk will be hosted remotely via Zoom.

Zoom link: https://mit.zoom.us/j/96323330576

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Panel Discussion: Is the theory of Deep Learning relevant to applications?

Oct 27, 2020 - 4:00 pm
Speaker/s:  Panelists: Tomaso A Poggio (CBMM), Daniela L Rus (CSAIL), Max Tegmark (Physics), Lorenzo Rosasco (IIT), and Andrea Tacchetti (DeepMind)

Abstract: Deep Learning has enjoyed an impressive growth over the past few years in fields ranging from visual recognition to natural language processing. Improvements in these areas have been fundamental to the development of self-driving cars, machine translation, and healthcare applications. This progress has arguably been made possible by a combination of increases in computing power and clever heuristics, raising puzzling questions that lack full theoretical understanding. Here, we will discuss the relationship between the theory behind deep learning and its application.

This panel discussion will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/99126775953

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Panel Discussion on the relationship between engineering and science in CBMM and the field

Sep 29, 2020 - 4:00 pm
Speaker/s:  Profs. Jim DiCarlo, Tomaso A Poggio, and Joshua Tenenbaum

Panel details:

Profs. Jim DiCarlo, Tomaso A Poggio, and Joshua Tenenbaum will discuss and debate the relationship between engineering and science in CBMM and the field:

  • We all believe that if we want to understand how our brain computes intelligence, we need a synergistic combination of the science of brains and the engineering of machines.
  • We all agree that science and engineering are both equally important and should be equally deep and rigorous.
  • Beyond these shared beliefs — which are the soul of CBMM — there are of course many open questions where each one of us may hold different opinions that would be fun to discuss. 
  1. Is studying brains a top priority for AI? Do engineers need neuroscience? Current models for visual object categorization and synthetic text generation are thriving without new input from neuroscience, for example.
  2. What aspects of neuroscience are likely to improve AI?
  3. We have had difficulty developing neural network models of symbolic intelligence, intuitive physics, and intuitive psychology, for example. Are prospects better on the science side (real neurons and networks in experiments and models) or engineering (abstract formulations)?
  4. Will theoretical understanding of deep learning translate to a theoretical understanding of human intelligence?

This panel discussion will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/95884034610?pwd=d044U3ZtM0I3U3ZaM3A0UjVCQm94dz09

Passcode: 804263

Organizer: Kenneth Blum
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: DeepONet: Learning nonlinear operators based on the universal approximation theorem of operators

Sep 15, 2020 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Prof. George Em Karniadakis, Brown University

Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions; a less well-known but powerful result is that a NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the potential of NNs in learning any continuous operator or complex system from scattered data. To realize this theorem, we design a new NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study, in particular, different formulations of the input function space and their effect on the generalization error.
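The branch/trunk structure described above can be sketched with plain numpy: the branch net encodes the input function sampled at m sensor locations, the trunk net encodes the query location y, and the operator output G(u)(y) is their dot product. The weights here are random and untrained, and the layer sizes are illustrative assumptions, not the paper's configuration; this shows only the architecture, not the learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Build a random-weight MLP (tanh hidden layers) and return its forward pass."""
    params = [(rng.normal(0, 1 / np.sqrt(a), (a, b)), np.zeros(b))
              for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

m, width = 100, 40            # m sensor locations, shared latent width
branch = mlp([m, 64, width])  # encodes the input function u(x_1), ..., u(x_m)
trunk = mlp([1, 64, width])   # encodes the query location y

def deeponet(u_sensors, y):
    """G(u)(y) approximated as the dot product of branch and trunk encodings."""
    return float(branch(u_sensors) @ trunk(np.atleast_1d(y)))

# Example: evaluate the (untrained) operator on u(x) = sin(x) at y = 0.3.
xs = np.linspace(0, 1, m)
val = deeponet(np.sin(xs), 0.3)
```

Training would fit both nets jointly by regression on pairs of input functions and operator outputs; the dot-product structure is what the universal approximation theorem of operators licenses.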

This seminar talk will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/95815924103?pwd=Y0Zrd3hiQWdGN3k3SlVORFJFZkRwUT09

Passcode: 829729

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Virtual Seminar: Marco Baroni

Jun 23, 2020 - 2:00 pm
Venue: Zoom
Speaker/s: Marco Baroni, Facebook AI Research (Paris) and Catalan Institute for Research and Advanced Studies (Barcelona)

Title:
Is compositionality over-rated? A view from emergent neural network language analysis

Abstract:

Compositionality is the property whereby linguistic expressions that denote new composite meanings are derived by a rule-based combination of expressions denoting their parts. Linguists agree that compositionality plays a central role in natural language, accounting for its ability to express an infinite number of ideas by finite means.

"Deep" neural networks, for all their impressive achievements, often fail to quickly generalize to unseen examples, even when the latter display a predictable composite structure with respect to examples the network is already familiar with. This has led to interest in the topic of compositionality in neural networks: can deep networks parse language compositionally? how can we make them more sensitive to compositional structure? what does "compositionality" even mean in the context of deep learning?

I would like to address some of these questions in the context of recent work on language emergence in deep networks, in which we train two or more networks endowed with a communication channel to solve a task jointly, and study the communication code they develop. I will try to be precise about what "compositionality" means in this context, and I will report the results of proof-of-concept and larger-scale experiments suggesting that (non-circular) compositionality is not a necessary condition for good generalization. Moreover, I will show that there is often no reason to expect deep networks to find compositional languages more "natural" than highly entangled ones. I will conclude by suggesting that, if fast generalization is what we care about, we might as well focus directly on enhancing this property, without worrying about the compositionality of emergent neural network languages.

Please click the link below to join the webinar: 

https://mit.zoom.us/j/93213662313?pwd=N0F2eXUxT1gvRklCeFdDVzBZd0N5Zz09

Password: brains

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

Spatial Perception for Robots and Autonomous Vehicles: Certifiable Algorithms and Human-level Understanding

Apr 21, 2020 - 4:00 pm
Speaker/s:  Luca Carlone

Abstract:
Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search & rescue, demand a new generation of algorithms.

This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation in 3D point clouds and RGB images: our algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems, while circumventing the intrinsic computational intractability of typical perception problems.

The second effort targets high-level understanding. While humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our recent work on actionable hierarchical representations, 3D Dynamic Scene Graphs, and discuss their potential impact on planning and decision-making, human-robot interaction, long-term autonomy, and scene prediction. The creation of a Dynamic Scene Graph requires a variety of algorithms, ranging from model-based estimation to deep learning, and offers new opportunities for both researchers and practitioners.
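As a rough illustration of the kind of actionable hierarchical representation described, here is a minimal scene-graph sketch in Python: nodes live at abstraction layers (building, room, object, agent) and parent-child edges link layers. The layer names and fields here are hypothetical illustrations, not the actual 3D Dynamic Scene Graph schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    nid: str
    layer: str                                     # e.g. "building", "room", "object", "agent"
    attrs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)   # nodes one abstraction layer down

def add_child(parent, child):
    """Attach a child node and return it, for fluent graph construction."""
    parent.children.append(child)
    return child

def find(root, layer):
    """Collect all nodes at a given abstraction layer (depth-first)."""
    out = [root] if root.layer == layer else []
    for c in root.children:
        out.extend(find(c, layer))
    return out

# Toy graph: building -> room -> objects and agents.
b = Node("b0", "building")
kitchen = add_child(b, Node("r0", "room", {"name": "kitchen"}))
add_child(kitchen, Node("o0", "object", {"cls": "mug", "pose": (1.0, 0.5, 0.9)}))
add_child(kitchen, Node("a0", "agent", {"cls": "human"}))
```

A planner can then query at whichever layer matches the task ("which room is the mug in?") instead of reasoning directly over dense geometry, which is the point made in the paragraph above.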

Bio:
Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the best paper award at WAFR’16, the best student paper award at the 2018 Symposium on VLSI Circuits, and he was a best paper finalist at RSS’15. At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles”, which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.

Connect to the Webinar using this link: https://mit.zoom.us/j/95924561648

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu
