Quest | CBMM Seminar Series - A Theory of Appropriateness with Applications to Generative Artificial Intelligence

Mar 4, 2025 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker: Joel Leibo, senior staff research scientist at Google DeepMind and professor at King's College London

Abstract: What is appropriateness? Humans navigate a multi-scale mosaic of interlocking notions of what is appropriate for different situations. We act one way with our friends, another with our family, and yet another in the office. Likewise for AI, appropriate behavior for a comedy-writing assistant is not the same as appropriate behavior for a customer-service representative. What determines which actions are appropriate in which contexts? And what causes these standards to change over time? Since all judgments of AI appropriateness are ultimately made by humans, we need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it. In this talk, I will present a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series - Aligning deep networks with human vision will require novel neural architectures, data diets and training algorithms

Feb 11, 2025 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker: Thomas Serre, Brown University

Abstract: Recent advances in artificial intelligence have been mainly driven by the rapid scaling of deep neural networks (DNNs), which now contain unprecedented numbers of learnable parameters and are trained on massive datasets, covering large portions of the internet. This scaling has enabled DNNs to develop visual competencies that approach human levels. However, even the most sophisticated DNNs still exhibit strange, inscrutable failures that diverge markedly from human-like behavior—a misalignment that seems to worsen as models grow in scale.

In this talk, I will discuss recent work from our group addressing this misalignment via the development of DNNs that mimic human perception by incorporating computational, algorithmic, and representational principles fundamental to natural intelligence. First, I will review our ongoing efforts in characterizing human visual strategies in image categorization tasks and contrasting these strategies with modern deep nets. I will present initial results suggesting we must explore novel data regimens and training algorithms for deep nets to learn more human-like visual representations. Second, I will show results suggesting that neural architectures inspired by cortex-like recurrent neural circuits offer a compelling alternative to the prevailing transformers, particularly for tasks requiring visual reasoning beyond simple categorization.

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu
