Seminars

Towards General Artificial Intelligence

Apr 20, 2016 - 5:00 pm
An illustration (pictured) shows a Go board, half drawn traditionally and half showing computer-calculated moves. Image credit: Google
Venue:  MIT Green Building, Bldg 54 Address:  Room 54-100, MIT Green Building Speaker/s:  Demis Hassabis, Google DeepMind

Abstract: Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world’s leading General Artificial Intelligence (AI) company, which was acquired by Google in 2014 in its largest-ever European acquisition. Demis will draw on his eclectic experiences as an AI researcher, neuroscientist and videogame designer to discuss what is happening at the cutting edge of AI research, including the recent historic AlphaGo match, its future potential impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind.

This talk is presented as part of the CBMM Annual Retreat.

Organizer:  Tomaso Poggio Organizer Email:  cbmm-contact@mit.edu

CBMM Special Seminar: Topological Treatment of Neural Activity and the Quantum Question Order Effect

Apr 26, 2016 - 4:00 pm
Seth Lloyd
Venue:  McGovern Institute for Brain Research Address:  Singleton Auditorium, MIT 46-3002 Speaker/s:  Seth Lloyd, MIT Department of Mechanical Engineering 

Abstract: The order in which one asks people questions affects the probability of their answers. Similarly, in quantum mechanics, the order in which measurements are performed affects the probability of their outcomes. The quantum order effect has a specific mathematical pattern, which, unexpectedly, is also obeyed by the human question order effect. This conjunction of the two order effects does not mean that the brain is processing information in an intrinsically quantum way, but rather that certain aspects of neural activity can apparently be captured by a linear projective structure, as in quantum mechanics. I introduce a topological treatment of neural activity and identify topological linear projective structures that might be responsible for the question order effect.
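As a minimal illustration of the measurement-order effect the abstract refers to (not an example from the talk itself; the state, angle, and projectors below are arbitrary choices): when two "yes" answers are modeled as projections onto non-commuting axes, the probability of answering yes to both questions depends on which question is asked first.

```python
import numpy as np

# "Yes" to question A projects onto one axis; "yes" to question B projects
# onto a rotated axis. Because the projectors do not commute, the joint
# probability of two "yes" answers depends on the question order.
theta = np.pi / 5                                # hypothetical angle between axes
PA = np.array([[1.0, 0.0], [0.0, 0.0]])          # projector for "yes" to A
v = np.array([np.cos(theta), np.sin(theta)])
PB = np.outer(v, v)                              # projector for "yes" to B

psi = np.array([np.cos(0.9), np.sin(0.9)])       # hypothetical initial state

def p_yes_yes(first, second, state):
    """P(yes to first question, then yes to second), via sequential projection."""
    after_first = first @ state
    return float(np.linalg.norm(second @ after_first) ** 2)

p_ab = p_yes_yes(PA, PB, psi)   # ask A, then B
p_ba = p_yes_yes(PB, PA, psi)   # ask B, then A
assert abs(p_ab - p_ba) > 1e-6  # non-commuting projectors: order matters
```

If the two projectors commuted (questions along the same axis), the two probabilities would coincide, which is why the pattern of their difference carries structural information.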

CBMM Special Seminar: Reading Large-scale Neural Codes Underlying Memory and Cognition in Behaving Animals

Nov 13, 2015 - 4:00 pm
Photo of Prof. Mark J. Schnitzer
Venue:  MIT Singleton Auditorium (46-3002) Address:  43 Vassar St., Cambridge MA 02139 MIT Bldg. 46., 3rd Floor, Room 46-3002 Speaker/s:  Mark J. Schnitzer

Prof. Mark J. Schnitzer, Departments of Biology and Applied Physics, Howard Hughes Medical Institute, Stanford University

Abstract: A longstanding challenge in neuroscience is to understand how the dynamics of large populations of individual neurons contribute to animal behavior and brain disease. Addressing this challenge has been difficult partly due to a lack of appropriate brain imaging technology for visualizing cellular dynamics in awake behaving animals. I will discuss several new optical technologies of this kind. The miniature integrated fluorescence microscope allows one to monitor the dynamics of up to ~1000 individual genetically identified neurons in behaving mice over weeks. I will describe ongoing studies using this technology to understand the neural codes underlying episodic, emotional and reward-related memories. Toward elucidating the interactions between brain areas during active behavior, multi-axis optical imaging can record the dynamics of two or more neural ensembles residing in different brain regions. Lastly, genetically encoded voltage indicators are progressing rapidly in their capacities to allow high-fidelity detection of neural spikes and accurate estimation of spike timing, and with further improvements might soon be ready for use in behaving animals.

Bio: Professor Schnitzer is an HHMI Investigator, the Co-Director of the Cracking the Neural Code Program, and a faculty member of the Neuroscience, Biophysics, and Molecular Imaging Programs in the Stanford School of Medicine, as well as of the Stanford Neurosciences Institute and Stanford Bio-X. Dr. Schnitzer has longstanding interests in neural circuit dynamics and optical imaging, and his optical innovations are used in over a hundred neuroscience labs in the USA, Europe and Asia, and in the neuropharmaceutical industry. The miniature integrated fluorescence microscope invented in his lab was named the 2013 Innovation of the Year by The Scientist magazine. Dr. Schnitzer has received the NIH Director’s Pioneer Award, the Biophysical Society’s Michael and Kate Bárány Award, and a Presidential Young Investigator Award, and was a finalist for the 2013 Israel Brain Prize. He is a member of the National Institutes of Health working group for President Obama's BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies).

Organizer:  Matt Wilson, Jon Newman Organizer Email:  mwilson@mit.edu

CBMM Special Seminar: Solving Global Coordination

Sep 9, 2015 - 4:00 pm
Jaan Tallinn
Venue:  MIT Singleton Auditorium, Bldg. 46-3002 Speaker/s:  Jaan Tallinn

CBMM will host a brief talk by Jaan Tallinn (followed by an extended Q&A): Solving Global Coordination

Jaan Tallinn is an Estonian computer scientist who participated in the development of Skype in 2002 and FastTrack/Kazaa, a file-sharing application, in 2000.

He graduated from the University of Tartu in 1996 with a BSc in Theoretical Physics with a thesis that involved traveling interstellar distances using warps in space-time.

Tallinn is a former member of the Estonian President's Academic Advisory Board. He is also one of the founders of the Centre for the Study of Existential Risk and the Future of Life Institute, and was a co-founder of the personalized medical research company MetaMed.

Organizer:  Tomaso Poggio Organizer Email:  tp@ai.mit.edu

Building newborn minds in virtual worlds

Apr 28, 2015 - 4:00 pm
Venue:  McGovern Institute for Brain Research at MIT, Room 46-3189 Address:  McGovern Seminar Room 46-3189, 3rd floor, 43 Vassar St., Cambridge MA 02139 Speaker/s:  Prof. Justin Wood, USC

Abstract: What are the origins of high-level vision: Is this ability hardwired by genes or learned during development? Although researchers have been wrestling with this question for over a century, progress has been hampered by two major limitations: (1) most newborn animals cannot be raised in controlled environments from birth, and (2) most newborn animals cannot be observed and tested for long periods of time. Thus, it has generally not been possible to characterize how specific visual inputs relate to specific cognitive outputs in the newborn brain.

To overcome these two limitations, I recently developed an automated, high-throughput controlled-rearing technique. This technique can be used to measure all of a newborn animal’s behavior (9 samples/second, 24 hours/day, 7 days/week) within strictly controlled virtual environments. In this talk, I will describe a series of controlled-rearing experiments that reveal how one high-level visual ability—invariant object recognition—emerges in the newborn brain. Further, I will show how these controlled-rearing data can be linked to models of visual cortex for characterizing the computations underlying newborn vision. More generally, I will argue that controlled rearing can serve as a critical tool for testing between different theories and models, both for developmental psychology and computational neuroscience.

Organizer:  Elizabeth Spelke, Joshua Tenenbaum

Brains, Minds and Machines Seminar Series: Towards a system-level theory of computation in the visual cortex

Apr 14, 2015 - 4:00 pm
Prof. Thomas Serre
Venue:  MIT: McGovern Institute Singleton Auditorium, 46-3002 Address:  43 Vassar Street, MIT Bldg 46, Cambridge, 02139 United States Speaker/s:  Prof. Thomas Serre, Brown University

Abstract: Perception involves a complex interaction between feedforward (bottom-up) sensory-driven inputs and feedback (top-down) attention and memory-driven processes. A mechanistic understanding of feedforward processing, and its limitations, is a necessary first step towards elucidating key aspects of perceptual functions and dysfunctions.

In this talk, I will review our ongoing effort towards the development of a large-scale, neurophysiologically accurate computational model of feedforward visual processing in the primate cortex. I will present experimental evidence from a recent electrophysiology study with awake behaving monkeys engaged in a rapid natural scene categorization task. The results suggest that bottom-up processes may provide a satisfactory description of the very first pass of information in the visual cortex. I will then survey recent work extending a feedforward hierarchical model from the processing of 2D shape to motion, depth and color. I will show that this bio-inspired approach to computer vision performs on par with, or better than state-of-the-art computer vision systems in several real-world applications. This demonstrates that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Dr. Serre is a Manning Assistant Professor in Cognitive, Linguistic & Psychological Sciences at Brown University. He received a PhD in computational neuroscience from MIT (Cambridge, MA) in 2006 and an MSc in EECS from Télécom Bretagne (Brest, France) in 2000. His research focuses on understanding the brain mechanisms underlying the recognition of objects and complex visual scenes using a combination of behavioral, imaging and physiological techniques. These experiments fuel the development of quantitative computational models that try not only to mimic the processing of visual information in the cortex but also to match human performance in complex visual tasks. He is the recipient of an NSF early career award and a DARPA young faculty award.

Organizer:  Tomaso Poggio Organizer Email:  tp@ai.mit.edu

Brains, Minds & Machines Seminar Series: Computer Vision that is changing our lives

Mar 23, 2015 - 4:00 pm
Prof. Amnon Shashua, Hebrew University, Co-founder, Chairman & CTO, Mobileye (NYSE:MBLY), OrCam.
Venue:  MIT: McGovern Institute Singleton Auditorium, 46-3002 Address:  43 Vassar Street, MIT Bldg 46, Cambridge, 02139 United States Speaker/s:  Prof. Amnon Shashua, Hebrew University, Co-founder, Chairman & CTO, Mobileye (NYSE:MBLY), OrCam.

Brief Biography:
Amnon Shashua holds the Sachs chair in computer science at the Hebrew University. He received his Ph.D. degree in 1993 from the AI lab at MIT, working on computational vision, where he pioneered work on multiple view geometry and the recognition of objects under variable lighting. His work on multiple view geometry received best paper awards at ECCV 2000, the Marr prize at ICCV 2001, and the Landau award in exact sciences in 2005. His work on graphical models received a best paper award at UAI 2008. Prof. Shashua was the head of the School of Engineering and Computer Science at the Hebrew University of Jerusalem during the term 2003–2005. He is also well known for founding startup companies in computer vision, and his latest brainchild, Mobileye, today employs 250 people developing systems-on-chip and computer vision algorithms for detecting pedestrians, vehicles, and traffic signs for driving assistance systems. For his industrial contributions Prof. Shashua received the 2004 Kaye Innovation award from the Hebrew University.

Organizer:  Tomaso Poggio

Reflexive Theory-of-Mind Reasoning in Games

Dec 2, 2014 - 9:00 pm
Prof. Jun Zhang
Venue:  MIT: McGovern Institute Singleton Auditorium, 46-3002 Address:  43 Vassar Street MIT Bldg 46 Cambridge, 02139 United States Speaker/s:  Prof. Jun Zhang, Department of Psychology and Department of Mathematics University of Michigan, Ann Arbor

Theory-of-mind (ToM) is the modeling of mental states (such as belief, desire, knowledge, perception) through recursive (“I think you think I think …”) reasoning in order to plan one’s action or anticipate others’ actions. Such reasoning forms the core of strategic analysis in the game-theoretic setting. Traditional analysis of rational behavior in games of complete information is centered on the axiom of “common knowledge,” according to which all players know something to be true, know that all players know it to be true, know that all players know all players know it to be true, etc. This axiom requires recursive modeling of players to the full depth, and seems to contradict human empirical behavior revealed in the behavioral game literature. Here, I propose that such deviation from normative analysis may be due to players’ building predictive mental models of their co-players based on experience and context without necessarily assuming a priori full rationality and common knowledge, rather than due to any lapse in “instrumental rationality” whereby players (and co-players) translate the predictions from their mental models into optimal choice. I investigate this mental model account of theory-of-mind reasoning by constructing a series of two-player, sequential-move matrix games, all terminating within a maximum of three steps. By carefully designing payoff matrices, the depth of recursive reasoning (i.e., first-order ToM versus second-order ToM) can be contrasted based on participants’ choice behavior in those games. Empirical findings support the idea that depth of ToM recursion (related to perspective-taking) and instrumental rationality (rational application of belief-desire to action) constitute separate processes.

Brains, Minds and Machines Seminar Series: Scientific Utopia: Improving Openness and Reproducibility in Scientific Research

Oct 28, 2014 - 4:00 pm
Brian Nosek, University of Virginia
Venue:  MIT: McGovern Institute Singleton Auditorium, 46-3002 Address:  43 Vassar Street, MIT Bldg 46, Cambridge, 02139 United States Speaker/s:  Brian Nosek, University of Virginia Professor in the Department of Psychology and co-founder of Project Implicit and the Center for Open Science

Abstract:

An academic scientist’s professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. These incentives inflate the rate of false effects in published science. When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulation. I will briefly review the evidence and challenges for reproducibility and then discuss some of the initiatives that aim to nudge incentives and create infrastructure that can improve reproducibility and accelerate scientific progress.

Biography:

Brian Nosek is Professor in the Department of Psychology at the University of Virginia and is co-founder of both the Center for Open Science and Project Implicit.

This talk is part of the Brains, Minds and Machines Seminar Series.

Organizer:  Rebecca Saxe

Brains, Minds and Machines Seminar Series: Neural Representations of Language Meaning

Sep 30, 2014 - 4:00 pm
Tom M. Mitchell: E. Fredkin University Professor and Chair of the Machine Learning Department School of Computer Science at Carnegie Mellon University
Venue:  MIT: McGovern Institute Singleton Auditorium, 46-3002 Address:  43 Vassar Street, MIT Bldg 46, Cambridge, 02139 United States Speaker/s:  Tom M. Mitchell: E. Fredkin University Professor and Chair of the Machine Learning Department, School of Computer Science at Carnegie Mellon University

Abstract:

How does the human brain use neural activity to create and represent meanings of words, sentences and stories?  One way to study this question is to give people text to read while scanning their brains, and then develop machine learning methods to discover the mapping between language features and observed neural activity.  We have been doing such experiments with fMRI (1 mm spatial resolution) and MEG (1 msec time resolution) brain imaging for over a decade.  As a result, we have learned answers to questions such as “Are the neural encodings of word meaning the same in your brain and mine?”, “Are neural encodings of word meaning built out of recognizable subcomponents, or are they randomly different for each word?,” and “What sequence of neurally encoded information flows through the brain during the half-second in which the brain comprehends a single word, or when it comprehends a multi-word sentence?”  This talk will summarize some of what we have learned, newer questions we are currently working on, and will describe the central role that machine learning algorithms play in this research.
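The core machine-learning step the abstract describes — learning a mapping from language features to observed neural activity — can be sketched in miniature. This is an illustrative toy, not the lab's actual pipeline: the feature dimensions, voxel counts, and data below are all synthetic, and ridge regression stands in for whatever estimator a real study would use.

```python
import numpy as np

# Toy version of feature-to-fMRI mapping: word feature vectors X, synthetic
# voxel responses Y generated from a hidden linear map plus noise.
rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500
X = rng.normal(size=(n_words, n_features))        # semantic features per word
W_true = rng.normal(size=(n_features, n_voxels))  # hidden feature -> voxel map
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, n_voxels))  # "fMRI" data

# Ridge regression: W_hat = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Predict the activation pattern for a held-out "word" and compare it to
# the pattern the true map would produce.
x_new = rng.normal(size=n_features)
y_pred = x_new @ W_hat
y_true = x_new @ W_true
corr = np.corrcoef(y_pred, y_true)[0, 1]
assert corr > 0.9   # the learned map recovers the underlying structure
```

In real studies the same fitted map can be run in either direction: predicting brain activity for novel words, or decoding which word was read from a new activity pattern.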

Biography:

Tom M. Mitchell founded and chairs the Machine Learning Department at Carnegie Mellon University, where he is the E. Fredkin University Professor.  His research uses machine learning to develop computers that are learning to read the web, and uses brain imaging to study how the human brain understands what it reads.  Mitchell is a member of the U.S. National Academy of Engineering, a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow and Past President of the Association for the Advancement of Artificial Intelligence (AAAI).  He believes the field of machine learning will be the fastest growing branch of computer science during the 21st century.

This talk is part of the Brains, Minds and Machines Seminar Series.
