The Science and Engineering of Intelligence: A bridge across Vassar Street

Prof. Kanwisher speaks to the packed auditorium

Neuroscience has made huge advances in the last few years: we now know more about how the brain works than ever before. Likewise, Computer Science and Artificial Intelligence have made enormous steps forward and have become part of our everyday lives. The interaction between Neuroscience and Computer Science has driven some of the most recent advances in Artificial Intelligence and has become a critical stepping stone for AI research. We have assembled a stellar list of speakers at the intersection of Neuroscience and AI from both sides of Vassar Street, who will give an account of how this multi-disciplinary interaction affects their work.

Date: January 15th, 2016 | Location: 46-3002 (Singleton Auditorium) | IAP Activity Page

The list of speakers included Feng Zhang, Ed Boyden, Tomaso Poggio, Nancy Kanwisher, Josh Tenenbaum, and Bill Freeman.

Summary of Speaker Talks:

Feng Zhang

Prof. Zhang presented three pieces of work from his group related to improving, applying and extending his CRISPR-Cas9 systems. CRISPR-Cas9 systems allow for precise gene editing and have opened the door to investigating complex diseases that are caused by multiple genetic and epigenetic modifications. These systems are not yet perfect, and Prof. Zhang presented techniques from his group to dramatically increase the specificity of the induced mutation and avoid off-target genetic manipulation. Prof. Zhang went on to talk about how these systems can be used to significantly accelerate the search for cancer drugs, techniques that are especially effective where cancer cells have been shown to mutate quickly to resist current drugs. Lastly, Prof. Zhang gave an overview of his recent efforts to find new naturally occurring CRISPR-Cas9-like systems that may prove fundamental where the current techniques fall short.

Ed Boyden

Prof. Boyden gave an accessible talk introducing his renowned expansion microscopy. Scientists have long worked on improving the resolution of microscopes; Boyden and colleagues have instead figured out how to expand the specimen under observation, by introducing a polymer gel into tissue and then expanding the polymer by almost two orders of magnitude. He next presented his group’s efforts to create 3D microelectrode arrays and explained how they can be used to simultaneously record from and localize neurons’ activity in all three dimensions. Lastly, he presented his recent applications of optogenetics, a technique that uses light to control cells in living tissue, to map the connectivity of complex neural circuits and establish causal relationships.

Tomaso Poggio

Prof. Poggio addresses the packed auditorium

Prof. Poggio started his talk by highlighting the most recent advances in Artificial Intelligence and remarked how numerous systems have their roots deeply planted in what we know about neuroscience. He remarked that AI is living a second golden age: press coverage and enthusiasm are at an all-time high, especially for Computer Vision and Speech Recognition. He noted that to claim a true understanding of a system one should be able to provide a theory of how and why it works; currently, many successful AI systems fail to meet this requirement. He presented recent efforts by him and his group to develop a mathematical theory that explains, starting from the principle of invariant representations, why recent architectures for computer vision have been as successful as they are. He introduced a number of predictions about biological vision, confirmed by experiments, that he was able to draw from this theoretical account.
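To give a flavor of the invariance idea (a minimal sketch for illustration only, not the formal theory presented in the talk; the templates, the transformation set, and the pooling functions below are all assumptions): an image can be described by its dot products with stored templates, pooled over transformed copies of each template. If the transformations form a group that acts unitarily, transforming the image only permutes those dot products, so the pooled values barely change.

```python
import numpy as np

def invariant_signature(image, templates, transforms):
    """Illustrative sketch of an invariant representation.

    image      : flattened image patch (1-D numpy array)
    templates  : list of stored template patches (1-D numpy arrays)
    transforms : list of callables, each returning a transformed copy
                 of a template (e.g. shifted or scaled versions)
    """
    signature = []
    for template in templates:
        # Responses of the image to every transformed copy of the template.
        responses = [np.dot(image, g(template)) for g in transforms]
        # Pooling discards *which* transform matched; if the transforms form
        # a group, a transformed image yields (nearly) the same pooled values.
        signature.extend([np.mean(responses), np.max(responses)])
    return np.array(signature)
```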

Nancy Kanwisher

Prof. Kanwisher pointed out that all the complex systems we observe, natural or human-made, are to some extent modular: highly specialized and almost standalone modules interact with each other and give rise to a whole that is greater than the sum of its parts. She said there is no reason why the brain should work any differently and presented a number of results to substantiate this claim. She described how fMRI, a non-invasive brain imaging technique, can be used to find such specialized modules. She presented convincing (and entertaining) evidence that the most credible critiques of these methods have recently been proved wrong and that there is in fact a causal relationship between the stimulation of these highly specialized modules and perception. She concluded with her first stabs at investigating whether these modules exist at birth or are formed with learning, and suggested that some very specific regions, like the visual word area, are in fact only formed during development.
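As a rough illustration of how fMRI can be used to localize such a specialized module (a simplified sketch, not the lab's actual analysis pipeline; the block-averaged responses, category names, and threshold are assumptions): a functional localizer contrasts each voxel's response to a preferred category, such as faces, against a control category, such as objects, and keeps the voxels that respond significantly more to the former.

```python
import numpy as np
from scipy import stats

def localize_selective_voxels(face_responses, object_responses, alpha=0.001):
    """Simplified functional-localizer sketch.

    face_responses   : array of shape (n_face_blocks, n_voxels)
    object_responses : array of shape (n_object_blocks, n_voxels)

    Returns a boolean mask of voxels responding significantly more to
    faces than to objects (uncorrected threshold, for illustration only).
    """
    t, p = stats.ttest_ind(face_responses, object_responses, axis=0)
    return (t > 0) & (p < alpha)

# Hypothetical usage with random data standing in for block-averaged fMRI:
rng = np.random.default_rng(0)
faces = rng.normal(1.0, 1.0, size=(20, 5000))    # simulated face blocks
objects = rng.normal(0.0, 1.0, size=(20, 5000))  # simulated object blocks
mask = localize_selective_voxels(faces, objects)
print(mask.sum(), "candidate face-selective voxels")
```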

Josh Tenenbaum

Prof. Tenenbaum presented his recent work on analysis by synthesis of visual faces and motivated it by pointing out how humans are able to reliably perform one-shot learning and can easily infer the cause of what they observe. For example, when we see a hand-drawn character, even from an alphabet we don’t know, we have no trouble producing a new instance, because we are able to imagine and even execute the motor program that drew the original character in the first place. Exploring this idea of what the graphics engine in our heads might look like, Prof. Tenenbaum and his collaborators built a state-of-the-art face generation tool that is able to infer what a face would look like from a completely different viewpoint or under different illumination, starting from a single example. Likewise, their system can draw new instances of handwritten characters that people confuse with ones drawn by other humans.
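A minimal sketch of the analysis-by-synthesis idea itself (an illustration, not the models presented in the talk; the `render` and `propose` functions are hypothetical placeholders to be supplied by the caller): perception is framed as searching for the latent scene or motor parameters whose rendering best reproduces the observation.

```python
def analysis_by_synthesis(observed, render, propose, n_iters=10_000):
    """Toy analysis-by-synthesis loop (illustration only).

    observed : observed data, e.g. a flattened image as a list of floats
    render   : callable mapping latent parameters -> synthesized data
    propose  : callable mapping current parameters (or None) -> new candidate
    """
    def score(params):
        # Negative squared reconstruction error as a crude likelihood proxy.
        synthesized = render(params)
        return -sum((s - o) ** 2 for s, o in zip(synthesized, observed))

    best = propose(None)                  # initial hypothesis
    best_score = score(best)
    for _ in range(n_iters):
        candidate = propose(best)         # perturb the current hypothesis
        candidate_score = score(candidate)
        if candidate_score > best_score:  # greedy hill climbing for simplicity
            best, best_score = candidate, candidate_score
    return best                           # parameters that best explain the observation
```

The systems described in the talk use far more sophisticated probabilistic inference than this greedy loop, but the structure is the same: a generative model plus a search over its latent causes.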

Bill Freeman

Prof. Freeman presented his recent work on predicting sounds from silent video sequences and showed that the predicted sound sequences are a valuable mid-level representation for predicting material properties of objects in a visual scene. He and his collaborators collected a large dataset of videos of a drumstick hitting or brushing against various surfaces. They computed cochleagrams, a biologically inspired representation of audio, for the sound portion of the dataset and showed that a deep neural network is able to reliably infer a cochleagram from a silent video. They further showed that the inferred cochleagram can be used to classify the material properties of the objects that produced the inferred sound as soft, hard, rough, smooth, and so on.
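A hedged sketch of such a pipeline (the layer sizes, cochleagram dimensions, and number of material classes below are invented for illustration and are not the authors' architecture): a network encodes a stack of video frames, regresses a cochleagram-like representation, and a small head classifies material properties from that predicted representation.

```python
import torch
import torch.nn as nn

class SoundFromVideo(nn.Module):
    """Toy video-to-cochleagram model with a material-classification head."""

    def __init__(self, n_frames=15, coch_bands=42, coch_steps=90, n_materials=8):
        super().__init__()
        self.encoder = nn.Sequential(                    # visual feature extractor
            nn.Conv2d(3 * n_frames, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.coch_shape = (coch_bands, coch_steps)
        self.to_cochleagram = nn.Linear(64, coch_bands * coch_steps)
        self.material_head = nn.Linear(coch_bands * coch_steps, n_materials)

    def forward(self, frames):
        # frames: (batch, 3 * n_frames, height, width), frames stacked as channels
        features = self.encoder(frames)
        coch_flat = self.to_cochleagram(features)        # predicted cochleagram
        material_logits = self.material_head(coch_flat)  # materials from predicted sound
        return coch_flat.view(-1, *self.coch_shape), material_logits

# Hypothetical usage on a random clip:
model = SoundFromVideo()
clip = torch.randn(2, 3 * 15, 128, 128)
cochleagram, material_logits = model(clip)
print(cochleagram.shape, material_logits.shape)  # (2, 42, 90) (2, 8)
```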

 

Speaker Bios:

Ed Boyden:

Professor Boyden leads the MIT Media Lab’s Synthetic Neurobiology research group, which develops tools for mapping, controlling, observing, and building dynamic circuits of the brain, and uses these neurotechnologies to understand how cognition and emotion arise from brain network operation, as well as to enable systematic repair of intractable brain disorders such as epilepsy, Parkinson's disease, and post-traumatic stress disorder. His research group has invented a suite of “optogenetic” tools that are now in use by thousands of research groups around the world for activating and silencing neurons with light.

Boyden was named to the "Top 35 Innovators Under the Age of 35" by Technology Review in 2006, and to the "Top 20 Brains Under Age 40" by Discover magazine in 2008. He has received the Gabbay Award, the National Institutes of Health (NIH) Director's Pioneer Award and Transformative Research Award, the Society for Neuroscience Research Award for Innovation in Neuroscience, the NSF CAREER Award, the Paul Allen Distinguished Investigator Award, and the New York Stem Cell Robertson Investigator Award. In 2010, his work was recognized as the "Method of the Year" by the journal Nature Methods. Most recently, he shared the 2013 Grete Lundbeck European Brain Research Prize for outstanding contributions to European neuroscience, the largest neuroscience prize in the world.

Bill Freeman:

William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT; he joined the faculty in 2001.

His current research interests include motion re-rendering, computational photography, and learning for vision. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and recently won "test of time" awards for papers written in 1991 and 1995. Previous research topics include steerable filters and pyramids, the generic viewpoint assumption, color constancy, bilinear models for separating style and content, and belief propagation in networks with loops. He holds 30 patents.

He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences and was program co-chair for ICCV 2005 and CVPR 2013.

Tomaso Poggio:

Tomaso A. Poggio is the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, Director of the Center for Brains, Minds and Machines, and a member of the Computer Science and Artificial Intelligence Laboratory at MIT; since 2000, he has been a member of the faculty of the McGovern Institute for Brain Research.

Born in Genoa, Italy (naturalized in 1994), he received his doctorate in Theoretical Physics from the University of Genoa in 1971 and was a Wissenschaftlicher Assistent at the Max Planck Institut für Biologische Kybernetik, Tübingen, Germany from 1972 until 1981, when he became Associate Professor at MIT. He is an honorary member of the Neuroscience Research Program, a member of the American Academy of Arts and Sciences and a Founding Fellow of AAAI. He has received several awards, including the Otto-Hahn-Medaille of the Max-Planck-Society, the Max Planck Research Award (with M. Fahle) from the Alexander von Humboldt Foundation, the MIT 50K Entrepreneurship Competition Award, the Laurea Honoris Causa from the University of Pavia in 2000 (Volta Bicentennial), the 2003 Gabor Award, the 2009 Okawa Prize, the American Association for the Advancement of Science (AAAS) Fellowship (2009) and the Swartz Prize for Theoretical and Computational Neuroscience in 2014. He is one of the most cited computational neuroscientists (with an h-index greater than 100, based on Google Scholar).

Nancy Kanwisher:

Nancy Kanwisher is the Walter A. Rosenblith Professor of Cognitive Neuroscience in the Department of Brain and Cognitive Sciences and a founding member of the McGovern Institute. She joined the MIT faculty in 1997, and prior to that served on the faculty at UCLA and Harvard University. In 1999, she received the National Academy of Sciences Troland Research Award. She was elected to the National Academy of Sciences in 2005 and to the American Academy of Arts and Sciences in 2009.

The Kanwisher lab has used brain imaging to identify regions of the brain that play highly specialized roles in perception and cognition, including the perception of faces, places, and bodies, as well as various aspects of social cognition and language processing. Each of these regions can be identified robustly in a short functional scan in essentially every normal subject; they are part of the basic functional organization of the human mind and brain. In ongoing work the Kanwisher lab is working to better characterize the precise computations that occur in each region, to discover new functionally specific brain regions, and to understand how these regions get wired up in development and how they work together to produce cognition.

Feng Zhang:

Zhang is a bioengineer focused on developing tools to better understand nervous system function and disease. His lab applies these novel tools to interrogate gene function and study neuropsychiatric diseases in animal and stem cell models. Since joining MIT and the Broad Institute in January 2011, Zhang has pioneered the development of genome editing tools for use in eukaryotic cells, including human cells, from natural microbial CRISPR systems. These tools, which he has made widely available, are accelerating biomedical research around the world.

Zhang leverages CRISPR and other methodologies to study the role of genetic and epigenetic mechanisms underlying diseases, specifically focusing on disorders of the nervous system. He is especially interested in complex disorders, such as psychiatric and neurological diseases, that are caused by multiple genetic and environmental risk factors and are difficult to model using conventional methods. Zhang’s methods are also being used in the fields of immunology, clinical medicine, cancer biology, and other areas of research. Zhang’s long-term goal is to develop novel therapeutic strategies for disease treatment. Zhang’s work on genome editing traces back to his seminal paper, published in 2010, which reported the first systematic application of an earlier system, called TALENs, to target specific genes in mammalian cells.

Soon after joining the Broad Institute and MIT, in early 2011 Zhang turned his attention to the CRISPR-Cas system, which researchers in Canada had just shown could create double-stranded breaks in target DNA at precise positions, as a potential tool for improved genome editing. On Oct 5, 2012, Zhang submitted a breakthrough paper that reported the first successful programmable genome editing of mammalian cells using CRISPR-Cas9 (Cong et al., Science 2013). Cong et al. remains the most-cited paper in genome editing.

Josh Tenenbaum:

Josh Tenenbaum is a Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences, and a member of the Computer Science and Artificial Intelligence Laboratory. He received his Ph.D. from MIT in 1999 and after a brief postdoc with the MIT AI Lab, he joined the Stanford University faculty as Assistant Professor of Psychology and (by courtesy) Computer Science. He returned to MIT as a faculty member in 2002. He currently serves as Associate Editor of the journal Cognitive Science, and he has been active on the program committees of the Neural Information Processing Systems (NIPS) and Cognitive Science (CogSci) conferences.