Research Meeting: Duncan Stothers, Will Xiao, and Nimrod Shaham

May 14, 2019 - 4:00 pm
Speakers: Duncan Stothers, Will Xiao, Nimrod Shaham


Duncan Stothers-

Title: Turing's Child Machine: A Deep Learning Model of Neural Development

Abstract:

Turing recognized development’s connection to intelligence when he proposed engineering a ‘child machine’ that becomes intelligent through a developmental process, instead of hand-designing intelligence top-down into an ‘adult machine’. We now know from neurobiology that the most important developmental process is the ‘critical period’, during which the architecture (equivalently, the connectome or topology) expands randomly and then prunes itself down based on activity. The computational role of this process is unknown, but we know it is connected to intelligence because deprivation during this period has permanent negative effects later in life. Furthermore, the fact that the connectome changes during this period through ‘architecture learning’, in addition to the synaptic weights changing through ‘synaptic weight learning’, sets it apart from deep learning research, where the architecture is hand-designed, stays fixed during learning, and only ‘synaptic weight learning’ takes place. To understand development’s connection to biological and artificial intelligence, we model the critical period by adding random expansion and activity-based pruning steps to deep neural network training. Results suggest the critical period is an unsupervised architecture search process that finds exponentially small architectures that generalize well. The architectures resulting from this process also show similarities to hand-designed ones.
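For concreteness, the expand-then-prune cycle described above might look like the following toy sketch. This is a minimal NumPy illustration under assumed details (a single hidden layer, mean activation as the pruning score, a fixed keep fraction), not the speaker's implementation; in the actual work such steps would presumably be interleaved with ordinary gradient-based 'synaptic weight learning'.

```python
# Toy sketch of critical-period-style expansion and activity-based pruning.
# Illustrative only: the network, activity score, and keep fraction are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Start from a small, randomly initialized hidden layer.
n_in, n_hidden, n_out = 8, 4, 2
W1 = rng.normal(0, 0.5, (n_hidden, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hidden))

def expand(W1, W2, n_new, rng):
    """Random overgrowth: add randomly wired hidden units."""
    W1_new = rng.normal(0, 0.5, (n_new, W1.shape[1]))
    W2_new = rng.normal(0, 0.5, (W2.shape[0], n_new))
    return np.vstack([W1, W1_new]), np.hstack([W2, W2_new])

def prune_by_activity(W1, W2, X, keep_frac=0.5):
    """Activity-based pruning: keep the hidden units most active on data X."""
    h = relu(X @ W1.T)                  # hidden activations, shape (batch, hidden)
    score = h.mean(axis=0)              # mean activity per hidden unit
    n_keep = max(1, int(keep_frac * len(score)))
    keep = np.argsort(score)[-n_keep:]  # indices of the most active units
    return W1[keep], W2[:, keep]

X = rng.normal(size=(100, n_in))            # placeholder "sensory" input batch
W1, W2 = expand(W1, W2, n_new=12, rng=rng)  # random expansion...
W1, W2 = prune_by_activity(W1, W2, X)       # ...then activity-based pruning
print(W1.shape, W2.shape)                   # resulting pruned architecture
```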


Will Xiao-

Title: Uncovering preferred stimuli of visual neurons using generative neural networks

Abstract:

What information do neurons represent? This is a central question in neuroscience. Ever since Hubel and Wiesel discovered that neurons in primary visual cortex (V1) respond preferentially to bars of certain orientations, investigators have searched for preferred stimuli to reveal information encoded by neurons, leading to the discovery of cortical neurons that respond to specific motion directions (Hubel, 1959), color (Michael, 1978), binocular disparity (Barlow et al., 1967), curvature (Pasupathy & Connor, 1999), complex shapes such as hands or faces (Desimone et al., 1984; Gross et al., 1972), and even variations across faces (Chang & Tsao, 2017).

However, the classic approach for defining preferred stimuli depends on using a set of hand-picked stimuli, limiting possible answers to stimulus properties chosen by the investigator. Instead, we wanted to develop a method that is as general and free of investigator bias as possible. To that end, we used a generative deep neural network (Dosovitskiy & Brox, 2016) as a vast and diverse hypothesis space. A genetic algorithm guided by neuronal preferences searched this space for stimuli.
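As a rough illustration of this kind of search, the sketch below evolves latent codes to maximize a scalar "response". Here `generate_image` and `neuron_response` are stand-in placeholders, and the selection and mutation scheme is an assumption; the actual study used a pretrained image generator (Dosovitskiy & Brox, 2016) and firing rates recorded from real neurons.

```python
# Sketch of a genetic algorithm searching a generator's latent space for
# stimuli that drive a "neuron". Placeholder components, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 64

def generate_image(code):
    """Stand-in for the generative network: maps a latent code to an 'image'."""
    return np.tanh(code)  # placeholder transformation

def neuron_response(image):
    """Stand-in for a recorded neuron's firing rate to an image."""
    target = np.linspace(-1, 1, image.size)  # hypothetical preferred pattern
    return -np.sum((image - target) ** 2)    # higher is better

pop = rng.normal(size=(50, LATENT_DIM))      # initial population of latent codes
for generation in range(100):
    fitness = np.array([neuron_response(generate_image(c)) for c in pop])
    top = pop[np.argsort(fitness)[-10:]]     # select the best codes
    parents = top[rng.integers(0, 10, size=(50, 2))]
    children = parents.mean(axis=1)          # recombine by averaging pairs
    pop = children + 0.1 * rng.normal(size=children.shape)  # mutate

best = pop[np.argmax([neuron_response(generate_image(c)) for c in pop])]
```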

We evolved images to maximize the firing rates of neurons in macaque inferior temporal cortex and V1. Evolved images often evoked higher firing rates than the best of thousands of natural images. Furthermore, evolved images revealed neuronal selectivity properties that were sometimes consistent with existing theories but sometimes unexpected.

This generative evolutionary approach complements classical methods for defining neuronal selectivities, serving as an independent test and a hypothesis-generating tool. Moreover, the approach has the potential for uncovering internal representations in any modality that can be captured by generative neural networks.


Nimrod Shaham-

Title: Continual learning and replay in a sparse forgetful Hopfield model

Abstract:

The brain has a remarkable ability to deal with an endless, continuous stream of information, storing new memories and learning to perform new tasks. It does so without losing previously learned knowledge, which can be retained for timescales on the order of the animal’s lifetime. In contrast, current artificial neural network models suffer from limited capacity (associative memory networks) and an acute loss of performance on previously learned tasks after learning new ones (deep neural networks). Overcoming this limitation, known as catastrophic interference, is one of the main challenges in machine learning and theoretical neuroscience.

Here, we study a recurrent neural network that continually learns and stores sparse patterns of activity while forgetting old ones (a palimpsest model). Time-dependent forgetting is incorporated as a decay of old memories’ contributions to the weight matrix. We calculate the forgetting rate required to avoid catastrophic interference, and find the optimal decay rate that maximizes the number of retrievable memories. We then introduce replay to the system, in the form of reappearance of previously stored patterns, and calculate how different patterns of replay extend the time for which a memory remains retrievable. Our model reveals, in a tractable and illuminating way, how a recurrent neural network can learn continuously and store selected information over lifelong timescales.
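The forgetting and replay mechanisms can be sketched as follows, assuming an exponential decay factor applied to the weight matrix and a covariance-style Hebbian rule for sparse binary patterns. The specific rule and parameter values are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a palimpsest (forgetful) Hopfield-style learning rule.
import numpy as np

rng = np.random.default_rng(2)
N, f, lam = 200, 0.1, 0.95   # neurons, pattern sparsity, decay per new memory

W = np.zeros((N, N))
patterns = (rng.random((20, N)) < f).astype(float)  # stream of sparse patterns

for xi in patterns:
    W *= lam                            # old memories' contributions decay
    W += np.outer(xi - f, xi - f) / N   # covariance-style Hebbian storage
np.fill_diagonal(W, 0.0)

# Replay: re-presenting a previously stored pattern refreshes its contribution
# to W, extending the time for which that memory remains retrievable.
W *= lam
W += np.outer(patterns[0] - f, patterns[0] - f) / N
np.fill_diagonal(W, 0.0)
```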

Details

Date: May 14, 2019
Time: 4:00 pm
Venue: Harvard NW Building, Room 243
Address: 52 Oxford St, Cambridge, MA 02138