Pouya Bashivan, McGill University
Across the primate neocortex, neurons that perform similar functions tend to be spatially grouped together. How and why such organization emerges has been debated extensively, with various models successfully replicating aspects of cortical topography using cost functions and learning rules designed to induce topographic structure. However, these models often compromise task-learning capability and rely on strong assumptions about learning in neural circuits. I will introduce two new approaches for training topographically organized neural networks that substantially improve the trade-off between task performance and topography while also simplifying the assumptions about learning in neural circuits required to obtain brain-like topography. In particular, I will show that excitatory local lateral connectivity is sufficient to produce cortex-like topographic organization without any topography-promoting learning rules or objectives. I will also discuss the implications of this model for the link between topographic organization and robust representations.
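To make the core mechanism concrete, here is a minimal, hypothetical sketch (not the speaker's actual model) of how purely excitatory local lateral connectivity can induce topography: units are laid out on a 2D sheet, each unit receives positive input from its immediate neighbours, and as a result nearby units develop correlated tuning even when their feedforward responses start out independent. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: units on a 16x16 "cortical sheet", each with a random
# feedforward response to 500 stimuli (no topography initially).
H = W = 16
n_stim = 500
responses = rng.standard_normal((H, W, n_stim))

def lateral_step(r, strength=0.5):
    """One step of excitatory lateral input: each unit receives positive
    input from its 4 immediate neighbours (toroidal wrap for simplicity)."""
    neighbours = (np.roll(r, 1, axis=0) + np.roll(r, -1, axis=0) +
                  np.roll(r, 1, axis=1) + np.roll(r, -1, axis=1)) / 4.0
    return r + strength * neighbours

def neighbour_corr(r):
    """Mean tuning correlation between horizontally adjacent units:
    a simple proxy for topographic organization."""
    a = r.reshape(-1, n_stim)
    b = np.roll(r, -1, axis=1).reshape(-1, n_stim)
    a = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    b = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return float((a * b).mean(1).mean())

before = neighbour_corr(responses)
after = neighbour_corr(lateral_step(responses))
print(before, after)  # neighbouring units become more similar after excitation
```

The point of the toy example is that the lateral connectivity is fixed, local, and excitatory: no topography-promoting loss or special learning rule is applied, yet the correlation between neighbouring units' tuning rises above its initial near-zero level.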
Pouya Bashivan is an Assistant Professor in the Department of Physiology at McGill University, a member of the Integrated Program in Neuroscience, and an associate member of the Quebec AI Institute (MILA). Prior to joining McGill University, he was a postdoctoral fellow at MILA working with Drs. Irina Rish and Blake Richards. Prior to that, he was a postdoctoral researcher at the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, MIT, working with Professor James DiCarlo. He received his PhD in computer engineering from the University of Memphis in 2016. Before that, he earned B.Sc. and M.Sc. degrees in electrical and control engineering from KNT University (Tehran, Iran).
The goal of research in the Bashivan lab is to develop neural network models that leverage memory to solve complex tasks. While we often rely on task-performance measures to identify improved neural network models and learning algorithms, we also use neural and behavioral measurements from humans and other animals to evaluate how closely these models match biologically evolved brains. We believe these additional constraints could expedite progress toward engineering a human-level artificially intelligent agent.