Weekly Research Meetings

CBMM Weekly Research Meeting: Spiking neurons can discover predictive features by aggregate-label learning

May 4, 2016 - 4:00 pm
Venue: McGovern Reading Room (MIT 46-5165)
Speaker/s: Robert Gütig (Max Planck Institute of Experimental Medicine, Goettingen)

Robert Gütig is a group leader at the Max Planck Institute of Experimental Medicine in Goettingen, where he researches spike-based learning and information processing in neural networks. He trained in Physics at the Free University of Berlin (Germany) and the University of Cambridge (UK), completed a PhD in Computational Neuroscience with Ad Aertsen at the University of Freiburg (Germany), and did a postdoc with Haim Sompolinsky at the Hebrew University (Israel) and Harvard University (USA).

Abstract: The brain routinely discovers sensory clues that predict opportunities or dangers. However, it is unclear how neural learning processes can bridge the typically long delays between sensory clues and behavioral outcomes. Here, I introduce a learning concept, aggregate-label learning, that enables biologically plausible model neurons to solve this temporal credit assignment problem. Aggregate-label learning matches a neuron’s number of output spikes to a feedback signal that is proportional to the number of clues but carries no information about their timing. Aggregate-label learning outperforms stochastic reinforcement learning at identifying predictive clues and is able to solve unsegmented speech-recognition tasks. Furthermore, it allows unsupervised neural networks to discover reoccurring constellations of sensory features even when they are widely dispersed across space and time.

This work appeared in Science: http://science.sciencemag.org/content/351/6277/aab4113
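The abstract's core idea is that the feedback signal carries only a count of predictive clues and no information about their timing. The Python sketch below is a hypothetical illustration of that idea only, not Gütig's actual multi-spike tempotron rule: the leaky integrate-and-fire neuron, the embedded feature pattern, and the sign-based weight update are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N_SYN, T, T_FEAT = 50, 500, 10       # synapses, time steps per trial, feature length
TAU, THETA, LR = 20.0, 1.0, 0.01     # membrane time constant, spike threshold, learning rate
FEATURE = (rng.random((T_FEAT, N_SYN)) < 0.2).astype(float)   # the hidden predictive "clue"

def count_output_spikes(w, spikes):
    """Leaky integration of the summed synaptic input; count threshold
    crossings (with reset), i.e. the neuron's output spikes."""
    v, n_out = 0.0, 0
    drive = spikes @ w
    for t in range(T):
        v += -v / TAU + drive[t]
        if v >= THETA:
            n_out, v = n_out + 1, 0.0
    return n_out

def make_trial(n_clues):
    """Background spiking plus n_clues copies of FEATURE at random times."""
    spikes = (rng.random((T, N_SYN)) < 0.005).astype(float)
    for start in rng.integers(0, T - T_FEAT, size=n_clues):
        spikes[start:start + T_FEAT] += FEATURE
    return spikes

w = rng.normal(0.0, 0.05, N_SYN)
for step in range(500):
    n_clues = int(rng.integers(0, 4))                 # aggregate label: clue count only
    spikes = make_trial(n_clues)
    err = n_clues - count_output_spikes(w, spikes)    # no clue-timing information is used
    # If the neuron spiked too little, strengthen synapses in proportion to their
    # activity on this trial; if it spiked too much, weaken them.
    w += LR * np.sign(err) * spikes.mean(axis=0)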

CBMM Weekly Research Meeting: U. Mass Boston Research Projects

Apr 13, 2016 - 4:00 pm
Venue: McGovern Reading Room, MIT 46-5165
Speaker/s: Dr. Erik Blaser, Dr. Marc Pomplun, Dr. Jin Ho Park, UMass Boston

Abstract: Dr. Blaser (Psychology) and Dr. Pomplun (Computer Science) will give an introduction to ongoing research both in their labs, and more broadly at the University of Massachusetts Boston.  Dr. Blaser’s area is visual psychophysics (including work on visual attention and ocular dominance plasticity) and he collaborates on projects related to cognitive development, such as the development of visual attention and working memory in infants and toddlers diagnosed with Autism Spectrum Disorder.  He will give a brief overview of some ongoing studies, with a particular focus on the use of pupillometry as a measure of attentional control. Dr. Blaser is Director of the new Developmental and Brain Sciences PhD program at UMass Boston and he and a colleague, Dr. Jin Ho Park, will also give a brief overview of the Cognitive and Behavioral Neuroscience work within that program. Dr. Pomplun directs the Visual Attention Lab and is interested in vision in humans and machines, with a focus on eye movements and visual attention. He will present some of his psychophysical studies and demonstrate how computational modeling can be used to better understand biological vision and build more powerful computer vision systems and smarter human-computer interfaces.

A CBMM debate on interdisciplinary topics around intelligence

Apr 6, 2016 - 4:00 pm
Venue: MIT Bldg. 46 Room 5165
Speaker/s: Moderator: Max Tegmark; Panelists: Adam Marblestone, Boris Katz, Josh Tenenbaum, Gabriel Kreiman, Seth Lloyd & Tommy Poggio
Volunteers are welcome! Audience members are also encouraged to participate.

Main topic: What similarities/differences should we expect between the brain and AI systems?

On one hand, one might expect evolution and engineering to discover similar solutions to similar computational problems. On the other hand, the two optimize under very different constraints: evolution cares about self-assembly, self-repair, learning, and low power consumption, while AI designers care about simplicity and ease of understanding.


Bonus topics if time permits: 

* Do we need a science of intelligence or merely engineering of intelligence?

* Hilbert questions in AI and how to approach them

* Which questions should we be asking but aren’t? (“unknown unknowns”)

* At what level of structure can we best understand the mind? 

Would an accurate simulation have to include atoms, cells, idealized neurons or merely simplified cortical columns? How much does it matter that there are dozens of neuronal cell types and that synapses are so complicated? While the Navier-Stokes equation lets us understand the motion of a fluid without worrying about it being made up of atoms, there are other effects such as Brownian motion where the atomic details matter. On the other hand, we can simulate a computer program perfectly at the abstract level of bits without knowing anything about transistors or other details of the computational substrate.


*Food and socializing start at 3:30 p.m.

CBMM Weekly Research Meeting: The story story

Mar 9, 2016 - 4:00 pm
Venue: Harvard NW Bldg.
Speaker/s: Patrick Winston

Abstract:
I describe the Genesis story understanding system, and I explain why I believe Genesis sheds light on aspects of intelligence that are uniquely human. I show how Genesis exhibits aspects of common sense reasoning, conceptual understanding, cultural bias, hypothetical reflection, mental modeling, mental illness, and self-awareness as Genesis reads simple, 100-sentence stories adapted from sources such as Shakespeare's plays and newspaper accounts of violence used in social-psychology studies. I discuss how work on Genesis has been influenced by Minsky, Marr, Chomsky, Ullman, Spelke, Wilson, Morris and Peng, and by an engineering perspective.

Hybrid Search: How long-term memories interact with visual search.

Mar 2, 2016 - 4:00 pm
Venue: MIT Bldg. 46 Room 5165
Speaker/s: Jeremy M. Wolfe, PhD, Professor of Ophthalmology & Radiology, Harvard Medical School; Director, Visual Attention Lab, Center for Advanced Medical Imaging (Radiology)

Abstract: In a typical visual search task, you look for a target object amongst some non-target, distractor objects. In the real world, however, you often look for more than one thing at one time. In the supermarket, you might be holding a shopping list of 10 items in your memory. We will call this combination of visual search and memory search "hybrid search." In a basic hybrid search task in the lab, our observers memorized 1-100 specific objects and searched for them in visual displays of 1-16 objects. Reaction time (RT) was a linear function of the visual set size. RT was not a linear function of the memory set size; rather, RT increased linearly with the log of the number of items in memory. What does the log function tell us about memory search? What is the role of working memory? What happens if you are looking for categories of items (e.g., find any animals, coins, or boats)? What happens if observers are foraging for multiple instances of multiple types of targets in a single display? By answering questions like these, the hybrid search paradigm gives us new insights into the interaction of object recognition, memory, and visual attention in complex tasks.
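The quantitative pattern described in the abstract, RT linear in the visual set size but logarithmic in the memory set size, is easy to sketch. The Python snippet below is a hypothetical illustration with made-up coefficients, not a fit to Wolfe's data.

import math

def predicted_rt_ms(visual_set_size, memory_set_size,
                    base_ms=400.0, ms_per_visual_item=40.0, ms_per_log2_mem=80.0):
    """Hypothetical hybrid-search RT model: linear in the number of items on
    the display, logarithmic in the number of memorized targets."""
    return (base_ms
            + ms_per_visual_item * visual_set_size
            + ms_per_log2_mem * math.log2(memory_set_size))

for mem in (1, 4, 16, 100):
    row = ", ".join(f"{v} visual items: {predicted_rt_ms(v, mem):.0f} ms" for v in (1, 8, 16))
    print(f"memory set {mem:>3}: {row}")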

Geometry, probability, invariance and perception

Mar 30, 2016 - 4:00 pm
Venue: Harvard NW Bldg. Room 243
Speaker/s: L. Mahadevan, SEAS, Physics and OEB, Harvard University

Abstract: Geometry is typically associated with the simultaneous processing of relationships between objects, while probability is typically associated with the sequential processing of events. I will discuss some of our preliminary work combining these subjects in two contexts: (i) characterizing geometry probabilistically, using a relatively simple invariant measure to explain some seemingly surprising behavioral results showing that we can distinguish between large and small objects, or animate and inanimate objects; (ii) characterizing probability geometrically, couched in terms of a perceptual test of randomness.

CBMM Weekly Research Meeting: "Computational Studies of Deep Networks”

Feb 17, 2016 - 4:00 pm
Venue: MIT Building 46, Room 3189
Speaker/s: Haim Sompolinsky

Abstract: What are the computational principles underlying the transformation of sensory representations along brain sensory hierarchies? I will discuss recent theoretical results addressing: (1) the role of sparsity, expansion, and noise in signal propagation in deep networks; (2) the potential role of top-down inputs in providing context information; and (3) object recognition and classification from neuronal manifolds.

* Food will be out by 3:30 pm, and everyone is welcome to come eat and socialize before the talk starts at 4:00 pm sharp.

CBMM Weekly Research Meeting: A first look at early functional organization in human cortex

Feb 3, 2016 - 4:00 pm
Venue: MIT Bldg. 46 Room 5165
Speaker/s: Rebecca Saxe

Abstract: In healthy human adults, cortical representations of the visual world are spatially and functionally organized at multiple scales. High-level, behaviorally relevant categories (e.g., faces, scenes) elicit systematic responses across wide regions of cortex. While the adult state has been described in detail, the developmental process that creates these cortical representations remains deeply mysterious and highly theoretically contentious. I will describe the first results of fMRI experiments with awake human infants, revealing both the continuities and differences in cortical function.

CBMM Research Meeting

Jan 27, 2016 - 4:00 pm
Venue: Harvard U. Northwest Bldg. Room 243
Address: 52 Oxford St., Cambridge, MA 02138
Speaker/s: Nancy Kanwisher

Thrust 4 Projects

Speakers: Prof. Nancy Kanwisher - Introduction; Matt Peterson - "Real World Eye Movements for Acquiring Social Information"; Maryam Vaziri Pashkam - "Understanding action reading and social vision from simple interactions"; Leyla Isik - "Studying social interactions with commercial movie stimuli"
