Speaker: Prof. Santosh Vempala (Georgia Tech)
Host: Prof. Tomaso Poggio (MIT)
Abstract: Despite great advances in ML, and in our understanding of the brain at the level of neurons, synapses, and neural circuits, we still have no satisfactory explanation for the brain's performance in perception, cognition, language, memory, and behavior; as Nobel laureate Richard Axel put it, "we have no logic for translating neural activity into thought and action". The Assembly Calculus (AC) is a framework intended to fill this gap: a computational model whose basic data type is the assembly, a large subset of neurons whose simultaneous excitation is tantamount to the subject's thinking of an object, idea, episode, or word. The AC provides a repertoire of operations ("project", "reciprocal-project", "associate", "pattern-complete", etc.) whose implementation relies only on Hebbian plasticity and inhibition, and it constitutes a complete computational system, thereby enabling complex function. Very recently, it has been shown, rigorously and in simulation, that the AC can learn to classify samples from well-separated classes. For basic concept classes in high dimension, an assembly can be formed and recalled for each class, and these assemblies remain distinguishable as long as the input classes are sufficiently separated. Viewed as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision, all attributes expected of a brain-like mechanism. The talk will highlight several fascinating questions that arise, from the convergence of assemblies to their unexpected generalization abilities.
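The core "project" operation described above can be illustrated with a toy simulation. The sketch below is purely illustrative and not the speaker's implementation: it assumes a random bipartite graph between two areas, models inhibition as k-winners-take-all, and applies a multiplicative Hebbian update with an assumed plasticity rate `beta`; all parameter values are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 50          # neurons per area; assembly size (the inhibition "cap")
p, beta = 0.05, 0.10     # random connection probability; Hebbian plasticity rate

# Random synaptic weights from a stimulus area to a downstream area.
W = (rng.random((n, n)) < p).astype(float)

stimulus = np.arange(k)            # a fixed set of k firing input neurons
winners = np.array([], dtype=int)  # downstream winners from the previous round

overlap = 0.0
for t in range(10):
    drive = W[:, stimulus].sum(axis=1)       # total synaptic input to each neuron
    new_winners = np.argsort(drive)[-k:]     # k-winners-take-all (inhibition)
    # Hebbian update: strengthen synapses from firing inputs to the winners.
    W[np.ix_(new_winners, stimulus)] *= 1 + beta
    overlap = len(np.intersect1d(winners, new_winners)) / k
    winners = new_winners

# After a few rounds the winner set stabilizes: an "assembly" has formed.
print(f"overlap of winner sets in the last two rounds: {overlap:.2f}")
```

In runs of this sketch, the overlap between successive winner sets climbs toward 1.0 within a handful of rounds, which is the convergence phenomenon the abstract alludes to.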
This is joint work with Christos Papadimitriou, Max Dabagia, Mirabel Reid, and Dan Mitropolsky.
Zoom link: https://mit.zoom.us/j/97301534627