
Special Seminar: What is the information content of an algorithm?

Nov 7, 2013 - 3:00 pm
Joachim M. Buhmann

Venue: MIT, Ray and Maria Stata Center, Star Conference Room (32-D463)
Address: 32 Vassar Street, MIT Bldg 32, Cambridge, MA 02139, United States
Speaker/s: Joachim M. Buhmann, Machine Learning Laboratory, Department of Computer Science, ETH Zurich

Abstract:
Algorithms are exposed to randomness in the input or to noise during the computation. How well can they preserve the information in the data with respect to the output space? Algorithms, especially in machine learning, are required to generalize over input fluctuations or randomization during execution. This talk presents a new framework to measure the “informativeness” of algorithmic procedures and their “stability” against noise. An algorithm is considered to be a noisy channel characterized by a generalization capacity (GC). The generalization capacity objectively ranks different algorithms for the same data processing task based on the bit rates of their respective capacities. The problem of grouping data is used to demonstrate this validation principle for clustering algorithms, e.g., k-means, pairwise clustering, normalized cut, adaptive ratio cut, and dominant set clustering. Our new validation approach selects the most informative clustering algorithm, i.e., the one that filters out the maximal number of stable, task-related bits relative to the underlying hypothesis class. The concept also makes it possible to measure how many bits are extracted by sorting algorithms when the input, and thereby the pairwise comparisons, are subject to fluctuations.
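As a rough intuition for the stability side of this framework, consider a toy experiment: run the same clustering algorithm on two independently perturbed copies of a dataset and measure how well the two labelings agree. The Python sketch below is only an illustration of that idea, not the speaker's generalization-capacity measure; the synthetic data, the Gaussian noise model, and the use of k-means with normalized mutual information as the agreement score are all assumptions made for the example.

```python
# Toy stability probe (illustration only, not the GC measure from the talk):
# cluster two independently perturbed copies of the same data and score how
# well the two labelings agree. Normalized mutual information (NMI) is a
# mutual-information-based agreement score in [0, 1]; an algorithm whose
# output survives input noise scores closer to 1.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian blobs centered at (4, 0) and (0, 4).
X = rng.normal(size=(300, 2)) + np.repeat(np.eye(2) * 4.0, 150, axis=0)

def stability(X, k, noise=0.5, trials=20):
    """Mean NMI between k-means labelings of two noisy copies of X."""
    scores = []
    for _ in range(trials):
        Xa = X + rng.normal(scale=noise, size=X.shape)  # noisy copy 1
        Xb = X + rng.normal(scale=noise, size=X.shape)  # noisy copy 2
        la = KMeans(n_clusters=k, n_init=10).fit_predict(Xa)
        lb = KMeans(n_clusters=k, n_init=10).fit_predict(Xb)
        # Points correspond by index, so the labelings are comparable.
        scores.append(normalized_mutual_info_score(la, lb))
    return float(np.mean(scores))

for k in (2, 3, 5):
    print(f"k={k}: stability={stability(X, k):.3f}")
```

On such data, k = 2 typically scores highest: finer splits force the algorithm to commit to boundaries that the noise does not support. The framework presented in the talk goes further, trading such stability off against informativeness to rank algorithms by the bit rate they reliably extract.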

Biography:
Joachim M. Buhmann leads the Machine Learning Laboratory in the Department of Computer Science at ETH Zurich, where he has been a full professor of Information Science and Engineering since October 2003. He studied physics at the Technical University of Munich and obtained his PhD in theoretical physics. As a postdoc and research assistant professor, he spent 1988-1992 at the University of Southern California, Los Angeles, and at the Lawrence Livermore National Laboratory. From 1992 to 2003 he held a professorship for applied computer science at the University of Bonn, Germany. His research interests span pattern recognition and data analysis, including machine learning, statistical learning theory, and information theory. Application areas of his research include image analysis, medical imaging, acoustic processing, and bioinformatics. He currently serves as president of the German Pattern Recognition Society.

This talk is part of the Brains, Minds & Machines Seminar Series 2013-2014.

Organizers: Tomaso Poggio, Lorenzo Rosasco

Special Seminar: Understanding the building blocks of neural computation: Insights from connectomics and theory

Oct 10, 2013 - 3:30 pm
Dmitri “Mitya” Chklovskii
Venue: MIT, McGovern Institute, Singleton Auditorium (46-3002)
Speaker/s: Dmitri “Mitya” Chklovskii, Janelia Farm, HHMI

Abstract:
Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. We developed a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our identification of cell types involved in motion detection allowed targeting of extremely demanding electrophysiological recordings by other labs. Preliminary results from such recordings are consistent with a correlation-based motion detector. This demonstrates that connectomes can provide key insights into neuronal computations.

Organizer:  Tomaso Poggio

Vision and Learning: Computers and Brains

Massachusetts Institute of Technology (MIT)
This course reviews and discusses research on the problem of learning to understand the world and interact with it using sensory information. Vision is used as the primary domain, and relevant learning approaches are examined from both computational and biological perspectives. Topics include learning in computational vision, recent advances and limitations of current learning methods, face processing by computers and brains, learning in synapses, reinforcement learning, and Markov decision processes in computers and brains.

