Theoretical Frameworks for Intelligence

Approach

The core CBMM challenge provides a critical focus for the diversity of theoretical approaches. The models that answer the sets of questions posed by the challenge will need to encompass behavior, macrocircuitry, and individual neural circuits, and they will need to be plausible and testable at all of these levels.

The theory platform will connect all of these levels: for instance, a face-identification algorithm answering the question “who is there?” should not only perform well but also be consistent with known fMRI and primate physiology data. In general, the theory platform will inform the algorithms implemented within Vision and Language, which will take the lead on the engineering side. The modeling and algorithm development will be guided by scientific concerns, incorporating constraints and findings from our work in cognitive development (Development of Intelligence), human cognitive neuroscience (Social Intelligence), and systems neuroscience (Vision and Language).

Integration

A core mathematics of intelligence, comprising learning, inference, and neural computation, has emerged in the past few years; it will provide the tools for the theory platform.

Learning theory is the modern synthesis (due to work by Vapnik, Valiant, and Smale, among others) of diverse fields of mathematics such as high-dimensional probability and empirical process theory, computational harmonic analysis, computational geometry and topology, optimization theory, and convex analysis (Amit et al., 1985; Bousquet et al., 2004; Cucker & Smale, 2001; Devroye et al., 1996; Poggio & Smale, 2003; Seung et al., 1992; Smale et al., 2009; Steinwart & Christmann, 2008; Valiant, 1984; Valiant, 2000; Vapnik, 1995; Vapnik, 1998). Hierarchical “deep” architectures for learning represent a promising area for theoretical work, leading toward a new learning theory inspired by the basic organization of the cortex.
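
As a concrete instance of this machinery, the following minimal sketch implements kernel regularized least squares, one of the canonical algorithms analyzed in this literature (Poggio & Smale, 2003). The Gaussian kernel, the toy sine data, and all parameter values are illustrative assumptions rather than anything prescribed here.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    # Pairwise Gaussian (RBF) kernel matrix between two sets of points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_rls(X, y, lam, sigma):
    # Regularized least squares: solve (K + lam * n * I) c = y.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict_rls(X_train, c, X_test, sigma):
    # Predictor f(x) = sum_i c_i k(x, x_i).
    return gaussian_kernel(X_test, X_train, sigma) @ c

# Toy usage: learn a noisy sine function from 30 samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
c = train_rls(X, y, lam=1e-3, sigma=0.5)
print(predict_rls(X, c, np.array([[0.0], [1.5]]), sigma=0.5))
```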

Probabilistic modeling and inference are central tools for acting intelligently in a complex world with pervasive uncertainty. Probabilistic graphical models are our starting point, casting perception, reasoning, learning, prediction, and planning in a unified framework as Bayesian inferences about unobserved variables (latent causes or future outcomes) conditioned on observed data (effects). Hierarchical and nonparametric Bayesian methods and probabilistic grammars extend the approach. Probabilistic programs generalize all of these methods, marrying Bayesian probability with universal computation (Goodman et al., 2008).
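
As a minimal sketch of this idea, the toy program below conditions a generative model on an observation and reads off a posterior over a latent cause. Rejection sampling stands in for the more sophisticated inference engines of probabilistic programming languages such as Church (Goodman et al., 2008); the rain/sprinkler example and all probabilities are illustrative assumptions.

```python
import random

def generative_model():
    # A tiny generative program: latent causes produce an observable effect.
    rain = random.random() < 0.2          # latent cause
    sprinkler = random.random() < 0.1     # another latent cause
    p_wet = 0.95 if (rain or sprinkler) else 0.01
    wet = random.random() < p_wet         # observed effect
    return rain, wet

def posterior_rain_given_wet(n_samples=100_000):
    # Rejection sampling: run the program, keep only runs consistent with
    # the observation (wet grass), and read off P(rain | wet).
    draws = [generative_model() for _ in range(n_samples)]
    kept = [rain for rain, wet in draws if wet]
    return sum(kept) / len(kept)

print(f"P(rain | grass is wet) ~= {posterior_rain_given_wet():.3f}")
```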

Neural computation comprises several complementary modeling approaches that have been developed to link intelligent behavior to the brain mechanisms underlying it (Rao & Ballard, 1999). Work is planned on neural circuits that may implement probabilistic inference, including representations of constraints and priors (Beck et al., 2008; Beck et al., 2011; Burak et al., 2010). We will also investigate a recent theory that attempts to explain and predict cortical architecture and the properties of neurons in different visual areas (Poggio, 2011).
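
To illustrate one such link, the sketch below implements a probabilistic population code in the spirit of Beck et al. (2008): for independent Poisson neurons with bell-shaped tuning curves, the log posterior over the stimulus is linear in the spike counts. The population size, gain, and tuning width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Preferred orientations (degrees) of a population of Poisson neurons.
prefs = np.linspace(0, 180, 32, endpoint=False)
stimuli = np.linspace(0, 180, 181)

def tuning(s, gain=10.0, width=20.0):
    # Bell-shaped tuning curves on the circular orientation variable.
    d = np.minimum(np.abs(s - prefs), 180 - np.abs(s - prefs))
    return gain * np.exp(-d ** 2 / (2 * width ** 2))

# One population response to a true stimulus of 60 degrees.
r = rng.poisson(tuning(60.0))

# For independent Poisson noise, log P(r | s) = sum_i [r_i log f_i(s) - f_i(s)]
# up to a constant, so (with a flat prior) the log posterior is linear in r.
log_post = np.array([r @ np.log(tuning(s)) - tuning(s).sum() for s in stimuli])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("MAP estimate:", stimuli[post.argmax()], "degrees")
```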

Recent Publications

Lotter, W., Kreiman, G., and Cox, D., Unsupervised Learning of Visual Structure using Predictive Generative Networks, in International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2016.
Nickel, M., Murphy, K., Tresp, V., and Gabrilovich, E., A Review of Relational Machine Learning for Knowledge Graphs, Proceedings of the IEEE, vol. 104, no. 1, pp. 11-33, 2016.
Liao, Q., Leibo, J. Z., and Poggio, T., How Important Is Weight Symmetry in Backpropagation?, in Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), Phoenix, AZ, 2016.