LH - Computational Cognitive Science: Course Topics

The Fall 2018 course covered the following topics, in roughly this sequence:

  • Prelude: Two approaches to intelligence - pattern recognition versus modeling the world.  Learning about AI by playing video games.
  • Foundational questions:  What kind of computation is cognition?  How does the mind get so much from so little?  How is learning even possible?  How can you learn to learn?
  • Introduction to Bayesian inference and Bayesian concept learning: Flipping coins, rolling dice, and the number game.
  • Human cognition as rational statistical inference: Bayes meets Marr's levels of analysis.  Case studies in modeling surface perception and predicting the future.
  • Modeling and inference tradeoffs, or “Different ways to be Bayesian”: Comparing humans, statisticians, scientists, and robots.  The sweet spot for intelligence: Fast, cheap, approximate inference in rich, flexible, causally structured models.
  • Probabilistic programming languages: Generalizations of Bayesian networks (directed graphical models) that can capture common-sense reasoning.  Modeling social evaluation and attribution, visual scene understanding and common-sense physical reasoning.
  • Approximate probabilistic inference schemes based on sampling (Markov chain Monte Carlo; sequential Monte Carlo, also known as particle filtering) and on deep neural networks, and their use in modeling the dynamics of attention, online sentence processing, object recognition, and multiple object tracking.
  • Learning model structure as a higher-level Bayesian inference, and the Bayesian Occam's razor.  Modeling visual learning and classical conditioning in animals.
  • Hierarchical Bayesian models: a framework for learning to learn, transfer learning, and multitask learning.  Modeling how children learn the meanings of words, and learn the basis for rapid ('one-shot') learning.  Building a machine that learns words like children do.
  • Probabilistic models for unsupervised clustering: Modeling human categorization and category discovery; prototype and exemplar categories; categorizing objects by relations and causal properties.
  • Nonparametric Bayesian models - capturing the long tail of an infinitely complex world: Dirichlet processes in category learning; adaptor grammars and models of morphology in language.
  • Planning with Markov Decision Processes (MDPs): Modeling single- and multi-agent decision-making.  Modeling human 'theory of mind' as inverse planning.
  • Modeling human cognitive development - how we get to be so smart: infants' probabilistic reasoning and curiosity; how children learn about causality and number; the growth of intuitive