Computational Cognitive Science: Home

Course Description: 

Introduction to computational theories of human cognition and computational frameworks that could support human-like artificial intelligence (AI). The central questions are: What is the form and content of people's knowledge of the world across different domains, and what are the principles that guide people in learning new knowledge and in reasoning to reach decisions based on sparse, noisy data? The course surveys recent approaches to cognitive science and AI built on these principles:

  • World knowledge can be described using probabilistic generative models; perceiving, learning, reasoning, and other cognitive processes can be understood as Bayesian inferences over these generative models.
  • To capture the flexibility and productivity of human cognition, generative models can be defined over richly structured symbolic systems such as graphs, grammars, predicate logics, and most generally probabilistic programs.
  • Inference in hierarchical models can explain how knowledge at multiple levels of abstraction is acquired.
  • Learning with adaptive data structures allows models to grow in complexity or change form in response to the observed data.
  • Approximate inference schemes based on sampling (Monte Carlo) and deep neural networks allow rich models to scale up efficiently, and may also explain some of the algorithmic and neural underpinnings of human thought.
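As a minimal illustration of the first and last bullets above, the sketch below (in Python, one of the course's languages; the model and all names are illustrative, not course material) defines a simple generative model of coin flips and recovers a posterior over the coin's weight by rejection sampling, the simplest Monte Carlo inference scheme:

```python
import random

def generative_model():
    """Sample a coin weight from a uniform prior, then flip the coin 10 times."""
    weight = random.random()
    flips = [random.random() < weight for _ in range(10)]
    return weight, flips

def rejection_sample(observed_heads, n_samples=20000):
    """Bayesian inference by rejection: run the generative model forward and
    keep only the weights whose simulated data match the observed data."""
    accepted = []
    for _ in range(n_samples):
        weight, flips = generative_model()
        if sum(flips) == observed_heads:
            accepted.append(weight)
    return accepted

random.seed(0)
posterior = rejection_sample(observed_heads=8)
mean = sum(posterior) / len(posterior)
# The exact posterior is Beta(9, 3), whose mean is 0.75; the sampled
# estimate should land close to that value.
print(round(mean, 2))
```

Probabilistic programming languages such as Church and WebPPL generalize exactly this pattern: any program that makes random choices defines a generative model, and conditioning on observations turns running the program into Bayesian inference.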

We introduce a range of modeling tools, including core methods from contemporary AI and Bayesian machine learning, as well as new approaches based on probabilistic programming languages. We show how these methods can be applied to many aspects of cognition, including perception, concept learning and categorization, language understanding and acquisition, common-sense reasoning, decision-making and planning, and theory of mind and social cognition. Lectures focus on the intuitions behind these models and their applications to cognitive phenomena. Recitations fill in mathematical background and provide hands-on modeling guidance in several probabilistic programming environments, including Church, WebPPL, and PyMC3.

Prerequisites:

  • Basic probability and statistical inference
  • Previous experience with programming, especially in MATLAB, Python, Scheme, JavaScript, or Julia, which form the basis of the probabilistic programming environments used in this course
  • It is helpful to have previous exposure to core problems and methods in artificial intelligence, machine learning, or cognitive science

Readings:

  • There is no single required text for this course. The following text is strongly recommended as background reading on relevant formal models: Russell, S. J. & Norvig, P. (2009) Artificial Intelligence: A Modern Approach, Third Edition, New Jersey: Prentice Hall. Readings consist of papers from the AI and cognitive science literature, excerpts from the above text and other books, and tutorials on machine learning.

Assignments:

  • There are four problem sets that involve programming. Students work on practice problems in the recitations, in preparation for the assignments.

Project:  

  • Students complete a final project or paper on cognitive modeling, either individually or in pairs. A typical project begins with a paper from the recent cognitive science or artificial intelligence literature whose model has not yet been implemented in a probabilistic programming language such as Church or WebPPL. The project replicates the key results from the paper and tries to extend them in interesting ways, making use of the expressive power of probabilistic programs. Optionally, students can run a human experiment on Mechanical Turk or offline. See these project ideas and brief guidelines for the project write-up.