CBMM faculty offer graduate and undergraduate courses that integrate computational and empirical approaches used in the study of problems related to intelligence. These courses introduce some of the mathematical frameworks used to formulate computational models, and the experimental methods used in neuroscience and cognitive science to study the neural implementations of intelligent processes and the manifestation of these computations in human cognitive behavior. Examples of the integration of these perspectives are drawn from current research on intelligence. Materials for many of these courses are available online. Most graduate courses are open to advanced undergraduates with appropriate background. Enrollment for courses is handled through the respective institutions.

Fall 2019

Massachusetts Institute of Technology (MIT)

Statistical Learning Theory and Applications
Provides students with the knowledge needed to use and develop advanced machine learning solutions to challenging problems. Covers foundations and recent advances of machine learning in the framework of statistical learning theory. Focuses on regularization techniques key to high-dimensional supervised learning. Starting from classical methods such as regularization networks and support vector machines, addresses state-of-the-art techniques based on principles such as geometry or sparsity, and discusses a variety of algorithms for supervised learning, feature selection, structured prediction, and multitask learning. Also focuses on unsupervised learning of data representations, with an emphasis on hierarchical (deep) architectures.
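The regularization techniques named above can be made concrete with a small sketch. The following is an illustrative example (not course material) of Tikhonov, or ridge, regularization, one of the classical methods mentioned: it penalizes the norm of the weight vector to stabilize high-dimensional regression.

```python
import numpy as np

# Illustrative sketch: ridge (Tikhonov) regularization.
# Minimize ||Xw - y||^2 + lam * ||w||^2; all names here are our own.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))          # 50 samples, 10 features
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=50)

lam = 1.0                              # regularization strength
# Closed-form solution: w = (X^T X + lam * I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
```

Increasing `lam` shrinks the solution toward zero, trading a little bias for lower variance; with `lam = 0` this reduces to ordinary least squares.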
Atissa Banuazizi
Provides instruction on the mechanistic basis of intelligence: how the brain produces intelligent behavior and how we may be able to replicate intelligence in machines. Examines how human intelligence emerges from computations in neural circuits, and how similar intelligent behavior can be reproduced in machines. Working in teams, students complete computational projects and exercises that reinforce the theme of collaboration between (computer science + math) and (neuroscience + cognitive science). Culminates with student presentations of their projects. Instruction and practice in oral and written communication provided.
Computational Cognitive Science
Introduction to computational theories of human cognition. Focuses on principles of inductive learning and inference, and the representation of knowledge. Computational frameworks include Bayesian and hierarchical Bayesian models, probabilistic graphical models, nonparametric statistical models and the Bayesian Occam's razor, sampling algorithms for approximate learning and inference, and probabilistic models defined over structured representations such as first-order logic, grammars, or relational schemas. Applications to understanding core aspects of cognition, such as concept learning and categorization, causal reasoning, theory formation, language acquisition, and social inference.
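As a minimal illustration of the Bayesian inference the description refers to (this sketch is ours, not course material), the posterior over a discrete hypothesis space can be computed directly by Bayes' rule; here the hypotheses are possible biases of a coin.

```python
import numpy as np

# Illustrative sketch: Bayesian inference over a discrete hypothesis space.
thetas = np.linspace(0.01, 0.99, 99)        # hypotheses: coin bias values
prior = np.ones_like(thetas) / len(thetas)  # uniform prior

heads, tails = 7, 3                          # observed data
likelihood = thetas**heads * (1 - thetas)**tails
posterior = prior * likelihood
posterior /= posterior.sum()                 # normalize (Bayes' rule)

map_theta = thetas[np.argmax(posterior)]     # maximum a posteriori estimate
```

The same posterior-by-enumeration pattern extends to the richer hypothesis spaces the course covers (grammars, relational schemas), with sampling algorithms replacing exact enumeration when the space is too large.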

Harvard University

This course examines recent work applying computational models to mental disorders. These models formalize psychopathology in terms of breakdowns in fundamental neurocognitive processes, linking normal and abnormal brain function within a common framework. Computational modeling has already begun to yield insights, and even possible treatments, for a wide range of disorders, including schizophrenia, autism, Parkinson’s, depression, obsessive-compulsive disorder, and attention-deficit hyperactivity disorder. The course will consist of weekly readings from the primary literature, with one student leading the discussion of each paper.
Visual Object Recognition: Computational and Biological Mechanisms
Visual recognition is essential for most everyday tasks, including navigation, reading, and socialization, and is also important for engineering applications such as automatic analysis of clinical images, face recognition by computers, security tasks, and automatic navigation. In spite of the enormous increase in computational power over the last decade, humans still outperform the most sophisticated engineering algorithms in visual recognition tasks. This course examines how circuits of neurons in visual cortex represent and transform visual information, covering the following topics: functional architecture of visual cortex, lesion studies, physiological experiments in humans and animals, visual consciousness, computational models of visual object recognition, and computer vision algorithms.

Spring 2019

Massachusetts Institute of Technology (MIT)

The Human Brain
Surveys the core perceptual and cognitive abilities of the human mind and asks how these are implemented in the brain. Key themes include the functional organization of the cortex, as well as the representations and computations, developmental origins, and degree of functional specificity of particular cortical regions. Emphasizes the methods available in human cognitive neuroscience, and what inferences can and cannot be drawn from each.
Cognitive Science
Edward Gibson, Pawan Sinha
This class is the second half of an intensive survey of cognitive science for first-year graduate students. Topics include visual perception, language, memory, cognitive architecture, learning, reasoning, decision-making, and cognitive development. Topics are covered from behavioral, computational, and neural perspectives.
The course covers foundations and recent advances in the study of perception: the process of deriving information about the world from our sensory receptors.
Neural Coding and Perception of Sound
Daniel Polley, Bertrand Delgutte, M. C. Brown
(Same subject as HST.723[J]) Neural structures and mechanisms mediating the detection, localization and recognition of sounds. General principles are conveyed by theme discussions of auditory masking, sound localization, musical pitch, cochlear implants, cortical plasticity and auditory scene analysis. Follows Harvard FAS calendar.

Harvard University

This course provides a foundational overview of the fundamental ideas in computational neuroscience and the study of biological intelligence. At the same time, the course will connect the study of brains to the blossoming and rapid development of ideas in artificial intelligence. Topics covered include the biophysics of computation, neural networks, machine learning, Bayesian models, theory of learning, deep convolutional networks, generative adversarial networks, neural coding, control and dynamics of neural activity, applications to brain-machine interfaces, and connectomics, among others. Lectures will be taught by leading Harvard experts in the field.
Computational Cognitive Neuroscience
"What I cannot create, I do not understand." – Richard Feynman. This course applies Feynman's dictum to the brain, by teaching students how to simulate brain function with computer programs. Special emphasis will be placed on how neurobiological mechanisms give rise to cognitive processes like learning, memory, decision-making, and object perception. Students will learn how to understand experimental data through the lens of computational models, and ultimately how to build their own models.
Follows trends in modern brain theory, focusing on local neuronal circuits and deep architectures. Explores the relation between network structure, dynamics, and function. Introduces tools from information theory, dynamical systems, statistics, and learning theory in the study of experience-dependent neural computation. Specific topics include: computational principles of early sensory systems; unsupervised, supervised and reinforcement learning; attractor computation and memory in recurrent cortical circuits; noise, chaos, and coding in neuronal systems; learning and computation in deep networks in the brain and in AI systems.
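The neural-circuit dynamics these descriptions refer to can be illustrated with the simplest spiking model. The following sketch (ours, not course material; all parameter values are illustrative) simulates a leaky integrate-and-fire neuron driven by constant input.

```python
# Illustrative sketch: leaky integrate-and-fire neuron.
# Membrane voltage decays toward rest, integrates input current,
# and emits a spike (then resets) when it crosses threshold.
dt, T = 0.1, 100.0                       # time step and duration (ms)
tau = 10.0                               # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # voltages (mV)
I = 20.0                                 # constant input (arbitrary units)

v = v_rest
spikes = []
for step in range(int(T / dt)):
    v += dt / tau * (-(v - v_rest) + I)  # leaky integration
    if v >= v_thresh:                    # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_reset                      # reset after spike
```

Larger `I` drives the voltage to threshold faster and so raises the firing rate; networks of such units are the starting point for the attractor and recurrent-circuit models the course discusses.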

Johns Hopkins University

Vision as Bayesian Inference
This is an advanced course on computer vision from a probabilistic and machine learning perspective. It covers techniques such as linear and non-linear filtering, geometry, energy function methods, Markov random fields, conditional random fields, graphical models, probabilistic grammars, and deep neural networks. These are illustrated on a set of vision problems including image segmentation, semantic segmentation, depth estimation, object recognition, object parsing, scene parsing, action recognition, and text captioning.
IAP 2019

Massachusetts Institute of Technology (MIT)

Course will be held the third week of IAP (week of January 28th), M-F, from 2-5pm, in MIT room #46-3015. Provides instruction and dialogue on practical ethical issues relating to the responsible conduct of human and animal research in the brain and cognitive sciences. Specific emphasis on topics relevant to young researchers, including data handling, animal and human subjects, misconduct, mentoring, intellectual property, and publication. Preliminary assigned readings and initial faculty lecture followed by discussion groups of four to five students each. A short written summary of the discussions is submitted at the end of each class. See IAP Guide for registration information.

Evolution, Computation, and Learning
Daniel Czegel
Here, we will explore recent work in evolutionary computation and theoretical biology modeling the processes of evolution. Specifically, we will focus on these broad questions: What are the processes that govern evolution or 'evolutionary learning'? How can these processes improve upon or inspire new models or theories of learning, search, and/or development? What role, if any, do evolutionary computation and theoretical biology play in investigating human cognition or developing AI? Are there any frameworks, theories, or models that we can import from these fields?