Methods for Analyzing Neural Data
Massachusetts Institute of Technology (MIT)
Covers methods useful for analyzing neural data, including conventional statistics, mutual information, point process models, and decoding analyses. Emphasis is on explaining the basic mathematical intuitions behind these methods and on giving practical, hands-on experience in applying them to real data. The class is divided into lectures that explain the different methods and laboratory sessions where students analyze real data. Examples focus on neural spiking activity, but other types of signals, including MEG signals and local field potentials, are also discussed.
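To give a flavor of two of the listed topics, the sketch below simulates Poisson spike counts under a binary stimulus and computes a plug-in estimate of the mutual information between stimulus and spike count. All rates, trial counts, and function names here are illustrative choices, not material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate spike counts for two stimulus conditions as Poisson draws.
# The rates (expected counts of 1 vs. 3 per window) are arbitrary examples.
n_trials = 5000
stim = rng.integers(0, 2, n_trials)    # binary stimulus label per trial
rates = np.where(stim == 0, 1.0, 3.0)  # expected spike count per window
counts = rng.poisson(rates)

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    x_vals, x_idx = np.unique(x, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(x_vals), len(y_vals)))
    np.add.at(joint, (x_idx, y_idx), 1)          # joint histogram
    joint /= joint.sum()                          # joint probability table
    px = joint.sum(axis=1, keepdims=True)         # marginal over x
    py = joint.sum(axis=0, keepdims=True)         # marginal over y
    nz = joint > 0                                # avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

info = mutual_information(stim, counts)
```

With well-separated rates, the spike count carries a substantial fraction of a bit about the stimulus; the plug-in estimator is the simplest choice and is known to be biased upward for small samples, which is one reason such courses discuss estimation issues.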

Introduction to Pattern Recognition and Machine Learning
University of California, Los Angeles (UCLA)
Introduction to pattern analysis and machine intelligence designed for advanced undergraduate and graduate students. Topics include Bayes decision theory, learning parametric distributions, non-parametric methods, regression, Adaboost, perceptrons, support vector machines, principal components analysis, nonlinear dimension reduction, independent component analysis, K-means analysis, and probability models.
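One of the listed topics, K-means, can be sketched in a few lines of NumPy as Lloyd's algorithm; the function name, parameters, and synthetic data below are illustrative, not from the course materials.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid recomputation until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance from every point to every centroid, then assign.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated Gaussian blobs (synthetic data for illustration).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centers = kmeans(X, k=2)
```

On separated blobs like these the algorithm recovers the two groups; in general K-means only finds a local optimum, which is why courses typically pair it with discussion of initialization.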

Computational Models and Cognitive Development
Massachusetts Institute of Technology (MIT)
Harvard University
Explores the prospects for “reverse engineering” infant and early childhood cognition over the first three years of life, with the goal of laying the foundations for a computational account of what children know and how they come to know it, expressed in the language of contemporary engineering approaches to intelligence. Focuses on core knowledge systems, such as core intuitive physics, psychology, sociology, space and number, as well as the learning mechanisms that extend, enrich and transform these core systems as children grow. Integrates related research from cognitive neuroscience and comparative studies of cognition in non-human species.

Probabilistic Models of the Visual Cortex
Johns Hopkins University
The course gives an introduction to computational models of the mammalian visual cortex, covering topics in low-, mid-, and high-level vision. It briefly discusses the relevant evidence from anatomy, electrophysiology, imaging (e.g., fMRI), and psychophysics, and concentrates on mathematical modeling of these phenomena, taking into account recent progress in probabilistic models of computer vision and developments in machine learning such as deep networks.

Visual Object Recognition: Computational and Biological Mechanisms
Harvard University
Visual recognition is essential for most everyday tasks including navigation, reading and socialization. Visual pattern recognition is also important for many engineering applications such as automatic analysis of clinical images, face recognition by computers, security tasks and automatic navigation. In spite of the enormous increase in computational power over the last decade, humans still outperform the most sophisticated engineering algorithms in visual recognition tasks. In this course, we will examine how circuits of neurons in visual cortex represent and transform visual information. The course will cover the following topics: functional architecture of visual cortex, lesion studies, physiological experiments in humans and animals, visual consciousness, computational models of visual object recognition, computer vision algorithms.

Artificial Intelligence
Massachusetts Institute of Technology (MIT)
Introduces representations, techniques, and architectures used to build applied systems and to account for intelligence from a computational point of view. Applications of rule chaining, heuristic search, constraint propagation, constrained search, inheritance, and other problem-solving paradigms. Applications of identification trees, neural nets, genetic algorithms, and other learning paradigms. Speculations on the contributions of human vision and language systems to human intelligence.

Computational Aspects of Biological Learning
Massachusetts Institute of Technology (MIT)
Takes a computational approach to learning in the brain at the level of neurons and synapses. Examines supervised and unsupervised learning as well as possible biological substrates, including Hebb synapses and the related topics of Oja flow and principal components analysis. Discusses hypothetical computational primitives in the nervous system, and the implications for unsupervised learning algorithms underlying the development of tuning properties of cortical neurons. Also focuses on a broad class of biologically plausible learning strategies.
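The link the description draws between Hebbian learning, Oja flow, and principal components analysis can be illustrated concretely: Oja's rule is a Hebbian update with a decay term that keeps the weight vector normalized, and it converges to the first principal component of the input distribution. The data, learning rate, and variable names below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean inputs with a dominant direction along (1, 1)/sqrt(2).
v_true = np.array([1.0, 1.0]) / np.sqrt(2)
X = rng.normal(size=(5000, 1)) * 2.0 * v_true + 0.3 * rng.normal(size=(5000, 2))

# Oja's rule: dw = eta * y * (x - y * w), where y = w . x is the
# postsynaptic response. The -y^2 w decay term stabilizes |w| near 1,
# so w aligns with the leading eigenvector of the input covariance.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    y = w @ x                   # postsynaptic response (Hebbian product)
    w += eta * y * (x - y * w)  # Hebbian growth minus normalizing decay

alignment = abs(w @ v_true)     # near 1 when w matches the top PC (up to sign)
```

This is the simplest member of the "biologically plausible learning strategies" family the description mentions: a purely local update that nevertheless performs PCA.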

Principles of Neuroengineering
Massachusetts Institute of Technology (MIT)
Covers how to innovate technologies for brain analysis and engineering, both to accelerate basic understanding of the brain and to lead to new therapeutic insights and inventions. Focuses on using physical, chemical, and biological principles to understand the design criteria governing the ability to observe and alter brain structure and function. Topics include optogenetics, noninvasive brain imaging and stimulation, nanotechnologies, stem cells and tissue engineering, and advanced molecular and structural imaging technologies. Students complete design projects.

Statistical Learning Theory and Applications
Massachusetts Institute of Technology (MIT)
Provides students with the knowledge needed to use and develop advanced machine learning solutions to challenging problems. Covers foundations and recent advances of machine learning in the framework of statistical learning theory. Focuses on regularization techniques key to high-dimensional supervised learning. Starting from classical methods such as regularization networks and support vector machines, addresses state-of-the-art techniques based on principles such as geometry or sparsity, and discusses a variety of algorithms for supervised learning, feature selection, structured prediction, and multitask learning. Also focuses on unsupervised learning of data representations, with an emphasis on hierarchical (deep) architectures.
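The "regularization networks" starting point mentioned in the description reduces, in its simplest linear form, to ridge regression, which has a closed-form solution. The data, λ value, and function name below are illustrative, not taken from the course.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y.
    The penalty lam * |w|^2 trades a little bias for lower variance,
    which is what makes the method viable in high dimensions."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic regression problem with a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + rng.normal(scale=0.1, size=200)

w_hat = ridge_fit(X, y, lam=1.0)
```

Swapping the squared-norm penalty for a sparsity-inducing one (e.g., an L1 norm) gives the "principles such as geometry or sparsity" direction the description alludes to.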
