Home Page Spotlights

Today: Kevin Murphy (Google Research) will discuss recent work related to visual scene understanding and "grounded" language understanding. Talk: 4pm, May 26th, MIT Singleton Auditorium (46-3002)
In this talk, David S. Vogel, an award-winning predictive modeling scientist, discusses state-of-the-art machine learning techniques and the application of these techniques to healthcare, recommendation systems, and finance.
Figure No. 10 from CBMM Memo No. 067
In Theory III we characterize, with a mix of theory and experiments, the generalization properties of Stochastic Gradient Descent (SGD) in overparametrized deep convolutional networks. We show that SGD selects with high probability...
Figures No. 1 & 2 from CBMM Memo No. 065
Deep convolutional neural networks are generally regarded as robust function approximators. So far, this intuition is based on perturbations to external stimuli such as the images to be classified.
In this talk, Prof. Feldman discussed a Bayesian approach to grouping, formulating it as an inverse inference problem in which the goal is to estimate the organization that best explains the observed configuration of visual elements.
A DNA double helix is seen in an artist's illustration released by the National Human Genome Research Institute. (Handout/Reuters)
For more than a half century, the United States has operated what might be called a “Miracle Machine.” Powered by federal investment in science and technology, the machine regularly churns out breathtaking advances...
Figure No. 2 from CBMM Memo No. 064
While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning — leveraging unlabeled examples to learn about the structure of a domain — remains a difficult, unsolved challenge.
Photo of Prof. Erik Brynjolfsson
On Friday, May 5, 2017, Prof. Erik Brynjolfsson (MIT Sloan School) will discuss a preliminary framework and approach for understanding the potential effects of machine learning (ML) on tasks, occupations and industries.
In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”
Images from memo figures.
The complexity of a learning task is increased by transformations in the input space that preserve class identity. Visual object recognition, for example, is affected by changes in viewpoint, scale, illumination, or planar transformations. ...
Amnon Shashua PhD ’93, co-founder of Mobileye, discusses challenges associated with autonomous vehicles in MIT visit. | MIT News - Around Campus | April 13, 2017
Photo of David Vogel
On Wed., April 12, 2017, David S. Vogel, an award-winning predictive modeling scientist, will discuss state-of-the-art machine learning techniques and the application of these techniques to healthcare, recommendation systems, and finance.
Figure 7: Same as Figure 3, but all weights are collected from Layer 5
Previous theoretical work on deep learning and neural network optimization tends to focus on avoiding saddle points and local minima. However, the practical observation is that, at least for the most successful Deep Convolutional Neural Networks (DCNNs)...
CBMM Memo 061: Full interpretation of minimal images
The goal in this work is to model the process of ‘full interpretation’ of object images, which is the ability to identify and localize all semantic features and parts that are recognized by human observers. The task is approached by dividing the ...
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case...