Machine-learning system processes sounds like humans do

MIT neuroscientists have developed a machine-learning system that can process speech and music the same way that humans do. Image: Chelsea Turner/MIT
April 19, 2018

Neuroscientists train a deep neural network to analyze speech and music.

Anne Trafton | MIT News Office

Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.

The researchers used this model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, to shed light on how the human brain may carry out those same tasks.
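To make that layered architecture concrete, here is a minimal sketch of how such a network might be structured for one of the tasks the article mentions, genre identification. The layer sizes, spectrogram input shape, and ten-genre output below are illustrative assumptions, not details of the published model.

```python
# A minimal sketch (not the published model): a small deep neural network
# that maps an audio clip's spectrogram to a musical-genre label.
# Layer widths, genre count, and input shape are illustrative assumptions.
import torch
import torch.nn as nn

class GenreClassifier(nn.Module):
    def __init__(self, n_genres: int = 10):
        super().__init__()
        # Stacked convolutional layers: each unit processes the output of
        # the layer below it, loosely analogous to successive stages of
        # sensory processing.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one value per channel
        )
        self.classifier = nn.Linear(64, n_genres)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, freq_bins, time_frames)
        x = self.features(spectrogram)
        return self.classifier(x.flatten(1))

model = GenreClassifier()
dummy = torch.randn(8, 1, 128, 256)   # batch of 8 placeholder spectrograms
print(model(dummy).shape)             # torch.Size([8, 10]): one score per genre
```

In a network like this, training on large volumes of labeled audio adjusts the weights of every layer at once, which is what lets the same generic architecture be specialized for different tasks.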

“What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study. “Historically, this type of sensory processing has been difficult to understand, in part because we haven’t really had a very clear theoretical foundation and a good way to develop models of what might be going on.”


Read the full story on the MIT News website using the link below.
