Speech Representations based on a Theory for Learning Invariances

Title: Speech Representations based on a Theory for Learning Invariances
Publication Type: Conference Poster
Year of Publication: 2014
Authors: Voinea, S., Zhang, C., Evangelopoulos, G., Rosasco, L., Poggio, T.
Date Published: 10/2014
Place Published: SANE 2014 - Speech and Audio in the Northeast
Type of Work: Poster presentation
Abstract

Recognition of sounds and speech from a small number of labelled examples (as humans do) depends on the properties of the representation of the acoustic input. We formulate the problem of extracting robust speech features for supervised learning with small sample complexity as one of learning representations of the signal that are maximally invariant to intraclass transformations and deformations. We propose an extension of a theory for the unsupervised learning of invariant visual representations to the auditory domain, which requires the memory-based, unsupervised storage of acoustic templates -- such as specific phones or words -- together with all the transformations of each that normally occur. A quasi-invariant representation of a speech signal can be obtained by projecting it onto a number of template orbits, i.e., sets of transformed template signals, and computing the associated one-dimensional empirical probability distributions. The computations are performed by modules of filtering and pooling that can be used to obtain a mapping in single- or multilayer architectures. We consider several aspects of such representations, including different signal scales (word vs. frame), input domains (raw waveforms vs. frequency filterbank responses), structures (shallow vs. multilayer/hierarchical), and ways of sampling from template orbit sets given a set of observations (explicit vs. learned). Preliminary empirical evaluations on learning to separate speech phones and words are given on TIMIT and subsets of TI-DIGITS.
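
The projection-and-pooling scheme summarized in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes circular time shifts as the only transformation in a template orbit and a normalized histogram as the pooling nonlinearity, and the helper names (`orbit_signature`, `invariant_representation`) are hypothetical.

```python
import numpy as np

def orbit_signature(signal, template, shifts, n_bins=20):
    """Quasi-invariant signature of `signal` w.r.t. one template orbit.

    The orbit is approximated by circular time shifts of the template;
    each dot product <signal, shifted template> is one sample of the
    one-dimensional distribution, summarized here by a histogram
    (the pooling step).
    """
    projections = [np.dot(signal, np.roll(template, s)) for s in shifts]
    hist, _ = np.histogram(projections, bins=n_bins, density=True)
    return hist

def invariant_representation(signal, templates, shifts):
    """Concatenate the signatures obtained from a set of stored templates."""
    return np.concatenate(
        [orbit_signature(signal, t, shifts) for t in templates]
    )

# Toy usage: random waveforms stand in for memorized acoustic templates.
rng = np.random.default_rng(0)
x = rng.standard_normal(1600)                # e.g., a 100 ms frame at 16 kHz
templates = rng.standard_normal((10, 1600))  # stored template waveforms
shifts = range(0, 1600, 40)                  # sampled transformations (time shifts)
feature = invariant_representation(x, templates, shifts)
print(feature.shape)                         # (10 templates * 20 bins,) = (200,)
```

In this sketch one such filtering-and-pooling module maps a signal to a fixed-length feature vector; stacking modules, or replacing raw waveforms with filterbank responses, would correspond to the multilayer and input-domain variants discussed in the abstract.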

Citation Key: 1140

CBMM Relationship: 

  • CBMM Related