Phone Classification by a Hierarchy of Invariant Representation Layers

Title: Phone Classification by a Hierarchy of Invariant Representation Layers
Publication Type: Conference Paper
Year of Publication: 2014
Authors: Zhang, C., Voinea, S., Evangelopoulos, G., Rosasco, L., Poggio, T.
Conference Name: INTERSPEECH 2014 - 15th Annual Conference of the International Speech Communication Association
Conference Location: Singapore
Keywords: Hierarchy, Invariance, Neural Networks, Speech Representation
Abstract

We propose a multi-layer feature extraction framework for speech, capable of providing invariant representations. A set of templates is generated by sampling the result of applying smooth, identity-preserving transformations (such as vocal tract length and tempo variations) to arbitrarily selected speech signals. Templates are then stored as the weights of “neurons”. We use a cascade of such computational modules to factor out different types of transformation variability in a hierarchy, and show that it improves phone classification over baseline features. In addition, we describe empirical comparisons of a) different transformations which may be responsible for the variability in speech signals and of b) different ways of assembling template sets for training. The proposed layered system is an effort towards explaining the performance of recent deep learning networks and the principles by which the human auditory cortex might reduce the sample complexity of learning in speech recognition. Our theory and experiments suggest that invariant representations are crucial in learning from complex, real-world data like natural speech. Our model is built on basic computational primitives of cortical neurons, thus making an argument about how representations might be learned in the human auditory cortex.
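Below is a minimal sketch of what one such computational module could look like, assuming the dot-product-plus-pooling scheme the abstract suggests (projections onto stored, transformed templates, followed by a pooling step that discards the transformation parameter). The function name `invariant_layer`, the moment-based pooling, and the circular-shift transformations are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def invariant_layer(signal, templates, transforms, n_moments=3):
    """One template-based invariance module (illustrative sketch).

    For each stored template, the signal is projected onto every
    transformed version of that template (the "neuron" weights), and
    the resulting distribution of projections is summarized by a few
    moments. The summary is approximately unchanged when the signal
    itself undergoes one of the modeled transformations.
    """
    features = []
    for t in templates:
        # Dot products between the signal and all transformed templates.
        projections = np.array([np.dot(signal, g(t)) for g in transforms])
        # Pool over transformations: moments of the projection distribution.
        moments = [np.mean(projections ** k) for k in range(1, n_moments + 1)]
        features.extend(moments)
    return np.array(features)


# Hypothetical usage with circular shifts standing in for a smooth,
# identity-preserving transformation; modules can be cascaded by feeding
# the output of one layer into the next.
rng = np.random.default_rng(0)
x = rng.standard_normal(128)                      # stand-in for a speech frame
templates = [rng.standard_normal(128) for _ in range(5)]
shifts = [lambda v, s=s: np.roll(v, s) for s in range(-4, 5)]
print(invariant_layer(x, templates, shifts).shape)  # 5 templates x 3 moments = (15,)
```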

URL: http://www.isca-speech.org/archive/interspeech_2014/i14_2346.html
Refereed Designation: Refereed

CBMM Relationship: 

  • CBMM Funded