Title: Phone Classification by a Hierarchy of Invariant Representation Layers
Publication Type: Conference Paper
Year of Publication: 2014
Authors: Zhang, C., Voinea, S., Evangelopoulos, G., Rosasco, L., Poggio, T.
Conference Name: INTERSPEECH 2014 - 15th Annual Conference of the International Speech Communication Association
Keywords: Hierarchy, Invariance, Neural Networks, Speech Representation
We propose a multi-layer feature extraction framework for speech, capable of providing invariant representations. A set of templates is generated by sampling the result of applying smooth, identity-preserving transformations (such as vocal tract length and tempo variations) to arbitrarily selected speech signals. Templates are then stored as the weights of “neurons”. We use a cascade of such computational modules to factor out different types of transformation variability in a hierarchy, and show that it improves phone classification over baseline features. In addition, we describe empirical comparisons of (a) different transformations that may be responsible for the variability in speech signals and (b) different ways of assembling template sets for training. The proposed layered system is an effort towards explaining the performance of recent deep learning networks and the principles by which the human auditory cortex might reduce the sample complexity of learning in speech recognition. Our theory and experiments suggest that invariant representations are crucial for learning from complex, real-world data like natural speech. Our model is built on basic computational primitives of cortical neurons, thus making an argument about how representations might be learned in the human auditory cortex.
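The template-and-pooling mechanism the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes circular time shifts as a stand-in for the smooth, identity-preserving transformations (such as tempo or vocal-tract-length changes) mentioned above, and pools dot products with the transformed templates via a max, which yields a signature invariant to that transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform_versions(template):
    # Hypothetical stand-in for smooth, identity-preserving transformations:
    # the full orbit of circular time shifts of the template.
    return [np.roll(template, s) for s in range(len(template))]

def invariant_signature(x, templates):
    # One pooled value per stored template ("neuron" weights): the max of
    # dot products of x with every transformed version of that template.
    # Pooling over the whole orbit makes the value shift-invariant.
    return np.array(
        [max(x @ g_t for g_t in transform_versions(t)) for t in templates]
    )

# Toy check: a signal and a time-shifted copy get identical signatures.
templates = [rng.standard_normal(32) for _ in range(5)]
x = rng.standard_normal(32)
s1 = invariant_signature(x, templates)
s2 = invariant_signature(np.roll(x, 3), templates)
```

In the paper's hierarchical setting, such modules would be cascaded, with each layer pooling over a different source of transformation variability.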
- CBMM Funded