| Title | Word-level Invariant Representations From Acoustic Waveforms |
|---|---|
| Publication Type | Conference Paper |
| Year of Publication | 2014 |
| Authors | Voinea, S., Zhang, C., Evangelopoulos, G., Rosasco, L., Poggio, T. |
| Conference Name | INTERSPEECH 2014 - 15th Annual Conference of the International Speech Communication Association |
| Keywords | Invariance, Speech Representation, Theories for Intelligence |
Extracting discriminative, transformation-invariant features from raw audio signals remains a serious challenge for speech recognition. The issue of speaker variability is central to this problem, as changes in accent, dialect, gender, and age alter the sound waveform of speech units at multiple scales (phonemes, words, or phrases). Approaches for dealing with this variability have typically focused on analyzing the spectral properties of speech at the level of frames, consistent with the frame-level acoustic modeling usually applied in speech recognition systems. In this paper, we propose a framework for representing speech at the whole-word level and extracting features from the acoustic, temporal domain, without the need for spectral encoding or pre-processing. Leveraging recent work on unsupervised learning of invariant sensory representations, we extract a signature for a word by first projecting its raw waveform onto a set of templates and their transformations, and then forming empirical estimates of the resulting one-dimensional distributions via histograms. The representation and relevant parameters are evaluated for word classification on a series of datasets with increasing speaker-mismatch difficulty, and the results are compared to those of an MFCC-based representation.
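The signature computation described in the abstract (project the raw waveform onto templates and their transformations, then histogram the resulting one-dimensional projection values) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the choice of circular time shifts as the transformation group, the number of histogram bins, and all function and parameter names are assumptions made here for concreteness.

```python
import numpy as np

def word_signature(waveform, templates, shifts, n_bins=20):
    """Illustrative sketch of a template-projection signature.

    For each template t and each transformation g (here: a circular
    time shift), compute the projection <waveform, g(t)>, then summarize
    the projections per template with a normalized histogram. The
    concatenated histograms form the word-level signature.
    """
    # Normalize so projections lie in [-1, 1] (Cauchy-Schwarz).
    w = waveform / np.linalg.norm(waveform)
    signature = []
    for t in templates:
        t = t / np.linalg.norm(t)
        # One projection value per transformed template.
        projections = [np.dot(w, np.roll(t, s)) for s in shifts]
        # Empirical estimate of the 1-D projection distribution.
        hist, _ = np.histogram(projections, bins=n_bins, range=(-1.0, 1.0))
        signature.append(hist / len(projections))
    return np.concatenate(signature)
```

With `K` templates and `n_bins` bins, the signature is a length `K * n_bins` vector; because the histogram pools over all transformed versions of each template, the representation is (approximately) invariant to the chosen transformation of the input.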
- CBMM Funded