It has recently been demonstrated that human listeners are insensitive to the temporal detail of certain natural sounds known as "auditory textures". This implies that the detailed structure of these sounds is not encoded by the auditory system; instead, the system appears to retain a lossy, compressed representation formed by a set of time-averaged statistics. This result may seem surprising given the known temporal precision of the auditory system and its ability to encode fine temporal variation.
Here we seek to understand this phenomenon from a normative perspective. Auditory textures are typically generated by a large number of independent acoustic events occurring over an extended period of time. Do any of these properties determine whether a sound will be compressed by the auditory system? Does insensitivity to the temporal structure of texture sounds reflect a limitation of auditory perception, or is it a manifestation of an adaptive coding strategy? To answer these questions, we apply methods grounded in information theory, construct statistical models of natural sounds, and perform psychophysical experiments.