Frivolous Units: Wider Networks Are Not Really That Wide

Title: Frivolous Units: Wider Networks Are Not Really That Wide
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Casper, S., Boix, X., D'Amario, V., Guo, L., Schrimpf, M., Vinken, K., Kreiman, G.
Conference Name: AAAI 2021
Date Published: 05/2021
Abstract

A remarkable characteristic of overparameterized deep neural networks (DNNs) is that their accuracy does not degrade when the network width is increased. Recent evidence suggests that developing compressible representations allows the complexity of large networks to be adjusted for the learning task at hand. However, these representations are poorly understood. A promising strand of research inspired by biology involves studying representations at the unit level, as it offers a more granular interpretation of the neural mechanisms. In order to better understand what facilitates increases in width without decreases in accuracy, we ask: Are there mechanisms at the unit level by which networks control their effective complexity? If so, how do these depend on the architecture, dataset, and hyperparameters? We identify two distinct types of “frivolous” units that proliferate when the network’s width increases: prunable units, which can be dropped out of the network without significant change to the output, and redundant units, whose activities can be expressed as a linear combination of others. These units imply complexity constraints, as the function the network computes could be expressed without them. We also identify how the development of these units can be influenced by architecture and a number of training factors. Together, these results help to explain why the accuracy of DNNs does not degrade when width is increased and highlight the importance of frivolous units toward understanding implicit regularization in DNNs.
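
The abstract's two definitions lend themselves to a simple operationalization. The sketch below is not the authors' code; the thresholds, variable names, and the fixed linear readout standing in for the rest of the network are all illustrative assumptions. It tests synthetic activations for both unit types with NumPy: a unit counts as prunable if zeroing its activations barely changes a downstream readout, and as redundant if its activations are well fit by a least-squares combination of the other units'.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic layer activations: n_samples inputs x n_units hidden units.
    n_samples, n_units = 1000, 8
    acts = rng.normal(size=(n_samples, n_units))
    # Force unit 7 to be a linear combination of units 0-2 (redundant by construction).
    acts[:, 7] = 2.0 * acts[:, 0] - 0.5 * acts[:, 1] + acts[:, 2]

    # Stand-in for the network's downstream computation: a fixed linear readout.
    # Unit 6 is given a near-zero weight, so it is prunable by construction.
    readout_w = np.ones(n_units)
    readout_w[6] = 1e-4

    def output(a, w):
        return a @ w

    def is_prunable(acts, w, unit, tol=1e-2):
        # "Prunable": zeroing the unit changes the output by a negligible
        # amount relative to the output's overall variance.
        pruned = acts.copy()
        pruned[:, unit] = 0.0
        change = np.mean((output(acts, w) - output(pruned, w)) ** 2)
        return change / np.var(output(acts, w)) < tol

    def is_redundant(acts, unit, r2_tol=0.99):
        # "Redundant": the unit's activity is (approximately) a linear
        # combination of the others', measured by least-squares R^2.
        y = acts[:, unit]
        X = np.delete(acts, unit, axis=1)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - np.var(y - X @ coef) / np.var(y)
        return r2 > r2_tol

    print([u for u in range(n_units) if is_prunable(acts, readout_w, u)])
    # -> [6]
    print([u for u in range(n_units) if is_redundant(acts, u)])
    # -> [0, 1, 2, 7]

Note that redundancy, so defined, is a property of a linearly dependent set rather than of a single unit: once unit 7 is built from units 0-2, each of those four units can be expressed in terms of the other three, so all four are flagged.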

URL: https://dblp.org/rec/conf/aaai/CasperBDGSVK21.html
Download: 1912.04783.pdf

CBMM Relationship: 

  • CBMM Funded