SGD and Weight Decay Provably Induce a Low-Rank Bias in Deep Neural Networks

Publication Type: CBMM Memos
Year of Publication: 2023
Authors: Galanti, T., Siegel, Z., Gupte, A., Poggio, T.
Abstract

In this paper, we study the bias of Stochastic Gradient Descent (SGD) to learn low-rank weight matrices when training deep ReLU neural networks. Our results show that training neural networks with mini-batch SGD and weight decay causes a bias towards rank minimization over the weight matrices. Specifically, we show, both theoretically and empirically, that this bias is more pronounced when using smaller batch sizes, higher learning rates, or increased weight decay. Additionally, we predict and observe empirically that weight decay is necessary to achieve this bias. Finally, we empirically investigate the connection between this bias and generalization, finding that it has a marginal effect on generalization. Our analysis is based on a minimal set of assumptions and applies to neural networks of any width or depth, including those with residual connections and convolutional layers.
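To make the setting concrete, below is a minimal sketch (not taken from the memo) of how one might train a small ReLU network with mini-batch SGD plus weight decay and then inspect the numerical rank of its weight matrices. The architecture, data, batch size, learning rate, weight decay value, and rank threshold are all illustrative assumptions, chosen only to mirror the regime the abstract describes (small batches, non-trivial learning rate, non-zero weight decay).

```python
import torch
import torch.nn as nn

# Illustrative small ReLU network; not the architecture used in the memo.
model = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Mini-batch SGD with weight decay (L2 regularization); hyperparameters are assumptions.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)

def numerical_rank(W, tol=1e-3):
    """Count singular values above tol times the largest singular value."""
    s = torch.linalg.svdvals(W)
    return int((s > tol * s[0]).sum())

for step in range(1000):
    # Random data stands in for a real dataset; batch size 16 is deliberately small.
    x = torch.randn(16, 32)
    y = torch.randint(0, 10, (16,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, report the numerical rank of each weight matrix.
for name, p in model.named_parameters():
    if p.ndim == 2:
        print(name, numerical_rank(p.detach()))
```

Under the trends reported in the abstract, decreasing the batch size, increasing the learning rate, or increasing `weight_decay` in a setup like this would be expected to drive the reported ranks lower, while setting `weight_decay=0` would be expected to remove the effect.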

DSpace@MIT: https://hdl.handle.net/1721.1/148230

Download: Low-rank bias.pdf
CBMM Memo No: 140

CBMM Relationship: 

  • CBMM Funded