Implicit regularization with strongly convex bias: Stability and acceleration

Title: Implicit regularization with strongly convex bias: Stability and acceleration
Publication Type: Journal Article
Year of Publication: 2023
Authors: Villa, S., Matet, S., Vũ, B. Công, Rosasco, L.
Journal: Analysis and Applications
Volume: 21
Issue: 01
Pagination: 165-191
Date Published: 01/2023
ISSN: 0219-5305
Abstract

Implicit regularization refers to the property of optimization algorithms to be biased towards a certain class of solutions. This property is relevant to understanding the behavior of modern machine learning algorithms as well as to designing efficient computational methods. While the case where the bias is given by a Euclidean norm is well understood, implicit regularization schemes for more general classes of biases are much less studied. In this work, we consider the case where the bias is given by a strongly convex functional, in the context of linear models with data possibly corrupted by noise. In particular, we propose and analyze accelerated optimization methods and highlight a trade-off between convergence speed and stability. Theoretical findings are complemented by an empirical analysis on high-dimensional inverse problems in machine learning and signal processing, showing excellent results compared to the state of the art.
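The setting described in the abstract can be made concrete with a toy sketch. The example below is illustrative only and is not the algorithm analyzed in the paper: it runs plain and Nesterov-accelerated gradient ascent on the dual of a noisy linear model, using an elastic-net-type strongly convex bias J(w) = ||w||_1 + (1/(2*gamma))*||w||_2^2, so that the number of iterations acts as the regularization parameter. All dimensions, the noise level, and the choice of bias are arbitrary assumptions made for the sketch.

# Illustrative sketch (not the paper's exact method): iterative regularization
# for a linear model with a strongly convex bias
#   J(w) = ||w||_1 + (1/(2*gamma)) * ||w||_2^2.
# Plain and Nesterov-accelerated gradient ascent are run on the dual problem;
# early stopping provides the regularization, and acceleration trades faster
# convergence for higher sensitivity to noise.
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, sigma = 50, 200, 10.0, 0.05
X = rng.standard_normal((n, d)) / np.sqrt(n)
w_star = np.zeros(d); w_star[:5] = 1.0           # sparse ground truth
y = X @ w_star + sigma * rng.standard_normal(n)  # noisy measurements

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
primal = lambda lam: gamma * soft(X.T @ lam, 1.0)   # w(lam) = grad J*(X^T lam)
tau = 1.0 / (gamma * np.linalg.norm(X, 2) ** 2)     # dual step size (1/L)

def dual_ascent(T, accelerate=False):
    lam = mu = np.zeros(n)
    for t in range(T):
        lam_next = mu + tau * (y - X @ primal(mu))  # dual gradient step
        mu = lam_next + (t / (t + 3)) * (lam_next - lam) if accelerate else lam_next
        lam = lam_next
    return primal(lam)

for T in (10, 100, 1000):
    e_gd = np.linalg.norm(dual_ascent(T) - w_star)
    e_acc = np.linalg.norm(dual_ascent(T, accelerate=True) - w_star)
    print(f"T={T:5d}  plain: {e_gd:.3f}  accelerated: {e_acc:.3f}")

In experiments of this kind, the accelerated variant typically reaches a good reconstruction in fewer iterations but degrades faster once the noise dominates, which is one way to read the speed/stability trade-off highlighted in the abstract.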

URL: https://www.worldscientific.com/doi/10.1142/S0219530522400139
DOI: 10.1142/S0219530522400139
Short Title: Anal. Appl.

CBMM Relationship: CBMM Related