%0 Journal Article
%J Analysis and Applications
%D 2023
%T Implicit regularization with strongly convex bias: Stability and acceleration
%A Villa, Silvia
%A Matet, Simon
%A Vũ, Bằng Công
%A Rosasco, Lorenzo
%X

Implicit regularization refers to the property of optimization algorithms of being biased towards a certain class of solutions. This property is relevant for understanding the behavior of modern machine learning algorithms, as well as for designing efficient computational methods. While the case where the bias is given by a Euclidean norm is well understood, implicit regularization schemes for more general classes of biases are much less studied. In this work, we consider the case where the bias is given by a strongly convex functional, in the context of linear models and data possibly corrupted by noise. In particular, we propose and analyze accelerated optimization methods and highlight a trade-off between convergence speed and stability. Theoretical findings are complemented by an empirical analysis of high-dimensional inverse problems in machine learning and signal processing, showing excellent results compared to the state of the art.
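
To make the setting concrete, below is a minimal sketch (illustrative only, not the authors' exact algorithm) of iterative regularization for a linear model with a strongly convex bias: gradient steps are taken on a dual variable for the elastic-net-type functional R(w) = 0.5*||w||^2 + lam*||w||_1, and the number of iterations, rather than an explicit penalty, plays the role of the regularization parameter. The function names, step-size choice, and toy data are assumptions made for this example.

```python
import numpy as np

def soft_threshold(v, lam):
    # Gradient of the conjugate of R(w) = 0.5*||w||^2 + lam*||w||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def dual_gradient_regularization(X, y, lam=0.5, n_iter=300, tau=None):
    """Iterative regularization with a strongly convex bias (illustrative sketch).

    Plain gradient descent is run on a dual variable; the primal iterates
    w_k = grad R*(v_k) form a regularization path indexed by the iteration
    number, so early stopping replaces an explicit penalty."""
    n, d = X.shape
    if tau is None:
        tau = 1.0 / np.linalg.norm(X, 2) ** 2  # safe step: 1 / ||X||_op^2 for a 1-strongly convex bias
    v = np.zeros(d)                            # dual variable
    path = []
    for _ in range(n_iter):
        w = soft_threshold(v, lam)             # primal iterate w_k = grad R*(v_k)
        v -= tau * X.T @ (X @ w - y)           # gradient step on the least-squares data fit
        path.append(w.copy())
    return path                                # early-stopped iterates = regularization path

# Toy usage: sparse signal observed through noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
w_true = np.zeros(200)
w_true[:5] = 1.0
y = X @ w_true + 0.05 * rng.standard_normal(50)
iterates = dual_gradient_regularization(X, y)
```

An accelerated variant would replace the plain dual gradient step with a momentum (Nesterov-type) update, which is where the trade-off between convergence speed and stability discussed in the abstract arises.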

%B Analysis and Applications
%V 21
%P 165-191
%8 01/2023
%G eng
%U https://www.worldscientific.com/doi/10.1142/S0219530522400139
%N 01
%! Anal. Appl.
%R 10.1142/S0219530522400139

%0 Conference Paper
%B NIPS 2015
%D 2015
%T Learning with incremental iterative regularization
%A Rosasco, Lorenzo
%A Villa, Silvia
%X

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and we prove strong universal consistency, i.e., almost sure convergence of the risk, as well as sharp finite-sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning and rely on integrating statistical and optimization results.
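
As a concrete illustration of the scheme described above (a sketch under simple assumptions, not necessarily the exact procedure analyzed in the paper), the following incremental gradient method for least squares fixes the step size and the cyclic order a priori, so the only quantity left to choose is the number of epochs, which acts as the regularization parameter via early stopping. The function name and toy data are assumptions for this example.

```python
import numpy as np

def incremental_gradient_least_squares(X, y, n_epochs=50, step=None):
    """Cyclic incremental gradient for least squares (illustrative sketch).

    One epoch = one sweep over the n examples in a fixed order, each step
    using a single example's gradient. Step size and ordering are fixed a
    priori; only the number of epochs is tuned (early stopping)."""
    n, d = X.shape
    if step is None:
        step = 1.0 / np.max(np.sum(X ** 2, axis=1))  # conservative step from the largest row norm
    w = np.zeros(d)
    per_epoch = []
    for _ in range(n_epochs):
        for i in range(n):                           # single-example gradient of 0.5*(x_i.w - y_i)^2
            w -= step * (X[i] @ w - y[i]) * X[i]
        per_epoch.append(w.copy())                   # keep the iterate after each full pass
    return per_epoch

# Toy usage: the epoch index plays the role of the regularization parameter.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
w_star = rng.standard_normal(20)
y = X @ w_star + 0.1 * rng.standard_normal(100)
epoch_iterates = incremental_gradient_least_squares(X, y, n_epochs=50)
```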

%B NIPS 2015
%G eng
%U https://papers.nips.cc/paper/6015-learning-with-incremental-iterative-regularization

%0 Book Section
%B Empirical Inference
%D 2013
%T On Learnability, Complexity and Stability
%A Villa, Silvia
%A Rosasco, Lorenzo
%A Poggio, Tomaso
%E Schölkopf, Bernhard
%E Luo, Zhiyuan
%E Vovk, Vladimir
%X

We consider the fundamental question of learnability of a hypothesis class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik. We survey classic results characterizing learnability in terms of suitable notions of complexity, as well as more recent results that establish the connection between learnability and stability of a learning algorithm.

%B Empirical Inference
%I Springer Berlin Heidelberg
%C Berlin, Heidelberg
%P 59-69
%@ 978-3-642-41135-9
%G eng
%U http://link.springer.com/10.1007/978-3-642-41136-6
%& 7
%R 10.1007/978-3-642-41136-6_7