We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nyström Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls regularization and computation at the same time. Extensive experimental analysis shows that the considered approach achieves state-of-the-art performance on benchmark large-scale datasets.
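The core computation behind Nyström Kernel Regularized Least Squares can be sketched as follows: subsample m landmark points, form the corresponding kernel blocks, and solve a reduced m-dimensional regularized system instead of the full n-dimensional one. This is a minimal illustrative sketch, not the paper's implementation; the function names (`nystrom_krls`, `predict`), the Gaussian kernel choice, and the small diagonal jitter are assumptions made here for a self-contained example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_krls(X, y, m, lam, sigma=1.0, rng=None):
    # Subsample m landmark points uniformly at random (the "subsampling
    # level" m acts as a computational regularizer alongside lam).
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)
    Xm = X[idx]
    Knm = gaussian_kernel(X, Xm, sigma)   # n x m cross-kernel block
    Kmm = gaussian_kernel(Xm, Xm, sigma)  # m x m landmark kernel block
    n = len(X)
    # Reduced regularized least squares system:
    #   (Knm^T Knm + lam * n * Kmm) a = Knm^T y
    # A small jitter on the diagonal keeps the solve numerically stable.
    A = Knm.T @ Knm + lam * n * Kmm + 1e-10 * np.eye(m)
    a = np.linalg.solve(A, Knm.T @ y)
    return Xm, a

def predict(Xm, a, Xtest, sigma=1.0):
    # Evaluate the Nystrom estimator at new points.
    return gaussian_kernel(Xtest, Xm, sigma) @ a
```

The reduced solve costs O(n m^2 + m^3) rather than the O(n^3) of full kernel ridge regression, which is why choosing m is simultaneously a statistical and a computational decision.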

NIPS 2015. https://papers.nips.cc/paper/5936-less-is-more-nystrom-computational-regularization

Abstracts of the 2014 Brains, Minds, and Machines Summer Course (2014). Nadav Amir, Tarek R. Besold, Raffaello Camoriano, Goker Erdogan, Thomas Flynn, Grant Gillary, Jesse Gomez, Ariel Herbert-Voss, Gladia Hotan, Jonathan Kadmon, Scott W. Linderman, Tina T. Liu, Andrew Marantan, Joseph Olson, Garrick Orchard, Dipan K. Pal, Giulia Pasquale, Honi Sanders, Carina Silberer, Kevin A. Smith, Carlos Stein N. de Briton, Jordan W. Suchow, M. H. Tessler, Guillaume Viejo, Drew Walker, Leila Wehbe, Andrei Barbu, Leyla Isik, Emily Mackevicius, Yasmine Meroz. http://hdl.handle.net/1721.1/100189