Sobolev Independence Criterion: Non-Linear Feature Selection with False Discovery Control.

Date Posted:  May 5, 2020
Date Recorded:  April 28, 2020
Speaker(s):  Youssef Mroueh, IBM Research and MIT-IBM Watson AI lab
Description: 

Abstract: In this talk I will show how learning gradients helps us design new non-linear algorithms for feature selection and black-box sampling, and helps us understand neural style transfer. In the first part of the talk, I will present the Sobolev Independence Criterion (SIC), which relates to saliency-based methods in deep learning. SIC is an interpretable dependency measure that gives rise to feature importance scores. Sparsity-inducing gradient penalties are crucial regularizers for the SIC objective and for promoting the desired non-linear sparsity. SIC can subsequently be used for feature selection and false discovery rate control.
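To make the gradient-based importance idea concrete, here is a minimal NumPy sketch of the two quantities the abstract alludes to: a per-feature importance score built from the squared input gradients of a witness function, and a group-lasso-style sparsity penalty on those scores. The tiny two-layer witness function, its random weights, and all variable names are illustrative assumptions for this sketch, not the trained network or exact objective from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy witness function f(x) = w2 . tanh(W1 x); weights are
# random stand-ins, not a model trained on the SIC objective.
d, h, n = 5, 8, 100            # input dim, hidden width, number of samples
W1 = rng.normal(size=(h, d))
w2 = rng.normal(size=h)
X = rng.normal(size=(n, d))    # samples at which gradients are evaluated

def grad_f(X):
    """Analytic input gradient of f(x) = w2 . tanh(W1 x), one row per sample.

    Chain rule: df/dx_j = sum_k w2_k * (1 - tanh^2(W1 x)_k) * W1_kj.
    """
    Z = np.tanh(X @ W1.T)          # (n, h) hidden activations
    return ((1 - Z**2) * w2) @ W1  # (n, d) per-sample input gradients

G = grad_f(X)
eta = (G**2).mean(axis=0)      # importance score per feature: E[(df/dx_j)^2]
omega = np.sqrt(eta).sum()     # sparsity-inducing penalty: sum_j sqrt(eta_j)

print(eta)    # non-negative scores; ranking them yields feature importance
print(omega)  # added (weighted) to the training objective as a regularizer
```

In the talk's setting, a penalty of this square-root (group-lasso) form drives many of the per-feature gradient energies toward zero during training, so the surviving features can be read off as the selected ones.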

Paper: http://papers.nips.cc/paper/9147-sobolev-independence-criterion.pdf

Joint work with Tom Sercu, Mattia Rigotti, Inkit Padhi, and Cicero Dos Santos.

Bio: Youssef Mroueh is a research staff member at IBM Research and a principal investigator in the MIT-IBM Watson AI lab. He received his PhD in computer science in February 2015 from MIT CSAIL, where he was advised by Professors Tomaso Poggio and Lorenzo Rosasco. In 2011, he obtained his engineering diploma from Ecole Polytechnique, Paris, France, and a Master of Science in Applied Mathematics from Ecole des Mines de Paris. He is interested in Deep Learning, Machine Learning, Statistical Learning Theory, and Computer Vision. He conducts modeling and algorithmic research in Multimodal Deep Learning.