Adversarial examples and human-ML alignment

Date Posted:  July 24, 2020
Date Recorded:  July 23, 2020
Speaker(s):  Shibani Santurkar, MIT
Machine learning models today achieve impressive performance on challenging benchmark tasks. Yet these models remain remarkably brittle: small perturbations of natural inputs, known as adversarial examples, can severely degrade their behavior.
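
To make this concrete, here is a minimal sketch of one classic way to craft such perturbations, the fast gradient sign method (FGSM); the PyTorch setup, the pixel range [0, 1], and the budget eps are illustrative assumptions rather than details from the tutorial.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """One signed-gradient step of size eps (assumed L-infinity budget)
    that perturbs inputs x to increase the classification loss on labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss,
    # then clip back to the (assumed) valid pixel range [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

Even a per-pixel budget as small as 8/255, imperceptible to a human, is typically enough to flip the predictions of a standard (non-robust) classifier.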

Why is this the case?

In this tutorial, we take a closer look at this question and demonstrate that the observed brittleness can largely be attributed to the fact that our models solve classification tasks quite differently from humans. Specifically, viewing neural networks as feature extractors, we study how the features extracted by neural networks diverge from those used by humans, and how adversarially robust models can help bridge this gap.
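
As background on where such robust models come from, below is a hedged sketch of adversarial training with projected gradient descent (PGD): instead of fitting clean inputs, each step fits worst-case perturbed ones. The attack budget, step size, and loop structure are illustrative assumptions, not the tutorial's exact recipe.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent: iterated signed-gradient steps,
    projected back into an L-infinity ball of radius eps around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball, then into the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def robust_training_step(model, optimizer, x, y):
    """One adversarial-training step: train on the attacked batch
    rather than the clean one."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```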

Additional tutorial info:

The tutorial will include demos in Colab notebooks, so please bring a laptop. In these demos, we will explore the brittleness of standard ML models by crafting adversarial perturbations, and then use these perturbations as a lens to inspect the features the models rely on.
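
As a preview of that "lens" idea, one simple probe is a large-budget targeted attack that drags an image toward a chosen class. The sketch below is hypothetical: the function name, hyperparameters, and L-infinity projection are assumptions (published feature visualizations of this kind often use an L2 threat model instead).

```python
import torch
import torch.nn.functional as F

def toward_class(model, x, target, eps=1.0, alpha=0.05, steps=60):
    """Targeted, large-budget perturbation: repeatedly step x to *decrease*
    the loss on a chosen target class, revealing which input patterns the
    model's features associate with that class."""
    y = torch.full((x.shape[0],), target, dtype=torch.long, device=x.device)
    x_vis = x.clone().detach()
    for _ in range(steps):
        x_vis.requires_grad_(True)
        loss = F.cross_entropy(model(x_vis), y)
        grad, = torch.autograd.grad(loss, x_vis)
        # Descend the loss toward the target class (assumed L-infinity ball).
        x_vis = x_vis.detach() - alpha * grad.sign()
        x_vis = (x + (x_vis - x).clamp(-eps, eps)).clamp(0, 1)
    return x_vis.detach()
```

On a standard model, the resulting images tend to look like noise superimposed on the original; on an adversarially robust model, they tend to acquire human-recognizable features of the target class, which is exactly the human-ML gap the tutorial examines.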

GitHub link for demos:

Suggested reading (in order of importance):

Speaker Bio:
Shibani Santurkar is a PhD student in the MIT EECS Department, advised by Aleksander Mądry and Nir Shavit. Her research focuses on two broad themes: developing a precise understanding of widely used deep learning techniques, and finding avenues to make machine learning methods robust and reliable. Prior to joining MIT, she received a bachelor's degree in electrical engineering from IIT Bombay. She is a recipient of the Google Fellowship in Machine Learning.