Research Meeting: Module 1

October 1, 2019 - 4:00 pm to 5:00 pm
Speaker: 

Andrzej Banburski, Poggio Lab

Title: Biologically-inspired defenses against adversarial attacks

Abstract: Adversarial examples are a broad weakness of neural networks and represent a crucial problem that needs to be solved to ensure tamper-resistant deployments of neural network-based AI systems. They also offer a unique opportunity to advance research at the intersection of the science and engineering of intelligence. We propose a novel approach based on the hypothesis that the primate visual system has an architecture similar to deep networks yet seems immune to today's adversarial attacks. What, then, are the aspects of visual cortex that are not captured by present models? We focus on the eccentricity-dependent sampling array of the retina and on the existence of a set of spatial-frequency channels at each eccentricity. Our proposal will test whether systems based on these properties are robust against adversarial examples.
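
As a rough illustration of the two retinal properties named in the abstract, the short Python sketch below average-pools an image in annuli that widen with eccentricity and splits it into difference-of-Gaussians spatial-frequency bands. This is only a sketch under assumed parameters (image size, ring count, blur scales) and is not the speakers' proposed model.

# Illustrative sketch only: eccentricity-dependent pooling plus a bank of
# spatial-frequency channels. All parameters here are assumptions chosen
# for demonstration, not the speakers' model.
import numpy as np
from scipy.ndimage import gaussian_filter

def eccentricity_pooling(image, fixation, n_rings=6):
    """Average-pool within annuli whose width grows with distance from the
    fixation point, mimicking the retina's coarser peripheral sampling."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fixation[0], xs - fixation[1])   # distance from fixation
    edges = np.concatenate(([0.0], np.geomspace(1.0, ecc.max() + 1.0, n_rings)))
    pooled = np.zeros_like(image, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (ecc >= lo) & (ecc < hi)
        if ring.any():
            pooled[ring] = image[ring].mean()             # one value per annulus
    return pooled

def frequency_channels(image, sigmas=(1, 2, 4, 8)):
    """Split an image into band-pass (difference-of-Gaussians) channels,
    a stand-in for a set of spatial-frequency channels."""
    img = image.astype(float)
    blurred = [gaussian_filter(img, s) for s in sigmas]
    bands = [img - blurred[0]]
    bands += [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]
    return bands

if __name__ == "__main__":
    img = np.random.rand(128, 128)          # placeholder grayscale input
    fix = (64, 64)                          # assumed fixation at image center
    out = [eccentricity_pooling(b, fix) for b in frequency_channels(img)]
    print(len(out), out[0].shape)           # 4 channels, each 128 x 128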

Organizer: 

Details

Date: October 1, 2019
Time: 4:00 pm to 5:00 pm
Venue: MIT 46-5165
Address: MIT Building 46, 43 Vassar Street, Cambridge MA 02139