On the surprising similarities between supervised and self-supervised models
Date Posted:
December 16, 2020
Date Recorded:
December 12, 2020
Speaker(s):
Robert Geirhos, University of Tübingen & International Max Planck Research School for Intelligent Systems
SVRHM Workshop 2020
PRESENTER 1: Robert Geirhos is someone who I believe needs no further introduction despite still being a grad student. I feel very lucky to have met Robert at one of the BSS meetings probably two or three years ago, right when you started working with Felix and Matthias. And Robert is presenting from Germany, from the University of Tübingen.
And he's going to be speaking about the surprising similarities of supervised and self-supervised models. Robert, please take it away.
PRESENTER 2: Thanks, [INAUDIBLE], and also a big shout out to all of the organizers of this workshop. It's a fantastic lineup. I'm super excited.
So yes, I'll be talking about the similarities of supervised and self-supervised models, and I'll start by explaining some of the motivation behind the excitement that surrounds self-supervised learning right now. And I think part of that story is that supervised learning has quite a few problems. So we know it's not particularly robust.
It suffers from shortcut learning. It's clearly showing many aspects of non-human behavior. It has a texture bias, and it's incredibly label and data hungry.
So just standard, plain feed-forward supervised learning has quite a few issues. And the big hope is, of course, that self-supervised learning might overcome some of these issues and make progress. So the situation, to put it in a scheme here, is that standard supervised models are not particularly human-like and not particularly robust.
Humans, on the other hand, are quite robust, and they're arguably also fairly human-like. So the question really is, how far is self-supervised learning going to get us in this intriguing direction? And there are already quite a few super exciting studies that look into some of these aspects. We decided to look at this question from the behavioral perspective. So we'll be comparing human, self-supervised, and supervised behavior.
So the method that we used was essentially, we took a bunch of existing data sets, carefully collected in the lab with lots of observers. We compared that to quite a few supervised [INAUDIBLE] models and also to a range of self-supervised contrastive models. Within those contrastive models, we include SimCLR-- for the purpose of plotting, we're using a different color here because it sometimes stands out. But this is also just a self-supervised contrastive model.
And the paradigm that we used for collecting the human behavioral data was quite simple. We presented observers with an image for a brief presentation time. We then had a 1/f noise mask to limit the influence of recurrent processing. And then humans had to choose one of those 16 categories over there. And the same images were shown to CNNs, which we also asked to pick one of these categories.
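As a rough sketch of what the CNN side of this forced-choice paradigm can look like: a standard ImageNet classifier's probabilities are pooled into the 16 entry-level categories, and the highest-scoring category is taken as the decision. The mapping dictionary below is a hypothetical placeholder (the real experiments use a WordNet-based mapping), so this is an illustration under those assumptions, not the authors' exact pipeline.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical mapping from the 16 entry-level categories to ImageNet class indices.
# The real experiments use a WordNet-based mapping; only two categories are sketched here.
CATEGORY_TO_IMAGENET_IDS = {
    "cat": [281, 282, 283, 284, 285],   # tabby, tiger cat, Persian, Siamese, Egyptian cat
    "dog": list(range(151, 269)),       # the ImageNet dog breed classes
    # ... remaining 14 categories omitted
}

model = models.resnet50(pretrained=True).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_16way(img: Image.Image) -> str:
    """Return the model's 16-class decision by pooling ImageNet probabilities per category."""
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    scores = {cat: probs[ids].sum().item() for cat, ids in CATEGORY_TO_IMAGENET_IDS.items()}
    return max(scores, key=scores.get)

print(classify_16way(Image.open("example.png")))
```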
So let's see some results here. In total, we looked at three different questions, and the first one was noise robustness. So just the question: when we change certain aspects of an image, how fast does recognition accuracy degrade, both for humans and for different types of CNNs?
And just to give one example here, that's just standard blurring. When we increase the level of blurring, then at some point, obviously the classification accuracy drops. And it drops much faster for supervised models than it drops for human observers.
So the big question now is obviously, well, where are self-supervised models in this range? Are we making any progress here? And actually, for this particular type of distortion, blurring, we don't really see any striking differences here.
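To make the accuracy-versus-blur curve concrete, here is a rough sketch of such a sweep, reusing the hypothetical `classify_16way` helper from the sketch above. `TEST_IMAGES` is a made-up stand-in for the labeled 16-class test set, not the actual stimuli.

```python
from PIL import Image, ImageFilter

# Hypothetical (image path, true category) pairs standing in for the 16-class test set.
TEST_IMAGES = [("images/cat_001.png", "cat"), ("images/dog_001.png", "dog")]

def accuracy_under_blur(sigma: float) -> float:
    """Classification accuracy after Gaussian-blurring every test image with the given radius."""
    correct = 0
    for path, label in TEST_IMAGES:
        img = Image.open(path).convert("RGB").filter(ImageFilter.GaussianBlur(radius=sigma))
        correct += int(classify_16way(img) == label)
    return correct / len(TEST_IMAGES)

# Sweep increasing blur levels and watch where accuracy drops off.
for sigma in [0, 1, 3, 5, 7, 10]:
    print(f"blur radius {sigma}: accuracy {accuracy_under_blur(sigma):.2f}")
```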
But obviously, we're interested in more than just a single type of noise. So we looked at many of these models [INAUDIBLE] with lots of results. I'm not going to go into many details here, but essentially, the pattern that we can see is that self-supervised models and standard supervised models agree fairly well in most of those cases, except for those three that I'm highlighting here. And this is where the blue model, SimCLR, really seems to stand out.
And this is interesting because these are some types of noise, like uniform noise, contrast, and high-pass filtering, that weren't part of the training data augmentations used for SimCLR training. So this is some sort of emergent finding here. So for noise robustness, it's kind of a mixed story.
Some cases are quite interesting. Other cases, like low-pass filtering, are rather more disappointing, I would say, if you're looking to increase robustness. The second aspect that we looked at was trying to go to a deeper level and move beyond aggregated scores like accuracy.
That was error patterns. So essentially, two observers can have 50% accuracy each. But what we are really interested in is, well, are they finding the same images easy and the same images difficult?
And this is what we can look at here. The intuition, again, is that if two observers, or a human observer and a CNN, use the same strategy, they should also make errors on the same individual images. And this is what we can quantify using an error consistency metric.
Essentially, we're going to see some overlap in terms of making errors just by chance, so the metric is corrected for chance. If a value is zero, that means just random overlap.
And then if there's some systematic agreement here in terms of which images are easy and difficult, then we would see higher values here. And now we compare different groups. For example, human observers-- one human observer against other human observers.
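This chance correction is essentially Cohen's kappa computed on the two observers' binary correct/incorrect sequences. A minimal sketch, with made-up toy data at the bottom showing that two observers with equal accuracy but independently placed errors land near zero:

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Chance-corrected trial-by-trial error overlap (Cohen's kappa on correct/incorrect).

    correct_a, correct_b: binary arrays, 1 if that observer got the trial right, 0 otherwise.
    0 means only chance-level overlap; 1 means perfect agreement on which trials are hard.
    """
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                      # observed overlap: both right or both wrong
    p_a, p_b = a.mean(), b.mean()                # the two observers' accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)    # overlap expected from accuracies alone
    return (c_obs - c_exp) / (1 - c_exp)

# Made-up toy data: equal accuracy (~80%) but independently placed errors -> value near zero.
rng = np.random.default_rng(0)
obs1 = rng.random(1000) < 0.8
obs2 = rng.random(1000) < 0.8
print(error_consistency(obs1, obs2))
```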
And what we can see here is that there's actually quite consistent agreement in terms of finding the same images easy or difficult. So one human observer agrees with other human observers-- also, a supervised model agrees with most of the other supervised models. And self-supervised models agree with other self-supervised models.
Now an interesting question again is, well, what about across groups? How about humans versus other models, for example? So humans versus standard supervised models actually don't really show much of an agreement here.
Also, humans versus self-supervised models, not much of an agreement. But-- and this is super interesting and something that we haven't quite fully understood where this comes from-- supervised models and self-supervised models, despite being trained in a completely different way, show an extremely high agreement.
They completely agree on which images are easy and difficult. And this might indicate that they are using similar strategies despite being trained in a completely different way. So this is something I'm quite curious about and haven't fully understood yet.
And last but not least, we decided to look at shape versus texture bias. This is the last of the three experiments that we looked at. Are self-supervised models, now that they're not trained with labels, going to be biased towards texture or shape?
And the idea is that in standard images, you could use either feature, shape or texture, and get high accuracies. Cats have cat shape and cat texture. So for the purpose of the experiment, we can just go the other way around and create images that have conflicting shape and texture information.
So with this paradigm, we now investigated humans, [INAUDIBLE] supervised models, and self-supervised models. And if the response was cat in this example here, this would be counted towards shape bias. If it was the texture category, it would be counted towards texture bias.
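As a sketch of how such a bias score can be tallied from cue-conflict trials: the shape bias is the fraction of shape decisions among all trials where the response matched either the shape or the texture category, and responses matching neither cue are ignored. The triples below are made-up examples.

```python
def shape_bias(decisions):
    """Fraction of shape decisions among trials where the response matched either cue.

    decisions: (response, shape_category, texture_category) triples, e.g.
    ("cat", "cat", "elephant") for a cat-shaped image rendered with elephant texture.
    Responses matching neither cue are ignored.
    """
    shape_hits = sum(response == shape for response, shape, texture in decisions)
    texture_hits = sum(response == texture for response, shape, texture in decisions)
    return shape_hits / (shape_hits + texture_hits)

# Made-up toy trials: three shape decisions and one texture decision -> shape bias 0.75.
example_decisions = [
    ("cat", "cat", "elephant"),
    ("car", "car", "clock"),
    ("dog", "dog", "bottle"),
    ("clock", "bear", "clock"),
]
print(shape_bias(example_decisions))
```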
And we know already that humans have a strong shape bias-- no surprise here. And we also know already that all those 24 supervised models, that they're on the texture bias side. And now again, the question is, well, where are these exciting new self-supervized contrastive models?
And actually, they all seem to be on the texture bias side as well. This includes self-supervised SimCLR, which is a bit more in the direction of a shape bias but not completely there. So this is also an instance where we see a surprising similarity between supervised and self-supervised learning.
And yeah, just to wrap this up, what we've seen is that self-supervised and supervised models agree in the sense that they have a similar lack of robustness, with the exception of SimCLR, which shows some emerging benefits here of self-supervised learning, or perhaps of the particular data augmentations used during training. We don't exactly know yet.
We see that these groups make highly consistent errors, much more than what can be expected by pure chance agreement alone. And finally, they're all biased towards texture. So right now, we don't really see good models of human behavior, at least for these particular types of data sets that we used here.
But we also think that this is just the very start of this exciting self-supervised revolution that machine learning is currently undergoing. So the hope was that self-supervised learning is going to get us closer to human perception and also closer to robust models. And, well, empirically, our first results seem to indicate that it's actually much more similar to standard supervised learning than we would have expected.
Again, this is just for a particular type of self-supervised model, namely contrastive models. And there's much more to come. So this is just a snapshot in time, not a definite conclusion. And we already see some exciting results for SimCLR here, which is more robust, though not particularly more human-like. Even for those cases where SimCLR showed superior noise robustness, it went way beyond human robustness.
So there wasn't much progress in the direction of being human-like, but it's certainly an improvement in terms of noise robustness. And I guess there are many other interesting aspects that one could look at. And currently, I would say we have many more questions than we have answers. There are still many things that we don't really understand about this.
So why is it the case that supervised models and self-supervised models end up in such a similar space? At least that's what our data seems to suggest. There's no principled reason why this should be the case or at least none that's apparent to me at the moment.
So there's definitely a lot that remains to be understood about this. But also, we are just at the very start of this exciting self-supervised revolution, and I believe there's much more to come and investigate. And with that, I'd like to give a big shout out to my colleagues, collaborators, and mentors here. That's [INAUDIBLE] Matthias, Felix, and [INAUDIBLE]. And, well, thanks.