Nancy Kanwisher: The Functional Architecture of Human Intelligence
Date Posted:
June 5, 2014
Date Recorded:
June 5, 2014
CBMM Speaker(s):
Nancy Kanwisher
Brains, Minds and Machines Summer Course 2014
Description:
Topics: Is the functional organization of the brain based on special-purpose vs. general-purpose machinery? Brief history of efforts to find specialized machinery (Spearman, Gall, lesion studies); introduction to fMRI methods and data; validation of fMRI through replication of physiological results; using fMRI to identify specialized face areas, with appropriate controls for other functions; response properties in the fusiform face area; selective cortical regions for color, scenes, movement, human body and parts, pitch, speech sounds, meaning of a sentence, theory of mind, complex thinking; future directions for fMRI studies
NANCY KANWISHER: Hello, everyone. Get your attention-- I'm starting promptly, because we have a lot of stuff to get through today. I'm Nancy Kanwisher. Welcome to my hometown.
That little schoolhouse you see over there, I went to first through third grade in that schoolhouse. The bike path you may have been using, my mom is responsible for that bike path. She fought the good fight against evil developers, and turned the train tracks into a bike path that you can use. And those black birds you see flying around here, I published my first scientific paper on them.
OK, enough on me-- we're going to talk about, actually, one other little thing-- beyond Woods Hole being a fabulous place for science and full of natural beauty, it has all kinds of other merits, like its cultural life. For example, it has its own folk orchestra, and this Saturday night at 8 o'clock in the community hall, which is across from the Captain Kidd-- I'm sure you've figured out where the Captain Kidd is by now-- there is a contra dance with the live Woods Hole Folk Orchestra. You don't need any background. There's a caller.
I recommend it. It's a hoot. If you're self conscious, start at the Captain Kidd and get a drink, and then cross the street, and go over to the contra dance. It'll be fun.
OK, so today I'll be talking about functional MRI. There's two agendas here. One is to get a little bit of scientific content out. But the bigger agenda is actually more methodological-- to talk about what we can learn about intelligence from functional MRI by looking at one of the best instances of intelligence around, the human brain. Functional MRI is one of the better methods for looking at that.
And so the agenda, probably mostly illegible here, but I'm going to start with absolutely rudimentary stuff, assuming you know nothing about functional MRI at all. I know that's not true for many of you. And I just encourage those who know anything at all about functional MRI to go for a walk for 45 minutes. And then things will get a little more sophisticated.
So I'll talk for 45 minutes. Then van Veen is going to talk about something called multiple voxel pattern analysis, which is a neat method that's been around for seven or eight years in functional MRI. And then later this afternoon at whenever, four or something, Alex Kell will talk about encoding models, which are more recent and really quite sophisticated, interesting approaches to functional MRI data.
And then Sam, who has just arrived, will talk at the end of that session on some data-driven methods to discover structure in functional MRI data, especially clustering, PCA, and ICA. So that's the agenda for the afternoon. Since the stuff I'm going through is so basic, I'm going to go fairly quickly. But I'm doing that because I'm trusting you guys to stop me if I'm not clear, or if you disagree with what I'm saying, or anything like that.
All right, so the key question addressed in this course, according to the website, anyway, is, how does the brain produce intelligent behavior? And how might we be able to replicate intelligence in machines? And you've already heard loads of different kinds of approaches to try to understand this question. This is a classic, multi-level question that you can go at in many, many different ways.
But today, we're going to talk about one route into that question, and that is to look at functional architecture. And that is to ask, what are the basic components of the system? This sounds really loud to me. Am I blasting your eardrums, or is it OK? It's all right? All right.
OK, so what do I mean by that? Very simply, we can ask whether human intelligence is the product of a number of highly specialized components, each solving a very different specific single problem. The analogy I like-- it's not mine, it's from Leda Cosmides and John Tooby, evolutionary biologists-- is a Swiss army knife. So are our minds and brains like Swiss army knives, with different components, each solving a very different problem? Or is human intelligence more like just generalized computational power or generic computing ability?
And before we dive into the methods for asking that question, let's consider why we might care. I have a bunch of reasons why I think this is a worthwhile question to ask. First, I just think it's of inherent scientific interest. It's just one of the most basic, fundamental questions we can ask about the organization of the human mind and brain, of whether this is basically a single unit that we're supposed to understand as a single piece, or whether it has natural subdivisions that are different from each other in some important way.
Second, this method of looking for components is a classic way to try to understand any complicated system. It's been used in many different sciences throughout scientific history. If you're confronting something so complicated you don't know how to get started, one of the easier approaches is just to say, OK, what are its basic pieces?
And if you can figure out what the basic pieces are, then maybe someday you can figure out how each piece works. And then, if you ever achieve scientific Nirvana, you can figure out how they all work together as a whole. We won't get there today.
And a third reason to ask about the basic components is, I think, that just enumerating the basic cognitive, brain-based components of mind and brain already, in itself, gives us some powerful clues about the nature of the representations and computations that go on in each of those. And the reason is that the scope of a computational device already tells us a lot about what it might be doing. So for example, if you have a piece of code or a piece of brain that's all and only designed for face recognition, not also for recognizing the shapes of visually presented words and scenes and objects, then you can imagine very different kinds of representations compared to the case where the same piece of code or the same piece of brain has to accomplish all of those goals.
So simply by understanding the computational scope of a piece, we've already got some clues about how it works and what kinds of representations and computations it might contain. Those are my reasons. So there's lots of ways to investigate this question.
We could go through a few historic examples. Here's Charles Spearman, here, who published a paper in 1904 in the American Journal of Psychology. The paper was called "General Intelligence," and it was sandwiched between an article discussing the nature of the soul and an account of the psychology of the English Sparrow. And this article did the following thing, or reported on the following thing-- Spearman went into two different grade schools and tested a large number of kids on a whole bunch of measures of academic ability. So he got exam grades in various subjects, and other measures of academic ability across a bunch of different subfields.
He then also had the kids do sensory discrimination tasks, like which of these tones is louder? Which of these weights is heavier? Which of these lights is brighter? Just basic sensory discrimination tasks.
And he measured each kid's ability on all of these things. And then he wanted to know, what is the relationship between those abilities? So now imagine any two of those abilities, and imagine a scatter plot where each dot is a kid. What do you expect to see? Just a mess? Or do you expect to see a relationship between different tasks?
Let's take, for example, math exam scores and discriminating the weight of two objects. Do you think those would be correlated, those two abilities across kids? How many think those might be correlated? A few. How many think those wouldn't be correlated?
OK, good. See, it's a good empirical question. It's not obvious in advance. What Spearman found is that everything was correlated with everything else, to an astonishing degree-- even things that seemed completely unrelated, like math ability and judging the weight of objects. And Spearman essentially invented factor analysis, or an early version of it, that he applied to all of the behavioral data.
And from that, he derived this notion of g. g was the common factor that explained variance across subjects in their just generic ability to do different tasks. And Brits are much less uptight about talking about intelligence than Americans are. It makes us nervous. But really, setting aside all the social implications and squeamishness, the concept is a basic characterization of human minds and brains. The very fact that there's this generic ability that's a consistent property of an individual is a deep fact about minds and brains, which Spearman discovered with these nice, low-tech behavioral measures.
However, importantly, Spearman didn't think everything was g. He's much less famous for this, but he also talked about not just g, the generic factor, but s, the specific factor for each task. And he pointed out that different tasks differed from each other in the degree to which they were g-weighted-- that is, performance was predicted by g-- versus s-weighted.
And these things vary, so some tasks like learning Greek were more g-weighted, and other tasks like some music abilities were more s-weighted. And I think we see echoes of the same idea, of g and s, in the architecture of the brain we can see with functional MRI.
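Here's a minimal sketch, in Python with simulated numbers, of the logic behind pulling a single common factor out of a battery of correlated test scores. It's an illustration of the idea of g, not Spearman's original 1904 procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_kids, n_tasks = 300, 8

g = rng.normal(size=n_kids)                       # one general ability per child
g_loadings = rng.uniform(0.4, 0.9, size=n_tasks)  # how g-weighted each task is
s = rng.normal(size=(n_kids, n_tasks))            # task-specific ("s") variance
scores = g[:, None] * g_loadings + s              # observed test scores

# Every task correlates with every other task (Spearman's "positive manifold").
R = np.corrcoef(scores, rowvar=False)
off_diag = (R.sum() - n_tasks) / (n_tasks**2 - n_tasks)
print("mean off-diagonal correlation:", round(off_diag, 2))

# The first principal component of the correlation matrix is one way to
# estimate the common factor (its sign is arbitrary, hence the abs below).
eigvals, eigvecs = np.linalg.eigh(R)
pc1 = eigvecs[:, -1]
z = (scores - scores.mean(0)) / scores.std(0)
g_hat = z @ pc1
print("correlation of recovered factor with true g:",
      round(abs(np.corrcoef(g_hat, g)[0, 1]), 2))
```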
OK, next method-- you've probably heard of it-- the infamous method of phrenology, invented by Franz Joseph Gall, shown here. Gall inferred that there are distinct mental faculties, each with different regions in the brain. And he did this by feeling bumps on the skull, and relating them to the particular abilities of the individual. And from this, he inferred 27 different mental faculties. My favorites are [INAUDIBLE], filial piety, and veneration.
And Gall was a bit ahead of his time. I think he had the right idea and a crappy method. If he had functional MRI, just think what he could've accomplished. But he didn't. He had bumps, and that was it.
Only slightly later, the lesion method of studying patients with the great fortune of having damage to part of the brain, and looking at what happened to cognitive abilities in those individuals, was gaining traction. So Flourens here was a physiologist who made lesions in the brains of pigeons, and rabbits, and stuff. And although this quote doesn't really fit that exactly, he did acknowledge that some basic abilities and sensation and motor control did differentially inhabit different parts of the brain.
But he looked a lot for differentiation of cognitive abilities in the brains, and really couldn't find it, and argued that all sensory and volitional faculties exist in the cerebral hemispheres, and must be regarded as occupying concurrently the same seat in these structures. In other words, the brain isn't differentiated with respect to cognitive abilities. Everything's on top of everything else. So he was arguing against Gall and others, but this was fiercely contested at the time.
And the idea of specialization of the brain for any aspect of high-level cognition was really not taken seriously until Paul Broca stood up in front of the Anthropology Society in Paris in 1861 and announced that the left frontal lobe was the seat of speech. And he made that argument on the basis of his patient, Tan, whose brain is shown here with this nasty black hole up there in his left frontal lobe. And Tan was so-called because that was all he could say after he sustained that damage to his left frontal lobe.
And Broca pointed out that most of Tan's cognitive abilities, if tested appropriately, were intact. And he had a selective loss of the ability to produce speech. And therefore, this was one of the early strong arguments based on the lesion method that higher-level mental abilities were not all completely generic and occupying all the same parts in the brain.
OK, so this debate went on, and in fact continues to this day. And I'd say there's widespread agreement now that of course basic vision, hearing, motor control-- these things clearly live in different parts of the brain. You can see that with any number of methods in humans and animals. And people don't fight that fight anymore.
But they still do very much fight the fight about whether higher level perceptual and cognitive abilities are differentiated in the brain. So that's where this debate stands now. OK, so the method we'll talk about today is just another method to look at this. And that's functional MRI.
Functional MRI is done using the same kind of MRI machines that exist in hospitals all over the world, that look like this. I'm sure you've seen them, or been inside them. And the basics, for anybody that's been living in a cave for the last 15 years, are that neural activity is metabolically expensive.
So if a bunch of neurons, say right there, start firing, then blood flow increases to just that part of the brain. It could've been otherwise. It could've been the whole brain got more blood flow, and then this method wouldn't work. Luckily, the blood flow control is local, so that we can look at changes in blood flow. And it reflects local brain activity.
The signal is oddly backwards. The blood flow increase more than compensates for the oxygen use, so actually the signal's based on a relative decrease-- not an increase-- in deoxyhemoglobin. Oxyhemoglobin and deoxyhemoglobin are magnetically different. And it's that magnetic difference that the MRI signal picks up on.
So the point of all that is, it's an extremely indirect causal chain from neural activity to changes in blood flow to changes in concentration of oxygenated and deoxygenated hemoglobin. And so you might think, with that very indirect causal chain, that this would be a crappy signal. And in many ways it is, but it's remarkable what you can do with it anyway. Question?
AUDIENCE: Yes, I'm just curious. So what exactly is the mechanism for the increased blood flow to these parts of the brain? Is it [INAUDIBLE] that it happens this way? [INAUDIBLE] just happens to flow more at these regions where there's [INAUDIBLE]?
NANCY KANWISHER: Yeah, the mechanism of-- it's called neurovascular coupling. It's been worked out in quite some detail in just the last five, eight years. And a big part of it is that you have glia, which are these cells in the brain that aren't neurons. And there are glia that have kind of one hand sitting on a synapse, monitoring activity, and another sitting on a blood vessel, controlling dilation of that blood vessel, directly linking neural activity to blood flow.
AUDIENCE: So its [INAUDIBLE]?
NANCY KANWISHER: I can't hear, sorry.
AUDIENCE: So it's like a transistor in there [INAUDIBLE].
NANCY KANWISHER: I wouldn't take it that far. At least there's a pretty direct link there. OK, so let me tell you a little bit about the basics of this signal here.
So why do we use functional MRI? It has the highest spatial resolution of any method for monitoring neural activity in the human brain non-invasively while looking at the whole brain. So there's really nothing that competes on those fronts-- spatial resolution, non-invasiveness, and whole-brain coverage.
So a little bit about the nature of the data-- typically, when you scan, if you're covering the whole brain, you'll have somewhere between 30,000 and 50,000 individual pixels that you'll look at. A 3D pixel in an MRI image is called a voxel, for volume pixel. So we have tens of thousands of those.
And then that's our whole brain volume, composed of a set of slices through the brain. So imagine that whole 50,000-voxel data set, and you take one of those every, say, two seconds. This is one of the key advances that made functional MRI possible-- the MR physicists figured out how to take a whole set of images all in a second, enabling us to do that every second or two or three, to make movies of brain activity. So it's very high-dimensional data.
OK, so more concretely, when you do a functional MRI experiment-- these are ancient slides-- so anyway, you choose a bunch of slices. You've got to choose the orientation. Here's a slice of the brain like this. And you're choosing a bunch of slices in which you'll do your functional scanning. So those are just different orientations you can choose in the brain.
And then for each slice, you'll get a functional image that looks kind of like this. Functional images are much blurrier than anatomical images. So I'm sure you've seen beautiful pictures with very sharp borders of gray matter and white matter in the brain. Those are anatomical images. Functional images, in contrast, look kind of like mashed potatoes, but have more interesting characteristics.
OK, so I think I've said all of this. So for example, for each slice in the brain, if you're sampling once every two seconds, essentially you have a stack of images like that. Each image is composed of a set of voxels. And you can grab any one voxel and plot its time course over the period of the scan. OK, everybody following me? I'm just trying to give a basic picture of the format of the data. So that's sort of the raw data. There's even rawer data than that. We'll skip that.
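To make that concrete, here's a minimal sketch in Python, with made-up dimensions and simulated values, of what that raw 4D data set looks like and how you'd pull out a single voxel's time course.

```python
import numpy as np

TR = 2.0                                  # seconds between whole-brain volumes
n_volumes = 150                           # a 5-minute scan at TR = 2 s
shape = (64, 64, 12)                      # x, y, z (12 slices): ~50,000 voxels

rng = np.random.default_rng(1)
# 4D array: x, y, z, time -- simulated here, normally loaded from the scanner.
data = rng.normal(loc=1000.0, scale=10.0, size=shape + (n_volumes,))

print("voxels per volume:", np.prod(shape))
print("4D data shape (x, y, z, t):", data.shape)

# Grab any one voxel and look at its time course over the scan.
voxel_ts = data[32, 40, 6, :]             # arbitrary voxel coordinates
times = np.arange(n_volumes) * TR
for t, v in zip(times[:5], voxel_ts[:5]):
    print(f"t = {t:5.1f} s   signal = {v:7.1f}")
```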
Once you have this, then there's a universe of ways of analyzing these data. And all of those are-- the first pass ones depend deeply on the precise nature of this linkage between neural activity and the blood flow response. So this is called the hemodynamic response function, or the BOLD, which stands for blood oxygenation level-dependent signal.
So what I'm going to plot is the BOLD response, the blood flow response, in, for example, visual cortex. So imagine we're looking at a voxel in my visual cortex back here. I'm fixating at a point here. And a pattern comes on briefly, and flickers for 100 milliseconds, a tenth of a second, and goes off.
We know from loads of neurophysiology work in monkeys that if a visual stimulus comes on for about 100 milliseconds, the neural activity will be tightly time-locked to that, with only about a 10th of a second delay. And when the visual signal goes off, the neural activity will stop-- doesn't continue afterward. OK, so we can draw an arrow in the timeline here.
The visual stimulus comes on at time zero. 100 milliseconds later, neurons fire. We're not seeing that with functional MRI. We know it from other methods, like monkey physiology. And then what you see is, this is the BOLD response. Sorry, the x-axis is probably not visible.
This peak is about six seconds after the neural activity. All of the neural activity was crunched in here in the first fifth of a second. And yet the blood flow response is really sluggish, and doesn't peak until about five, six seconds later, and then goes down and takes a long time to die out. Everybody get what I'm plotting here? So blood flow response after a little, punctate activation of some visual neurons. Yeah, uh-huh?
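Here's a minimal sketch of that sluggish response, using a simple gamma-based approximation to the hemodynamic response function. The exact parameterization is an assumption for illustration, not the specific canonical HRF of any one software package.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, undershoot=16.0, ratio=1/6.0):
    """A double-gamma-style HRF: a positive gamma peaking a few seconds after
    the neural event, minus a smaller, later gamma for the undershoot."""
    h = gamma.pdf(t, a=peak) - ratio * gamma.pdf(t, a=undershoot)
    return h / h.max()

t = np.arange(0, 30, 0.1)                 # seconds after a brief burst of firing
h = hrf(t)
print("BOLD peaks roughly", round(t[np.argmax(h)], 1),
      "seconds after the neural activity")
```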
AUDIENCE: [INAUDIBLE] to what's happening, actually, neurally?
NANCY KANWISHER: So you mean, what is the point spread function of the BOLD signal relative to neural activity? Not known exactly, because what you would need is, you'd need to be actually measuring neurophysiologically. I guess you can do this in monkeys. It's fairly restricted, on the scale of a few millimeters.
So more precisely than that, you'd have to do a very fancy experiment, where you'd be measuring-- you'd need a whole array of electrodes to know exactly the range of the spread of the actual, gold standard, electrically recorded neural activity. And then you'd want to do functional MRI in the same animals. And I'm sure that's been done, but I'm not retrieving this moment who's done that. Anyway, it's going to spread a few millimeters.
AUDIENCE: And this [INAUDIBLE] associated with [INAUDIBLE] behavior is very similar in different parts of the brain? It doesn't change?
NANCY KANWISHER: It's a great question. It's quite variable across subjects. And it matters a lot, not just whether you're up here or back there in the brain. And that is where you are with respect to the blood vessels, also.
So all of the signals come in from blood vessels. That's where our signal is. But it's also a problem, because it means the biggest signals you get are right on top of blood vessels.
And actually, you want to know about the activity of next-door neurons, not when those draining veins are carrying the change in oxygenated hemoglobin. So one of the key spatial limits of functional MRI is the degree to which you can see the tiniest blood vessels, which are closest to the brain tissue you really want to monitor. And that's a function of many things, including the field strength and the size of your voxels and such.
AUDIENCE: [INAUDIBLE] because you have to correctly recalibrate depending on where you're measuring?
NANCY KANWISHER: Yeah, there's all kinds of questions that, in my view, you just have no business asking when functional MRI is your method, because it just can't answer them. And a precise characterization of the shape of the neural response on the scale of millimeters over a patch of cortex is, I would say, damned near impossible to unconfound from where the vessels are.
People try heroically-- for example, by getting venograms, imaging the vessels with some other kind of MRI protocol so you know where the vessels are, and using that to adjust your interpretation of the functional MRI signal accordingly. But maybe I just don't have the patience for that kind of thing. That's a losing battle, I think. You've got to choose the questions that the method can answer. Sam?
AUDIENCE: [INAUDIBLE] most people don't--
NANCY KANWISHER: What the shape of this thing, yeah.
AUDIENCE: [INAUDIBLE] most people, when they [INAUDIBLE], don't try to estimate that function, but just assume they know the function. So you can roughly-- [INAUDIBLE] variables [INAUDIBLE].
NANCY KANWISHER: Yeah, Leila.
AUDIENCE: There's something about [INAUDIBLE] small negative [INAUDIBLE] response, like 100 or 200 milliseconds after [INAUDIBLE]?
NANCY KANWISHER: Yeah, the initial dip. So yeah.
AUDIENCE: [INAUDIBLE]
NANCY KANWISHER: There's been endless discussion of whether there's a little, teeny dip there or not. I thought it was dead a few years ago. And in fact, I found a review article by one of the people who first reported the initial dip, who was basically conceding that he can't replicably find it.
And then just, like, two years ago-- I forget the author on this, but a paper-- oh, actually, [INAUDIBLE], who's taken very seriously, and who is one of the skeptics about the initial dip, came out with a paper saying, yes, it exists, and here it is. And the reason that people make a big deal about it was the hope that that initial dip would be more spatially local. And that if you could detect it properly, you would be closer to the actual neural activity, and have less of a point spread function. And maybe, but boy, you need to be very courageous to want to suffer with this tiny, little signal compared to the bigger, blurrier one that comes afterward.
OK, so just thinking about this hemodynamic response function, we can already tell a lot about what functional MRI will be good for and what it won't. Because it's so slow, this is not a good method for looking at precisely timed neural events on the scale of tens of milliseconds, where most of the action is in neural codes and neural computation-- especially if you're interested in something that's computed fast, like vision or language, where entire mental operations happen within about a tenth of a second.
We can't even remotely distinguish the component steps that go into that. It would be lovely if we could, but it just can't be done with this method.
And spatially, you can get down to around a millimeter if you scan at super high fields, like seven Tesla. You might be able to get down to maybe half a millimeter on a side on a voxel, but your signal's getting very low at that point, and you have a hard time detecting anything in your fabulously tiny voxel. So there's a serious trade-off there.
And the important thing about that is that, even under the best of circumstances, a single voxel-- that is, your basic unit of analysis with functional MRI-- already contains hundreds of thousands of neurons. So the real miracle of functional MRI is that we ever see anything at all. Neurophysiologists who are used to recording from individual neurons are appalled when they learn this. And what can we say? That's all we've got.
So the reason this works, evidently, is that you have a lot of clustering in the brain, where nearby neurons on the scale of a few millimeters are doing similar enough things that if you grab a big chunk of them and average over their neural activity with this indirect measure, you can still find stably different kinds of responses. But it's important to know what the limits are, because they're significant.
Another limit is that with functional MRI, there's no absolute signal that means anything interpretable. We can't get something equivalent to firing rate. All you can do is compare this BOLD response between pairs of conditions.
So it becomes very important to decide what your baseline is. And when somebody tells you they found, with functional MRI, a part of the brain that's active when people do complicated mental process X, the very first thing you should say is, compared to what? Everything hinges on compared to what. Often, it's compared to staring at a dot, at which point it's deeply uninteresting that brain area A responded to task B. So everything depends on that difference.
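As a minimal illustration of that point, here's a sketch in Python with simulated numbers: since the raw MR value means nothing on its own, responses are typically expressed as percent signal change relative to some baseline condition, and the interesting quantity is the difference between conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Mean raw MR signal in one voxel during blocks of each condition (simulated).
fixation = rng.normal(1000.0, 5.0, size=10)   # staring at a dot (baseline)
task_a   = rng.normal(1020.0, 5.0, size=10)   # e.g. viewing faces
task_b   = rng.normal(1010.0, 5.0, size=10)   # e.g. viewing objects

baseline = fixation.mean()
pct_a = 100 * (task_a.mean() - baseline) / baseline
pct_b = 100 * (task_b.mean() - baseline) / baseline

print(f"condition A vs fixation: {pct_a:+.2f}% signal change")
print(f"condition B vs fixation: {pct_b:+.2f}% signal change")
print(f"A vs B (the contrast that actually matters): {pct_a - pct_b:+.2f}%")
```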
Another challenge with using the functional MRI signal-- it's a big downer-- we'll get all the downers out of the way and go on-- is that the exact physiological basis of the BOLD signal is not well understood. We don't know whether it's coming from action potentials or synaptic activity, excitation and inhibition. These things are all metabolically expensive. They all require blood flow. All of them are probably measured by the BOLD response. And that's just life.
And a shortcoming that functional MRI shares with other recording methods is that it is only correlational, not causal. So you can say what parts turn on when people do XYZ, but you can't say whether that activity is necessary for people to do XYZ. This method just won't answer those questions. You need to complement it with other methods that can.
OK, so despite all of these caveats, the method works surprisingly well. The first functional MRI papers were published in the early '90s. And by the mid '90s, people had already clearly found retinotopic cortex in humans.
So for example, here's a later study from Wandell and Heeger and others, where you have subjects fixate here. You have an annulus of flickering checkerboard. And that annulus moves out in space while the subject, me, is fixating here. So the stimulus is now becoming more eccentric in my visual field. And you can use that to map positions in the visual field across visual cortex here.
So this is a piece of brain. If you took my right hemisphere off, and looked at the medial surface of my left hemisphere, that's this here-- temporal lobe, front, back. Everybody oriented? This is visual cortex here.
And what it's showing you is that foveal cortex, right at the center of [INAUDIBLE], that pink stuff, is right back here in the back of the head-- on me, right about there. And as you go out in the visual field, you move forward in the brain. So that's mapping, essentially, r in polar coordinates.
You can also map theta by having a wedge of flickering checkerboard that moves around the visual field like that while the subject fixates here. And if you do that, you see a map of [INAUDIBLE] or polar angle. And you can see, for example, that the lower visual field is represented in the upper part of visual cortex, and the upper visual field in the lower part. Stuff is upside down in visual cortex. OK, so this is just basic use of the BOLD signal to discover things that were already known from other methods.
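Here's a minimal sketch, with one simulated voxel, of the standard traveling-wave logic behind these maps: because the annulus or wedge sweeps the visual field periodically, each voxel responds periodically at the sweep frequency, and the phase of that response tells you which eccentricity or polar angle the voxel prefers. The specific numbers and the Fourier-based recipe here are illustrative, not any particular lab's pipeline.

```python
import numpy as np

TR = 2.0
n_vol = 160                               # a 320 s scan
n_cycles = 8                              # the annulus sweeps the field 8 times
t = np.arange(n_vol) * TR
sweep_freq = n_cycles / (n_vol * TR)      # cycles per second

rng = np.random.default_rng(3)
true_phase = 1.3                          # this voxel "prefers" one eccentricity
voxel = np.cos(2 * np.pi * sweep_freq * t - true_phase) + rng.normal(0, 0.5, n_vol)

# Fourier component at exactly the stimulus (sweep) frequency.
spectrum = np.fft.rfft(voxel - voxel.mean())
freqs = np.fft.rfftfreq(n_vol, d=TR)
k = np.argmin(np.abs(freqs - sweep_freq))          # bin at the sweep frequency

estimated_phase = -np.angle(spectrum[k])           # cosine-phase sign convention
print(f"true phase {true_phase:.2f} rad, "
      f"estimated {estimated_phase % (2 * np.pi):.2f} rad")
# Mapping phase -> position in the visual field gives the eccentricity (r) or
# polar-angle (theta) map across cortex.
```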
And in similar ways, people found-- you guys have probably heard about him-- [INAUDIBLE] visual motion area, MT, much studied in monkeys, was found in humans in the mid '90s. People mapped out somatotopic maps in somatosensory cortex, and tonotopic maps in auditory cortex. And all of those things were very nice for replicating things known previously from other methods in humans and animals, and served also to validate the MRI method.
Despite all its flaws, it can detect these known things. But it would be more fun to use this method to discover new things. And so when I came on the scene in the mid '90s, that's what I wanted to do. And one of the first experiments that I did, which I'll talk about in a little bit of detail here just to give you a concrete sense of an experiment, was to ask whether there are parts of the brain that are specialized for face perception.
The reason I asked that question was that I was going to get kicked off the scanner any minute. I didn't have a grant. The scanner was expensive. And if I didn't hit a home run really fast, that would be it for me and functional MRI. And there were lots and lots of reasons, from every other method-- from behavioral studies, from lesion studies, and from intracranial recordings in humans that had already been done-- all of which suggested there was probably a part of the inferior right hemisphere that was selectively involved in face recognition. But nobody had ever seen it in action.
So we set up to look for it, doing this very simple experiment of having people lie in the scanner, and showing them pictures of faces, and then pictures of objects. And when you do that, you just ask, for each voxel in the brain, whether the MR signal was higher when the subject was looking at faces than when they were looking at objects. You get this little, teeny blob here.
Now let me orient you. This is a weird slice through the bottom of the brain, mostly horizontal. This is the back of the head here. Left and right are flipped. So this patch right in there would be, in me, straight in right here, something like that-- the bottom surface of the brain, sitting right on top of the cerebellum. OK, everybody oriented?
And what the colors are telling you is that the statistics are saying that those voxels right there produced a higher response when the subject was looking at faces than when they were looking at objects. But you should never believe statistics. You should ask to see the raw data.
Here's the raw data from those two voxels in the brain right there. And you can see, just eyeballing it, that over this five-minute scan, those voxels produced a higher response when this person was looking at faces, the dark gray bars, than when they were looking at objects, the lighter bars. You can also see, importantly, that the signal was higher when they were looking at objects than when they were staring at a dot, the blue bars there.
So it's not like this region is shut off entirely when you look at something other than faces. It's just a lot more active when you look at faces than anything else. Everybody clear on this? This is just the bare basics. OK.
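Here's a minimal sketch, on simulated data, of the voxelwise contrast behind that statistical map: for every voxel, compare the responses during face blocks with the responses during object blocks, and keep the voxels where faces reliably beat objects. The threshold and numbers are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
n_voxels = 50000
n_blocks = 12                                     # blocks per condition

# Block-averaged signal per voxel per block (arbitrary units), simulated.
face_resp = rng.normal(1000.0, 8.0, size=(n_voxels, n_blocks))
obj_resp  = rng.normal(1000.0, 8.0, size=(n_voxels, n_blocks))
# Plant a small "face-selective" cluster so the contrast has something to find.
face_resp[:200] += 25.0

t_vals, p_vals = ttest_ind(face_resp, obj_resp, axis=1)
face_selective = (t_vals > 0) & (p_vals < 0.0001)  # crude voxelwise threshold
print("voxels passing faces > objects:", face_selective.sum(), "of", n_voxels)
```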
All right, that's one subject. How systematic is this? Pretty much everybody has [INAUDIBLE] like that. This is me. Here's another subject, another subject with two little bits, another subject.
You can see there's substantial variability across individuals in exactly where this thing lands. But there's a family resemblance, in that pretty much everybody has one of these things in approximately the same location. So it seems to be a pretty basic, replicable part of the architecture of the brain. Yeah.
AUDIENCE: Are these similar cross-sections?
NANCY KANWISHER: Approximately, yeah, but it wasn't done very carefully. This is ancient data, but yes, approximately similar cross-sections. Yes.
AUDIENCE: So that the--
NANCY KANWISHER: Can you [INAUDIBLE]?
AUDIENCE: Yeah, about the method here-- do you wait six seconds for the entire transient to go before, or do you [INAUDIBLE] or something?
NANCY KANWISHER: Yeah, good question. I finessed all of that. Actually, with these data, I analyzed these data in Excel. I had nobody to show me anything. There was no software. I went in, and I analyzed it in Excel.
I [INAUDIBLE] the time course here, and just say I can see the delay for the hemodynamic lag, so I'll just skip the first few time points in each block, and take the end of each block, and dump things into a t-test. That's essentially what I did. Methods have obviously moved far beyond that. Now people do all kinds of things.
The standard thing to do is to model the expected response by convolving this with the hemodynamic response function. That's the standard thing, and there's nothing wrong with it. But I actually prefer methods where you don't have to assume anything at all. So if you have enough data and enough power, you can use analysis methods that make no assumptions about the shape of the response, or very few assumptions.
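Here's a minimal sketch of that standard approach, in Python on a simulated voxel: build a boxcar for each condition's blocks, convolve it with an assumed HRF (the same rough gamma-based approximation as above, not any package's official canonical HRF), fit all the predictors by least squares, and read off the faces-minus-objects contrast from the fitted weights.

```python
import numpy as np
from scipy.stats import gamma

TR, n_vol = 2.0, 150
t_hrf = np.arange(0, 30, TR)
hrf = gamma.pdf(t_hrf, a=6) - gamma.pdf(t_hrf, a=16) / 6.0
hrf /= hrf.max()

def boxcar(onsets, duration, n_vol, TR):
    """1 during each block, 0 elsewhere, sampled once per TR."""
    x = np.zeros(n_vol)
    for onset in onsets:
        x[int(onset / TR): int((onset + duration) / TR)] = 1.0
    return x

# Alternating 20 s blocks: faces at 20, 100, 180 s; objects at 60, 140, 220 s.
faces_box   = boxcar([20, 100, 180], 20, n_vol, TR)
objects_box = boxcar([60, 140, 220], 20, n_vol, TR)
X = np.column_stack([
    np.convolve(faces_box, hrf)[:n_vol],     # predicted face response
    np.convolve(objects_box, hrf)[:n_vol],   # predicted object response
    np.ones(n_vol),                          # constant (baseline) term
])

# Simulate one face-selective voxel and fit the model by least squares.
rng = np.random.default_rng(5)
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 100.0 + rng.normal(0, 0.5, n_vol)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta (faces, objects, baseline):", np.round(beta, 2))
print("faces-minus-objects contrast:", round(beta[0] - beta[1], 2))
```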
So all I've done so far is show you how you find a patch of the brain that responds more to a than b, in this case, faces than objects. That certainly isn't enough to demonstrate selectivity of that region for faces. So there's lots of things to do from here.
The method that I prefer for this kind of question is to identify that region in each subject individually. Having identified this particular set of voxels in this particular subject as their candidate face-selective region, you then run the new tests-- all the other tests that you've probably already thought of, that we would need to establish that it's really face selective-- and look in those voxels you just found. And the reason to do that is, as you saw, that the region does not land in the same place across subjects. So if you want to characterize it, you first have to find it.
This is not completely standard in the field. A lot of people like to take different subjects, align their brains as well as possible in some standardized space, and then do analyses across subjects. If you get a significant result in that kind of analysis, that's a strength. But you're really throwing away a lot of specificity because you're averaging across anatomically different brains. And you're really blurring the hell out of your data when you do that.
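Here's a minimal sketch, on simulated data, of that subject-specific functional-ROI logic: use localizer runs to pick this subject's own face-selective voxels, then quantify responses to new conditions only in independent runs, inside those voxels. The split, threshold, and condition names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
n_voxels, n_blocks = 20000, 12

def block_responses(selective_boost):
    """Simulated per-block responses (voxels x blocks) for one condition."""
    resp = rng.normal(1000.0, 8.0, size=(n_voxels, n_blocks))
    resp[:150] += selective_boost          # a small face-selective patch
    return resp

# --- Localizer runs: define the fROI in THIS subject ---
loc_faces, loc_objects = block_responses(25.0), block_responses(0.0)
t_vals, p_vals = ttest_ind(loc_faces, loc_objects, axis=1)
froi = (t_vals > 0) & (p_vals < 0.0001)
print("fROI size (voxels):", froi.sum())

# --- Independent runs: measure NEW conditions inside that fROI ---
test_faces, test_hands = block_responses(25.0), block_responses(2.0)
print("mean response in fROI, faces:", round(test_faces[froi].mean(), 1))
print("mean response in fROI, hands:", round(test_hands[froi].mean(), 1))
```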
So I won't go through this in great detail. If you just think of a piece of brain that responds more to faces than objects, what else might that piece of brain be doing other than specifically carrying out face recognition or face perception? Well, lots of things. We're social primates. Maybe we attend more to faces than to toasters and dogs and stuff.
Maybe it responds not just to faces, but to any human body part. Maybe it responds only to one view of faces, not another. Maybe it responds to round things, and on and on. So to make a claim that a region of the brain is specific for a particular function, you need to consider all those alternative hypotheses, and you need to test them.
So we spent an idiotic amount of time doing that. I'll just give you an example. Here is another subject's face, probably the same one. Here's an experiment where we showed faces.
We now hid the hair. We showed a three-quarter view instead of a front view. And we had subjects do a consecutive comparison task, where if any two consecutive images were the same, they had to press the button. This task is not very hard with faces. We're good at that.
With hands, which is our control condition here, the task is extremely difficult by design. We're very bad at discriminating hands. If you get shown a line-up of hands, you can't pick out your own hand. We just don't pay attention to this, don't care about them in the same way.
And what we found was that the part of the brain that we identified with this contrast of faces versus objects nonetheless responds much more to faces than hands, even though subjects are working harder and attending more to hands than faces, and even though hands are also human body parts and share some low-level perceptual properties with faces.
So that's just an example of one of the ways you can knock out some of those alternative hypotheses. We don't think it's visual attention, because people were paying more attention to the hands than the faces. It's not just any human body part-- it's not strongly activated by hands. It's not just front views, and so forth. I'm supposed to shut up now, right?
So this is just an illustration of how you can use functional MRI to find a patch of the brain that looks interesting, and drill down in this specific way, with repeated testing to rule out alternative accounts, to try to get some functional precision about what that part of the brain is doing. Now, I told Ben to make me shut up in five minutes, so I have to do triage. OK, skipping ahead, here's a summary of lots of other conditions we've tested. And I'll just say that that part of the brain responds very strongly to a wide variety of different kinds of face images. And it responds about half as much to a wide variety of different kinds of images that aren't faces.
So I think what we can say is that this region is strongly selective for faces. It's present in essentially every normal person [INAUDIBLE]. Since then, we've gone back to the Spearman method and actually found that face recognition ability is not correlated with IQ, exactly consistent with the idea that the g and the s factors that Spearman defined behaviorally, as different components of mind, may have a relationship to different components of brain that you can discover with functional MRI. And all of this raises a huge number of questions that I will get to before I stop. But first, your question.
AUDIENCE: In all these experiments, if you're doing different contrasts, do you see the FFA dance around?
NANCY KANWISHER: Yeah, good question-- mostly not. I just didn't have time to make a slide, but one of my lab techs just sent me a bunch of data last week that-- I've seen this 100 times, but it still made me smile. We had a case where we had people looking at colored, three-second video clips of faces versus three-second colored video clips of dynamic objects, like colored kids' windup toys and stuff like that. So the contrast was movies of faces versus objects, in color.
And in another condition, we had very schematic, cartoonish black and white line drawings of faces versus objects. And the activation maps for those two are, to a voxel, identical across widely different perceptual properties. And so that's the kind of data that I think really shows you that-- of course, back in the visual cortex you'll get different things, where things are selective for color and global properties. But in high-level extrastriate regions, the face-selective bits really, to a first approximation, don't give a damn about the low-level properties. Yeah.
AUDIENCE: My question is, how do you see the selectivity [INAUDIBLE] as opposed to given by perceptual expertise? So for example, in that case of faces versus hands, they are all subordinate-level discriminations. But suppose we are experts at faces. And then I think [INAUDIBLE] and [INAUDIBLE] showed that if you're a car expert or a bird expert, it actually works through the same FFA.
NANCY KANWISHER: Here's what they showed. They showed, in the best paper-- which was done, actually, half of it in my lab-- I mean, I know these data. I'm not claiming that that's why it's good. I'm saying I know these data. I analyzed a lot of them.
What they showed is that the FFA responds like this-- zero is staring at a dot. That's as close to turning off your brain-- your visual system-- as you can get. Here's the response when you look at discrimination of objects you're not expert at. Here's the response when you look at faces. And here's the response when you look at objects of expertise.
OK, so you're a car expert. You're obsessed with cars. So cars come on, the signal's a little bit higher than it is for you when birds come on. And vice versa if you're a bird expert.
But here's the thing-- in that paper, they didn't look at nearby cortex. If you're a car expert, you're excited about cars. We know that visual attention can modulate activity all over the back of the brain.
In at least seven subsequent studies, people have looked not only in the face area, but in adjacent, object-selective cortex. And what they find is this signal goes way up. So it's really an attention signal.
To the extent that there's any expertise effect, it's really an effect of attention. It's bigger in nearby cortex than in the FFA. So the expertise hypothesis has been fully empirically rejected by now, despite what you've been told. We can talk more later.
OK, so what am I going to do? I'm out of time. What I'm going to do is-- I was going to show you a whole bunch of other selective regions of the brain. OK, I'm going to show you-- this is slightly cheesy.
I did a TED talk a few months ago, and so I made a cheesy little video. This is my brain with everything mapped on it. So let's start by inflating the cortex so you can see everything.
The dark bits are the bits that were inside the sulcus before we inflated it. So let's rotate it around, now, to see my face area. There we go. On the inside surface, now on the bottom, that's my fusiform face area, right there.
Here in purple are regions that respond preferentially to color. They're not as selective as the face regions are for faces, but they're biased to respond more to color than luminance. There are also scene-selective regions, which you can see on that medial surface-- two of them here in green, and another one out on the lateral surface, right there. This is how you find visual motion area, MT-- moving versus stationary dots. And that's the yellow little bit on the bottom.
There's also a region that we reported way back that responds selectively to images of human bodies and body parts. That's in the lime green on the bottom, partially overlapping with visual motion area MT. There are also, as Sam will talk about more later this afternoon, specializations within human auditory cortex, which is up here on the top of the temporal lobe, where you can find a region that responds selectively to pitch. Namely, it responds to sounds like this, but much more weakly to sounds like this.
They're perfectly recognizable, perfectly familiar, but they have less pitch, and that's what that region responds selectively to-- the pitch. Also nearby are regions here in purple that respond selectively to speech sounds. Here's now my left hemisphere, showing many of those same regions with a similar organization, but because it's my left hemisphere, we can see other cool stuff. Like in pink, those are regions that respond selectively when I understand the meaning of a sentence.
And the most astonishing region to me is this one shown in turquoise here. Rebecca Saxe will be talking about this, I guess, tomorrow. This is her work. She found that that region in turquoise responds selectively when you think about what another person is thinking. That may seem astonishing to you. It seems astonishing to me. It's true. She will show you the data.
We're back now on the right hemisphere here. In addition to all these highly specialized regions, for each of which we've gone through the kind of battery that I just sketched for the case of faces-- there's been a whole universe of experiments nailing the specificity of each of these things. In addition to those, there are also parts of the brain that are like the exact opposite-- regions of the brain that are active when you do anything difficult at all.
And these regions are shown in my brain here in white. And the icon is Sam trying to solve math problems here. So we don't understand these regions, but it says that in addition to these highly specialized bits of brain, we also have very generic regions that will step up and engage no matter what the task.
So to wrap up quickly, is human intelligence a product of specialized machinery or generic machinery? Well, like most questions in human psychology, the answer is, "both." I think this is a rich, fun picture of the architecture of human intelligence.
There's a bunch of other issues. I'll skip over the big challenges, because they will arise in Ben's talk-- for example, whether the peak responses are the important thing. We can get back to that. I just want to end with the vast space of fundamental questions that haven't yet been answered. So here's my brain rotating for entertainment as I go through these.
Why do we have brain specializations for these functions and apparently not others? Well, which other ones do we have? Sam will talk about methods to look in a data-driven way for other specializations in the brain.
What are the computations that go on in each of these regions, and what kinds of representations do they hold? That's harder, but then we'll talk about pattern analysis, which is a method to try to characterize representations in each region of the brain. And Alex will talk about encoding models, which are ways to really precisely characterize representations in each part of the brain.
Of course, if you really want to get at this with great precision, you really need single-unit data, which is just not what we can do in humans with the available methods. But just to allude to the vast space of other fundamental questions-- how do these things get wired up in the brain in development? We know next to nothing about that.
How did these things evolve? How did the language regions and the theory of mind regions evolve, since these are functions that do not exist in macaques? Are there homologous regions in monkeys? And what do they do in monkeys? There are some ways to get at that.
What are the actual circuits that carry out these computations? Obviously, functional MRI isn't going to be able to answer that question. But by working together with other methods-- like, for example, the stuff that Gabriel [INAUDIBLE] talked about, or Jim DiCarlo's work-- we may get there someday for at least some of these regions.
Also fundamentally, what is the connectivity of these regions? And how do they interact? These are fundamental questions that we have some kind of sort of maybe efforts to address. They're not great.
And finally, what is the causal role of each of these regions in behavior? These are hugely important questions, with some methods to get at them. So why don't I take questions while Ben hooks up his computer? Thanks.
[APPLAUSE]
AUDIENCE: Are there other people with brain damage in the areas of the [INAUDIBLE] specialized?
NANCY KANWISHER: OK, let me say importantly-- the gray stuff there, it's not like we have shown that it's generic. The gray stuff is just-- actually, some of it-- I just didn't mark it on there-- some of it is just motor cortex and somatosensory cortex, and stuff like that, that are very well characterized. They just didn't make it on my map. But you're talking about the white areas.
AUDIENCE: Yeah, and usually, when you think of neuroscience done with people that have some brain damage, it's damage to specialized areas.
NANCY KANWISHER: Yeah, so I wanted to include all this, and ran out of time. So yeah, the white stuff-- I sort of skipped over all of this. But those white regions-- John Duncan and Adrian Owen and their colleagues have been writing papers for a dozen years, pointing out that those regions are very generic, in the sense that they're engaged in a wide variety of different tasks. However, they've been basing those conclusions on meta-analyses and group analyses.
And I mentioned before that a group analysis blurs the data badly. And if you really want to understand the specificity of a region, you really don't want to do that. So we've been collaborating with John Duncan recently, looking at some of our data where we have a whole suite of difficult and easy conditions across language tasks, spatial working memory tasks, arithmetic, you name it.
And we can ask voxel by voxel whether the same voxel responds to increased difficulty across all of those domains. It does, to a shocking degree. So Duncan's really right about that. Those regions are very domain general.
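Here's a minimal sketch, on simulated data, of that voxel-by-voxel question: for each task domain, contrast hard versus easy blocks, then ask which voxels show the hard-greater-than-easy effect in every domain. The domains, thresholds, and numbers are illustrative, not the actual analysis.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n_voxels, n_blocks = 30000, 8
domains = ["language", "spatial working memory", "arithmetic"]

md_voxels = np.zeros(n_voxels, dtype=bool)
md_voxels[:300] = True                     # plant a domain-general patch

passes_all = np.ones(n_voxels, dtype=bool)
for name in domains:
    easy = rng.normal(1000.0, 8.0, size=(n_voxels, n_blocks))
    hard = rng.normal(1000.0, 8.0, size=(n_voxels, n_blocks))
    hard[md_voxels] += 25.0                # MD voxels care about difficulty, not domain
    t, p = ttest_ind(hard, easy, axis=1)
    passes_all &= (t > 0) & (p < 0.01)     # hard > easy in this domain

print("voxels showing hard > easy in EVERY domain:", passes_all.sum())
```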
But now the question is, what do they do? His story is that they're involved in basically solving novel problems, and they're at the essence of g, and creativity, and all of that. And he's got a shred of data-- several bits of data, but one that's kind of ghoulish but fascinating. He did a study of I think 80 patients with brain damage, where he could categorize the volume of damage in the brain as a function of whether it was in those what he calls multiple demand or generic regions, versus whether it was in the other regions.
He measured IQ in these patients after their brain damage, estimated pre-brain-damage IQ, and asked, how much does IQ drop as a function of the volume of brain lost within those domain-general regions versus the other regions? And the number is something like, it drops about five IQ points per cubic centimeter, or something like that, in the domain-general regions. If you have damage outside those regions, your IQ is unaffected. Instead, you may become paralyzed, or prosopagnosic, or aphasic, but your IQ doesn't go down-- again, consistent with this idea that there really are functional divisions, between the specialized regions and each other, and between them and the more domain-general regions, which seem to underlie g and more generic ability.