Systems Neuroscience Using fMRI: Studying the Brain to Understand the Mind (1:02:50)
Date Posted:
January 5, 2018
Date Recorded:
January 3, 2018
CBMM Speaker(s):
Idan Blank
Description:
Idan Blank, a post-doctoral researcher at MIT, explains how MRI and fMRI work, and highlights some important principles for the design of fMRI experiments that examine functional specialization in the brain. These principles are illustrated through two fMRI studies. The first reveals brain regions that are active when subjects perform intuitive physical inference, and the second identifies patterns of brain activity suggesting that reliving a recent social break-up engages representations shared with physical pain.
Resources:
Idan Blank’s website
Fischer, J., Mikhael, J. G., Tenenbaum, J. B. & Kanwisher, N. (2016) Functional neuroanatomy of intuitive physical inference, Proceedings of the National Academy of Sciences 113(34):E5072-E5081.
Kross, E., Berman, M. G., Mischel, W., Smith, E. E. & Wager, T. D. (2011) Social rejection shares somatosensory representations with physical pain, Proceedings of the National Academy of Sciences 108(15):6270-6275.
IDAN BLANK: Hi everyone. My name is Idan. I am a postdoc here at the Department of Brain and Cognitive Sciences. I did my PhD here with Nancy Kanwisher and Evelina Fedorenko, and then I didn't want to leave, so I convinced them to let me stay as a postdoc and I will stay here until they kick me out. And today, I'm going to talk to you first about fMRI generally, how does the machine work, what's the rationale behind the methodology, and I'll show you some cool findings from recent years, just sort of share with you the different kinds of questions we can answer with it. And then in the afternoon, we're going to all have a hands-on experience and we're going to analyze real fMRI data and find someone's language system, their brain regions that respond to language. All right.
So let's start. So cognitive scientists study the mind, neuroscientists study the brain, and cognitive neuroscientists look at the brain in order to understand the mind. That's what I do. And so today, there are several things on the agenda. The first is I want to explain to you how the MRI machine works, because I think it's beautiful and it involves a little bit of physics. So you're going to have kind of physics from someone who is a non-physicist. And then I'll talk to you about what we measure with functional MRI, so that's going to be kind of physiology from someone who is a non-physiologist. And then hopefully we'll get to a part where I'm more confident in what I can say. So we'll talk about two important topics in fMRI research that are called cognitive subtraction and reverse inference, and then I'll share with you three cool analysis methods and I'll demonstrate each of them with a recent study.
OK, so let's start with how does MRI work? All right, so our goal when we use MRI is to get images of biological tissue, and we're going to focus on the brain later but this could be any biological tissue, right? You can use MRI to get an anatomical scan of any part of your body. And we want it to be non-invasive, obviously, and we want it to have relatively low long term risks, not like x-rays, for example. But we have several challenges. And what I'm going to do, I'm going to describe these challenges and how each of them is solved, and in that way we'll construct the reasoning or the logic behind how MRI works. Now, these challenges, the order is sort of only for educational purposes. Historically, this is really not the order in which things were discovered and really not the rationale in which this method was developed, because it was based on findings that were initially in other fields.
But I think this sort of makes a more coherent picture. So the first challenge that we have is that we want some substance that is abundant in all of the tissues, because we need a machine that can give us pictures of any tissue. But that it can tell different tissues apart, right? If we get an image and all the tissues look the same, then we didn't really do anything. So what are we going to do? What do we have a lot of in our body? In every tissue? Water. Awesome. So we're going to focus on hydrogen atoms in water molecules and also in fat. That's the most abundant thing we have in our body, and different tissues have different kinds of behaviors of the water molecules in them, depending on, for example, the amount of fat or how viscous it is. Blood or CSF can flow relatively freely, but in the white matter, where we have a lot of fat because of the myelin around the axons, water molecules are much more confined in how they can travel and behave.
All right, so now we need some signal of the hydrogen presence. We need a machine that will tell us where is hydrogen. And in order to do that, let's look at the nucleus of a hydrogen atom, and the nucleus of a hydrogen atom spins. And because it spins, it creates a magnetic field like any nucleus that spins. And so the magnetic field has a direction, and in this particular example the direction is sort of from bottom left to top right. And so what we can measure is the magnetic field of the spin. So we're lucky that the hydrogen has a magnetic field that we can measure. That's where magnetic comes from in magnetic resonance imaging. All right, we don't have just one hydrogen atom, we have many of them.
So here are some of them, and they're all spinning. Now, here there is a problem. We want to measure the magnetic field that they emit, but each of them induces a magnetic field in a different direction. And so they all cancel each other out overall, so if we try to measure the overall magnetic field here, we get nothing. There is no direction in which most of the magnetization goes. And so we need to deal with that. Spins have random directions, so the total magnetization is zero, does that make sense? All right, so what are we going to do with that? Well, we're going to put all of our spins or our tissue or our person in a huge magnetic field that I'm denoting here with this red arrow. Very, very strong magnetic field-- that's going to be the MRI machine-- there is a bore that is actually a huge magnet, and when the hydrogen nuclei are in this huge magnetic field, very strong, what happens is that they all align with the direction of the field.
Now, just sort of in parentheses to mention this, some of them align in the opposite direction, there is a lot of physics in that, but for our purposes they align with the direction of the magnetic field. So we use a strong external magnetic field, so all the spins align and now we have a net or total magnetization going in that direction. All of these vectors sum up, and we get a signal that we can measure. Now, how strong is this magnet? What we usually use, and what we have downstairs, is three tesla, which is 60,000 times stronger than the Earth's magnetic field. So it's really, really, really strong, and if you Google MRI accidents, you will see things that happen when people walk near the MRI machine with things like office chairs that have metal or oxygen tanks, things just get pulled really, really strongly. They don't care what's in the way even if there is a person.
So that's the magnetic field. All right, so now we have all these spins aligned, and we have some magnetic field that we can measure. Now, one thing I didn't mention is that all of these nuclei don't just spin, they also precess. And what I mean by that is that beyond the axis around which they rotate, they also precess around that axis, kind of like a top or a dreidel. And so it looks something like this. So this is what we actually have when we put someone in the magnet. In order to measure this magnetization, we want it to oscillate. We can't really measure just a constant magnetic field, we want a magnetic field that oscillates back and forth, and that's the signal that we're going to measure. Now, let's try to measure the magnetic field in this direction, because that's where all of the magnetization goes, in the direction of the huge external magnetic field.
The problem here is that if you, for example, look at this spin when it's in this position, this is the amount of magnetization that it induces in the direction we're looking at. Now, as it rotates and gets, let's say, to that position, it sends exactly the same amount of magnetization. So throughout its rotation, it doesn't actually change the magnetization in that direction. It's always the same, so magnetization doesn't oscillate in this direction, and that's not good for us. So we need to look at magnetization in a different direction, so what we're going to do is just look at the perpendicular direction. So if we're changing our view here, and imagine that now I have an antenna here that's looking at the magnetization that's coming here, when the spin for example is in that direction, we have a magnetization that goes in that way and then it continues to precess.
And then we have a magnetization that goes in the other way, so it oscillates. And as the spin precesses, we get this oscillating magnetic field, and this is something that we can measure. It has a particular frequency, and we can have an antenna and measure that. All right, does that make sense? So the idea is that because of the precession, we can measure magnetization at 90 degrees, and that magnetization oscillates. All right, now what I showed you before is that all of these spins precess, but again we have a similar problem to the problem we had before, which is each of them precesses in its own sort of phase. If you freeze the picture, each spin is in a different part of the circle. And so again, if you try to sum up all the magnetization that goes into our antenna, again you'll get zero, because each of them induces a magnetic field in a different direction.
So it's not enough that they are all pointing in the same general direction. We also need them all to rotate in exactly the same way. So not to have a situation where one is here and then the other one is still behind, or something like that. And how do we do that? So it turns out that there is a very simple method to do this, and that's to induce another magnetic field, and we do this with an electromagnetic wave that has a radio frequency, and we emit this wave exactly at the frequency of the rotation. So it happens that when you put atoms in a magnetic field, the frequency of the rotation, the number of circles that you do in a second or in a minute, whatever, is determined by the strength of the magnetic field. So if we know the strength of the MRI machine, we know the frequency of these circles.
And so we emit a wave that is exactly at the same frequency, and because it is the same frequency, what it does is that it puts all of these spins in phase. They all absorb energy, and they can absorb maximum energy because we're sending a wave that is exactly their frequency. And so they're all going to start precessing in phase. And now you can see that all those that are pointing in the same direction, if you freeze the picture they're all exactly in the same location. And this is sort of like if you'd like the logic of a swing. So if you push a swing, you can make it go higher and higher. And the way you can make it absorb the maximum amount of energy is to push it every time it finishes a cycle. You wait until it comes up, and then you push it, so you push it with the same frequency that it oscillates. And this is exactly the same idea.
We send a wave that has the same frequency as these oscillations, and so these nuclei absorb that energy, and it also makes them rotate in the same phase. It does some other things as well that don't matter for us right now. All right, the frequency of these rotations is called the resonant frequency. It also has another name. It's called the Larmor frequency, if you've heard about it, but this is where resonance comes from in magnetic resonance. The frequency that we need in order to make all these spins precess in phase or synchronize is called the resonant frequency. Now, this synchronization is very, very brief. It doesn't last long. What happens in MRI is that we give a very brief pulse of this radio frequency, and then all of these spins get in phase and then slowly, they start going out of phase. And that's because the magnetic fields that they induce interact with one another, and there are other small inhomogeneities in the magnetic field.
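The field-strength-to-frequency relationship described here can be made concrete with a quick calculation. The gyromagnetic ratio below is the standard value for hydrogen; everything else is arithmetic:

```python
# Larmor frequency: spins precess at f = gamma * B, where gamma
# (the gyromagnetic ratio over 2*pi) is ~42.58 MHz/T for hydrogen.
GAMMA_MHZ_PER_T = 42.58

def larmor_frequency_mhz(field_tesla):
    """Resonant (Larmor) frequency in MHz for a given field strength."""
    return GAMMA_MHZ_PER_T * field_tesla

# A 3 T scanner: the RF pulse must be tuned to roughly 128 MHz.
print(larmor_frequency_mhz(3.0))  # ~127.7 MHz

# Earth's field is roughly 0.00005 T (~0.5 gauss), so 3 T is
# about 60,000 times stronger, as mentioned in the talk.
print(3.0 / 0.00005)
```

This is why knowing the magnet's strength tells you exactly which radio frequency to emit: the two are linked by a single constant.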
So there are small, small influences that sort of pull some spins in one direction and others in other direction, and so they start de-synchronizing. So now, when they're all synchronized, they're all this, like we can imagine that they're this huge magnetization, all in the same phase all in the same direction, and it oscillates and our antenna can measure it, and great. Now, different tissues differ in their decay rate, and that's what's important. If at a certain point in time I freeze the system and I look at the amount of synchronization of these spins, spins in one tissue for example the gray matter, will have a different amount of synchronization from spins in the white matter or in the cerebrospinal fluid. These tissues are different.
They have different properties, and they have different interactions within them that influence the desynchronization. So in some tissues, desynchronization happens really fast. In other tissues, desynchronization happens more slowly. And so if we freeze the system in time and take a picture, measure the magnetization, we can tell different tissues apart, because we'll see the different parts of our image have different strengths of signal. The tissues that are most synchronized will have the strongest signal, the tissues that are less synchronized will have the weakest signal. The last thing about magnetic resonance imaging is the image. We need to get a picture.
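The tissue-dependent decay just described can be sketched numerically. The decay constants below are rough, illustrative figures (real values depend on field strength and how you measure), but the logic is exactly the one in the talk: freeze the system at some readout time and compare signal strengths:

```python
import math

# Illustrative (approximate) transverse decay constants in ms.
# Smaller value = faster desynchronization of the spins.
T2_MS = {"white matter": 70.0, "gray matter": 100.0, "CSF": 2000.0}

def signal(t2_ms, echo_time_ms):
    # Transverse magnetization decays roughly as exp(-TE / T2):
    # faster dephasing (smaller T2) -> weaker signal at readout.
    return math.exp(-echo_time_ms / t2_ms)

# "Freeze the system" 80 ms after the pulse and compare tissues.
for tissue, t2 in T2_MS.items():
    print(f"{tissue}: {signal(t2, 80.0):.2f}")
# CSF stays bright, white matter is darkest -> the image has contrast.
```

The point is not the exact numbers but that a single readout time turns different decay rates into different brightnesses.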
Now, an image is a signal that changes across space. But what we have here is all of the spins are in the same direction in the same phase. They are all this huge one magnetization vector. How do we know where they come from? How do we know if they come from this side of the picture or this side of the picture or from that? They're all together. And so this is kind of a complicated matter, but I just want to give you an intuition about how we solve this problem. The way we solve this problem is we change the huge magnetic field by introducing a gradient. So we actually make the magnetic field change a little bit. And you remember I told you that the frequency of this precession depends on the strength of the magnetic field. The stronger the magnetic field is, the faster the precession. So when we introduce this gradient, what we're doing in effect is that in some parts of our image or of the tissue or the slice that we're going to image, spins are going to precess really fast, and so our antenna is going to get a signal that oscillates fast.
And in other parts of the image, the oscillations are going to be slower, and so our antenna is going to get signal that oscillates a little slower. So we can just look at signals at different oscillations and ask, how strong is the signal in the really fast oscillation? OK, that's coming from there. How strong is the signal from this intermediate oscillation? OK, that's coming from that part. And then how strong is the signal that oscillates relatively slowly? That's coming from that part. And of course, you want an image. An image is two dimensional, so you have to actually do it in two dimensions so you know what sort of row this signal is coming from and what column it's coming from, and it gets really complicated. But this is basically the idea. You induce a magnetic field that slowly changes, and so you get different rates of precession. This is the gradient to the external field.
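That gradient idea can be sketched in one dimension. Here is a toy version with a hypothetical "tissue" that has three spots of proton density: the gradient makes each position contribute a sinusoid at its own frequency, the antenna sees only the sum, and asking "how strong is the signal at each frequency?" is a Fourier transform whose frequency axis is now a position axis:

```python
import numpy as np

# Hypothetical 1-D "tissue": three spots of different proton density.
density = np.zeros(64)
density[10], density[30], density[50] = 1.0, 0.5, 0.8

# With the gradient on, the precession frequency at position x is
# proportional to x, so each spot emits a sinusoid at its own
# frequency; the antenna records the sum of all of them.
t = np.arange(256) / 256.0
antenna = sum(a * np.cos(2 * np.pi * x * t)
              for x, a in enumerate(density) if a > 0)

# "How strong is the signal at each oscillation rate?" = Fourier
# transform. The frequency bins double as positions.
spectrum = np.abs(np.fft.rfft(antenna)) / (len(t) / 2)
print(np.sort(np.argsort(spectrum)[-3:]))  # recovers positions 10, 30, 50
```

Real MRI does this in two dimensions with separate frequency- and phase-encoding gradients, which is the "it gets really complicated" part, but the one-dimensional intuition is the same.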
So overall, this is sort of a very vague and hopefully somewhat intuitive view of how the MRI works. We put a tissue in the magnet. All the hydrogen atoms align with the direction of the magnet, and they precess around this axis like a top spinning, and then we get a radio frequency pulse, and that makes them not just precess but precess in phase so they're synchronized, and when they're synchronized, we can measure the oscillating magnetic field and we measure it in a direction that is 90 degrees to the main magnetic field. And basically, that's what MRI does. So this was kind of physics from a non-physicist, and now we're going to do kind of physiology from a non-physiologist. All right, so hopefully we have sort of an intuitive, broad, vague understanding of how the MRI works.
The functional MRI, which is the reason we're here-- before we wanted to just take a picture. We wanted to freeze time and see how tissues look, so get a picture of anatomy. But what we want to do with functional MRI is to get a movie. We want to see how signals or how activity changes over time. And so what functional MRI does is basically record a movie, and the way it records a movie is like we record a movie in real life. You just take many, many pictures, one after the other, and then if you look one after the other really fast, it's a movie. So basically, functional MRI means take many, many pictures, one right after the other. So example. Let's say just for fun we want to know which brain regions are recruited when you're flexing your arm. So what we want to know here or what we want to measure is neural activity, right?
We want to look at the entire brain, get the neural activity from different parts of the brain, and figure out which regions of the brain are the most active when you're flexing. But fMRI can't do that-- well, that's what it looks like, axons. An axon sends an action potential that spills neurotransmitters into the synapse, and that goes to the next neuron, and so on. But what we actually have is changes in blood flow. That's what functional MRI measures, and now we're going to figure out why, why we measure changes in blood flow and how it helps us. And the link is, the basis of the link is that neurons use oxygen, right? When neurons fire, when neurons work, they use oxygen. Now, they don't have an internal supply of oxygen. They need to get it from somewhere.
They get it from the blood. They get it from red blood cells. So the red blood cells deliver oxygen, which they bind to hemoglobin. And here is the crucial part that, in my opinion, makes fMRI so cool. Hemoglobin can either be de-oxygenated-- those are hemoglobin molecules that don't happen to have oxygen on them at the moment, and in those molecules there is an iron atom that is exposed. There are actually four of them. Four atoms that are exposed. But when a hemoglobin molecule binds oxygen, the oxygen sort of covers the iron atoms. And this difference between the de-oxygenated and oxygenated hemoglobin is really important for MRI. Why? Iron is ferromagnetic.
Iron induces inhomogeneities in the field. When you have iron exposed, it's like a small magnet that affects a small part of space, and so in that space it pulls the spins, it makes them get out of phase more quickly. Does that make sense? All right, so what happens is that if we have an iron atom exposed, it disrupts the magnetic field. That means that the desynchronization or dephasing of the spins is quicker, and so we get a weaker signal. But if the iron atom is covered, if I have molecules carrying oxygen, then the magnetic field is relatively intact, so the desynchronization is slower, and we get a stronger signal. So now the question is, how are we going to use that to figure out which brain regions are active when you flex your arm?
And that is the question that I want you to think about for a second. If a brain region increases its activity, are we going to get stronger or weaker MRI signal based on this? So one claim is, if the neurons are using oxygen, there is going to be a signal from the brain saying or from that region, I need more oxygen, so more oxygen is going to flow there and more oxygen means stronger signal. Who thinks the opposite? Yeah.
AUDIENCE: Well, the brain is going to need more oxygen. So the hemoglobin will get rid of the oxygen faster, so the iron would be exposed more.
IDAN BLANK: So that's the opposite claim, right? If neurons are using the oxygen, then we have less oxygenated hemoglobin around that region, because all that oxygen is being used by the neurons. And so we'll get a weaker signal. And so which one is true? It ends up being the case that you get a stronger signal, and the reason you get a stronger signal is that the body overcompensates when the brain uses oxygen. So whenever a brain region is active and the neurons are firing and they need more oxygen, the body sends much more oxygen than is actually needed for that region. And so the oxygen exceeds the metabolic demands of that region, and so we end up having more oxygenated hemoglobin and less de-oxygenated hemoglobin.
So here for example, red blood cells with the blue O2 in the center are those that have oxygen bound to them, and so when a region is active, what happens is that you have vasodilation, so the blood vessels get wider, and you get more blood flow and more blood volume and you get much more oxygen than the region actually needs. All right, and so that means that because the oxygen supply exceeds the metabolic needs, we have less de-oxy hemoglobin in the venous blood. Of course, everything here is in the veins, because the arteries always have oxygen in them going to all of the brain regions. And so we have less disruption to the magnetic field and a stronger MRI signal. And initially, when fMRI first came out, people were very confused about whether they were expecting to get a stronger signal or a weaker signal. But once we figured out that the body overcompensates and sends way more oxygen than we need, we realized that the signal is stronger.
All right, so the signal is dependent on the level of blood oxygenation. Does that sentence make sense? We have some oxygen in the blood, and the level of oxygen in the blood influences our signal. If there is more blood oxygenation, we get a stronger signal. And that's why this signal is called BOLD-- which means blood oxygenation level dependent signal. So if you read fMRI papers and you see BOLD signal, that's just what it means. It's the usual MRI signal, but it's sensitive to the amount of oxygen near that region. So that's an example of a case where different tissues in parts of the brain will dephase at different speeds depending on how much exposed iron there is. So let's see how this signal looks. I'm going to show you the BOLD signal and how it changes over time when we record it from a region, and the first thing that happens after a region starts responding. For example, we showed a participant some picture, and the visual cortex starts responding.
At first, there is a small, small dip. That's because that region is using oxygen, so initially the signal gets weaker. But it's very, very fast and very tiny, and we don't always detect that. And then the signal begins to get really, really high. It peaks, and then it comes back down, and at some point there is an undershoot that doesn't really matter, and then it goes back to normal. OK, two things. First of all, what is zero? I'm plotting here a signal that goes up and down. What is zero? Zero is not really zero signal. The brain is never inactive. There is always some activity, so BOLD signals are only meaningful relative to some baseline or relative to some control condition. In the simplest terms, it's just whatever happened before the stimulus came on. You take 10 seconds or however long you want before the participant saw the picture, and whatever happened in the visual cortex before that, that's the baseline. And you measure the change relative to that baseline.
Does that make sense? So fMRI is always, always relative. The signals are never absolute. They have no meaning in absolute terms. It's always relative to something, and you need to ask yourself relative to what? Because that's going to be really important to interpreting the data. All right, the peak happens more or less six seconds after the neurons fired, which is super slow relative to how neurons work. Neurons work on a millisecond timescale, and the signal that we're measuring, the blood that flows there, gets there six seconds later, which means that we detect changes way after they happen, and it also means that we're kind of limited in our ability to distinguish between processes that happen really, really close in time. The unit of measurement when we take an image of the brain-- the brain is 3D, so we take many, many 2D pictures and we stack them together and so we get a 3D image. That's one picture, and then we take one at every point in time to get a movie.
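The baseline logic just described can be sketched with a made-up voxel time course (all numbers below are invented for illustration; the sampling interval of 2 seconds is a typical TR):

```python
import numpy as np

# Hypothetical raw BOLD time course from one voxel, one sample every
# 2 s. Units are arbitrary scanner units -- absolute values are
# meaningless, which is exactly why a baseline is needed.
raw = np.array([500, 502, 499, 501,   # 8 s of rest before the stimulus
                503, 510, 515, 512,   # response ramps up, peaks ~6 s in
                506, 501, 497, 500])  # returns toward baseline

baseline = raw[:4].mean()  # mean signal before the stimulus came on

# Percent signal change relative to that baseline -- the quantity
# fMRI analyses actually report.
pct_change = 100.0 * (raw - baseline) / baseline
print(pct_change.round(2))
```

Note that if you picked a different baseline window, every number in `pct_change` would shift, which is the point: the signal only means something relative to what you compare it against.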
The unit in each of these 3D pictures is called a voxel. It's like volumetric pixel. It's a pixel in 3D. Again, the brain is 3D, so we don't have a 2D pixel, we have a 3D pixel. And a voxel, depending on how big it is, has several hundred thousand neurons, and I again just encourage you to think about what that means. How many networks, or how many functional units, can be inside one voxel? Can we expect it to be homogeneous? Should we not expect it to be homogeneous? You can ask, are we measuring anything meaningful? I mean, we're measuring something that happened six seconds after neurons fired, and our units of measurement have a few hundred thousand neurons. Are we measuring anything meaningful at all? So people tried to study that, for example by taking monkeys and inserting electrodes into their brain and recording from the same brain regions with electrodes and then with fMRI. And the current consensus, even though this is still debated and people are still running studies, is that overall this BOLD signal, or this hemodynamic response function, correlates not with the output of the region, not with the spikes or the action potentials, but rather with the input to the region and the internal processing within a region.
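The "several hundred thousand neurons per voxel" figure is worth checking once. Both numbers below are rough, order-of-magnitude assumptions (voxel sizes vary by study, and cortical neuron density estimates span a wide range):

```python
# Back-of-the-envelope: how many neurons fit in one voxel?
voxel_side_mm = 3.0        # a common fMRI voxel size (assumed)
neurons_per_mm3 = 20_000   # rough cortical neuron density (assumed)

voxel_volume = voxel_side_mm ** 3             # 27 mm^3
neurons_per_voxel = voxel_volume * neurons_per_mm3
print(f"{neurons_per_voxel:,.0f}")            # 540,000
```

So a single data point in the movie pools over hundreds of thousands of neurons, which is why homogeneity within a voxel cannot be taken for granted.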
So the signals that come into the dendrites and are being summed. And that's because this signal correlates with what's called local field potentials, so I think you're going to talk about them at some point this week. And local field potentials are a measure of input to a region and the internal processing, and that's the thing that correlates the most with the fMRI signal, which also I think is something surprising for many people, because we see images of the brain with this bright spot, beautiful spot in a paper, and we think, oh, that region is sending some information when it's lighting up. Well, actually it's getting some information and processing it. Now, it's true that a region that gets information and processes it usually sends it on to the next stage, but that doesn't have to always be the case. So that's an fMRI if you've never seen one. This bore is a magnet. It's always on. It's never off. And then you have a subject lying on the bed.
This coil around the head is where the radio frequencies are emitted from and also where we record the magnetization, so that's from this coil. And there are other gadgets. We have a button box, because subjects have tasks so they need to press buttons. And this is the mirror, because we can't actually have a computer screen inside the magnet. It's too small, so we project whatever we want to show them behind them, and they have a mirror and they see it through the mirror. One thing that I want to mention is what I think cognitive neuroscience is not about. And I think cognitive neuroscience is not about where in the brain something happens, because I don't think that's a very interesting question. Any behavior, any mental process, anything you can do mentally, physically, happens somewhere in the brain. It has to. The question of where exactly it happens in the brain, I don't really care. What I do care about is whether two different processes happen in the same region, for example, because that means that they share neural resources and maybe they share cognitive resources. Maybe they're inseparable. Or I care whether some cognitive function has one region or distributed regions.
So if it's only one region that does it or a set of regions and so on. And so finding where in the brain something happens is only the first step, and we have to do it because, for example, if we want to study face processing in the brain, the first thing I need to do is figure out what parts of the brain process faces. But that's not the goal. That's sort of the first step. And once I found them, I can start running experiments and ask what kind of information they represent, what kind of faces they care about, what factors modulate their responses. And what we're going to do in the afternoon is we're going to find where in the brain of one individual the language system is. So it's just that first step before we actually do all the fun stuff of finding where it is. But once you learn how to do that, analyzing the rest of the data is the same. So here is an example of a study that was very, very sexy and published in Nature in 2016, where they sort of had people listen to seven hours of stories and they mapped the semantics or the meanings of different words and concepts across the entire brain.
And so there is this GUI online where you can go and click on different voxels and see all the words that voxel responds to. And this was done with a very, very cool method. So the method in this paper is awesome, but I'm not sure what we learn from this, because it's basically a map of that region cares about those words and that region cares about those words. And of course, concepts are not all over the brain, so some of these are not even meaningful. So I'm not even sure what that tells us, and also they had a really hard time interpreting this. For example, they have one voxel that responds to mother, father, son, daughter, and murder. And then they said, well, that makes sense, because murder often happens in families, which is a very post hoc explanation. You could imagine a different pattern, and you would come up with an explanation. So some of the patterns that they find make a lot of sense, and those are the patterns that we already knew about from other paradigms, and it's really cool that this paradigm can get them too.
For example, we know that there is a region about here that only activates when you think about what other people are thinking. It's called theory of mind, and so there are some voxels here that respond to a lot of, for example, emotion verbs, like frighten or fear or amuse and stuff like that. So that makes sense. They just listened to seven hours of a story, of stories, radio, whatever, and then they did this fancy analysis that I can't explain. But basically, they looked at each voxel and they asked, throughout the story, at what times do we get a signal increase in that voxel? And so what they have here is for each voxel, you have a word cloud of the words that that voxel responds to the most. So now I want to show you some cool studies. All right, and the first study I'm going to use to discuss the principle of cognitive subtraction. So I want to talk about intuitive physics. I never studied any physics, and yet there are many things about how the world works that I just seem to know. I know which bricks I can pull out of this tower, and I know which bricks I probably shouldn't.
I know in which cases it might be more stable or less stable. I know that if I drop something to the ground, it falls in a straight line. I know that vacuum cleaners suck things towards them. I know that if I hit the ball from a certain angle, it will go to a certain place. I know that solids can't go through one another, which is really important, right? That's why I know I can go through a curtain of beads but not through a wall, so I can pass through rooms unharmed. I know that if I see this door, I should push the edge rather than the center, because that way I can rotate it more efficiently. And for example, I hope you all have a sense that this is very dangerous and this laptop is unstable and it might fall, and here's just a cool comparison. Here's also a laptop and it's also near a big source of water, but in this case it seems very stable and there doesn't seem to be any danger.
So all of these are examples of intuitive physics, and people seem to be just naturally good at this. And so the question is, do we have an intuitive physics engine? Do we have some engine in our mind that simulates how the world works and allows us to plan actions? For example, when you go and grab an object, you prepare your muscles because you know, or you think, how heavy it's going to be, so you plan actions, you predict what's going to happen in the world. Someone throws a ball towards you in baseball and you adjust your hand to catch it and so forth. So do we have this physics engine? And specifically, this is a study from Jason Fischer, who is now at Johns Hopkins, but he was my office mate here at MIT when he was a postdoc. He wanted to ask whether there are brain regions that are engaged in physical inferences and are recruited more for physical inference than for other similarly difficult prediction or perception tasks. So obviously, because we can do these judgments like I showed before, they happen somewhere in the brain. But the question is, do they get their own dedicated neural real estate? Are there regions that are particularly dedicated to this task? Or do we just use general resources like general intelligence and working memory to solve these things, and they don't have their own unique engine?
And so to test this, Jason had to design a task that would cause people to use their intuitive physics. And he came up with this task, where there is a tower and all you need to do is say which side it's more likely to fall to. If, for example, you bump against the table, is it more likely to fall on the red side or on the green side? So that's a prediction task. It requires simulation. You need to simulate the tower falling, and you need to figure out whether most of the falling is going to be on the green or on the red. OK, but I told you that this is not enough. I can't just give this to subjects and look at what brain regions increase their response to this versus nothing, versus just closing their eyes. Why? The neurons and brain regions that are going to be active here are not just those that care about intuitive physics. There are those that care about colors, there are those that care about general attention because you're doing something. There are those that care about maybe simulation or effort, and so on and so forth. So it's not like every brain region that increases its activity to this task is actually doing intuitive physics.
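To make the "simulation" idea concrete, here is a toy sketch of what a probabilistic physics engine for the tower task might look like. This is not Jason's model; the noise level, the one-dimensional blocks, and the center-of-mass rule are all made-up simplifications, just to show the general idea of predicting by running many noisy simulations.

```python
import random

def predict_fall_direction(block_xs, noise_sd=0.4, n_sims=1000):
    """Toy 'intuitive physics engine': judge which way a stack of blocks
    will fall by running many noisy simulations of its center of mass.
    block_xs: horizontal positions of the blocks; the base is at x = 0.
    Returns the estimated probability that the tower falls to the right."""
    falls_right = 0
    for _ in range(n_sims):
        # Perception is noisy: jitter each block's position before simulating.
        noisy = [x + random.gauss(0, noise_sd) for x in block_xs]
        com = sum(noisy) / len(noisy)   # center of mass of the stack
        if com > 0:                     # right of the base -> topples right
            falls_right += 1
    return falls_right / n_sims

# A tower whose blocks lean to the right should usually be judged to fall right.
p = predict_fall_direction([0.0, 0.3, 0.6, 0.9])
print(p)  # well above 0.5
```

The point of the noise is that the same engine naturally produces graded judgments: a nearly balanced tower comes out close to 50/50, which is how people behave too.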
And so we need a control condition or a baseline, and this baseline needs to have as many things in common with this as possible, except for physics. So imagine trying to develop a task that is as similar as possible without physics. So for example, we want a task where it's a high level task, it's complicated, it's not something simple like telling whether something is red or green. We want a dynamic scene, right? It won't be fair to give someone a task with just a picture, because this one was a rotating scene, so we want something dynamic. And we want an objective answer, because this task has an objective correct answer. These were built with a computer physics engine, and we can simulate the fall and see what happens. And so Jason tried to come up with a control task that has these features, and the first thing he came up with is lie detection. And what he did was he recorded people drinking drinks. They're all transparent. Some of them were tasty. Some of them were disgusting. And before you drank the drink, he told you whether you had to say that it's good or you had to say that it's bad.
So in some cases, he would tell me you need to say that this is good, I would drink it, it's good, and then I'll say, it's good. In other cases, you'll tell me you need to say this is good, I'll drink it, it's disgusting, and I need to do the best I can to lie and say, mm, yummy, this is yummy. And he would record me and other people on video, and then subjects in the experiment see these and they need to guess whether I was lying or not. So it looks something like this. So this is one example, and this is me. I look like I'm lying, but this was actually really, really disgusting. And so the idea is that if we take the activations in this task and subtract the activation in this task, we're hopefully subtracting away all the brain regions that are common to the two tasks. And what we're left with is only this, is only what's unique to this task which hopefully is physical reasoning. Does that make sense? That's cognitive subtraction.
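Numerically, cognitive subtraction is just a voxelwise difference between two condition maps. Here is a minimal sketch with simulated numbers (in a real study the per-voxel responses come from a model fit to the time courses, and the threshold comes from statistics, not a fixed cutoff):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: per-voxel mean responses (arbitrary units) to each task.
n_voxels = 1000
shared = rng.normal(1.0, 0.2, n_voxels)   # attention, vision, effort, etc.
physics = np.zeros(n_voxels)
physics[:50] = 1.5                        # 50 voxels with an extra physics response

resp_physics_task = shared + physics + rng.normal(0, 0.1, n_voxels)
resp_control_task = shared + rng.normal(0, 0.1, n_voxels)

# Cognitive subtraction: everything the two tasks share cancels out, and
# what survives the threshold is (hopefully) the physics-specific signal.
difference = resp_physics_task - resp_control_task
active = np.where(difference > 1.0)[0]
print(len(active))  # roughly the 50 physics voxels
```

Notice that the logic only works to the extent that the shared component really is shared; anything that differs between the tasks besides physics survives the subtraction too, which is exactly the worry raised next.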
Cognitive subtraction means if you want to isolate a particular mental function, have one task that uses this function and then another task that is as similar as possible except for the one that you care about. And then you subtract them, and you're left with just this process that you want. Well, you can look at this and say, listen. [INAUDIBLE] I mean, those scenes are so different. One of them has a person, the other one doesn't have a person. The dynamics are very different. It's like completely different tasks. It's true that they're both high level and they're both abstract and there is an objective correct answer, but these are not matched at all. This is not enough. And he realized it's not enough. And what he wanted was to start with two tasks or two stimuli that are as similar as possible. So he said, OK, let's forget about the task for a second. I'll just take the stimuli, and I use the same stimuli with two different tasks. One task is physical reasoning, so again, you look at this and you have to tell whether it's going to fall to the right or to the left. And then he said, OK, I need another task that uses this and is hard and has a correct answer but doesn't involve physics.
Any idea what he did? But if I want to use exactly this, what task could I give people? Yes.
AUDIENCE: You have to keep the colors.
IDAN BLANK: You need to tell whether there's more yellow or more blue, and these are towers where there are many more blocks than you can count. So it's color reasoning, so it's exactly the same stimulus, but in some cases you make a physics judgment and in some cases you make a color judgment. And hopefully, Jason thought, when we subtract them we won't get just anything, but we'll start getting at the physics engine. And so he did this. He found regions in both hemispheres that responded, and these are just examples of three of them. Those are the three that gave the best results. So one is here. It's sort of the premotor region of the frontal lobe. That's the frontal lobe, so the eye is here and this is the brain looking that way. And then the supramarginal gyrus, and then this thing. No, this is the supramarginal gyrus, and this is the parietal lobe.
So he found those regions that seem to respond more to physical reasoning than to color reasoning. That's pretty cool. Are the plots clear? Do you understand what you see on the plots? The y-axis is the signal change, and this is relative to baseline, just to fixation. But what you care about is the difference. You care about this minus this, and this difference is significant. Is this enough? Why not? So what he did was use the same stimuli with two different tasks. What's missing? Just the opposite: the same task on two different stimuli. And if we get the same results that we got here, then we can be maybe a little more sure that what we're looking at is real. So this condition was exactly the same movie, two different tasks. Now, we're going to have exactly the same task, two different stimuli. And this is what he did. In the physical prediction, you're going to see two balls and they're going to interact like on a billiard table, based on the laws of physics. And then one of them is going to disappear, and your goal is to imagine, to simulate, where it's going while it's disappeared.
And then it will appear, and you need to say whether the place where it appeared makes sense based on your simulation, or whether it appeared somewhere that's unexpected. Does that make sense? All right, so let's do this. Does that make sense? No, right? We expected the red to go down. All right, that's physics reasoning. Now, how do we do exactly the same task with different stimuli? We make these shapes behave like humans. And it looks like this. Does that make sense? Yeah. So this is exactly the same task but different stimuli. And so when he did this, he found the regions again responding more to the physical prediction than to the social prediction. And even that is not enough. I mean, if you want to say that a region is really doing something, then you have to run a series of experiments, and one experiment or one paper with a few tasks is never enough. But I think this is a good example of a first pass and a very intelligent way of trying to isolate intuitive physics.
There are two questions here. One question is, where do we get the physical knowledge from? And even if we have a brain region for intuitive physics-- I'm not sure there is-- it doesn't mean it's innate. Brain regions can acquire specialization through experience. So it's very possible and very likely that a lot of these things we learn from experience. But we also know, for example from experiments in babies, that there are some very, very basic things that they know and expect and that they find very surprising if these things are violated. So some of these things, the very basic ones, seem to be innate, but many others are not. The best example of a brain region that becomes specialized through experience is the visual word form area. That's an area that responds only to the letters that you know how to read, or only to words in your language or letter strings in your language. And of course, that brain region can't know what language you're going to learn before you're born, so it acquires that specialty. Even in blind people, even though it's a visual region, it responds to Braille reading.
If you learn a language later in life, it learns to respond to that language as well. So experience can really shape the brain. We want to know whether this is just physical reasoning or not, so these regions that he found, some of them appear to overlap with regions that are engaged in motor planning and in tool use. So when you do tasks about tool use, you get [AUDIO OUT] regions. When you do tasks that are about motor planning, so planning how you move your hands or something, you get similar regions. And the question is, does that mean that these activations in our experiment, in Jason's experiment, does that mean that they're planning actions and that's what we're tapping? If we see the same activation in the same region that was reported in other studies to do this, does it mean that that's what's happening here? And this is perhaps the biggest fallacy in fMRI inferences-- and if there is anything I want you to remember when you go out to the world and encounter fMRI studies in popular media-- is the fallacy of this inference. This is called reverse inference. I'm going to demonstrate it now.
And to do this, I'm going to look at a different experiment that asked whether breaking up really hurts. Physically really hurts. So imagine that we do the following experiment. You look at your ex, and you imagine the breakup. So in this experiment, they actually took people who had a long relationship and a recent breakup in the past, I don't know, two weeks or a month or something. And you look at a picture-- you need to think of the breakup, it's very unpleasant-- versus look at a picture of your friend and imagine something pleasant. So again, two tasks that are relatively similar. And you can be very critical that there are many differences here, and you would be right, but they're relatively similar except for that breaking up part. And then we hope that if we subtract the activations to the friend from the activations to the ex, we'll get whatever regions are responding to this social rejection pain of breaking up. And let's say that regions that are found in other studies to be involved in processing physical pain increase activity in this experiment when you imagine the breakup. This is the kind of conclusion that is often reported in fMRI studies.
We find activation in this region for my task. Other studies in the past found activation in this region for, for example, physical pain. Hence, what my participants are experiencing here is also physical pain. So therefore, imagining the breakup really hurts. And that is not necessarily correct. And the reason is that there are a lot of inter-individual differences in where exactly different functions occur in the brain. So the mapping of function onto anatomy is not consistent across individuals. And just to demonstrate this, this is a brain with activations to some task. It doesn't matter what, but there is some contrast between two conditions, a minus b. And here are brains of two other people, and you can see that the overall topography is the same. We get some things here and we get some things here, but the exact locations are not the same. And here's the crucial part. This is condition a minus condition b. Now, I'm going to show you what regions show the opposite pattern, condition b minus condition a, in blue. And what you can see is, for example, that this region, which in these two subjects prefers condition a over b, in this subject prefers condition b over a.
So different functional systems happen to fall in the same location in different people. And here's another example, and here are two more examples. So we can't look at anatomy and from there infer back function. That's why it's called reverse inference. You look at anatomy to try to figure out the function, because other people found some function in the same anatomical region. We have no guarantee that that inference is correct, and many, many fMRI studies-- today luckily fewer and fewer-- but still, many of them make those kinds of claims. So reverse inference is guessing the function of a signal based on its anatomical location. So instead, what we want to do is something like this. We want to look at pain and breakup in the same person. Take each participant, scan them both in a task that involves breakup like that and in a task that involves pain, and see that the same brain region in the same person responds to both. Yes?
AUDIENCE: If you just showed that basically every person's brain is unique and they experience every emotion differently, doesn't that just make it impossible to make broad conclusions about what regions of the brain are activated in response to a certain stimulus?
IDAN BLANK: So there is some consistency across people. It's not that a region that in my brain is here would in your brain be way over there. But there could be differences of a few millimeters between where they fall, and those regions, when you look at, for example, [AUDIO OUT] of response across many tasks, they respond in a similar way. So my region here and your region here can respond very, very similarly across many, many tasks, which convinces us that they are the same functional system. They just don't happen to fall exactly in the same place in your brain and in my brain, if that makes sense. All right, so we need to look in the same person, and that's what they did in this study in 2011. First, they had the participants hold this piece of metal, I think, and it was either warm or painfully hot. Not very painfully, but painfully hot. And so you want to look at regions that respond more when it's painfully hot compared to when it's warm, and you get these regions. It doesn't really matter where they are, but this is the response to pain versus just warmth.
And then in the same participants, they told them to do this breaking up task, and they looked in the same regions. So they used the pain task to [AUDIO OUT] regions, and then they asked, do those regions that we already found respond more to breakup than to thinking of a friend? And they do, and so then you can say, OK, therefore imagining the breakup physically hurts, because in the same person the same region responds both to physical pain and to rejection. There is another problem here, though: why assume that the same brain region only does one thing? Maybe the same brain region does both physical pain and social rejection, and it's not that social rejection physically hurts. It just happens to be that both of those things are implemented in the same region, or maybe even in the same voxel but in different neural populations. So maybe there are other processes there too. And who knows, maybe that region responds to a gazillion other things. Maybe it responds to, I don't know, seeing caricatures or dreaming or whatever.
And so what they did here is something that is not perfect, but it's very careful and it's getting as close as we can to making sure that this inference is correct. They looked at the existing literature, and they searched all studies that reported the same regions that they found. All of them. And again, this is based on anatomy, and we know that that's problematic, but they looked at all of them. And they asked, what were these studies about? In other words, what are the topics that people study when they find those regions? So they looked at working memory, attention switching, long-term memory, interference, physical pain, and emotion, just generally positive or negative emotion. And what they saw is that the vast majority of papers that report those regions are papers about pain, which suggests that those regions might really not be doing anything else, and maybe breaking up really hurts. So again, we can't be sure that this is right, but I think that this is an example of a paper that acknowledges its limitations and still tries to go as far as possible to verify its claim.
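That literature check is really an informal version of Bayes' rule: reverse inference needs the probability that a study is about pain given that this region is active, and that depends on how often the region shows up in non-pain studies. Here is a small sketch with hypothetical numbers (none of them from the actual paper):

```python
def p_function_given_activation(p_act_given_fn, p_fn, p_act_given_not_fn):
    """Bayes' rule for reverse inference: how likely is it that a study
    engages a function (e.g. pain), given that this region is active?"""
    p_not_fn = 1 - p_fn
    p_act = p_act_given_fn * p_fn + p_act_given_not_fn * p_not_fn
    return p_act_given_fn * p_fn / p_act

# Hypothetical numbers: the region activates in 80% of pain studies, but
# pain studies are only 5% of the literature, and the region also shows up
# in 20% of non-pain studies (attention, memory, ...).
weak = p_function_given_activation(0.8, 0.05, 0.20)
# If instead the region almost never appears outside pain studies (2%),
# the inference becomes much stronger -- that is essentially what the
# literature search was checking.
strong = p_function_given_activation(0.8, 0.05, 0.02)
print(round(weak, 2), round(strong, 2))  # prints 0.17 0.68
```

So the same activation supports a weak or a strong reverse inference depending entirely on the region's selectivity across the rest of the literature.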
So far, what I've shown you are studies that use the traditional cognitive subtraction method. You have condition a, condition b, you subtract them, and then you just look at the response. And of these three other methods, I'll talk just about this one. I think it's very cool. The last one-- mazel tov. The last one is very similar to this, and adaptation is cool too, but this one is related to what we talked about. It's called multi-voxel pattern analysis, and here's the idea. All right, so traditional analyses, like what I've shown you, usually focus on the strongest activations, assuming that if a region responds very strongly to a, then it cares about a, and if its response to b, c, and d is half the size of the response to a, then b, c, and d don't matter. But that region might still process those things, just not as strongly. It might still contribute to their processing. So imagine that this is the level of activation in the brain. These are voxels, and right here is the amount of activation.
What we usually do is we have a threshold and we say, OK, everything above this threshold is significant. We do this with statistics, and we care about this, and anything else, all other regions, we don't care about. They don't respond to this condition. And furthermore, if this region doesn't respond strongly to other conditions, then we ignore those other conditions. It probably doesn't care about them, which seems a shame. And we also average signals across voxels within a region. We take a region that we're interested in, we take all the voxels around there, and we average the signal. And that's a shame, because there might be information distributed across the different voxels. It's not necessarily the case that meaningful information is just within one voxel. Maybe the information is distributed across the region. Maybe the pattern of ups and downs contains meaningful information. And so multi-voxel pattern analysis tries to use exactly that. You're looking at distributed activity across voxels, and here is a simple example. Let's say that you're doing a visual experiment, so you're looking at the visual cortex. And these are voxels. Voxels in real life are much, much smaller, but this is just for a demonstration.
And let's say you have two conditions that you're showing your subjects. You're either showing them these-- they're called gabors, but these patches that are oriented in [AUDIO OUT], and these are activations in each voxel. So let's say that white is strong and black is low, so some voxels respond more to the stimulus and some less. And the other condition is these lines oriented in the other direction, and you record activity and you see that some voxels care more about it and some voxels care less. OK up to now? All right, then you repeat this many, many times. So you show them something like the first stimulus again, and you record from the same voxels, and again and you record from the same voxels. And then you show them the second stimulus or something very similar, again and again and again. And so you collect many instances of how these voxels responded to stimulus a and many instances of how they responded to stimulus b. And then you train an algorithm, and you tell the algorithm, look at these things.
I am telling you that these are associated with a, these are associated with b. Learn to distinguish them. Now, here you can do this visually pretty easily. It seems like all of these have whites at the top, and these sort of have this diagonal. And that's sort of what the algorithm does. It finds what patterns distinguish between these two cases, and then you test it on a case that it hasn't seen. So you scan the subject on one more stimulus, let's say this shape. This was not shown to the algorithm during training. We leave it aside. And then we ask the algorithm, OK, what do you think this is? Is this that shape or the other shape? What is chance performance here? How many correct trials can it get just by guessing? 50%, right? If the algorithm didn't learn anything, then it's just guessing randomly a or b, so it has a 50% chance of being right. If the algorithm does better than 50%, it means that somewhere in this distributed pattern, there is information that distinguishes between these two things.
So if you averaged across this region, you would get the same average signal here and here, and you would think that the region doesn't distinguish between these two stimuli, right? Because in both cases, we have sort of three whites and the rest is low. So if you averaged all of this and all of this, you get kind of the same thing. The average would look like the region does not know the difference, but the pattern shows you that there is information about the difference. Does that make sense? That's MVPA, and to demonstrate it-- and that's where we'll finish-- I want to go back to whether breaking up really hurts. So here is an experiment that was done three years ago. It was published in Nature. They did sort of the same thing. In some conditions it was the hot versus warm, in other conditions it was the friend versus the ex-partner like before, and the other tasks don't matter for our purposes. Now, here is what they trained. The first thing they trained is they told the algorithm, find me the brain regions where you can tell apart pain from warmth, ex-partner, and friend. So pain versus all of the other three conditions.
And you have this window of several voxels, and you slide it across the brain and you ask, what regions of the brain contain information that allows me to tell apart pain from everything else? All right? Yes, no? OK, and you do the same for ex versus everything else. So what regions of the brain, and they're shown here in color, help me tell apart the ex condition? What brain regions respond to the ex condition differently from the other three, in a way that allows me to learn how to classify or identify these? And now, they did the testing. And this is the cool part. The first thing they did is they looked at the regions that they identified with pain versus everything else. So these are regions that we know differentiate between pain and other conditions. But now they ask, can we distinguish pain specifically from ex? We know that in those regions we can distinguish pain from everything else, but what if we just look at pain versus ex? Can we do this?
We can do this very well. And then they asked, can we do pain versus warmth? We can do it very well. And now the critical thing. Can we do ex-partner versus friend in that region? We can't. So if you look at that region that distinguishes between pain and warmth, you can't tell apart ex-partner from friend. And then they did the opposite thing. They looked at the regions that could classify ex-partner versus everything else, and they asked, can I tell apart ex-partner from pain? I can. Can I tell apart ex-partner from friend? Yes. Critically, can I tell apart pain from warmth? In those social regions, can I tell apart physical pain from warmth? I can't. Which suggests that regions that contain information about pain versus warmth do not contain information about social breakup, and vice versa. So there is still no consensus about whether breaking up really hurts, but I wanted to show you two studies that use two different methods and reach two different conclusions. And I'll end here, and you deserve a break and lunch, and then I'll see you at some point later today to analyze real data.
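The sliding-window idea from a moment ago, often called a searchlight analysis, can also be sketched in a few lines. This is a toy one-dimensional version (a real searchlight slides a small sphere through a 3-D brain volume): only a handful of "voxels" carry information, and the accuracy map peaks over them while staying near chance elsewhere. All the data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D "brain" of 30 voxels, 40 trials per condition. Only voxels 10-14
# distinguish the two conditions (hypothetical, simulated data).
n_vox, n_trials = 30, 40
X_a = rng.normal(0, 1, (n_trials, n_vox))
X_b = rng.normal(0, 1, (n_trials, n_vox))
X_b[:, 10:15] += 1.5                      # the informative voxels

def classify_acc(A, B):
    """Leave-one-out nearest-centroid accuracy for two sets of trials."""
    X = np.vstack([A, B])
    y = np.array([0] * len(A) + [1] * len(B))
    correct = 0
    for i in range(len(X)):
        keep = np.delete(np.arange(len(X)), i)     # hold out trial i
        c0 = X[keep][y[keep] == 0].mean(axis=0)
        c1 = X[keep][y[keep] == 1].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += (pred == y[i])
    return correct / len(X)

# Slide a 5-voxel window across the brain: one accuracy value per center voxel.
radius = 2
acc_map = [classify_acc(X_a[:, max(0, v - radius):v + radius + 1],
                        X_b[:, max(0, v - radius):v + radius + 1])
           for v in range(n_vox)]
print(int(np.argmax(acc_map)))   # peaks over the informative voxels
```

The cross-classification tests in the study then go one step further: train the classifier in one region on one distinction (pain versus warmth) and test it on another (ex versus friend), which is how they showed that the two kinds of information live in different places.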