Bob Desimone: Visual Attention
June 7, 2014
All Captioned Videos Brains, Minds and Machines Summer Course 2014
Topics: Change blindness; receptive fields increase from V1 to IT; clutter is a challenge for recognition; one computational purpose of attention is biased competition; training an object classifier using recordings from IT in monkeys attending images of objects; normalization models: response is a weighted sum of all inputs with attention modeled as an increase in weights from the attended stimulus (e.g. Reynolds, Heeger, Neuron 2009; Ni, Ray, Maunsell, The New Cognitive Neurosciences 2012); biological mechanisms (Wilson, Runyan, Wang, Sur, PNAS 2012); evidence for role of synchrony (Singer) in spatial attention effects; study of dual recordings in FEF and V4 to test whether top-down inputs from FEF cause V4 synchrony (Gregoriou et al., Science 2009); feature attention; study of source of top-down object feature signals using MEG and fMRI in humans (Baldauf, Desimone, Science 2014) showing that FEF biases visual processing to different visual field locations and IFJ biases processing to different objects/features
PRESENTER: OK, we'll get started. So what we'll do now, as I said before, we have Bob Desimone coming from MIT, who's going to talk until about 3:30. And then we'll have a break. And then Ed Boyden [INAUDIBLE] we'll hear from [INAUDIBLE].
So it's a great pleasure for me to introduce Bob Desimone here, a man who doesn't really need an introduction. He's been shaping our viewpoints and illuminating our thinking about visual attention by virtue of his recordings from the macaque ventral visual cortex. You've already heard his name in terms of the discovery of [INAUDIBLE] and his work with Charlie Gross earlier in his career. And also, more recently, beautiful work combining functional imaging and MEG. So without further ado.
ROBERT DESIMONE: So we could spend this entire course just on the topic of vision. So I have quite a bit of material. We don't have to get through it all. We can just get through as much as we get through. And we can discuss any points, interrupt, have rebuttals, whatever. And I think we'll get an introduction to the work in the field.
So-- whoa. This is like-- maybe have this less booming, like the voice of God. I can? No, I don't think so. Oh, I see. Do you need it this loud? My gosh. How's that? Is that better? Less godlike?
So there's an informal rule in neuroscience, that you can't talk about attention without giving demonstrations of how big an effect attention has on your consciousness. So we will start off with some examples. And all the good examples, of course, have been used up by all the people giving attention talks.
So you may have seen one or more of these before, but for those who haven't-- so what you're going to see in the next one is this big, complex scene flashing on and off. It's called a change-blindness demonstration. There is something changing in the scene from flash to flash. And you are just to mentally note when you see the thing that's changing from flash to flash.
Have you gotten it yet? Now, psychologists who study this stuff have shown that the time it takes to find the change in the display is inversely proportional to IQ. I don't know if that has encouraged anyone to look harder. But I see there is one smart person in the back of the room. All right, I could give you a hint. It's the airplane engine. See that? You all see that now? I hope none of you are pilots.
So the point is that this scene is stimulating your retinas this whole time. But somewhere between your retina and your soul, the image of the airplane engine somehow got dropped out. It wasn't because the sensory stimulus wasn't there, but because you weren't paying attention to it.
It was something inside your head. And if we had electrodes in your retinas, we could determine that you were looking probably right at it, across it, and so on at times. And again, you weren't really processing it.
So OK, so then you think, OK, this is a trick with the flashing images. So the next demonstration is a mudsplash demonstration. With each mudsplash, there is a change in the scene. So you get that? Yeah, you got that? I hope none of you are drivers in Boston. So you see the line in the road? Yeah?
OK, so finally, if you are still unpersuaded, then the last demonstration was given to me by Aude Oliva at MIT. These are gradual changes that occur in the scene. So you're just going see gradual change. And just mentally note if you see something gradually changing.
Did you all catch all those changes-- all the people that appeared and disappeared, doors, windows, signs? Yeah. It was real. They all changed.
And since there was no temporal transient, none of the changes attracted your attention. And so you didn't see them. So psychologists that use these kinds of demonstrations-- what's that?
AUDIENCE: Request that you play it again.
ROBERT DESIMONE: You want me to play it again? Because you don't believe that it's changing. That's the first time anyone-- well, usually the people don't believe what I say. But actually, after I get to the science part of the talk.
All right? Believe me now? Actually, I just saw Marvin Chun give this talk where he gave a change blindness demonstration. And it was great, because everyone had studied it, and they couldn't see it. And then he told them, actually there was no change.
That was the best. OK, so now that I've convinced you that something important goes on inside your head that determines what it is you really see, we're going to talk about how that comes about. And the system that we study, mostly in monkeys but also in humans to a certain extent, is the visual pathways, which I assume you have heard something about so far this meeting.
And for the object recognition system, which is the one that's relevant for the recognition of objects, it starts in V1 and goes down to the temporal lobe. And there's a progressive elaboration of features along the pathway.
This pathway gets inputs from the pulvinar and gets feedback inputs from the parietal cortex and the prefrontal cortex. And we'll talk more about that later. But somehow, in the actions of this system, perception and its modulation by attention take place.
So one of the other things that we know about the properties of the system is that as you go from V1 down into the temporal cortex, receptive field size gradually increases, from pixel size in V1 to the whole room in the temporal cortex. And that one characteristic of this processing pathway is both a blessing and a curse. The blessing is, as you may have heard in some other talks, that we think these large receptive fields contribute to our invariant recognition of objects.
So you can have objects positioned anywhere in the retinal field, and the cells will still be driven. Or objects can be different sizes, and so on. So the cells essentially get a view on objects that is resistant to these kinds of changes in objects that aren't relevant for their object identity.
But on the other hand, the problem, as we'll see, with these large receptive fields is that now, you can have more than one object in the field at the same time. So that raises a computational problem for any model that attempts to understand our ability to recognize objects. And so why is that?
And we're now going to move into an experiment that was done in collaboration with Ethan Meyers, who is sitting in the back and you have presumably interacted with during the course. And Ethan, in fact, did all of the analyses that I'm about to show you on all the neurons that were recorded in this experiment. And it was also a collaboration with Tommy Poggio and John.
OK, so what's the computational problem that's caused by these multiple objects in the receptive field? So now we come to how we think object recognition comes about. And the general framework that we had developed over many years for thinking about recognition and attention in the ventral stream is what we call biased competition.
And the general idea was that objects, like the car, would be represented by a pattern of neural activity in the temporal cortex, with different neurons representing different features of the object. And if you had another object, the same population of neurons would just have a different pattern of activation across those neurons, because of the different features.
The problem comes when you have more than one object. Because now, if they're stimulating the cells at the same time, in principle, they could be activating all the same cells at the same time. And so now the neural code for any one object is now sort of confused or muddled, because you now have the same neurons representing features simultaneously present, but in different objects.
So the idea was that clutter would degrade this representation, but that the reason we have this attentional system-- one of the reasons-- would be to isolate and focus processing on just one of the objects, so that you would get back the code for just that one object. So that was the basic idea. And this is the equivalent, in a computer recognition program for objects, of putting a bounding box around your object and just running your recognition algorithms on what's inside that box, and not the whole scene at the same time. And then your ability to recognize an object in that box would be enhanced.
So what was the experiment that was used to test this idea? The idea had been talked about for a long time, but we had actually never tested the computational implications of it. So as I said, Ethan took the data we had collected from monkeys who were looking at objects on a screen. Each object would create a pattern of activity in the recorded neurons. And then you would send it through a pattern classifier that would learn the association between the object and the pattern of neural activity.
So different objects would have different patterns of neural activity. And you could make predictions about whether the pattern classifier had learned it properly, so you'd know whether you were correct or incorrect. And so that was the basic idea. This is known in vision analysis as a decoding kind of model: you take data from a large population of neurons, and use it to decode what the animal or person is seeing at that time.
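The decoding approach can be sketched in a few lines. This is a toy simulation, not the actual IT data or analysis code from the experiment: the number of neurons, their tuning, the noise level, and the nearest-centroid readout are all invented for illustration.

```python
import random

random.seed(0)
N_NEURONS = 50
OBJECTS = ["car", "fruit", "face", "furniture"]

# Hypothetical tuning: each neuron has an invented mean firing rate per object.
tuning = {obj: [random.uniform(5, 30) for _ in range(N_NEURONS)]
          for obj in OBJECTS}

def trial(obj, noise=3.0):
    """Simulate one trial's noisy population response to an object."""
    return [max(0.0, m + random.gauss(0, noise)) for m in tuning[obj]]

# "Train" the classifier: estimate each object's mean population pattern
# from 20 training trials per object.
centroids = {obj: [sum(col) / 20.0
                   for col in zip(*[trial(obj) for _ in range(20)])]
             for obj in OBJECTS}

def decode(response):
    """Nearest-centroid readout: report the object whose learned pattern
    is closest (squared distance) to the observed population response."""
    return min(OBJECTS,
               key=lambda o: sum((r - m) ** 2
                                 for r, m in zip(response, centroids[o])))

# Decode held-out trials and measure the fraction classified correctly.
n_correct = sum(decode(trial(obj)) == obj for obj in OBJECTS for _ in range(25))
print("decoding accuracy:", n_correct / 100.0)
```

With clean, well-separated patterns the readout is nearly perfect; degrading the patterns (for example, by mixing in a second object's drive) is what pulls the accuracy down in the clutter conditions described next.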
And here's the experiment from the monkey's point of view. The monkey would be fixating a spot. In this, and almost all the experiments that I'm going to tell you about today, we are holding fixation constant and just varying covert attention. So you can keep your eyes fixated on a spot, but you can attend to things off that spot.
And although it seems somewhat unnatural, in fact, you're moving your attention before your eye movements all the time. So your eyes are still, your attention moves, and your eyes move later.
So we're making use of covert attention in these kinds of experiments-- so monkeys fixating a spot, and then either one or three objects would appear in the receptive field. And after the objects appeared, a tiny, little, dim line would point towards one object. And through training, the animal would know that's the object it should pay attention to.
After some random period of time, that object would change color slightly, and the animal would make a saccade to it. And then on another trial, the cue would point to another object, and so on. And then you can measure this decoding performance in time windows all through the trial.
And the objects that were used were from four classes: cars, fruits and vegetables, faces, and furniture, which, as you know, are the four major classes of objects in the universe. So we could completely generalize to any object from one of these four classes. That was in jest, just to get started. And as I said, we're going to measure decoding performance in this window. It's going to be measured with area under the ROC curve: 1 is perfect classification and 0.5 is chance.
If you have just a single object in the receptive field-- here we're now looking at population decoding performance-- time 0 is when the stimuli come on, and at time 500 milliseconds, the attentional cue comes on. But when there's only a single object in the field, we're assuming the animals are attending to it the whole time anyhow, and there are no other distractors to block out. So that's sort of the baseline performance.
So now let's see what happens when you add three objects simultaneously. The red and green lines now show the decoding performance: red is when you're trying to decode the identity of the object the animal has been cued to attend to-- say this one-- and green is the decoding of an object he's not been cued to attend to.
And you can see. So here the cue comes on, and as soon as the cue comes on, the decoding performance for the attended object goes up, and the decoding performance for the unattended object goes down. So that's exactly what we were hoping to see.
The decoding performance for the attended object in clutter never goes up to what it would be if the object were there by itself. So attention moves you towards the performance you'd have for a single object in an isolated scene, but it's not as good as actually not having any distractors at all. So one lesson from that is, if you're trying to focus on something and keep yourself away from distractions, it's better to physically remove them than to try to just block them out using your attention all the time. Yes.
AUDIENCE: You might get to this, but is it that it increases the gain of some sub-population of neurons, or is it inhibiting the non-selective population?
ROBERT DESIMONE: Right, so now you're getting into models for how we-- well, what do we think has happened in the neural populations causing these changes? And we're going to get to that very soon. Any other questions, issues?
AUDIENCE: [INAUDIBLE] color in these neurons, which seems to be the relevant feature for the task?
ROBERT DESIMONE: Oh, color-- so in fact, in this example, they're actually colored objects. But in the real experiments and in the slides I've just shown you, they're all black and white objects.
AUDIENCE: OK, the monkey has to report the color changed?
ROBERT DESIMONE: He has to just say when a color change occurred. And it's a very slight color change, so it forces him to pay attention. Other questions? OK.
Now, what happens to the actual firing rates of the neurons? This is going to start moving us in the direction of the model. But in order to have a prediction about, or even track, what's happening to the firing rates of cells, you've got to segregate the responses in some way. And the way that we and others have usually chosen to do this is to take whatever objects we've shown, and for a given neuron, determine which object elicits the best response from the cell. So this would be your isolated best object.
When you present that object by itself, the neuron responds the best. Then you find an object that by itself is the worst object for the neuron. So let's say the neuron responds to a face but not a couch. The best would be the face, and the worst would be the couch.
So then we just track those responses over time compared to the mean. And actually, the red line is for the best, and the blue line is for the worst. And now this is when you attend, in clutter, to the best or the worst.
And here's where the cue comes on. That purple line is the actual firing rate of the cell to the attended best object. And you see, it jumps up really close to the response you would have had had the object been presented by itself, whereas the response to the worst object goes down. Now, that isn't to say that it's being inhibited, but at least it goes down on this scale.
And just to say, one of the things in the experiment that convinced us that we really were, in a sense, tracking what was going on inside the animal's head, in terms of what he was actually attending to: here we took the same data that I just showed you-- the decoding accuracy-- but now we synchronized it to the time of the color change of a distracting object.
So let's say the animal's attending to this object, and then something changes color over here. So now we just resynchronize the data to that point. And here, this is the decoding performance for that attended object, which is getting really good up to the time that now the distractor all of a sudden changes color.
And now you see the decoding performance for the object he was attending to drops down. And then all of a sudden, the decoding performance for that distractor that changed color jumps up. And so what do we think is happening right here?
What we think is happening is that that color change of the distractor actually attracted the animal's attention momentarily. And for that brief moment, his temporal cortex was reading out-- instead of the thing it should be reading out-- it was now reading out the properties of the distracting object. Does that make sense?
So it's like when you're focusing on something, then something happens across the room. Something salient attracts your attention. For that brief moment, your brain is processing the distractor, not the thing that you had wanted to process.
So now, there have been a number of models developed for how these response changes could come about. And they all have an aspect of normalization about them, which is that in order to understand the response to a given stimulus, you've got to divide by the inputs from a lot of different stimuli. So the more stuff you have in the environment, the bigger the divisor, and the more you pull down the response, which basically keeps the response within a fixed range.
Otherwise, if you just keep adding stuff to the visual field, the response could just keep growing indefinitely. And if you take that basic normalization idea and apply it to multiple stimuli in the receptive field, you get something like this.
This is a model that John Reynolds developed in my lab now many years ago. And the model was basically this: you have a target neuron that gets inputs from neurons at an earlier level of the visual stream, and these inputs are a combination of excitatory and inhibitory weights. And the reason one stimulus might be good and another might be poor for a cell is that the good stimulus could have a greater ratio of excitatory to inhibitory weights than a poor stimulus.
So here you have your two competing stimuli in one neuron's receptive field, and the response to these inputs is a weighted sum of all the inputs. This is without attention. This is just the weighted sum of all the inputs.
And what that gives you at a population level is a kind of average response to all the stuff in the receptive field. And then in this model, the attention is simply modeled as an increase in the weights coming in from that attended stimulus. If you just increase the weights then the neuron's response now becomes dominated by the inputs from that stimulus.
And if it's a poor stimulus for the cell, it will drive the cell's response down. And if it's a good stimulus for the cell, it will drive it back up to the rate it would have had had it been there by itself, depending on the strength of that attentional bias. And again, you can shift back and forth in the model.
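The weighted-sum-plus-normalization idea can be written out as a rough sketch. This is not the actual Reynolds model equations, just a minimal divisive-normalization toy in the same spirit; the specific drives, weights, attentional gain, and semisaturation constant are all invented.

```python
def response(drives, weights, attn_gain, sigma=10.0):
    """Toy divisive normalization: the numerator is an attention-scaled
    weighted sum of the input drives; the denominator sums the same
    attention-scaled drives plus a semisaturation constant sigma,
    keeping the output within a bounded range."""
    num = sum(g * w * d for g, w, d in zip(attn_gain, weights, drives))
    den = sum(g * d for g, d in zip(attn_gain, drives)) + sigma
    return num / den

# Two stimuli in the receptive field: a preferred one (high excitatory
# weight) and a poor one (low weight). All numbers are invented.
drives, weights = [50.0, 50.0], [1.0, 0.2]

alone_pref = response([50.0], [1.0], [1.0])           # preferred alone
alone_poor = response([50.0], [0.2], [1.0])           # poor alone
pair = response(drives, weights, [1.0, 1.0])          # both, no attention
attend_pref = response(drives, weights, [4.0, 1.0])   # attend preferred
attend_poor = response(drives, weights, [1.0, 4.0])   # attend poor

print(alone_poor, attend_poor, pair, attend_pref, alone_pref)
```

The pair response sits between the two alone responses (the "average"), and raising the gain on one stimulus's inputs pushes the response toward what that stimulus would produce by itself, mirroring the attend-best and attend-worst effects just described.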
And quantitatively, the effects come out pretty similar to what's measured from neurons. So if you look at the response to a stimulus as a function of contrast, with and without attention, the biggest effect of attention in that model is to shift that contrast response function to the left. So it's as though the stimulus has a higher contrast. And so the biggest attentional effects are found at low contrast. And in fact, that is what's found physiologically.
And then if you take the condition where you have, let's say, a preferred and a non-preferred stimulus in the receptive field and look at those responses as a function of contrast: here you have, without attention, the response to the preferred and the response to the non-preferred. And when you put them together, this is, again, sort of the average. But where you fall in this range will depend on the model parameters and the strength of those inputs.
But if you now attend to the preferred, the response will go up closer to the preferred by itself. And if you attend to the non-preferred, the response will go down closer to what you'd have with the non-preferred there by itself. Now, you'll see in the literature there are a number of other models, including the more recent one from Reynolds and Heeger, which I'll talk about in a moment.
But also, from the labs of John Maunsell and Tommy Poggio, there's a softmax model, which has a normalization component to it. All these normalization models, as it turns out, have similar behavior. And so the field has converged on the basic idea of normalization.
But actually, one real advance of the Reynolds and Heeger model is that, compared to models that just treat the attended and unattended stimuli, it takes in a lot of spatial parameters. So you may run across this model in the literature. It has to do with not just the receptive field, but also what's in the surround. And it has to do with the size of the attentional field, which was never modeled before.
So you can have a wide attentional field or a narrow one, and you can have a small stimulus or a big stimulus. And in this model, all of those are important parameters that will affect how the cells respond to stimuli. And people doing physiological studies are now working through those kinds of variations in stimulus and attentional field and so on, and testing the model's performance with all those different variations. And so far, the Reynolds and Heeger model seems, to me at least, to be standing up pretty well in predicting the effects of these spatial parameter changes.
There's also a lot of work going on in the cortex, the visual cortex in particular, but also the somatosensory cortex in mice, looking at the really detailed biology of what produces these response gains, and normalization, and response shifts, and so on. And I just took one example from the literature, which is from Mriganka Sur at MIT, who is using all the new methods for genetically targeting cells. So you can now image particular classes of neurons in a two-photon imaging experiment.
And you can track the activity of neurons that have been identified this way. And you can look at things like the response as a function of contrast, and look at how these curves change when you activate different populations of neurons-- for example, different classes of inhibitory neurons. And in this one study, activating the parvalbumin-containing cells changed the gain of that contrast response function. And there, unlike what we saw in the normalization model, the biggest effects are actually at the highest contrast. Whereas if you activate another type of cell, in this case the somatostatin-containing inhibitory cells, it actually looks more like a shift in the curve, maybe more like what we saw in those models we were just looking at.
And in fact, there's been a lot of attention now on the role that inhibitory neurons might have in those computations. And a lot of that work was really encouraging. But these are still early days.
And I just put up this slide from Carandini, who looked at these contrast response functions while blocking GABA transmission in the cortex-- or at least GABA receptors in the cortex-- with gabazine. And none of the effects are quite what you would have predicted by knocking out all the inhibitory cells. In this experiment, he got shifts up and down of the curves, which isn't the effect you would have predicted from any one of those cell types in isolation.
That's just to show you that there's still a lot of work that needs to be done. But we're moving in the direction of really having the biology of all these circuit computations in the cortex, now that we're able to use these methods, and the kinds of methods that Ed Boyden is developing for optogenetic control of neurons, to test the roles of different cell types. Any questions or problems?
So now I want to move on to another topic, which is one that is debated in the field, one that I'm going to present some evidence to you about. And that's the role of synchrony in all these attentional effects that I've just described. So far, everything I've told you has just been based on the average firing rates of cells. This is the standard way that people look at processing in the cortex, and the brain for that matter-- just averaging firing rates over some period of time.
But there's lots of evidence now accumulating from different neuroscience studies that it's not just the average integrated rate over some hundreds of milliseconds that's really important in neural processing, but the actual structure in the spike trains at fairly fine time scales-- in the range of milliseconds as opposed to hundreds of milliseconds. And to a certain extent, this has to be true given what we know about the temporal integration properties of neurons.
So if a neuron gets, say, different inputs from different cells, it can't integrate them over infinite periods of time. If one input comes in here and another comes later, as far as the cell's concerned, it just treats them independently. However, if they come in very close in time, then the cell will summate those different inputs. And so the question is, how close in time do those inputs have to be before you see that kind of summation? And the answer is probably different for different cells and different circuits, but it's more on the order of milliseconds and tens of milliseconds, probably, than seconds.
So here's an example of how synchrony could affect the responses of a cell in the temporal cortex, like the one we described in those decoding experiments earlier, that gets inputs from populations of cells at an earlier level-- say, area V4. So you have this IT cell that's getting these inputs-- and a typical cell will get over 1,000 different inputs in the cortex. One population has cells whose firing is asynchronous, at random with respect to each other, but the other one has the firing synchronized, so that for brief periods of time, all the inputs are arriving at the cell within a narrow temporal window.
And you're more likely to push the cell over threshold during those periods and get an output out of the cell. So what you could say is that, because of the synchronization, you have allowed one population of cells to better control the output of a downstream neuron. So we, and a number of other people in the field, have asked whether such a synchrony mechanism might play a role in attention as well.
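A leaky integrate-and-fire toy makes this concrete: the same number of input spikes drives far more output spikes when the inputs arrive synchronized in bursts than when they are spread at random. The time constant, threshold, and burst structure here are all invented for illustration, not fit to any real neuron.

```python
import random

random.seed(1)

def lif_output(input_times, tau=5.0, threshold=4.0, dt=0.1, t_max=1000.0):
    """Leaky integrator: each input spike adds one unit of drive, which
    decays with time constant tau (ms); count threshold crossings
    (output spikes), resetting after each one."""
    times = sorted(input_times)
    v, n_out, idx, t = 0.0, 0, 0, 0.0
    while t < t_max:
        v *= 1.0 - dt / tau                  # leak
        while idx < len(times) and times[idx] <= t:
            v += 1.0                         # an input spike arrives
            idx += 1
        if v >= threshold:                   # output spike and reset
            n_out += 1
            v = 0.0
        t += dt
    return n_out

# 100 input spikes either way: synchronized into 10 tight bursts
# (spread ~1 ms), or scattered at random over the same second.
sync_inputs = [100.0 * burst + random.gauss(0.0, 1.0)
               for burst in range(10) for _ in range(10)]
async_inputs = [random.uniform(0.0, 1000.0) for _ in range(100)]

print("synchronous:", lif_output(sync_inputs),
      "asynchronous:", lif_output(async_inputs))
```

The total input is identical in both conditions; only the relative spike timing differs, and only the synchronized inputs reliably cross threshold, which is the sense in which synchrony raises a population's effective gain on its target.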
Now, when you talk about neural synchrony, some of you have heard about synchrony and the binding hypothesis. So let me just tell you what that was. It originated with Wolf Singer.
And he had a very particular problem that he spent a lot of time worrying about, which was that we have neurons with receptive fields distributed over the visual field, and different feature properties, and so on. And the question was, how do you put all that information together? So this is a slide from one of his figures illustrating this problem.
So here you have this sort of ambiguous face with the candlestick in front of it. And at one moment in time, you might see it as a face with a candlestick in front of it. But at another moment in time, you might see this as two profile faces facing each other.
And then we know that, let's say in V1 or the early visual cortex, there'll be neurons that have receptive fields on the different sides of the face and on the candlestick itself. And his idea was that maybe, when you see the face as an integrated face, what's happening is that the cells whose receptive fields are on one side of space have synchronized their activity with cells whose receptive fields are on the other side. And so you get one face. But when you're seeing them as separate profile faces, maybe you've synchronized the activity of cells whose receptive fields are in just one profile or the other. And if they're going at different frequencies or different phases, you might see them as different objects.
And this idea provoked years and years of experimentation and argumentation in the field. And I would say that the consensus view now is that synchrony is not likely to be the solution to this binding problem. But this idea frequently comes up, which is why I'm bringing it to you now.
And I just want to alert you that this is not the same synchrony idea that we're talking about in the attentional experiments. And if you see in the literature people working on synchrony in the hippocampus, and the prefrontal cortex, and so on, they're not talking about synchrony as related to this binding problem, even though that's the most popular idea in the field-- synchrony has something to do with binding.
Really, what I just described to you is that synchrony might play a role in increasing the gain of the impact of one population of neurons on another. So it's just a way of increasing impact, and not necessarily binding different types of information. Is that clear?
So let's go back to this idea of synchrony. And I mentioned that cells integrate information only within narrow time windows. And that can be experimentally measured.
I just put up this example from the work of Justin [? Yee ?], who, some years ago, was measuring the temporal and spatial integration properties of dendrites in the hippocampus using the caged glutamate technique. So he could uncage glutamate and [INAUDIBLE], which would then depolarize portions of the membrane of the dendrites of the cell. And he could vary how closely spaced the inputs were in time, or in space across the dendrites. In this slide, what we're looking at is the response of the cell, in terms of the membrane potential amplitude, against the number of inputs that caused that response.
And the circles are when the inputs are closely spaced in time, and the triangles are when they're widely spaced in time. And you can see that for a given number of inputs, you get a higher output if the inputs are closely spaced in time. This is just telling you that cells have some finite integration property that's going to affect how they respond to a stimulus. So if you think this is general, then it's not a matter of whether cells will be sensitive to the temporal structure of their input. Essentially, they must be sensitive to the temporal structure of their input.
So how could you test those [INAUDIBLE] in attention? So in experiments that were done now quite a few years ago, we recorded from neurons in monkeys, and at the same time recorded, besides the action potential trains, the local field potential on different electrodes. So you put bunches of electrodes in the cortex. And have you had any discussion of the local field potential in the class so far?
ROBERT DESIMONE: You did? OK, good. So the local field potential is measuring the general polarization state of membranes in some volume. It's still debated exactly how big that volume is.
But if you're recording from a neuron, the way you would separate out the local field potential from the spikes is just differentially filter it. So you could low-pass filter it to get the local field potential, which might look something like this. And if you high-pass filter it, you'll get out the spikes that look something like that.
And one way of relating the timing of spikes in one cell to the whole population is to look at the timing relationship of spikes to variations in that local field potential. And what many people find is that if you average the local field potential around every spike, you get something called the spike-triggered average of the local field potential. And in most experiments, it looks something like this, where the spikes are occurring at the time of what people would think is the state of maximum depolarization in that local network.
So when the whole network becomes more depolarized, not surprisingly, spikes tend to occur at that time. And we could debate whether it's the cause or effect of the change in local field potential, but it's a fact that that's when spikes tend to occur. And you can see it has this oscillatory structure to it, a damped oscillation. What it's telling you is there is some frequency relationship of the spikes to the field potential, but it's not constant, which is why it's damped.
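As a concrete illustration, the spike-triggered average just described can be computed in a few lines. This is a toy sketch, not the lab's analysis code: the 40 Hz oscillation, the noise level, and the spike phase are all invented for the demo.

```python
import numpy as np

def spike_triggered_average(lfp, spike_idx, fs, win_ms=50):
    """Average the LFP in a window around each spike.

    lfp       : 1-D array, the low-pass-filtered signal
    spike_idx : sample indices at which spikes occurred
    fs        : sampling rate in Hz
    win_ms    : half-width of the averaging window in ms
    """
    half = int(win_ms * fs / 1000)
    # keep only spikes whose whole window fits inside the recording
    valid = [i for i in spike_idx if half <= i < len(lfp) - half]
    segs = np.array([lfp[i - half:i + half + 1] for i in valid])
    return segs.mean(axis=0)

# toy demo: spikes locked to a fixed phase of a noisy 40 Hz oscillation
fs = 1000
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(len(t))
spikes = np.arange(18, len(t), 25)   # every 25 ms = one 40 Hz cycle
sta = spike_triggered_average(lfp, spikes, fs)
# sta has 101 samples; its center reflects the LFP phase the spikes lock to,
# and the oscillation around it averages out into a damped ringing
```

Because the spikes here are perfectly periodic, the ringing in the average persists; with jittered spike timing it decays away from the center, which is the damped shape seen in real spike-triggered averages.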
OK, now you record from cells in an area that's intermediate along that pathway. And the animal either attends to it or not. This is an area known as V4. You get something like this in the firing rate histogram. And here's the spike-triggered average of the local field potential. In both of these figures, the red line is when the animal's attending to the stimulus. And the blue line is when the animal's ignoring the stimulus.
So with attention, the firing rate to the stimulus goes up. And in addition, you see there's more structure in the local field potential, and greater amplitude of that ringing. So you can analyze this using some measure of coherence.
This is the phase locking of spikes to that local field potential, which is what we've done here for a population of cells. Here's coherence. The more coherence, the more phase locking.
And here it's plotted as a function of the frequency components in the field potential. And you can see that in this range of about 40 to 70 Hertz or so, with red being the attended condition, there's more phase synchrony of spikes to the local field potential than in the unattended condition. This is what's now been found in lots of studies.
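The phase locking of spikes to the field potential can be quantified several ways; one simple measure (not necessarily the exact coherence measure used in these studies) is the phase-locking value: the vector strength of the LFP phases at spike times, taken from the band-limited analytic signal. A sketch with synthetic data:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking(lfp, spike_idx, fs, band=(40, 70)):
    """Vector strength of spike phases relative to a band-limited LFP.
    1 = spikes always occur at the same LFP phase; 0 = no relationship."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))  # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase[spike_idx])))

# demo: spikes locked to every cycle of a 50 Hz LFP vs. spikes at random times
fs = 1000
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / fs)
lfp = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(len(t))
locked = np.arange(100, len(t) - 100, 20)          # fixed phase (50 Hz period)
scattered = rng.integers(100, len(t) - 100, 200)   # arbitrary phases
plv_locked = phase_locking(lfp, locked, fs)
plv_scattered = phase_locking(lfp, scattered, fs)
```

Perfectly locked spikes give a value near 1; spikes at arbitrary times give a value near 0. The attention effect in the data is a shift along this axis in the gamma band.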
So if you like the idea that one aspect of the mechanism of attention is this increase in the temporal synchrony of cells, then this would be evidence that you'd like to see-- that there is more temporal synchrony in a range that puts spikes into the sort of sweet spot of the temporal window. You also see here in this low frequency regime, there is suppression of synchrony to the low-frequency components of the local field potential. And we're not going to have a chance to talk much about that phenomenon today, but that's a whole other lecture, a whole other topic. There's a lot of interest in these low-frequency effects of attention we could talk about later on.
So this gamma synchrony in this sort of 30 to 70 Hertz range-- how does that come about? There's a lot of work going on in the field right now looking at the biology of gamma rhythm. But regardless of the details, they nearly all have the flavor of some kind of interaction between inhibitory cells and excitatory cells, in that the frequency of gamma-- that is to say 40 Hertz-- is closely related to the time constant of the GABA-A receptor.
And again, you can wire this up in different ways. But basically, if you were to activate the inhibitory cells, they would suppress excitatory cells. But then they would rebound.
So this is from electrically stimulating inhibitory cells. You stimulate them. There's some inhibition of the excitatory cells. But then they rebound and they would burst.
And then if there's some feedback from the excitatory cells to the inhibitory cells, well, the inhibitory cells would shut off the excitatory cells again, and so on, and so on. And you'd get some sort of oscillatory structure here. And the frequency of the oscillation would depend on all kinds of parameters, including the strength of the drive, and so on.
But it would get you up into that gamma frequency range. And besides interest in this because of attention and other kinds of processing in the cortex, there's a lot of interest in these gamma mechanisms because they have been implicated in a variety of different brain disorders. There's some evidence of some problem with inhibitory cells, and with some dysfunction in these little gamma circuits.
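The suppress-and-rebound loop just described can be caricatured with a linear two-population model. This is only a sketch: the coupling weight is invented, and real gamma models use spiking or nonlinear rate dynamics. The point it illustrates is just that an inhibitory time constant in the 10 ms range (roughly the GABA-A decay) naturally produces ringing near 40 Hz.

```python
import numpy as np

def ei_impulse_response(tau=0.010, w=2.5, dt=1e-4, T=0.3):
    """Linearized excitatory/inhibitory loop: pulse the inhibitory
    population and watch the excitatory population get suppressed,
    rebound, and ring.

    With equal time constants tau, the ringing frequency is
    w / (2*pi*tau); tau = 10 ms and w = 2.5 gives ~40 Hz.
    """
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    I[0] = 1.0                       # the inhibitory "stimulation" pulse
    for k in range(n - 1):
        # E is suppressed by I; I is driven by E; both decay with tau
        E[k + 1] = E[k] + dt / tau * (-E[k] - w * I[k])
        I[k + 1] = I[k] + dt / tau * (-I[k] + w * E[k])
    return E

E = ei_impulse_response()
# find the peak frequency of the ringing
spec = np.abs(np.fft.rfft(E))
freqs = np.fft.rfftfreq(len(E), 1e-4)
f_peak = freqs[spec.argmax()]        # lands in the gamma range
```

Stimulating the inhibitory population first drives the excitatory trace negative (suppression), then it rebounds and oscillates, with the frequency set by the time constants and coupling strength-- the same qualitative story as the stimulation experiment described above.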
So now let's get to how the processing in this pathway is, in fact, modulated by all these top-down attentional inputs. So when we train an animal, and tell the animal attend here, attend there, whatever-- what's causing these [INAUDIBLE] themselves? Just before we get to the feedback, any other thoughts, questions?
OK, so the two brain areas that are most talked about in terms of their feedback role in attention are the parietal cortex and the prefrontal cortex. There's a lot of interest in the thalamus as well, but we're going to focus on the cortex. And we know in humans that if you damage these structures-- particularly in the right hemisphere, particularly the parietal, but also the prefrontal-- you can get something that's called neglect, where people fail to attend to stimuli in the contralateral space.
You've all heard about neglect? And you've heard of people that fail to attend to things in half the space and so on. You get extinction syndromes, where people can attend to things if there's just one thing in the visual field. But if you put a competing thing in the other visual field, then the thing in the bad field will disappear. So we know from the lesion evidence there's some important role of these areas in the control of attention.
And Tirin Moore on the monkey side has really done a remarkable series of studies using electrical stimulation, which is sort of the opposite of a lesion. He's used electrical stimulation to show that the frontal eye fields in the prefrontal cortex in particular play an important role in mediating some of these effects of attention that feed back into visual cortex. So this is just an example from one experiment that Tirin Moore did, looking at this role using electrical stimulation of the FEF cells while monkeys were in one of these kinds of attentional experiments.
So here's a monkey fixating a spot. Here's a receptive field of a cell in V4. Here you can put two stimuli in that receptive field, and one could be a preferred stimulus and one could be a non-preferred.
And what Tirin did was he electrically stimulated in the frontal eye field at a representation that was either centered on one stimulus or the other in the same neuron's receptive field. So you could stimulate and say, focus processing here. Or you could stimulate somewhere else in the FEF and say, focus processing there. Does that make sense? So instead of telling the animal to attend, you would electrically stimulate at that same spot. Yes.
AUDIENCE: [INAUDIBLE] much the size of before? That it's--
ROBERT DESIMONE: No, they're actually bigger, and so it's an interesting issue. So they're actually bigger. But if you electrically stimulate at the spot, the eyes will go to one very precise spot. So it's not like they go anywhere within one of these large fields.
And the general thinking is that in places like the frontal eye fields, or the colliculus where you have big movement fields, any given location in a field is specified by the intersection of fields that are only partially overlapping. So you can target a precise spot even using large fields, as long as it's read out as a population code.
OK, so this slide simply shows that if you had one stimulus in the field and you stimulated that spot, the cell would give an enhanced response to that stimulus. The slide I meant to put in here was going to show that if you stimulated the spot with a good stimulus, the response went up. But if you stimulated the spot with a poor stimulus, the response goes down. So in other words, the cells were mimicking the effects of attention, except that all these effects have been produced purely by electrical stimulation. So anyhow, there are a lot of experiments showing that the frontal eye fields play an important role in the control of attention.
So an experiment I'm going to tell you about now, that was done in my lab-- there were simultaneous recordings in the frontal eye fields and in area V4. And the task of the monkey-- I sort of glossed over the task, so I'll give you a little bit more detail here. So the monkey's fixating a spot on the screen. In all these experiments, you monitor the monkey's fixation with infrared video cameras. And you can put multiple stimuli on the screen.
And in this experiment, Georgia Gregoriou, who did these recordings, found a spot in the frontal eye fields and in area V4 where the cells had overlapping receptive fields, like you see here. And then one stimulus would go in there. And then there would be two other stimuli on the screen. And they'd be different colors.
And then, after the stimuli were on the screen, then the fixation spot would turn a color. And that color would tell the animal which colored grating it should attend to in that trial, because then at some random time afterwards, that grating would itself turn color slightly. And then the animal would indicate it with the bar [INAUDIBLE].
And then it would get a reward, which was juice. And then you can change the color from trial to trial, and change the position of the grating, and so on, so everything's random, counterbalanced, so on. And you can look at the effects of the stimulus on the cells with and without the animal attending to it.
Now, if you do that, and you record the firing rate of cells in the frontal eye fields and in V4-- in both areas, when the animal gets the cue to attend in the receptive field, the firing rate goes up, which is the red line. And if the animal gets the cue to attend to the stimulus outside the field, you see the firing rate goes down. So there's actually a push-pull.
In terms of the read-out of the cell's response, it can go in either direction-- either below or above the baseline condition. But the effects occur earlier in the frontal eye fields than in V4. And that's consistent with the idea that the frontal eye field cells are the controllers of the visual cortex. They do their thing first, and the visual cortex follows.
And if you look at synchrony in these two areas-- the effects of attention on synchrony-- both areas show enhanced activity in this gamma frequency range when the animal attends to a stimulus in their fields.
And furthermore-- now, this is the interesting thing-- as you look across areas, the effects of attention on synchrony are even bigger across areas than within an area. So in either direction, from V4 to FEF or FEF to V4, there's very strong enhancement of synchrony across the two structures with attention. So one area is firing when the other cells are at the right phase of their response.
And it's highly selective for just the cells that have the overlapping receptive fields. So if you take cells with overlapping fields and measure synchrony, then the effects of attention are very big. But if you look at cells that have different receptive fields in the two areas, and look at synchrony across those, there's no synchrony, and there's no increase in synchrony with attention. So you've got to have everything lined up for it to work.
And it's also specific for the type of cell in the frontal eye fields. The frontal eye field has some cells that are purely movement. They respond just before saccades. You've got some cells with mixed properties. And then you have a lot of cells that we call visual cells. They actually have visual responses.
And if you look at the synchrony of the visual cells with the V4 cells, they're strongly modulated by attention. But for the visuomovement cells and the movement cells-- there's no significant modulation. So they not only have to have the overlapping receptive fields, but they've got to be the right cell type.
And we know from anatomical studies that these visual cells tend to be in layers 2 and 3. And these are the cells that tend to project back to the visual cortex, whereas the movement cells tend to be located in layers 5 and 6. And those tend to be the cells that project down to the colliculus. So that's suggesting that the attentional system is making use of the visual cells-- it's not just the output of the movement cells that are involved in making saccades.
This is an argument-- this is a debate that's occurred in the field for many, many years: is spatially directed attention really just the oculomotor signal but with some suppression of the eye movement itself? And this is the kind of study that would suggest that actually, there are different cell types involved in these two different functions. Make sense?
Now, one thing that you can look at to get an idea of the timing relationship between the two areas-- so I told you there's synchronous activity. But I didn't tell you about what's the phase relationship in activity between the two areas. So one possibility is that you could have a constant phase across frequency between the two areas.
So for example, they could fire at the same phase with each other. Or you could have some phase shift between the two areas. And it turns out that when you actually measure this phase relationship between the two areas, it varies across frequencies.
So this is showing the phase in gamma, and this is in beta, and this is in theta. And you can see at every frequency, it's a different phase. But the really striking thing that came out of this analysis was that at every one of these phase relationships, if you convert it to milliseconds, it comes out to the same number of milliseconds.
So in other words, there's a constant time shift between the two areas that has to translate into a different phase at different frequencies. So we're going to come back to this type of analysis. It's the kind of analysis that's been used in other kinds of studies and with other kinds of data. But the point is that the timing of phase relationships can tell you something about the interactions between the two areas.
And what we have argued is that this constant time relationship is the time that's needed for spikes to actually get from one area to another. And that is the conduction time plus the synaptic delays between one area and another. And that when you have this constant time, it means that cells in one area can fire, and by the time the spikes get to the next area, they will arrive at the time that this population was in its depolarized phase, so that they were most likely to have an impact on these cells.
You could imagine the cells over here fired, and the cells over here happened to be in the hyper-polarized part of their firing phase. They may not respond to inputs at all. But if they're depolarized, then those new inputs might push them over threshold and they'll fire. And likewise, you can go back and forth and back and forth this way. That make sense?
So everything I told you up to now had to do with spatially directed attention. And that's the most natural. As I said, you move your spatial attention before you move your eyes. That's what people think about and so on. But we know that people can attend to what we call features and objects irrespective of the location.
So for example, if I ask you to find the girl in the pink shirt in that scene, you can do that. So how did you do that? Did you scan the scene like a raster in the old CRT displays, and eventually it hit the girl in the pink shirt? Yes? You did that?
And for the frontal eye fields, if you tell a monkey, attend to this one spot, the frontal eye fields become active at that one spot-- how difficult can that be for a brain? But if I just tell you to find a girl in a pink shirt, where does that come from? Where's the map of pink shirts in your brain?
And is there a map of girls versus boys? And where does that come from? And you had to use your knowledge of what shirts are like, and what the color pink is like, girl looks like, and so on and so on. A lot of stuff went into that.
And yet you probably did it very quickly, maybe just a couple eye movements and boom-- find the girl in the pink shirt. And this is something we do all the time. You are constantly guiding your attention to objects, without necessarily knowing in advance where you should be attending. If you're like me, you're usually spending your time looking for the stuff that you lost in the environment, right?
So you do this all the time, and you do it very efficiently. It's not that different from the question of, if I say, imagine in your mind's eye what a girl in a pink shirt might look like. Or imagine what your mother looks like.
Now, how do you do that? So you can use your long-term memories of these things. And you presumably have some way of loading it back up into your visual cortex. Because there's lots of evidence that you actually do use your visual cortex during memory recall and imaging.
So you have some way of getting back to visual cortex or something. And so our guess is there's a similar kind of mechanism. So how does that work?
So what I'm going to tell you about is this one experiment that we published recently on trying to understand how that works. And then if there's time, I'll talk about the monkey. So in this experiment, we used magnetoencephalography coupled with fMRI in people. Does everybody know what magnetoencephalography is? You've heard about it in the class before? They grew up with this-- kids these days, what they learn.
So if you have neurons that are oriented in parallel, and if they fire, they will, of course, generate electric currents, and the currents tend to be along a path. That will generate a magnetic field. It's the right-hand rule, so you can tell which direction the magnetic field is rotating.
And an MRI machine can induce magnetic changes that affect the spinning of protons in your brain. So you have giant magnets. But MEG works the exact opposite of MRI, in that you detect changes in magnetic field. You don't induce them.
And by detecting them, you can try to solve the inverse problem, which is, of course, in principle not solvable in every case. But you can try to approximate the sources of those magnetic field changes. And one of the advantages that MEG has over EEG, which is another way of measuring the time course of neural response changes, is that with EEG, the electrodes are all attached to the same conductor.
So you're trying to pick out electrical fields, but they're connected to your scalp, which is itself an electrical conductor. And so you've got to have very, very good models of the scalp and skull and so on to try and understand how those fields are going to be differentiated from each other. Whereas in MEG, all the detectors are totally independent, so it avoids that problem.
And many people feel, therefore, that the localization you get is better in MEG, although that's still debated. But in modern MEG systems, they actually have EEG built in as well, so you could, in principle, always do both.
So now, in spatially directed attention, you have this great advantage in that you can look at just different parts of these visuotopic maps in the cortex, and say, oh, what happens when you attend there? What happens when you attend there?
But for feature and object attention, that's not so straightforward, right? So here, what we wanted to make use of was all the fantastic information on separate representations for different objects and features in the cortex. Particularly from the work of Nancy Kanwisher, who has been here-- she's not here at this moment-- she had the earliest studies showing that there is an area in the temporal cortex, the fusiform face area, that's specialized for the analysis of faces, and the parahippocampal place area that specializes in the analysis of scenes and houses and so on.
And so we decided we would make use of those stimulus classes in looking at object-based attention and how it's induced in the cortex. Nancy had previously shown with fMRI that you can modulate the overall activity in these areas as subjects attend to one stimulus or another. And we wanted to look at the timing of these activity changes.
Now, with MEG, you can actually get some localization of where the signals are coming from in the cortex. But what you'd really like is a signal that could just tell you what stimulus is generating that signal. And the way that other people have done this is with something sometimes called steady-state evoked potentials, or sometimes called frequency tagging. The idea is that if you present a stimulus class at a fixed frequency, you can actually pick up that frequency from the brain.
So just like you have something going [MAKES NOISE], the brain signal would go [MAKES NOISE]. And if you have two different things at the same time, which we're going to have in this experiment, you can just present them at different frequencies. And so if you see one frequency in the brain, and that's the frequency of one class, that must come from there. And the other frequency must come from the other.
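Frequency tagging can be illustrated in miniature: mix two streams modulated at different rates, add noise, and the Fourier spectrum of the mixture still separates them. The tag rates and the attentional gain below are arbitrary illustrative numbers, not the ones used in the actual experiment.

```python
import numpy as np

# Two overlapping "stimulus streams", each tagged at its own rate; the
# mixed signal still carries a separable spectral peak for each tag.
fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
face_tag, house_tag = 6.0, 7.5   # Hz (hypothetical tag frequencies)
attn_gain = 1.5                  # hypothetical boost to the attended stream

signal = (attn_gain * np.sin(2 * np.pi * face_tag * t)   # attended: faces
          + np.sin(2 * np.pi * house_tag * t)            # ignored: houses
          + 0.5 * rng.standard_normal(len(t)))           # "brain noise"

amp = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
p_face = amp[np.argmin(np.abs(freqs - face_tag))]
p_house = amp[np.argmin(np.abs(freqs - house_tag))]
# p_face > p_house: the attended stream's tag frequency dominates
```

Reading out the power at each tag frequency is how one mixed recording can be decomposed into a "face signal" and a "house signal", and the attentional modulation shows up as the relative height of the two peaks.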
And we wanted to put the objects on top of each other, because we didn't want the subjects to be able to use spatial attention. So the objects were faces and houses. And they were presented in this temporal sequence in which they were often overlapping in time.
And the stimuli were fading in and out in sequence, so that they were becoming more or less visible in a kind of sinusoidal pattern at these different frequencies. And the subjects were doing a one-back memory test, which forced them to focus their attention.
Now, to show you what that actually looks like, here's an example here. So imagine that I've instructed you to attend to, let's say, faces. And you were to signal when one face was repeated in that sequence. OK, can you all do that?
And because the stimuli are going sort of sinusoidally in and out of visibility, there are no sharp onset transients, which cause distortion effects in the frequency domain. And you can present them at different frequencies. And of course, you would balance it, so faces would be at one frequency and the houses at another, and then you reverse that.
Now, to get better localization than you normally can get from MEG, you use fMRI to localize these areas. So we just did a localizer test for the place and face areas. And then we had the subjects doing that attention-demanding task compared to a control task. And that gave a significant activation in an area of the human frontal cortex known as the inferior frontal junction, which a number of other studies had shown is important for working memory and general cognitive processes in the human frontal cortex. OK, so then we have some areas to focus our attention on in the MEG.
So here's what happens to the sensory signals as a function of what the subjects are attending to. Here's the power, and here's the frequency. And you can see, here's the face area. Here's the place area.
And the blue line's when they're attending to faces, and the red line's when they're attending to houses. So you can see in the face area, that frequency for the faces-- the power is very high when the subjects were attending to faces, and low when they're attending to houses.
But in the place area, it's the exact opposite. So here, the power's high when subjects are attending houses and low when they're attending to faces. Does that make sense? So this is the modulation of the sensory response by attention that you would expect to see from earlier work.
The frontal eye fields didn't show a significant effect-- presumably because this task isn't spatial. And here's this inferior frontal junction, where in the same area you get the attentional effects on face and house signals. But you see the attentional effects are even bigger here than back in the temporal cortex.
It's like almost all or none. So when the subjects attend to houses, you only see the house. There's virtually no place signal. And when the subjects attend to faces, you see the face signal, but there's virtually no house frequency there-- so very strong.
So this is all encouraging, that maybe the IFJ is maybe involved in this task. But we're going to get to, in just a moment, the evidence for some interaction between these areas. But one of the things that is good about using these signals that are sinusoidal is that you can just read out the phase of the signals very easily.
And that phase tells you essentially a kind of latency for the area where you get the signal from. So V1-- that phase translated into a latency of 162 milliseconds, whereas the face and place area, it's about 20 milliseconds later. And in the IFJ, it's about 20 milliseconds after that.
So we would argue that this is consistent with a sort of 20-millisecond-per-level jump forward in time as you go up the system with the sensory information, from V1 up to the frontal cortex. Now some people say, oh, that seems like a very late number for V1. V1 normally responds to stimuli earlier than that.
But remember, this is the response to visibility. It's not to a luminance onset. And so you can't really compare these numbers to what people get to flashing things and measuring [INAUDIBLE] response. But anyhow, hold that number in mind-- the 20 milliseconds.
OK, now let's look at something that's not locked to the stimulus frequency. Let's look up in the gamma frequency range. And now I'm just showing you here power in the gamma frequency range over time in the trial.
And here if you look, for example, at the face area and the place area, the red's when they're attending to houses, the blue to faces. This is the start of the trial. And during the cue period, the subjects are told by a little character at fixation what they should be attending to.
And then during the fixation period, there's nothing on the screen. And during the stimulus period, the stimuli are on. So you can see that as soon as they get the cue, the power in gamma starts building up, specifically for each area, as though the area is preparing itself for the stimuli that the subject's going to be attending to. And then it stays high through this-- say this is like the working memory delay, the blank period. And then it stays higher throughout the stimulus period-- again, the opposite effect for the face and place areas.
In the IFJ, it shows the same temporal pattern, but it does show more gamma power for attending the houses. And we don't know why that's true. But we speculate it's because it was harder for the subjects to attend houses. So we think the IFJ is working hard during attention [INAUDIBLE]. Again, this is speculation.
OK, so now let's look at interactions between areas. And now we're going to look at coherence. And here we're looking at coherence between the IFJ and the face area, and here between IFJ and the place area.
And what this shows is that in the face area, you have this bump up in gamma frequency coherence when the subjects attend to faces. And in the place area, you get the bump up when the subjects are attending to places. And this gamma frequency coherence is, to us, the exact parallel to what we saw between the frontal eye field in V4 when the monkeys were attending to locations.
Now it's with objects and in people, and with this IFJ area as opposed to the frontal eye field. Does that make sense? So we would say possibly, the IFJ functions as sort of the parallel to the frontal eye field in terms of how it is interacting with these two areas.
Now I told you that as you go from one area to another up that hierarchy, there's about a 20 millisecond jump as you go from area to area. What about the feedback? In the monkey, we saw there's like a 10 millisecond time shift between the areas in terms of feedback.
And we could do exactly the same analysis in these human subjects, where we look at the phase shift translated into milliseconds as a function of frequency. And if you plot this-- if you plot the change in time as a function of frequency-- if it's a constant time, this should be a linear slope. And the magnitude of that slope will tell you how big the time shift is. And the direction of the slope tells you which direction the interaction's in.
And what this is telling us is that there's about a 20 millisecond time shift between the IFJ in these two areas and the IFJ leads the activity in face and place here. So we think that, again, is the time it takes for activity sort of to get from the frontal cortex now back to the temporal cortex with all the synaptic delays and so on. And we think 20 milliseconds could be just because the human brain is bigger.
But again, we don't know that for sure. But that would be our interpretation of these data. So you'd have this sort of optimal timing for activity across the two areas to enhance processing attention to the appropriate object.
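The analysis described here-- a constant time lag showing up as a cross-area phase shift that grows linearly with frequency-- is easy to sketch. The 20 ms lag matches the number given in the talk; the frequencies are arbitrary, and real measured phases would need unwrapping first.

```python
import numpy as np

# A fixed conduction + synaptic delay "lag" between two areas produces a
# cross-area phase that grows linearly with frequency:
#     phase(f) = 2 * pi * f * lag
# so the slope of phase vs. frequency recovers the lag.
# (Measured phases wrap mod 2*pi and would need np.unwrap first.)
lag = 0.020                                       # 20 ms, as in the talk
freqs = np.array([5.0, 15.0, 25.0, 35.0, 45.0])   # theta through gamma (Hz)
phases = 2 * np.pi * freqs * lag                  # idealized measurements (rad)

slope = np.polyfit(freqs, phases, 1)[0]           # radians per Hz
recovered_lag = slope / (2 * np.pi)               # back to seconds
```

The same fit applied in the other direction (negative slope) tells you which area leads, which is how the feedback direction from IFJ to the face and place areas was inferred.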
So here's the parallel. We think that, again, from the monkey experiments, we have this synchronous interaction between the frontal eye fields and visual cortex, which is V4, based on spatially directed attention. You have cells which fire at the appropriate phase for the area they're coupled with.
We see the same kind of thing between the IFJ in the face and place area, again, with the same kind of timing relationship. So we think it could be just sort of a common principle for how areas interact with attention, this feedback control system. That make sense? Any questions? Yes.
AUDIENCE: What was the fMRI contrast you used to get the IFJ observation?
ROBERT DESIMONE: That was between the attention task itself. Then we had just a control, a very difficult fixation task. It wasn't meant to be the best of all fMRI experiments of isolated attention. It just gave us some candidate areas to look at in the MEG experiment.
AUDIENCE: Is there, like, an obvious candidate for [INAUDIBLE] in the monkey brain?
ROBERT DESIMONE: Yes, we think. So in the last few minutes, I can tell you about what we think is the homologue in monkey brain. And it's an area that we've started calling the VPA, or ventral prearcuate, which in the monkey is immediately in front of the frontal eye field.
And we have started recordings in this area. And unfortunately, we don't have the same experiment as we have in people. So you can't do a direct comparison in the same task. That's just how life works.
But we do have a task which has been very, very helpful in separating out the effects of spatial and feature attention in these monkey recordings. Because the key for us is really that separation between spatial and feature. And the task that we use is visual search, which is more like looking for the girl in the pink shirt.
So the monkey starts by fixating. Then it gets a cue which is different on every trial. But it shows an object that the monkey should then look for. This would be like pink shirt.
And there's a blank fixation period. And then when the array comes on, the monkey can look around anywhere he wants. But when he finally finds the thing that he's supposed to be looking for, he just holds his fixation there and gets a reward. Now why is that a good experiment for separating spatial from feature?
Well, for spatially directed attention, you can just look at, let's say, the response to a stimulus and the animal's making an eye movement to it versus making an eye movement someplace else. So that's easy-- attention before eye movement. Now for feature attention, what you can do is take a time period where the animal's about to move his eyes to some stimulus-- let's say this one over here. But our receptive field is someplace else, like over here.
And so the animal is not attending to it spatially. However, it may or may not have the thing that the animal's looking for. So let's say he was looking for a pink shirt. There may or may not be a pink shirt in that field.
So we can say, oh, how does the response to the pink shirt vary depending on whether the animal's looking for pink shirts or whether the animal's looking for blue shirts, or something else, or a shoe-- whatever? And so we can compare those two situations-- spatial versus feature. And we were not the first to use this design. It's been used in a number of experiments on the frontal cortex.
OK, so you record from VPA, compare it to the frontal eye fields. And we did some recordings in the IT cortex. The VPA and frontal eye field cells have almost identical receptive fields. And this is shown in this sensitivity plot here. The sizes are almost the same. IT receptive fields are much bigger.
But unlike the FEF, VPA cells are also selective for the object features. So here what we've done is, we've taken the response to the best versus the worst stimulus for every cell. And the red line shows the best, and the blue's the worst. And so IT cortex has very good sensory selectivity. Here's the best. Here's the worst-- very nice. Frontal eye fields, there is no best and worst, because there is no feature selectivity. But in the VPA, it's sort of intermediate. There's pretty good feature selectivity in addition to that spatial selectivity. Yes.
AUDIENCE: Is there a bit of retinotopy in the inferior frontal junction?
ROBERT DESIMONE: So in the monkey, to us it looks sort of random. Unlike in the FEF, where there's a coarse topography to it, in the VPA they all seem to be mixed in. The objects seem to be all mixed together. Preferred objects in the receptive fields also seem kind of random. So that's in general true of the prefrontal cortex: it's a lot of mixed up stuff there.
OK. So now, by doing that manipulation that I showed you earlier looking at feature versus spatial attention, we can look at the effects of feature attention and spatial attention on the responses. And this is the difference between the green, red, and blue lines here in the frontal eye fields and VPA. The green and red line-- that difference would show you the effects of feature attention, and the green and blue line would be the effects of spatial attention.
And the frontal eye field does show effects of feature attention, in that frontal eye field cells do seem to know where the thing is that you're looking for, even when you're not attending there spatially. That's been known for some time. It's mysterious.
How do they do this? Where does that information come from? Because they don't have any feature selectivity themselves, so they must be getting that information somewhere, right?
VPA also has both spatial attention effects and feature attention effects. And they all happen at about the same time. So maybe the VPA is the source, but who knows. So to test that, you have to do a causal experiment.
So what Narcisse Bichot did was he deactivated the VPA and then measured the effects behaviorally, and measured the effects on cells in the frontal eye fields. So here is looking at effects of the muscimol deactivation of the VPA. And these are the behavioral data pre and post injections.
And here we're looking at number of saccades to find the target. And here, we're looking at errors. Let's just look at errors for a minute.
You can see these are errors in the contralateral field, which are much larger than when the animal's looking for it in the other hemifield. And it extends to the midline as well. And then it also takes the animal more saccades to find the target if it's in the field contralateral to the deactivation.
So that's good. So behaviorally, the VPA seems to play some role. But now the really key thing was deactivating the VPA and recording in the frontal eye fields. So this is the frontal eye fields, same cells before and after the deactivation. Here's the effects of feature attention. Here's the effects of spatial attention.
And after you deactivate the VPA, the effects of spatial attention stay, but the effects of feature attention go away. And so this, to us, would be the evidence that the VPA is supplying that information to the frontal eye fields about what it is you're looking for, and where it is in the array. And that's the information that the frontal eye field cells use to direct your eyes to that stimulus. Does that make sense?
So this area, which we think could be the equivalent of the IFJ in the monkey-- it's not just that it supplies the information that's important for attending to objects when they're superimposed on each other. We think it's also the information that you use to orient to, to move your eyes to the things in the environment that you're looking for. You've got to know not only that it's there.
You've got to know where it is, right? You've got to know where is that pink shirt? And then your eyes would normally go, and you would then fixate the pink shirt.
And so we think that the interaction between these two areas-- that should be VPA there, by the way-- would be really interesting to study, to try to understand this interaction between spatial and feature attention. And it also occurred to me-- I didn't put in the right collaborators. This was assembled at the last minute.
But the MEG experiment was done by Daniel Baldauf. The VPA-FEF experiment was done by Narcisse Bichot. I left off the initial experiment on IT cortex with decoding, which was done with Ethan Meyers and Tom Poggio and Ying Zhang.
And I left off the V4 FEF experiments that were done with Georgia Gregoriou and Huihui Zhou and Steve Gotts. So those are the people that actually did all that work. And I hope this has caused you to become interested in attention.
Any other questions? Yeah.
AUDIENCE: If you would [INAUDIBLE] frontal eye fields, would you get similar behavioral effects in a visual search?
ROBERT DESIMONE: If you deactivate the frontal eye fields?
AUDIENCE: Instead of VPA [INAUDIBLE] behaviorally [INAUDIBLE]?
ROBERT DESIMONE: Right, so it's actually a little puzzling about the deactivation of the frontal eye fields. There is a muscimol experiment from Tirin Moore that showed that there were behavioral effects just at the spot in the visual field where you did the FEF deactivation, but it didn't have any effects back in V4, which is puzzling. We've done, in an experiment that's just about to come out, a lesion that included the frontal eye fields. And there, it did affect the cells back in V4. So that's going to need to be sorted out.
We're doing an experiment now in collaboration with Ed Boyden using optogenetics to try to deactivate the frontal eye fields at very precise periods of time during a memory-guided saccade task. And that work, which is being done by Leah Acker, a joint student with Ed and myself, is showing that the frontal eye field plays an important role at every time period of that task-- from the initial encoding of where the stimulus is, through the memory delay, to the final targeting of that stimulus by the saccade. So I think there is a parallel behavioral function of the FEF, but we just need to sort out the details. Somebody else had their hand up. Yeah.
AUDIENCE: I just wanted to make sure I understood the big picture. Is what's possibly going on is the IFJ is sending signals to the FFA to hold it at the depolarized state, that's almost like near [INAUDIBLE], I guess, so that these neurons can easily synchronize?
ROBERT DESIMONE: Yeah, so this is all, at this point, a combination of sort of theoretical speculation and hand waving. We haven't done all the causal experiments, right? But our interpretation of the results is that it's not just that the IFJ cells are maintaining a chronic level of activity and just keeping, say, the FFA depolarized during the face task, but that the activity actually has some oscillatory component to it.
And their oscillations are synchronized so that when the IFJ does send information back, it's sort of hitting those FFA cells at the time when they're most receptive to getting an input. So basically, the cells in both areas are more depolarized during this trial. So they're more likely to respond to a stimulus that you're being cued to attend to.
But the speculation is, the cells are making use of this oscillatory synchrony to facilitate just what you're saying-- to facilitate that response. Because in the end, what you want is that response to the sensory stimulus. You want response to faces to be really high when you're doing the face task, and vice versa for the place task. Yeah.
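That phase-dependent facilitation can be illustrated with a toy calculation. This is a sketch of the general idea only, with arbitrary numbers: the receiver's excitability oscillates, and the same input volley produces a much larger response when it lands at the depolarized peak than at the trough.

```python
import numpy as np

# Toy illustration of oscillatory gating: an input is more effective when it
# arrives at the excitable (depolarized) phase of the receiver's oscillation.
# The gain function and numbers are invented for illustration.
def response(input_strength, arrival_phase, gain_depth=0.8):
    # Excitability swings between (1 - gain_depth) and (1 + gain_depth);
    # phase 0 is the peak of depolarization.
    excitability = 1.0 + gain_depth * np.cos(arrival_phase)
    return input_strength * excitability

aligned = response(10.0, arrival_phase=0.0)      # arrives at the depolarized peak
misaligned = response(10.0, arrival_phase=np.pi) # arrives at the trough

print(f"aligned: {aligned:.1f}, misaligned: {misaligned:.1f}")
```

If the sender's output and the receiver's excitable phase stay synchronized, every volley arrives like the "aligned" case, which is one way to think about why coherence between areas could boost the attended stimulus.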
AUDIENCE: Are the VPA cells connected to [INAUDIBLE]?
ROBERT DESIMONE: Yeah, I showed that. Those are intermediate.
ROBERT DESIMONE: Yeah, that's right. In fact, I didn't show this, or I didn't point it out. But let's say a VPA cell responds best to a shoe, so the activity will be high-- it depends on where the receptive field is. But if it falls within the receptive field, it might be high when you first present it as a cue.
And then it might be maintained during the delay. But actually, those cells tend to have a higher firing rate throughout the whole trial that the animal's searching for their preferred feature. And so when their preferred feature does occur, they're already primed to give a larger response to it.
AUDIENCE: And then these cells, they have smaller receptive fields than some of the [INAUDIBLE] field [INAUDIBLE].
ROBERT DESIMONE: Yeah, we don't know for sure, but yeah, you can imagine that if you are attending--
AUDIENCE: [INAUDIBLE]. Throughout the visual field, right?
ROBERT DESIMONE: So we didn't-- in the visual search test, you do have it throughout the visual field. There's a lot of experiments, behavioral experiments, in which people have shown that even if you are attending to a feature at a particular location, it kind of spreads through the visual field. Why that would be with VPA cells that have receptive fields, it's a little hard to know. But yeah, it could be that you just activate the whole VPA in that case. I don't know.
The other thing I didn't talk about, but I might have, is that if you go just in front of what we're calling VPA, and even into classic prefrontal cortex, the cells do things you don't understand. They just do stuff. They respond in one task and not another task, or they respond in one part of the trial and not another-- they do the kinds of things that people describe for PFC.
And if you deactivate them, then you're really impaired. And so even though they don't make any sense, you need them for some reason. So there's obviously a lot of interesting things going on in that cortex that we have to figure out. Yeah.
AUDIENCE: Is there a proposal for what's causing the phase locking? Is it [? Albinar ?] or sort of a higher-level area?
ROBERT DESIMONE: I mean, the obvious thing-- and some people have modeled this-- is that if you have two groups of cells that are anatomically connected and one starts going into an oscillation, you will get phase coupling just because of the anatomical connection. I didn't bring my-- I could've brought my hardware demonstration. I really should have done that.
So I saw this demonstration at a talk. I was at a meeting in Germany. And it was so good, I had to physically reproduce it.
And what this person showed in the video was they had a set of metronomes. And they put them on a board sitting on two soda cans. If they're not sitting on the board on the soda cans, and you have a whole bunch of metronomes going at the same frequency but at random phase, of course, nothing changes.
But once you put them on this board, so that if one oscillates a bit it tends to influence the other ones, after a short period of time what starts off as random phases actually all goes into phase synchrony. They all end up at exactly the same phase with each other. And so the basic synchrony just comes out through the connectivity. You don't need anything else.
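The metronome demonstration is essentially a coupled-oscillator model, and a minimal simulation reproduces the effect. This is a generic Kuramoto-style sketch, not a model of any specific circuit: identical oscillators start at random phases, each is weakly pulled toward the mean phase of the group (the shared board), and the population coherence climbs toward 1.

```python
import numpy as np

# Kuramoto-style sketch of the metronome demonstration: identical oscillators
# at random initial phases, weakly coupled through a shared mean field.
rng = np.random.default_rng(1)
n, coupling, dt, steps = 10, 1.0, 0.01, 5000
omega = 2 * np.pi * 2.0                 # all metronomes tick at the same 2 Hz
phases = rng.uniform(0, 2 * np.pi, n)   # random starting phases

def coherence(phases):
    # 0 = uniformly random phases, 1 = perfect phase synchrony
    return abs(np.exp(1j * phases).mean())

r_start = coherence(phases)
for _ in range(steps):
    # each oscillator is nudged toward the mean phase of the group ("the board")
    mean_phase = np.angle(np.exp(1j * phases).mean())
    phases += dt * (omega + coupling * np.sin(mean_phase - phases))

r_end = coherence(phases)
print(f"coherence before: {r_start:.2f}, after: {r_end:.2f}")
```

With no coupling term the phases stay as scattered as they started; with even weak coupling they lock, which is the point of the demonstration: phase synchrony can emerge from the connectivity alone.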