Something Else About Working Memory
Date Posted:
May 12, 2021
Date Recorded:
May 11, 2021
Speaker(s):
Earl K. Miller
All Captioned Videos Brains, Minds and Machines Seminar Series
Description:
Prof. Earl K. Miller, Picower Institute for Learning and Memory, BCS Dept., MIT
Abstract: Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (greater than 35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
TOMASO POGGIO: I'm Tomaso Poggio. I am still the director of CBMM, and this is the last regular-season weekly seminar of CBMM for this spring semester. But we'll have irregular seminars throughout the summer, and the next one, probably on June 8, will be very interesting-- an advance in the mathematics of deep learning. But today, as I said, is the last regular seminar. And I'm very, very happy to have Earl Miller speaking here, because he will speak about high cognitive functions.
And this is really what is becoming more and more the focus of research for CBMM. So this is highly relevant for the present and the future of CBMM. And it's really about the neural basis of high-level intelligence. It's not human, still the monkey.
EARL MILLER: Closer.
TOMASO POGGIO: But Earl will speak about it. And there will be a question also about, what about humans? But let me introduce him very briefly, although he does not need much of an introduction for most of you. He's the Picower professor of neuroscience at MIT. He got his PhD at Princeton University, I think with Charlie Gross, correct?
Yeah.
Of course, that work covered face cells and was not believed originally back then. And his lab has really been a leader in trying to understand and study the neural basis of high-level cognitive functions-- so frontal lobes, mainly-- and recording from many different areas, technical feats that have advanced the field in various ways. Earl is incredibly creative and imaginative. I am glad to have been a collaborator in some of his past works. Let me stop here just to welcome Earl. And I give the screen and the cursor to him. Earl?
EARL MILLER: Thank you very much, Tommy. Thank you for that nice introduction and also for the invitation to speak. So let's dive right in.
As Tommy mentioned, we study the neural dynamics of high-level cognitive functions. And we do so using multiple electrode recordings, anywhere from dozens to hundreds of electrodes, in monkeys performing cognitively demanding tasks. We also use computational analysis and modeling of these data.
And what I'll be talking about today, in this talk about high-level cognitive functions, are two different types of measures-- spiking activity, which you can think of as the voices of individual neurons near the recording electrode; and the local field potentials, the average activity, the roar of the crowd, across millions of neurons near the recording electrode.
So today's topic is working memory. What is working memory? It's our mental sketchpad. It is the purposeful holding and manipulation of information in mind. When we hold things in mind, we think about things, we imagine things, we're using working memory.
We're probably all familiar with working memory. But importantly, and why I highlighted purposeful manipulation, is that working memory is not just short-term memory. That's important to keep in mind, because a lot of studies of working memory focus on the maintenance functions alone.
But the maintenance is one of the least interesting things about working memory. The most interesting thing is that working memory is under volitional control. And this is why working memory is fundamental to cognition, because there's a volitional control.
We can choose what to think about, how to think about it, when to act. Working memory, in short, is goal-directed. It's a primary mechanism that elevates us from creatures that can just react to the environment, to creatures that can act on the environment. We can have this goal-directed, purposeful thought and action.
Now, this top-down control, this volitional control, comes from our brains' ability to learn top-down information. That is the rules of the game, how the world works, the goal-relevant structure of our world. And this is thought to be a major function of the prefrontal cortex, the brain's executive, which is where many of the studies I'll show today focus.
Now, what I'm going to do is, I'm going to talk about working memory later in the talk. But first, I'm going to give you a quick, lightning-fast, whirlwind background tour of the studies that led to our current thinking about working memory. Starting about 20 or so years ago, we began to test this hypothesis, that a major function of the prefrontal cortex is to absorb the rules of the game, the goal-relevant structure of the world. And one of our key early studies was a study we did in collaboration with Tommy's lab.
What we did is we trained monkeys to categorize a set of computer-generated images as cats and dogs. So there were morphs made of blends of three prototype cats, three prototype dogs. These are the blends between them. We can generate hundreds of these stimuli.
And we drew a category line halfway between the cats and dogs in this category space, in the shape space. Turns out you can draw the line anywhere. It doesn't make a difference. But we taught the monkeys through training that anything on this side of the line was a cat, and anything on this side of the line was a dog.
So just to quickly run through the logic here, what this test is, they have a bunch of different-looking cats, a bunch of different-looking dogs. We have many cats that look like dogs, across the boundary here. These cats look like dogs, and these dogs look like cats.
So what we want to know, does prefrontal cortex activity reflect the physical appearance of the images? Bottom-up, that's the sensory information. Sensory information entering your sense organs, that's called bottom-up information, whereas this knowledge about how the world works is called top-down information. So we wanted to know whether the PFC activity reflected the actual physical appearance of the images or their category membership. And what we found is that PFC neurons reflected top-down category information, not bottom-up physical appearance.
And this is just one single neuron here. These different tiles are all the different cat stimuli we were using that day. Here are all the different dog stimuli. This is the firing rate.
White and yellow is a lot of spiking activity from the neuron. Dark and red is very little spiking activity. This neuron, as you can clearly see, lit up, activated for all the different-looking cat stimuli, did not activate for the dog stimuli, and it responded pretty much the same for all the cats and pretty much the same for all the dogs. It just differentiated cats versus dogs.
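To make this kind of claim quantitative, analyses like this typically compare between-category and within-category differences in firing rate. Here is a minimal sketch of such a category-selectivity index in Python-- an illustration, not the exact analysis from the study; the index definition and the firing rates are assumptions for demonstration:

    import numpy as np

    def category_index(rates_cats, rates_dogs):
        # Between-category firing-rate difference minus within-category difference,
        # normalized by their sum. Values near 1 = strong category coding,
        # values near 0 = the neuron cares about appearance, not category.
        rates_cats = np.asarray(rates_cats, dtype=float)
        rates_dogs = np.asarray(rates_dogs, dtype=float)
        bcd = np.abs(rates_cats[:, None] - rates_dogs[None, :]).mean()
        def within(r):
            d = np.abs(r[:, None] - r[None, :])
            return d[np.triu_indices(len(r), k=1)].mean()
        wcd = 0.5 * (within(rates_cats) + within(rates_dogs))
        return (bcd - wcd) / (bcd + wcd)

    # Hypothetical delay-period firing rates (spikes/s) for one neuron
    print(category_index([22, 24, 23, 21], [5, 6, 7, 5]))      # category-selective, close to 1
    print(category_index([10, 30, 12, 28], [11, 29, 13, 27]))  # appearance-driven, near 0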
So having seen this, we then began to think more about this ability of the prefrontal cortex to acquire and signal top-down information. And again, the first study was our shape category study, where we found that neurons reflected category membership, not the actual appearance of the stimuli. The actual physical stimuli were not relevant to the task. So they were disregarded by the prefrontal cortex. The prefrontal cortex just pulled out the task-relevant information.
When Andreas Nieder was in the laboratory, he did similar studies, but this time looking at the monkeys' judgment of small numbers, numerosity. And you could signal these numbers using a wide variety of different-looking stimuli. And we get PFC neurons, prefrontal cortex neurons, that signal the essence of what was demanded in the task. That is the number the monkey was judging, and not the stimulus the monkey was looking at.
And then Joni Wallis, Kathy Anderson, and Rahmat Muhammad, who were in the lab, took this up to an even higher level, where we taught the monkeys not to categorize images, but to apply different rules. The monkeys switched back and forth between what we call match and non-match rules.
Matches, you choose the stimulus that looks like the one you just saw. Non-matches, you choose a stimulus that's different from the one you just saw. So the monkey is going back and forth between these two different rules, choose the same stimulus, choose the different stimulus. You cue the animals on one rule on one trial, and another rule on another trial. And what they found was that neurons in the prefrontal cortex, a lot of them, kept track of what rule the monkey was currently following.
So in essence, what we have is-- prefrontal cortex is part of, or maybe the apex of, a process where there's this boiling away of irrelevant details that leaves the essence of top-down task demands. But this led to a curious finding about 20 years ago. With these recordings, we were recording from electrode arrays, many electrodes simultaneously, so we're not preselecting neurons for any sort of property. We're just randomly selecting whatever neurons we encounter. And we found these effects in like 30% or 40% of randomly-selected neurons in the prefrontal cortex.
Now, this is a weird result if you think that individual neurons all have one particular function. And maybe you younger people in the audience will find this strange, but back then, 20 years ago, that was the dominant way of thinking: every neuron has a function, and you figure out the brain by figuring out what all the neurons do. But here we were, finding all these neurons that show these top-down properties after the animal has gone through this training.
What does this mean? Well, one possibility is the task is overrepresented as a result of training. Our monkeys spend weeks or months learning these tasks. Maybe all this daily concentration on the task means that the representation of the task is greatly, abnormally expanded in the prefrontal cortex.
This seems unlikely. Our monkeys have weeks or months of training. We humans have years and decades of experience. So this amount of training they got pales in comparison to the kind of knowledge that you and I drag around with us.
Another possibility is that monkeys can only learn two or three things, and their brains fill up. This was suggested to me sarcastically when I first reported this result-- someone saying this is impossible. Another possibility that was actually suggested is that we just weren't clever enough to figure out what these neurons are actually doing.
Now, that may be, but it turns out none of this is the explanation. I'm happy to say that we've done follow-up studies. This result has been replicated over and over again. And other laboratories doing similar studies have found the same effect. You get this top-down knowledge trained into the prefrontal cortex in huge numbers of neurons.
So the explanation seems to be that many higher cortical neurons-- especially, but not only, but especially in places like the prefrontal cortex-- are multifunctional. They don't just do one thing. They show adaptive coding. They can change what they do, change their function, depending on task demands, on cognitive demands.
This is a property that Stefano Fusi termed mixed selectivity. And this is computational work from Stefano Fusi and Mattia Rigotti, using data collected by Melissa Warden when she was in my lab. And what they showed, using our real data in their computational models, is that this property of multifunctionality, mixed selectivity, is needed to have a brain that can do complex, higher-order cognition.
It creates a high-dimensional representational space, which allows solving more complex problems. Problems can be parsed in more different ways if you have this high-dimensional representational space. It also means the networks have greater storage capacity, because the information isn't just encoded in a small number of fixed neurons. It can be encoded across many, many neurons.
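As a rough illustration of why nonlinear mixing of task variables buys you this computational power, here is a toy sketch-- not the Rigotti-Fusi model itself; the simulated population, weights, and noise level are illustrative assumptions. A linear readout can decode a nonlinear combination of two task variables (XOR) only when the neurons mix them nonlinearly:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_trials = 400
    a = rng.integers(0, 2, n_trials)          # task variable 1 (e.g., stimulus)
    b = rng.integers(0, 2, n_trials)          # task variable 2 (e.g., rule)
    target = a ^ b                            # nonlinear combination (XOR)

    def population(mixed, n_neurons=50):
        # Pure selectivity = linear sums of a and b;
        # mixed selectivity adds a multiplicative (nonlinear) term.
        w1, w2, w3 = rng.normal(size=(3, n_neurons))
        rates = np.outer(a, w1) + np.outer(b, w2)
        if mixed:
            rates += np.outer(a * b, w3)
        return rates + 0.1 * rng.normal(size=rates.shape)

    for mixed in (False, True):
        X = population(mixed)
        clf = LogisticRegression(max_iter=1000).fit(X[:300], target[:300])
        print("mixed" if mixed else "pure", "selectivity, XOR decode accuracy:",
              round(clf.score(X[300:], target[300:]), 2))

With pure selectivity the decoder stays near chance; with mixed selectivity the readout succeeds, which is the sense in which the representation becomes high-dimensional.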
And it gives you greater flexibility to bring information together, and faster learning-- all the kinds of things you need for a brain that can do high-level cognitive operations, that can have intelligent thought. If you want to know how the brain creates nice, stable representations in this high-dimensional space with all that activity impinging on the neurons, recent work from Leo Kozachkov in my lab has been addressing this directly, showing how the prefrontal cortex and other brain areas can lock into the stability they need in order to have a functioning brain.
But this mixed selectivity, this multifunctionality, raises a question for the old way of thinking about the brain: it must mean that neurons participate in multiple ensembles. Now, we probably all know what a neural ensemble is. Just in case, a neural ensemble is thought to be a collection of neurons, a network of neurons, that represents a percept, a thought, an action, a memory. And the old way of thinking about things is that there's a unique ensemble that represents every percept, thought, action, and memory.
But mixed selectivity, multifunctionality, implies that neurons don't just participate in one ensemble. You have ensembles lying on top of ensembles lying on top of ensembles, like I've pictured here, where we have these two different ensembles for these two different thoughts, if you will. And the two ensembles share many overlapping members that participate in both ensembles.
Now, the other old way of thinking was that anatomy in the brain is destiny. If two neurons have a connection, and the synaptic weight is strong, one neuron will fire, then the other neuron will fire. Now, it's not quite that simple, but that was the whole idea: anatomy and connectivity strength determine which neurons fire.
So if this is the case, if these two ensembles share common elements, common units, common neurons, how do you select a single ensemble from the overlapping anatomy? If you tried to activate the pink ensemble here, the activity would run over to the purple ensemble. Then we'd have two ensembles being activated, and we'd have a jumbled mess of thought.
So how does this work? Well, one suggestion is that neural ensembles are formed by synchronizing brain rhythms. Your brain is highly rhythmic. It oscillates anywhere from 1 hertz to 100 hertz or more. That's what brain waves are. And one suggestion is that synchronizing the ongoing rhythms of these neurons across these frequencies can help unambiguous neural ensembles form.
The idea here is a take on the old adage, neurons that fire together wire together. It's neurons that hum together that temporarily wire together. What the oscillations are is neurons becoming activated and going quiet together. If two networks of neurons are humming together, synchronized, in their highly-active states at the same time, they can communicate with one another. And if they're out of phase, they can't, because they're just not on the same page at the same time.
So this allows one ensemble to be selected from overlapping anatomy. We know which ensemble is the purple ensemble versus the pink ensemble, because the purple ensemble's elements have rhythms that are synchronized within that ensemble. And the pink ensemble has its own rhythms, its own phase relationships, its own set of synchronized patterns.
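A toy sketch of this "hum together" idea is below-- purely illustrative, with an assumed 20 Hz rhythm and a simple probabilistic gate standing in for synaptic communication. Spikes from a sending ensemble get through to a receiver far more often when the two rhythms are in phase than when they are anti-phase:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, T, f = 0.001, 2.0, 20.0                 # 1 ms steps, 2 s, a 20 Hz rhythm
    t = np.arange(0, T, dt)

    def excitability(phase_offset):
        # Rhythmic excitability of an ensemble, scaled 0..1
        return 0.5 * (1 + np.sin(2 * np.pi * f * t + phase_offset))

    # Sender spikes ride on the sender's own rhythm (phase 0)
    sender_rate = 40 * excitability(0.0)        # spikes/s, rhythmically modulated
    spikes = rng.random(t.size) < sender_rate * dt

    for label, offset in [("in phase", 0.0), ("anti-phase", np.pi)]:
        # A spike is "transmitted" with probability equal to the receiver's excitability
        transmitted = spikes & (rng.random(t.size) < excitability(offset))
        print(label, "- fraction of spikes transmitted:",
              round(transmitted.sum() / spikes.sum(), 2))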
Now, what this may allow is cognitive flexibility. What high-level cognition is all about is flexibility, changing your behavior in a complex world in different contexts. As your goals change, the situation changes, you change what you think about. And it can't be done just by constantly rewiring the brain. That takes too long. That's how our long-term information gets encoded in the brain.
So we think these shifting patterns of oscillatory resonance forming ensembles can underlie this flexibility to form and change thoughts in the moment, on the fly. So the way to think about this is, anatomy is not destiny in the brain, as we previously thought. Anatomy is like the road and highway system.
This just says anatomy is possibility. It says where traffic could go. Spikes are the traffic, and these patterns of oscillatory resonance essentially are directing the traffic. Now to test this directly, when Tim Buschman was in the lab, he examined data from a study in which we taught monkeys to switch back and forth between two different rules.
Remember, we talked earlier about how neurons in the prefrontal cortex absorb this kind of top-down information, like rule information, the knowledge needed to solve a goal-directed task. So in this experiment, the monkeys were looking at either a vertical bar or a horizontal bar. And on different trials we cued them: tell us what the color of the bar is, ignore the orientation-- that's the pay-attention-to-color rule-- or pay attention to the orientation, make the judgment on that, and ignore the color. That's the pay-attention-to-orientation rule.
And what Tim found is summarized in this slide. This is a view of the lateral prefrontal cortex. I assume you guys have been seeing my cursor. If not, somebody let me know. But this is the lateral prefrontal cortex here. It would be about right here on the side of my head.
This is posterior, that's anterior. And here's a bunch of circles corresponding to different recording sites where we had an electrode array in the prefrontal cortex. And what the lines show are which sets of recording sites showed an increase in rhythmic synchrony in their LFPs, in their local field potentials, when the monkey was following one rule-- pay attention to color-- versus the other rule, pay attention to orientation.
We look for this rhythmic synchrony across all frequencies from 1 hertz to 100 hertz. And we found these effects were limited to the beta band, 12 to 30 hertz. And what we see here is what we'd expect from mixed selectivity kind of writ large, and that is these two different rules form different patterns of beta synchrony between different recording sites in the prefrontal cortex.
They're overlapping, so many of the recording sites participate in both rules. But there are unique patterns from each rule. So it seems that these beta rhythms are actually really helping form these neural ensembles that signify which rule the monkey is following on a given trial.
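For a sense of what "an increase in rhythmic synchrony in their LFPs" means in practice, here is a minimal sketch of a pairwise beta-band coherence measurement-- an illustration only, using synthetic signals in place of real LFPs; the sampling rate and the shared 20 Hz component are assumptions:

    import numpy as np
    from scipy.signal import coherence

    fs = 1000                                   # sampling rate in Hz (assumed)
    rng = np.random.default_rng(2)
    t = np.arange(0, 5, 1 / fs)

    # Synthetic stand-ins for two LFP channels: a shared 20 Hz component
    # makes them coherent in the beta band.
    shared = np.sin(2 * np.pi * 20 * t)
    lfp_a = shared + rng.normal(size=t.size)
    lfp_b = shared + rng.normal(size=t.size)

    f, coh = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
    beta = (f >= 12) & (f <= 30)
    print("mean beta-band (12-30 Hz) coherence:", round(coh[beta].mean(), 2))

In an analysis like the one described above, a value like this would be computed per site pair and per rule condition, and the rule-dependent differences are what form the two ensemble patterns.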
If you want to know how this works at a higher level-- in this initial study we just focused on these pairwise analyses, and inferred network properties from them-- Dimitris Pinotsis, my collaborator and colleague from University College London, has been taking this work to a further level, studying these effects at the whole-network level. So I encourage you to take a look at his work.
Once we had that study in place, we decided to see whether this is really a principle of how the brain encodes top-down information-- these patterns of synchronized ensembles-- and how often we see it in other circumstances. So Evan Antzoulatos, when he was in the laboratory, trained monkeys on, not dog and cat categories, but spatial categories of up versus down.
And he found the same thing, these unique patterns of beta synchrony in the prefrontal cortex and in the parietal cortex, a high-level cortical area in the parietal lobe, for these spatial categories. Not only within each area do we see these unique patterns of beta ensembles, but also in the patterns between the two areas.
We looked back over our old cat and dog data, our shape category data, and found the same thing. In addition to neurons responding differently to cats and dogs, we saw different patterns of these beta ensembles for the two different categories. When the monkey was making a judgment about a cat, one pattern; when it was making a judgment about a dog, another pattern.
And also Evan and Andreas Wutz, they actually studied how these beta ensembles form as the animal's learning new categories. So they train monkeys to learn new categories in the course of about one or two hours while we eavesdrop on their cortex, and in that study, also the striatum, as the animals were learning these categories. And we could actually see these beta ensembles gradually form in parallel with the animal's learning.
OK, so you get the idea. So now that you have that background information-- the idea that the prefrontal cortex and beta rhythms play a role in communicating top-down information-- let's get to the subject at hand, and that is working memory. Working memory is the holding and manipulation of things in mind. How does it work?
Well, the classic model for the past 50 years or so has been persistent spiking of neurons. And here's a classic figure from Patricia Goldman-Rakic's lab. This is one neuron averaged across multiple trials. This is the spike rate on the y-axis, time on the x-axis.
Here, you show the monkey a cue, some stimulus the monkey is supposed to hold in memory. It goes away over this memory delay-- nothing on the computer screen, nothing from the monkey. Then here, the monkey makes its response.
And what you see here is that the neuron shows elevated activity when the cue is presented. And then the activity drops down a bit, but doesn't drop back down to baseline. It stays elevated above baseline until the animal makes its response. So the idea here is that working memory works like a latch signal. Some cue, some important stimulus comes into the animal's brain, and the neurons in the prefrontal cortex latch onto that stimulus, latch onto that ensemble, and simply maintain its activity over this memory delay until the monkey finishes the trial and no longer needs the working memory.
But over the past 20 years or so, we've made lots of advances in multiple electrode technology. It allows us to look at larger and larger populations of neurons and also look at neural activity in real time, not averaged across multiple trials. I'll show you an example of this real-time activity in a moment.
But when you look at the new data coming from this new technology, we find that most neurons don't really show persistent activity. A few neurons show something that looks like persistent activity, but the bulk of the neurons show sparse and bursty activity. They only fire once in a while. They fire above baseline rate during the memory delay, but in a very sparse and bursty way.
So this led to some updates to the classic model. If it's a 50-year-old model, of course there are going to be updates to it. And one update came from Patricia Goldman-Rakic. In one of her last studies, she showed that the prefrontal cortex has a neural mechanism to maintain information between bouts of spiking: spiking induces temporary increases in synaptic weight. So there's a bout of spiking, and the synaptic weights between the neurons that were spiking increase their efficacy for about 750 milliseconds to a second. So the synaptic weight changes help carry the memories between the episodes of spiking.
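A minimal sketch of that idea-- a synaptic efficacy that jumps with each presynaptic spike and then decays over roughly a second-- is below. The time constant, jump size, and spike times are illustrative assumptions, not parameters from the study:

    import numpy as np

    dt = 0.001                                    # 1 ms time step
    tau = 0.75                                    # decay constant (~750 ms, assumed)
    t = np.arange(0, 3, dt)

    spike_times = [0.20, 0.21, 0.22, 1.40, 1.41]  # two sparse bursts (seconds)
    spikes = np.zeros(t.size)
    spikes[(np.array(spike_times) / dt).astype(int)] = 1

    w = np.zeros(t.size)                          # transient synaptic efficacy above baseline
    for i in range(1, t.size):
        # Jump a little on each spike, otherwise decay exponentially
        w[i] = w[i - 1] * np.exp(-dt / tau) + 0.3 * spikes[i]

    # The efficacy is still elevated during the silent gap between bursts,
    # so the "memory" can bridge the gap without continuous spiking.
    print("efficacy just before the 2nd burst (t = 1.39 s):", round(w[int(1.39 / dt)], 2))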
And other models-- like the activity-silent model of Mark Stokes, which talks about this in a similar way and also about how the working memory activity evolves over time (it's not a simple maintenance of some recent sensory input; it's actually more dynamic and complex), and the dynamic attractor model of Michael Lundqvist-- say the same thing. Spiking is not doing all the work. It's helped along by short-term plasticity.
This makes sense, because spiking costs energy. It takes a lot of energy to make a spike. So the short-term plasticity-- these impressions that spiking leaves in the network-- allows the brain to keep ensembles in a more or less active state between these episodes of spiking, thereby saving energy.
But it also showed-- surprise, surprise-- that the neural dynamics underlying working memory are more complex. It's not just a simple latching onto a sensory input to maintain that activity. There's something more complex going on. The activity evolves and changes over time.
And what these dynamics have led to, observations of these dynamics, are some insights into how we gain this all-important volitional control over working memory. And that's what I'm going to tell you about now. So Michael Lundqvist published this dynamic attractor model of working memory when he was a graduate student. It was a nuts-and-bolts model, built from the bottom up. Then he came to the laboratory as a postdoc to test this model against data.
And what we found in the data supported the model. What we found is shown in this slide. There isn't just persistent activity. On a single-trial level, in real time, neurons show this sparse, bursty spiking to help maintain working memories, but they also show these changing oscillatory dynamics.
So we looked for the spiking activity that actually carries the working memories. But first let me orient you to the slide. What we're looking at here is not spiking, but local field potentials.
We're looking at the power in local field potentials, high power in red, low power in blue. And here's time on this axis. This is frequency of the power, whatever frequency band the LFPs are oscillating in. And S1 and S2 is a working memory task, where we instructed the animal to hold two stimuli in working memory, S1 and S2. And here's the working memory delay.
So we found when we looked for the spiking there that's actually carrying these working memories, these recent sensory inputs, stimulus 1, stimulus 2-- S1 and S2-- we found that that spiking was invariably associated with these brief bursts of gamma activity like you see here, here, and here. These are brief, narrow band gamma bursts, and they invariably occur alongside this spiking that's carrying the working memories. So we found that spikes during working memories were associated with the gamma burst, which was a prediction from the Lundqvist model.
But interestingly, we also found that beta and gamma are anti-correlated. You can kind of see that here. So here on the bottom of the slide, outlined in red, are two bursts of LFP power. But they're in the lower frequency range, in the beta range, not in the gamma range.
But notice that these beta bursts tend to occur whenever there is no gamma bursting, so gamma bursting here, here, and here at the stimulus, no beta bursting. The gamma bursting pauses, then the beta bursting occurs. Beta bursting is not here when there's another gamma burst. So they seem to be anti-correlated.
And that can be shown right here. Now we're looking at, not a single trial, like I showed you here on the left, but the average across multiple trials. And it shows the burst rate-- how often do we get these gamma and beta bursts as a function of time during the trial?
So again, here's the burst rate on the y-axis, time on the x-axis, stimulus 1 presented here, stimulus 2 here. Here's the memory delay. You can see that there's an increase in gamma bursting-- and again, the corresponding spiking-- when you present the two stimuli to the animal. Then the gamma bursting drops down and gradually rises over the memory delay. And this gamma burst profile-- you see the exact same thing in the spiking activity that's carrying the working memories.
But look what's going on here within the beta range. The beta range is like a mirror opposite of gamma. Whenever gamma goes up, beta goes down. Whenever beta goes up, gamma goes down. So they seem to have this push-pull relationship of anti-correlation.
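Here is a minimal sketch of how such an anti-correlation between gamma and beta power could be measured from a single LFP trace-- purely illustrative, with a synthetic signal whose gamma and beta amplitudes alternate standing in for real data, and with assumed band edges of 40-100 Hz and 12-30 Hz:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000
    rng = np.random.default_rng(3)
    t = np.arange(0, 4, 1 / fs)

    # Synthetic LFP whose gamma (60 Hz) and beta (20 Hz) components alternate
    gate = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float)   # slow on/off gate
    lfp = (gate * np.sin(2 * np.pi * 60 * t)
           + (1 - gate) * np.sin(2 * np.pi * 20 * t)
           + 0.3 * rng.normal(size=t.size))

    def band_envelope(x, lo, hi):
        # Band-pass filter, then take the Hilbert amplitude envelope
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, x)))

    gamma_env = band_envelope(lfp, 40, 100)
    beta_env = band_envelope(lfp, 12, 30)
    print("gamma-beta envelope correlation:",
          round(np.corrcoef(gamma_env, beta_env)[0, 1], 2))   # negative = push-pull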
So if gamma's associated with the spiking that helps carry the working memories, what is beta doing? Well, you may remember that I told you a few minutes ago that beta is carrying, not bottom-up sensory information-- the contents of what the monkey's holding in working memory-- but instead top-down information about the rules, the monkey's knowledge about how to solve the task. It's carrying the top-down information the animal needs to solve this goal-directed task.
What this suggested to us is a possible infrastructure for volitional control over working memory. Beta is carrying top-down signals, and they're a control signal. And they have an inhibitory influence on gamma, which is associated with the spiking that's carrying the contents of memory, the bottom-up sensory information that the monkey has to load in the working memory to solve this task. So again, the idea is that top-down information carried by beta rhythms inhibits or gates the gamma that allows the spiking that's carrying the bottom-up information, the actual contents of working memory.
Now, a subsequent study-- which I won't have time to go through, because it's a kind of complicated study, but it was published a couple of years ago-- looked at a complex working memory task to see if this beta and gamma signaling, these beta and gamma bursts, really does act like a control system for working memory. And what we found supported it.
These gamma-beta dynamics gate information in working memory. If you want to load information into working memory, beta drops down, gamma goes up, neurons start spiking, and you encode information into working memory. If you want to clear out the contents of working memory-- the trial's over, the animal's done with the working memory, no longer needs it-- beta goes up, it drives down gamma, neurons stop spiking, and working memory is cleared out.
It also helps switch the contents of working memory. There was a condition where the monkey had to switch the contents of working memory from one stimulus to another, and you see this rapid cycling of beta and gamma that could explain how the animal gets rid of one stimulus and loads a new stimulus into working memory, switching the contents of working memory. In neurophysiological studies, we use predictions of the animal's behavior to see if what we're actually looking at is task-relevant.
For example, if the spiking we're seeing has anything to do with performance of the task, we expect the spiking to change in a way that predicts the errors the animal makes or the decision the animal makes. Well, we looked at that, but we also looked at the gamma-beta dynamics. We found the gamma-beta dynamics were actually a better predictor of the monkey's decisions on the task, and of whether the monkey was going to make an error or not-- spiking can do it, but gamma and beta did it better.
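The logic of that kind of trial-by-trial prediction can be sketched as a simple classifier from delay-period burst rates to behavioral outcome-- a toy illustration, not the published analysis; the burst rates and outcomes here are simulated placeholders:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_trials = 300

    # Hypothetical trial-wise delay-period burst rates (bursts/s)
    gamma_rate = rng.gamma(shape=2.0, scale=1.0, size=n_trials)
    beta_rate = rng.gamma(shape=2.0, scale=1.0, size=n_trials)

    # Placeholder outcomes: errors made more likely when gamma is low and beta is high
    p_error = 1 / (1 + np.exp(2.0 * (gamma_rate - beta_rate)))
    error = rng.random(n_trials) < p_error

    X = np.column_stack([gamma_rate, beta_rate])
    auc = cross_val_score(LogisticRegression(), X, error, cv=5, scoring="roc_auc")
    print("cross-validated AUC for predicting error trials:", round(auc.mean(), 2))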
So next we wanted to look at how this actually works on the microcircuit level in the prefrontal cortex and other brain areas. So Andre Bastos and Roman Loonis, when they were in the lab, decided to look at the microcircuitry of these beta-gamma interactions. And they did so using these so-called laminar probes.
These are electrodes that have recording sites all along the shaft of the electrode. And what that allows you to do is record from all layers of cortex simultaneously. Cortex has multiple layers.
You're interested in these layers because, generally speaking, the superficial layers of cortex are the feedforward layers that carry bottom-up information from the back of the brain-- sensory cortex-- to the front of the brain. Whereas the deep layers of cortex are the feedback layers. They are thought to carry top-down information from the front of the brain to the back of the brain.
So the first thing they looked at is working memory spiking. This is the classic question: where do we see this sustained, elevated spiking that's carrying the working memories? What this slide shows on the y-axis is cortical depth-- that is, each contact along the electrode shaft.
The middle layer of cortex, where information first comes into each section of cortex, is shown by the dotted line. Everything above it is the superficial, feedforward layers; everything below it, the deep, feedback layers. And what's shown here is the spiking activity while the monkey is performing a working memory task.
So here's when the cue is presented. This is during the memory delay. And most of this-- not all, but the bulk of the spiking that's carrying the working memories-- is in the superficial layers of cortex. This makes sense, because we're cuing the animal with sensory cues, this picture. And this sensory input, this picture, is bottom-up information, so it should be in the superficial, feedforward layers. That's how it gets trafficked across cortex.
Next they looked at the relative strength of gamma and beta power. And what they found-- here again is cortical layer along this axis, with the middle layer shown dotted. And this is power in the gamma band, which is about 40 hertz to 100 hertz in this case. And beta, again, is about 12 to 30 hertz.
The blue line shows gamma power as a function of cortical layer-- more power this way, less power that way. And the red line shows beta power as a function of cortical layer.
Now, as you can see, there's more gamma in the superficial layers of cortex. Now again, I said earlier that gamma is associated with the spiking that's carrying this bottom-up information, these sensory cues that the monkey's holding in memory. So that makes sense that gamma should be stronger in superficial layers, because that's what superficial layers do. They traffic bottom-up information.
But beta was stronger in the deeper layers of cortex. And that makes sense, because I concluded earlier that beta is carrying top-down signals. And we should expect more beta, carrying top-down signals, in the deep layers of cortex that carry the feedback, top-down signals from the front of the brain to the back of the brain.
And when we looked across these push-pull, anti-correlated beta and gamma dynamics, we found that push-pull relationship was true, even across the cortical layers. So it's the same thing, just played out across cortical layers. This is the layer providing gamma, deep versus superficial; the layer providing beta, superficial versus deep; positive correlation in red; negative correlation in blue.
The way to read this is whenever beta was high in the deep layers of cortex, gamma was low in superficial layers of cortex. And whenever gamma was high in superficial layers of cortex, beta was low in deep layers of cortex-- so the same push-pull dynamics, but now taking place between deep and superficial layers of cortex. So this supports the hypothesis that beta is carrying top-down information in the deep layers of cortex from the front of the brain to the back of the brain, and it plays this role in regulating the feedforward gamma signals that are carrying bottom-up information in the opposite direction in the superficial layers.
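A minimal sketch of that laminar analysis-- averaging beta power over the deep contacts and gamma power over the superficial contacts, then correlating the two over time-- is below. The depth labels and power time series are simulated placeholders, not data from the laminar probes:

    import numpy as np

    rng = np.random.default_rng(5)
    n_time = 2000
    depths = np.linspace(-1.0, 1.0, 16)   # 16 contacts; negative = superficial, positive = deep (assumed)

    # Placeholder band-limited power (channels x time): a common slow fluctuation
    # drives superficial gamma up while deep beta goes down, and vice versa.
    drive = rng.random(n_time)
    gamma_power = np.outer(np.clip(-depths, 0, None), drive) + 0.1 * rng.random((16, n_time))
    beta_power = np.outer(np.clip(depths, 0, None), 1 - drive) + 0.1 * rng.random((16, n_time))

    sup_gamma = gamma_power[depths < 0].mean(axis=0)     # superficial-layer gamma
    deep_beta = beta_power[depths > 0].mean(axis=0)      # deep-layer beta
    print("deep-beta vs superficial-gamma correlation:",
          round(np.corrcoef(deep_beta, sup_gamma)[0, 1], 2))   # negative = cross-layer push-pull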
So to sum this up so far, what we found is that top-down information is carried by signals in deep layer beta. And we see these patterns of beta that actually carry the top-down information. We don't see much pattern in gamma. In the gamma range, most of this information is sensory information, and it's carried largely by spiking. But these beta signals are pattern signals that carry top-down information, the knowledge the animal is using to solve the task.
This top-down information carried by deep-layer beta regulates superficial-layer beta. So there's beta in both layers, and the deep-layer beta-- which is stronger-- couples to the weaker beta in the superficial layers. And that superficial-layer beta inhibits or gates or controls the gamma and the spiking that's carrying the working memories. So it works almost like an on-off switch. It can turn working memory storage on and off by turning beta down and up.
Now, to carry this further, some of our recent work has begun to look at a kind of mystery of visual cognition. And that is, working memory is split between the right and left visual fields. Now, probably most, if not all of you, have learned that in the back of the brain, the visual cortex splits the world into everything on the right side of vision versus everything on the left side of vision. And there's been a tacit assumption for a very long time that somehow this split between right and left vision gets resolved somewhere, maybe at the front of the brain, in higher-level cognition.
But what we found, in a series of studies starting in 2007, is that even at the level of the prefrontal cortex and working memory, the split remains. Even when you're holding things in mind, your brain splits the world between the right and left sides. Yet visual cognition seems seamless.
When I see things travel across my visual field, they seem like the same thing. I don't notice the split. So how does the brain heal the split between the right and left sides of vision?
To look at that, Jacob Donoghue, Meredith Mahnke, and Scott Brincat, in the lab, trained monkeys on a task that was designed to encourage the monkeys to switch working memories between the right and left hemispheres. So the way it worked is, the monkey fixates a spot of light on one side of the computer screen and has to keep its eye there. We present a stimulus to either the right side of vision or the left side of vision on a given trial. And then on half the trials, during the memory delay when the stimulus is no longer on the screen, the monkey moves its eye to the other side of the computer screen.
Now, if that banana was still visible-- if the monkey was still looking at it-- it would move from one side of the brain to the other as the animal changes its perspective. But in this case, the stimulus wasn't there. The monkey's just thinking about it, holding its image in working memory. That happens on half the trials. On the other half of the trials, the monkey's eye just stays where it was, so we have something to compare things to.
So what we wanted to know-- and this study is a bit of a gamble. We want to know whether when the monkey moves its eye on these memory delays, does it actually cause the image, the mental image the monkey's holding in working memory, to switch from one side of the brain to the other? And the answer-- well, here's how this might look.
Remember the brain is lateralized, so your right side of vision is processed by your left side of the brain, left side of vision by the right side of your brain. So if we present a stimulus on the left side, you get more activity in the contralateral or right hemisphere. So this illustrates what we might see.
The dark lines are what happens when a stimulus is presented to the contralateral hemisphere, and the monkey does not move its eye. It just stays right there. Those are the dark lines. Or it's presented in the ipsilateral hemisphere, and the monkey doesn't move its eyes. The lighter lines show what may happen when the monkey makes a saccade, and the image, the mental image, may hopefully switch from one side of the brain to the other.
Now one possibility, which is why the study is a bit of a gamble, is there would be no effect. The memory just stays where it was. In that case, the saccade would have no effect at all on the trial. Stuff on the contralateral side would stay on the contralateral side, and the ipsilateral side would stay ipsilateral.
The other possibility is that there's a shift. So again, the dark lines are what happens when the stimulus arrives at the contralateral hemisphere. Dark green stays there.
The dark brown line is what arrives in the ipsilateral hemisphere and stays there, because the monkey did not move its eye. And now this light green line shows what happens when the monkey moves its eye. And if the memory moves from one hemisphere to the other, that should cause the contralateral stimuli to switch to the ipsilateral representation, and there'd be a corresponding decrease in neural activity and information.
And the same thing for ipsilateral: the saccade causes the memory to switch from the ipsilateral side to the contralateral side, so there should be an increase in activity after the saccade in the contralateral hemisphere, because the memory has shifted over there. And what we found is that, indeed, memories shifted hemispheres.
So here's the cartoon illustration of what we expect to see. This is the actual data. It looks amazingly like the cartoon. Here's the first part of the memory delay. Here's where the monkey made its saccade, the one we hoped would shift the stimuli to the other hemisphere.
And again, these dark lines here are what happens when there was no saccade, and they stayed there. And then the light lines are what happens when they shifted. And as you can see, the representations do shift. Here comes the saccade. There's a burst of activity in the prefrontal cortex, because the monkey's moving its eye.
But then when the animal re-achieves fixation, the contralateral stimuli switched to an ipsilateral-like representation. And the ipsilateral stimuli switched to a contralateral-like representation. So as a result of this saccade, memory traces disappear from the original sender hemisphere and reappear in the receiver hemisphere.
Now, how this links back to what I was just telling you about gamma and beta is that we see the same gamma and beta dynamics happening during this cross-hemisphere transfer of working memories. I haven't mentioned theta yet. Theta and gamma co-track in cortex. Gamma tends to be stronger in cortex, and theta is a weaker signal.
Well, when you do see theta, it tracks along with gamma. And what is shown here is, here's the memory delay, here's a time of the saccade, and this is the change in synchrony between the two hemispheres. We're looking at synchrony between the LFPs of the two hemispheres and the change in synchrony as a result of the saccade, which is transferring the memory from one hemisphere to the other. And what you see is there's a decrease in beta, followed by an increase in gamma and theta.
So again, the same permissive role here-- the memory is being transferred from one side of the brain to the other, so the two sides synchronize. They talk in this gamma band, which is carrying the contents. And this decrease in beta is presumably permitting this synchrony in this transfer between hemispheres.
And when we used Granger causality analysis on these LFP signals to look at which direction the signals are going, we found that the directionality of the synchrony, the influence, was in the direction you'd expect from memory transfer-- from the sender hemisphere to the receiver hemisphere, and not the other way around.
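For readers unfamiliar with Granger causality, here is a minimal sketch of the logic-- does one signal's past help predict the other's future? The two series here are synthetic stand-ins for sender- and receiver-hemisphere LFPs, and the lag structure and coefficients are assumptions:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(6)
    n = 2000
    sender = rng.normal(size=n)
    receiver = np.zeros(n)
    for i in range(1, n):
        # The receiver reflects the sender's recent past plus its own noise
        receiver[i] = 0.6 * sender[i - 1] + 0.2 * receiver[i - 1] + rng.normal()

    # grangercausalitytests asks whether column 2 helps predict column 1
    fwd = grangercausalitytests(np.column_stack([receiver, sender]), maxlag=2, verbose=False)
    rev = grangercausalitytests(np.column_stack([sender, receiver]), maxlag=2, verbose=False)
    print("p(sender Granger-causes receiver):", fwd[1][0]["ssr_ftest"][1])   # very small
    print("p(receiver Granger-causes sender):", rev[1][0]["ssr_ftest"][1])   # not significant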
Now, one result from the study we found very surprising. And that is, as the working memories are transferred from one hemisphere to another, they actually change-- the ensembles change. So I'll explain this to you, then explain to you what we did.
What we did is we trained a classifier on the electrode arrays in the prefrontal cortex. We trained it on trials in which a stimulus started in the contralateral hemisphere and then just stayed there, because there was no eye movement. Once we trained the classifier, we then applied it to trials in which the memory started in the ipsilateral hemisphere but then was transferred to the contralateral hemisphere.
And if you're just simply copying the memory from one hemisphere to the other, it's the same memory, after all-- same stimulus, same location, same everything. All that's different is where the memory is coming from. It's either coming in a bottom-up way from that side of the brain, or it's being transferred from the other side of the brain. So if the ensemble is just being copied from one hemisphere to the other, if you train them, the classifiers, on these contralateral trials and then apply it to these trials in which a stimulus switches from ipsilateral to contralateral, you should be able to decode the signal.
The other possibility is the alternative hypothesis: you can't do that. You train the classifier on those trials, and when the memory comes over from the other hemisphere, the pattern is completely different. Well, what we found was hypothesis 2. The memories are completely different. So again, we trained on the contralateral stimuli.
Here's the trials in which the memory started on the contralateral hemisphere and stayed in the contralateral hemisphere. That's in the faded green line here. The faded brown line is what happens when the stimulus stays in the ipsilateral hemisphere, starts in the ipsilateral hemisphere and stays there.
And the dark line shows what happens when the memory is transferred from ipsilateral to contralateral. In terms of the level of neural activity, when we see the ipsilateral-to-contralateral transfer, neural activity goes up, and even information about the stimulus in the spiking goes up. But as far as the classifier is concerned, it's a completely different pattern. There's no savings, no transfer of the classifier trained on the contralateral ensembles to, presumably, the same memories, but ones that came from the opposite hemisphere. So the ensembles are completely changing in their structure, in their shape, in their ensembleness, when they transfer between the hemispheres.
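The logic of that cross-condition decoding test can be sketched in a few lines-- a toy illustration, not the lab's decoder; the population patterns are simulated, with the "stay" and "post-transfer" codes deliberately made unrelated to mimic the result described above:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n_neurons, n_trials = 80, 200
    labels = rng.integers(0, 2, n_trials)          # which of two stimuli is held in memory

    # Placeholder population codes: the "stay" ensemble and the post-transfer
    # ensemble use different (here, unrelated) coding directions.
    code_stay = rng.normal(size=(2, n_neurons))
    code_transfer = rng.normal(size=(2, n_neurons))
    X_stay = code_stay[labels] + 0.5 * rng.normal(size=(n_trials, n_neurons))
    X_transfer = code_transfer[labels] + 0.5 * rng.normal(size=(n_trials, n_neurons))

    clf = LogisticRegression(max_iter=1000).fit(X_stay[:150], labels[:150])
    print("within-condition accuracy:", round(clf.score(X_stay[150:], labels[150:]), 2))   # high
    print("cross-condition accuracy:", round(clf.score(X_transfer, labels), 2))            # near chance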
Now, getting back to our beta and gamma dynamics, when we conducted these studies, we wondered, why would these dynamics apply only to working memory in the frontal cortex? If you look across cortex, you see gamma all over cortex-- spiking all over cortex, for that matter. And you see these low-frequency dynamics, beta, in the frontal cortex. When you go back to the visual cortex, the frequencies seem to shift down a bit, to the alpha range. So why would these push-pull dynamics happen in just one part of the cortex?
If we see these oscillations all over cortex, should they play a more general role in cortical function? So first, Michael Lundqvist looked at these dynamics all over the cortex-- frontal, parietal, visual cortex. And he found the same push-pull dynamics. Wherever we stuck an electrode in cortex, we found them. And this suggested these are similar mechanisms at play all over cortex.
Now, one interesting thing is that as you move forward from the back of the brain to the front of the brain, all the oscillations increase in frequency slightly. So as a result, alpha in visual cortex, which has this push-pull relationship with gamma, becomes beta in frontal cortex. Essentially, the alpha-beta is the same signal. It's just increasing in frequency a bit as you go up the cortical hierarchy from the back of the brain to the front of the brain.
So now that we had this in mind, we thought, well, what sort of general cortical function could be served by these push-pull beta-gamma dynamics? Well, one thing that occurred to us is something called predictive coding. Predictive coding is the idea that your brain is constantly making mental models of the environment, constantly anticipating, constantly predicting what's coming in the next second or so. And it's doing so, so that you're not overloaded by sensory inputs.
Your brain wants to process things that are interesting and important, not things that were predicted, because things that were predicted, things that are expected, are by definition really not that informative. So predicted inputs are suppressed, because they're not that interesting. Unpredicted inputs, things that weren't expected-- prediction errors-- are fed forward to update the mental model.
So everybody, virtually everybody, agrees that some form of predictive coding is fundamental to how the cortex works. What we don't know is how this works in the actual brain. And models to date have used specialized circuits at each level of cortex that do things like detect prediction errors and feed forward signals, or suppress signals. Well, we thought that these beta-gamma dynamics might be another way of implementing predictive coding in a simpler fashion that doesn't require all this extra circuitry-- a more parsimonious explanation.
So here's how we thought this might work. Gamma is the feedforward, bottom-up signal; alpha-beta is the top-down signal. So the idea here is that I have higher areas, like the prefrontal cortex, on the right, and lower-order sensory cortex areas on the left.
And the idea is that alpha-beta carries the predictions. It inhibits the gamma, and therefore the spiking, in the pathways that process sensory inputs that match those predictions. So prediction errors are just simply circuits back in sensory cortex that have not been inhibited by these alpha-beta prediction signals. You get a prediction error because the alpha-beta signal didn't inhibit that pathway-- the alpha-beta signal only inhibits the representations, circuits, networks that represent a predicted stimulus.
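Here is a toy sketch of that gating idea-- not a predictive coding model from the literature, just an illustration: a top-down prediction inhibits the feedforward (gamma) channel for the predicted stimulus, so only unpredicted inputs produce a strong feedforward signal. The stimulus names and inhibition strength are arbitrary assumptions:

    stimuli = ["car", "face", "banana"]

    def feedforward_signal(input_stim, predicted_stim, drive=1.0, inhibition=0.8):
        # Toy alpha/beta gating: the predicted stimulus channel is inhibited,
        # so only unpredicted inputs pass a strong "gamma" signal forward.
        gamma = {s: 0.0 for s in stimuli}
        gamma[input_stim] = drive
        gamma[predicted_stim] -= inhibition * drive      # top-down suppression
        return {s: max(0.0, g) for s, g in gamma.items()}

    print(feedforward_signal("car", predicted_stim="car"))    # predicted input: mostly suppressed
    print(feedforward_signal("face", predicted_stim="car"))   # unpredicted input: full error fed forward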
So with Andre Bastos, in collaboration with Nancy Kopell's lab, we did a series of studies where the animal does a simple working memory task: hold a picture in working memory over a short delay. And what we varied is how predictable the sample stimulus was. Either it was highly predictable-- the same stimulus is used multiple trials in a row, so the animal knows that if it's a car on this trial, a car is going to occur on the next trial-- or there was a block of trials in which the sample object, this picture, was unpredictable, randomly varied so the monkey couldn't predict what the stimulus was.
So we have two different types of blocks. They're all the same objects, but in some conditions they're predictable and in some conditions unpredictable. And what we found is consistent with this model of alpha-beta and gamma dynamics playing a role in predictive coding. Namely, this just shows LFP synchrony between two areas, between the frontal eye fields in frontal cortex and area V4 in visual cortex. This shows the coherence on the y-axis, frequency on the x-axis. Here's gamma, there's theta, which tracks with gamma, and there's alpha-beta lying just below gamma.
And what this actually measures is the difference between trials in which the sample stimulus was predictable versus unpredictable. Up on the axis is more synchrony to unpredictable sample objects, and down on the axis is more synchrony to predictable sample objects-- unexpected versus expected.
And what we see is consistent with this model. There's more gamma and theta synchrony between these areas whenever the animal sees an unpredicted stimulus, one that it wasn't expecting. And there's more alpha-beta signals when the animal is in a block of trials where the stimuli are highly predictable.
And again, this higher alpha-beta signal is because there's something to predict. These signals are presumably carrying these predictions. And this higher gamma synchrony to unpredictable stimuli is because there was no inhibition. These networks are allowed to express gamma, because there were no predictions inhibiting them.
And we saw this writ large across the cortex. Here's now a measure of synchrony or coherence in the gamma band, beta band, alpha band, and theta band. And to make a long story short, the thickness of the lines shows the strength of the synchrony. Blue shows more synchrony when the stimuli were predictable, and red shows more synchrony when the stimuli were unpredictable.
And the way to read this is that gamma coherence was, first of all, stronger in the feedforward direction, from the back of the brain to the front of the brain. And it was stronger when the stimuli were unpredictable. By contrast, alpha-beta coherence was stronger in the feedback direction, in the anterior to posterior direction. And it was stronger whenever the stimuli were predictable, so consistent with the predictive coding alpha-beta gamma model.
But importantly, the effects were stimulus-specific. These effects of alpha-beta were strongest at the recording sites in visual cortex where the spiking preferred the predicted stimulus. And the gamma synchrony and gamma power were stronger at recording sites in visual cortex where the spiking preferred the unpredicted stimulus the animal had just seen, the stimulus that elicited the gamma.
So all of this is consistent with a model in which alpha-beta is like a control signal that feeds top-down information through the cortex, from the front of the brain to the back of the brain, in the deep layers of cortex. It regulates the expression of gamma, primarily in the superficial layers of cortex that traffic bottom-up sensory information-- the gamma that's associated with the spiking that's feeding forward sensory information. And we've seen these same dynamics everywhere we've looked in cortex.
So we think this may reflect a modal cortical circuit that plays a role in top-down control of working memory and attention, also in predictive coding, and even in the transfer of memories between cerebral hemispheres. And it could explain things like the sensory overload in autism, for example. The role of beta is to regulate gamma; if that beta were somehow disrupted, you wouldn't have that regulation, and everything that your brain sees is going to seem like a new stimulus. It's going to be fed forward with these gamma signals, and you're going to get sensory overload.
And also things like top-down attention-- if you want to not pay attention to something, to pay attention to something else, you want to send those alpha-beta signals down, so they can inhibit the representations of the things you don't want to pay attention to, or things that might distract you. So again, a modal cortical circuit plays a general role in allowing top-down signaling, top-down information in your brain, to regulate the flow of sensory information in your brain. And with that, I thank you. And I especially thank all the people in my lab who do all the hard work conducting these experiments. Thank you.