Coding of space and time in cortical structures
Date Posted:
September 18, 2023
Date Recorded:
September 12, 2023
Speaker(s):
Prof. Michael Hasselmo; Director, Center for Systems Neuroscience; Boston University
Brains, Minds and Machines Seminar Series
Description:
Abstract: Recordings of neurons in cortical structures in behaving rodents show responses to dimensions of space and time relevant to encoding and retrieval of spatiotemporal trajectories of behavior in episodic memory. This includes the coding of spatial location by grid cells in entorhinal cortex and place cells in hippocampus, some of which also fire as time cells when a rodent runs on a treadmill (Kraus et al., 2013; 2015; Mau et al., 2018). Trajectory encoding also includes coding of the direction and speed of movement. Speed is coded by both firing rate and frequency of neuronal rhythmicity (Hinman et al., 2016, Dannenberg et al., 2020), and inactivation of input from the medial septum impairs the spatial selectivity of grid cells suggesting rhythmic coding of running speed is important for spatial coding by grid cells (Brandon et al., 2011; Robinson et al., 2023). However, entorhinal neurons code head direction more than movement direction, raising questions about the role of path integration for computing position (Raudies et al., 2015). As a complementary mechanism, allocentric spatial location could be coded by transformation of egocentric sensory input. Data from our lab shows coding of environmental boundaries in egocentric coordinates (Hinman et al., 2019; Alexander et al., 2020) that can be combined with head direction coding for a transformation into allocentric coding of boundaries and spatial location. Thus, a variety of functional neuronal responses could contribute to the coding of time and spatial location.
Bio: Research in the Hasselmo Laboratory concerns the cortical dynamics of memory-guided behavior, including effects of neuromodulation and theta rhythm oscillations in cortical function. Neurophysiological techniques are used to analyze intrinsic and synaptic properties of cortical circuits in rodents and to explore the effects of modulators on these properties. Computational modeling is used to link these physiological data to memory-guided behavior.
Experiments using multiple single-unit recording in behavioral tasks are designed to test predictions of the computational models.
Areas of research focus include episodic memory function and theta rhythm dynamics in the entorhinal cortex, prefrontal cortex, and hippocampal formation.
MATT WILSON: So it's a real pleasure to be able to welcome and introduce a great colleague and friend of mine, Mike Hasselmo. So Mike didn't want me to go too long. We have a long history together. We actually were-- we started our careers at Caltech together. I was a graduate student. Mike was a postdoc in the lab of Jim Bower.
Mike started off doing slice electrophysiology. We were both studying the olfactory system. I was doing neuromodeling and some electrophysiology. Mike was doing slice electrophysiology. You know, he was afraid I would actually bring this up. And I wasn't going to bring it up, but I've got to bring it up.
So one of the things-- our paths have gone in parallel tracks. First, right across the hall from one another while at Caltech, and now right across the river. But one of the things about Mike is that when he would do his slice electrophysiology, one of the things he liked to do was to listen to TV tunes. And when I say TV tunes, it's like the intro theme-- so like Flipper or the Beverly Hillbillies. Just this sort of mindless stuff.
Now, he did this because doing electrophysiology is kind of a mindless thing. It just requires-- you have to sit, and there's this-- you're applying electrical stimulation. So there's not a lot to do. You have to monitor it, but there's not a lot of active thinking going on. So the show tunes, the TV tunes, kept his mind in an appropriate place. I don't know where that place is, but I was always there with him.
And so from that time as a graduate student, as a postdoc, Mike then went on, got a faculty position at Harvard, and then moved to BU, where he's now director of the Center for Systems Neuroscience and really kind of championed a program and approach that drew from his roots in basic cellular biophysics. And then later into computational neuroscience, developing models of complex biological systems, and then incorporating things like behavioral electrophysiology.
So it really was kind of the whole package from understanding how cells, circuits, systems give rise to basic computations underlying behavior. And so I got in the TV tunes story. And I'll leave the rest of the story to Mike. So Mike, welcome. It's great to have you here.
MICHAEL E. HASSELMO: Great. Thanks very much.
[APPLAUSE]
Yeah. It's really great to be here and get to see old friends and be reminded of the old days. You know, actually, I still have TV tunes on my iPhone. And when I'm driving in the car and a TV tune comes on, I'm literally reminded of Matt Wilson. So it's great.
Yeah. So I'll be talking about Coding of Space and Time in Cortical Structures. I'm going to cover kind of a wide range of phenomena that we've been studying in the lab. And right up front, I want to give credit to the various researchers in my lab that have done a lot of the work that I'll be presenting today.
So a lot of what I present was done by Jennifer Robinson, who's a postdoctoral fellow working with me, as well as Mark Brandon. Also, I'll present work done by Patrick LaChance, who just joined my lab, and I'll present work done by Andy Alexander and Holger Dannenberg who were former postdocs.
Andy is now at UC Santa Barbara. Holger is at George Mason. Jake Hinman, I'll present his work. He's at University of Illinois. And also, work done by Ben Kraus, Caitlin Monaghan, and Mark Brandon, who's now at McGill University.
So most of you know about episodic memory function and the structures involved in episodic memory function. I've been very interested in the neural representations in structures including the entorhinal cortex, the retrosplenial cortex, and the hippocampus. As you know, there was severe impairment of the encoding of new episodic memories observed in patient HM when he had bilateral removal of the medial temporal lobes. I like to emphasize that it was all of his entorhinal cortex but only half of the hippocampus.
There's also a lot of fMRI data, including work by my wife, Chantal Stern, who's also at Boston University, showing that neural activation, fMRI activation, in the hippocampus and entorhinal cortex and associated structures is correlated with the accuracy of subsequent memory for events. So there's a lot of evidence implicating these structures.
And the rodent structures make a great model system for this because they're relatively large relative to the rest of the brain-- the hippocampus, entorhinal cortex, and retrosplenial cortex-- and that means that it's appropriate for doing unit recording in behaving animals as the animal's running around the environment.
So I'm going to talk a lot about different cell types here-- time cells, distance cells, grid cells, place cells-- but I want to emphasize right up front that even though I'm talking about these different cell types, they're all involved in a mixed selectivity. They're really just different responses in a broad continuum of responses that include mixtures of these different dimensions.
And you probably all know about the work by Earl Miller and Stefano Fusi, showing the computational importance of this mixed selectivity across different dimensions. So the time cells I'll talk about in the hippocampus and entorhinal cortex will also often fire as grid cells or as place cells. There's grid cells that also show speed coding and head direction coding.
There's head direction and speed cells. The retrosplenial egocentric boundary cells that I talk about also have head direction coding and speed coding. So really, there's quite a broad mixed selectivity, even though I'm going to talk about these different categories as if they are discrete, non-overlapping categories.
All right. So I'm going to talk first about coding of time, specifically in these structures in the context of episodic memory. Then I'll talk about coding of space. And obviously, that's a humongous topic, but I'll focus in on certain aspects of it relevant to episodic memory. Then I'll talk about the egocentric coding of boundaries and the dynamics of encoding and retrieval.
So first, I'll talk about coding of time. So I developed kind of a broad model of episodic memory that was published in 2009. I was just talking to Ila Fiete about it earlier because she actually found this paper and has been working with a model very similar in structure to it. And then I also published this in my book.
But this is basically focusing on episodic memory as a spatiotemporal trajectory. So probably many of you know episodic memory was originally defined by Endel Tulving as the memory for events that occur at a particular place and time. And he was contrasting this with semantic memory, which is the general knowledge of facts about the world where you don't remember the specific event.
For instance, all of you know that Paris is the capital of France, but you probably do not remember the time and location when you first learned that fact. In contrast, you hopefully have a memory of walking into the room at the beginning of the lecture today and whatever you did at lunchtime today, and that would have a very specific time and place associated with it.
So I emphasize the idea of a spatiotemporal trajectory because I think it's very important to also think about components of trajectories such as the speed and the direction at a particular position. So you can probably remember what speed you walked into the room or what speed you walk down the sidewalk this morning, as well as the general direction that you were moving.
So I modeled this as a spatiotemporal trajectory where I could give an input, for example, related to one of my mornings at the time that I was writing the book, where I would park the car in the parking garage and walk down to my old office location-- it's now up here-- and meet with different people and then go to different offices around the University. And I could encode this as a trajectory, shown in gray. And then I could actually have the simulation retrieve this trajectory, as well as the events occurring along the trajectory, using these various functional cell types that I've been talking about.
Now, obviously, a very important part of that is to discriminate the events that occur in the same location at different time points-- sitting in my office, meeting different people, or what you'll experience, hearing my talk, sitting in exactly the same location. Hopefully you can differentiate something that happens now from something that happens 20 minutes from now when you're trying to remember it later.
So time cells could be very appropriate for making that type of distinction, at least on a shorter term time scale. And I want to emphasize right up front that time cells, or what were called episode cells, were described by Eva Pastalkova in Gyorgy Buzsáki's lab. Then they were called time cells by Chris MacDonald in Howard Eichenbaum's lab.
So there's a large number of studies now showing these types of time responses. And Marc Howard at Boston University has done analyses across a wide number of different experiments, including work in Earl Miller's lab, showing these time-specific responses of neurons.
But what I'm showing is a set of time cells that are firing during a 16-second delay period as an animal is running on a treadmill here in a spatial alternation task. So it's going to run for 16 seconds. Then it'll make a left turn response. Run for 16 seconds. Make a right turn response. And that's what it's been doing continuously. And you can see these cell types are actually going to fire consistently across many trials.
So I'm going to start the movie now. The popping you hear is the-- the different tones indicate different cells. You can see the cell coded in red fires first for a period of time. And then, even though the animal is not changing its location or its direction, then, a few seconds later, the cell coded in green fires, and then a few seconds later, the cell coded in blue fires. So these three different cells are coding very distinct time periods during this 16-second interval.
And as you can see up here on the left, these cells are very consistent across many trials firing at the same time points. But the cell coded in red, in addition to having a time field, also fires as a place cell at a particular location on the track. And you may have noticed right at the beginning of the video, the cell coded in blue fired in another place on the track, and it reliably fires in that place. So these are a great example of mixed selectivity where the cells are coding both time and space.
And so here you have the red, the green in the middle of the running period, and then the blue at the end of the running period. So this is a great potential mechanism for storing the time of events, at least on this 16-second time scale, during the running.
Now, obviously, we had to-- in order to get the experimental data, we had to run many, many trials. And so you might say, oh, well, it's distinct from an episodic memory. But at least it provides a framework in which these different events could be stored at different times.
Now, the reason we had the animal running on a treadmill was that we wanted to be able to differentiate the coding of time from the coding of distance. And so we could have, for a single cell-- you can see here, this cell is showing firing at the end of the 16-second period. And the different colors are the same cell but the different running speeds.
So when the running speed was slow, it'd be covering a short distance. When the running speed was fast for the treadmill, it'd be covering a long distance. But you can see, because the time is consistent, the cell tends to fire at a reliable time and not at a reliable distance.
In contrast, in a different day, when the animal was running the same distance, you can see here the distance was held constant by having it run for a long period of time at a slow speed or a short period of time at a fast speed. And in this case, the time coding was variable, but the distance coding was consistent.
And so even though the cells did actually cover a range of different responses with different combinations of time and distance, there was a bias towards time coding when time was held constant or towards distance coding when distance was held constant. And this is confirmed, actually, in a more recent analysis done on the same data by Dori Derdikman's lab.
Now, we've also done calcium imaging in the lab. This is a project done by Will Mau in a collaboration with Howard Eichenbaum before he passed away. And this shows that he could track-- you could track calcium events occurring in different cells at different time points during this 10-second period of the animal running on a treadmill.
This is showing different laps, showing here, again, the same sequence of time cells occur where they're showing these calcium events at particular time points during the 10-second period. So as you probably know, calcium events can be somewhat sparser than electrophysiology. But it's still reliably showing unit events at the beginning or at the end of this 10-second period.
Now, the calcium imaging also allows tracking of larger numbers of cells. So Will had 172 cells that were coding particular time intervals. And you can see they're covering a good range of these 10 seconds, but there's more of them at the beginning with short time fields and fewer of them towards the end. But the ones towards the end are having broader event fields here. And that's an interesting characteristic that seems relatively consistent across the time coding, and I'll talk about that more.
Calcium imaging also allowed the same population to be tracked across days. And so here you can see, on day four, you can still see the same overall population coding the different times during this 10-second interval. But there's some drift: some of the cells are changing the time of their response. And you could see that as a bug, but you could also see it as a feature, as it allows the population to distinguish an event that occurs on day four versus on day one.
And even within one day, the correlation across the population between trials falls off as there's a larger lag between trials. So it would allow the animal to be able to discriminate events across different trials. And this is promising because this also suggests that the evolution of time responses is sufficient for coding time on a time scale of minutes, which is, of course, much more meaningful for you guys. If you're sitting here in my talk, you need to differentiate things that happen 10 or 20 minutes apart.
So this was shown further in work by Yue Liu in a project in Marc Howard's lab at Boston University. He showed the data I've already shown you, where there's coding of time cells on a time scale of seconds in the work by Will Mau, and also another project by Sam Levy. And he also looked at data from the Ziv lab. And so here's a cell that fires at the beginning of the period, here's one that fires in the middle, and the correlation tends to change smoothly over the time period of 10 seconds.
But he also showed changes that occurred on the time scale of trials within a day-- so on a time scale of minutes-- where consistently this particular neuron is increasing on later trials, or this one's firing at the first trial, and then decreasing, and then coming up. And the correlation structure is relatively similar in the sense that there's higher correlation between adjacent trials than between more distant trials.
And so this is encouraging because it does suggest that time cell representation is relevant to a wide range of different scales. And this has been a major focus of Marc Howard's research. So in one of the projects we did, it was a collaboration to do a more detailed spiking model of his standard model. His standard model is essentially the Laplace transform.
And the idea of it in the context of neuronal activity is that you could imagine, when the animal is coming on to the treadmill, that's a very salient event for the animal. The treadmill starts moving. It takes them a while to get used to that because it's a very salient event. But the idea is that in this Laplace transform model, you'd have a set of neurons that all start firing, and then some of them decay with rapid time constants, and others decay with slower time constants.
And this was inspired by some slice physiology that I've done with Ángel Alonso, showing cells that could decay with relatively slow time constants firing over time. In the model, then, these cells with different time constants could interact via excitatory and inhibitory connections to generate time cells.
And this is a lot like the model of dual exponential synaptic potentials, where you have a positive-going exponential that's decaying, and then you subtract from-- you have subtraction of a negative exponential that's a faster time constant. And so you get the rise time and then the decay. But it has the characteristic, then, that the ones that are coding a short time interval will have a short firing field and the ones coding a longer time interval will have a longer firing field.
And that's what's shown here. And that corresponds to what we see in the experimental data, with the short firing fields at the beginning of the period, and the longer firing fields at the end of the period. And obviously, this is all being timed relative to the onset of the treadmill moving.
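To make that concrete, here is a minimal sketch of the difference-of-exponentials idea in Python. The time constants and the 0.2 ratio are my own toy choices, not the published model's parameters; the point is just that slower time constants give later peaks and wider fields, matching the widening seen in the data.

```python
import numpy as np

def time_cell_rate(t, tau_fast, tau_slow):
    """Difference of two exponentials, both triggered at t = 0 (treadmill
    onset): a fast-decaying component subtracted from a slower one gives
    a unimodal 'time field' that peaks after the onset."""
    return np.exp(-t / tau_slow) - np.exp(-t / tau_fast)

t = np.linspace(0, 16, 1601)  # 16-second treadmill run
# Progressively slower time constants yield later, wider firing fields
for tau_slow in [0.5, 2.0, 8.0]:
    rate = time_cell_rate(t, tau_fast=0.2 * tau_slow, tau_slow=tau_slow)
    peak = t[np.argmax(rate)]
    width = np.sum(rate > 0.5 * rate.max()) * (t[1] - t[0])  # rough FWHM
    print(f"tau_slow = {tau_slow:4.1f} s: peak at {peak:4.2f} s, width {width:5.2f} s")
```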
Now, we were excited by the fact that when we submitted this model, a little bit later, the Moser lab came out with a paper showing that these types of long-term decays of the firing rate occur in in vivo recordings in the lateral entorhinal cortex as well. So here's one with a very slow decay of firing rate over many, many trials in open fields. Here's one where each new open field has this same decay characteristic. So this is consistent with the idea that time cells are arising from this type of exponential coding.
Right. So that's kind of a brief history of time. Now I'm going to move on to talking about coding of space. And again, I'm going to focus in on particular aspects of the coding within these structures relevant to episodic memory. So probably many of you here would have heard about grid cells. Ila Fiete has done a lot of work on grid cells, and so it should be familiar to you.
I'm just going to show you a quick movie for anybody who hasn't seen grid cells. These are cells recorded in the medial entorhinal cortex. They were discovered in the Moser lab over in Trondheim, Norway. And what I'm going to show is a schematic of a recording done by Mark Brandon, and then a video of a recording done by Caitlin Monaghan.
In both cases, the animal is foraging around, looking for bits of crushed up Fruit Loops in an open field environment. And you can see here, the LEDs on the front and the back of the head mount allow the tracking of the position and the head direction of the animal as it's running around. And so as it's running around, the position of the animal, each time a single grid cell fires, a spike is plotted with a red dot.
And you can see that initially, it looks maybe random, but as the coverage of the environment continues, you can see that these red dots are clustered in very highly localized firing fields that can be described as being located on the vertices of tightly packed equilateral triangles or as having a hexagonal layout. And this regularity of the firing requires the animal to have a very accurate representation of its location in the environment because if you simulate this and you have any kind of noise in there, you very quickly lose that accuracy.
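For anyone who wants the textbook idealization of that firing pattern: a grid cell's rate map is commonly written as the rectified sum of three plane-wave cosines at 60-degree offsets. This is a standard formula from the modeling literature, not the analysis used for the recordings shown here, and the spacing and origin below are arbitrary.

```python
import numpy as np

def grid_rate(x, y, spacing=0.4, orientation=0.0):
    """Idealized grid cell: sum of three cosines whose wave vectors are
    60 degrees apart, producing fields on the vertices of tightly packed
    equilateral triangles (a hexagonal layout)."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for this field spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3
        rate = rate + np.cos(k * (x * np.cos(theta) + y * np.sin(theta)))
    return np.maximum(rate, 0.0)  # rectify: firing rates are non-negative

# Evaluate over a 1 m x 1 m open field
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
rate_map = grid_rate(xs, ys)
print(rate_map.shape, round(float(rate_map.max()), 2))  # peaks reach 3.0
```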
So one question was, do these cells have mixed selectivity as time cells? Mark Brandon and Ben Kraus worked on this together. So Mark Brandon did the recordings in the open field environments to identify individual grid cells, and then he gave them to Ben Kraus, who actually had done the previous data I showed you. And Ben ran them on the treadmill in the spatial alternation task.
And some of them actually would just have a single time cell firing field, but others had multiple time cell firing fields. So that's what I'll show in this video. It fires actually right before the treadmill starts, then as it's running. It'll fire in the middle of the treadmill period, then stop. And then fire in the end of the treadmill period. And this one just jumps right away to the next treadmill running period to show that it consistently fires at the same time points in each of the periods of treadmill running.
And it actually shows a widening-- it's very characteristic-- this widening of the firing fields as the interval continues, suggesting that if you were modeling this with the Laplace transform, you'd need to have multiple-- maybe six-- interacting exponentials in order to model that cell.
So another characteristic of grid cells, in addition to the fact that they can, in some cases, have this mixed selectivity, is that they also have a very strong firing relationship to the theta rhythm. And this is one of the main themes of what I've been working on in the laboratory: to understand the role of oscillatory dynamics in the response properties of these neurons.
Now, probably many of you know that the theta rhythm is regulated by input from the medial septum to the hippocampus and the entorhinal cortex. The GABAergic neurons in the medial septum contact GABAergic neurons in both of these structures to cause rhythmic inhibition that causes a rhythmic disinhibition that results in this large amplitude oscillation in the local field potential, which tends to be around 7 or 8 hertz in rodents running around in an open field environment or on a linear track or a treadmill.
The medial septum also has cholinergic neurons and glutamatergic neurons that are coming in and modulating the dynamics, and I'll talk more about that later. And the grid cells show very striking phase specificity relative to the theta rhythm oscillations. This is a grid cell in Layer II, showing firing that starts at late phases and moves consistently to earlier phases within each of these different firing fields.
There's two cycles plotted here, but you can see this curve here where they're starting out at late phases and shifting to earlier phases as it runs through the firing field of an individual grid cell. And so this suggests an important role of theta rhythm oscillations in the mechanisms of grid cells, and this is something that I am very, very interested in. It's still somewhat controversial, whether theta rhythm is essential to grid cells, but I think our data supports that.
So one of the main pieces of data was worked on by Mark Brandon using a technique that had been used for a number of studies, showing the ability to shut off theta rhythm in the hippocampus and entorhinal cortex. And this is done by infusions of the GABA-A agonist muscimol into the medial septum, which will inhibit all of the cell populations in the medial septum, shutting off the cholinergic, glutamatergic, and GABAergic neurons and resulting in a loss of theta rhythm.
And he saw quite a striking change in the grid cell firing. Essentially, it was lost when the theta rhythm was lost. So here's the baseline condition, with the theta rhythm present in the local field potential. And then here's the grid cell's spatial periodicity in the baseline condition. During the medial septum inactivation, there's a loss of this theta rhythm in the local field potential, and there's a loss of the spatial periodicity of the grid cell. And then, as the theta rhythm recovers, the spatial periodicity comes back.
And a similar result was published in a companion article in Science from the Leutgeb lab, showing very much the same effect but using lidocaine, which has a faster time effect on the medial septum. So this, to me, was very suggestive that the theta rhythm was important, though it could be these other modulatory inputs. And we've actually explored that in more recent studies that I'll show you.
We also wanted to rule out this just being a general loss of all specificity of the firing. And again, the mixed selectivity was useful for this because there are conjunctive grid by head direction cells in the entorhinal cortex. And so we could record those. Here's one showing this grid cell firing characteristic and the fact that this one is firing when the animal's head is pointed towards the south.
So this is a polar plot showing the firing rate being high for the south, and then very low for east, north, or west. Then, during medial septum inactivation, there's a loss of that grid cell spatial selectivity, but the head direction selectivity is maintained in the same way. And then the grid cell selectivity comes back.
Now, in a different study, Jeff Taube's lab showed that inactivating the anterior thalamus would wipe out the head direction selectivity coming into entorhinal cortex. And that would also cause a loss of the grid cell selectivity. So it seems like both this head direction input and the medial septal theta rhythmic input are important for the spatial selectivity of the grid cell firing.
As I mentioned, we were very interested in understanding the relative role of the different subpopulations in the medial septum. And Jennifer Robinson took on this project in a collaboration with Mark Brandon. And bottom line is that it looks like the GABAergic neurons that are regulating theta rhythm are particularly important for the grid cell firing.
So what Jennifer did was to generate expression of archaerhodopsin in the medial septum so that she could optogenetically inactivate the activity of the GABAergic neurons, thereby losing this GABAergic rhythmic modulation of GABAergic neurons in entorhinal cortex. And she wasn't affecting the glutamatergic or cholinergic neurons.
And before the laser goes on, you can see here there's very high amplitude power, around 7 to 8 hertz, for the theta rhythm as the animal's running around. And then, when the laser goes on, that is lost. And you can see that plot here. That's a very striking loss of power in the theta range from 7 to 8 hertz.
And she saw, correspondingly, a statistically significant reduction in the gridness score of grid cells. So here's a grid cell with multiple firing fields. Those firing fields seem to be greatly decreased during the inactivation. Here's another one showing this decrease. I'm, of course, showing you really nice examples of that. There were some that didn't show nearly as strong an effect. But on average, the gridness score was greatly reduced from the baseline.
So here's the baseline gridness score, and then the gridness score computed from the 30 seconds-- both the 30 seconds of laser on and the 30 seconds of laser off in between. And that was because, I have to admit, when we designed the experiment, we had these relatively rapid transitions, and there wasn't enough time for the grid cells to recover during that 30 second laser off period.
But they did recover by the recordings a few hours later. In this case, 24 hours later. And then she did another experiment where she did 30 seconds on and 60 seconds off. And you can see a somewhat weaker effect there because there was some more time for recovery. Here's showing the time course with the baseline gridness score. And then the running average-- so the first 0 to 30 seconds of the light on.
And actually, the loss of the gridness score comes on progressively over maybe the first 30 seconds after the light comes on, and then it starts to recover. So there's a time course of the reduction in the gridness score, and then a beginning of a recovery of the gridness score.
And I was kind of happy to find out that this is consistent, actually, with both grid cells and head direction cell recovery. If you switch between light and darkness, the recovery of those cells often will take at least 20 or 30 seconds. So it's not unusual for recoveries to be relatively slow.
Jennifer also did inactivation of the glutamatergic cell population in medial septum to see their role. She did see some changes in the firing characteristics, but she did not see a loss of theta rhythm, and she did not see as large an effect on the gridness. And Holger Dannenberg did inactivation of the cholinergic input to the entorhinal cortex while recording grid cells and also saw maintenance of the grid cells.
So it really looks like it's these GABAergic inputs that are the most important ones for regulating the grid cell responses. And that's consistent with the fact that they're also the ones that are regulating the theta rhythm.
Now, I'm fascinated with the fact that Eva Pastalkova did the same type of experiment as the original one using muscimol-- infusing muscimol into the medial septum while recording time cells. This hasn't been done yet for time cells in entorhinal cortex, but she's done it for time cells in the hippocampus.
Here is a population of time cells firing at different time points during a 9-second delay. And you can see that during the medial septum inactivation, there's also a loss of the selectivity of time cells that is then recovered. So it's fascinating that both the time response and the spatial response seem to require this modulatory input from the medial septum.
Now, this raises kind of interesting questions about, what is the mechanism for the generation of these grid cell firing characteristics? And so this next part of the talk, actually, I'm going to talk about a couple of different potential mechanisms for the generation of the grid cell spatial selectivity. The first one is the one that was used the most in the early models of grid cells, including the attractor model that Ila Fiete published, and this is the path integration of self-motion velocity over time.
I'll go into more detail about it, but basically, this is just taking the speed and movement direction of the animal and integrating it over time. And if you integrate the velocity, you can generate the position as long as you know the starting position. But then, there's a potential complementary mechanism-- they aren't necessarily exclusive of each other-- that would involve looking at the egocentric view of the world and transforming that into the allocentric representation of position.
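Before the data, here is a minimal sketch of that integration step, with an invented toy trajectory. The well-known catch, which motivates the septal and sensory results below, is that any noise in the speed or direction estimates accumulates without bound.

```python
import numpy as np

def path_integrate(speeds, directions, dt, start=(0.0, 0.0)):
    """Path integration: position is the running sum of velocity
    (speed times heading direction), given a known starting position."""
    x = start[0] + np.cumsum(speeds * np.cos(directions)) * dt
    y = start[1] + np.cumsum(speeds * np.sin(directions)) * dt
    return x, y

# Toy trajectory: constant 0.2 m/s with a slowly rotating movement direction
dt = 0.02
speeds = np.full(500, 0.2)
directions = np.linspace(0.0, np.pi / 2, 500)
x, y = path_integrate(speeds, directions, dt)
print(f"final position: ({x[-1]:.2f}, {y[-1]:.2f}) m")
```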
So I'll talk first about the path integration. And this seemed very plausible right from the start because it requires coding of speed and direction. And there was already plenty of data showing coding of speed by neurons in entorhinal cortex and in other structures like the hippocampus. So here's an example of a neuron showing a change in firing rate with running speed.
And the earlier models also pointed to the very extensive data on head direction cell responses in entorhinal cortex and other structures. So here's an example of that. You already saw it earlier, but this one's a head direction cell responding when the head is towards the southwest and not when it's towards the north or the east.
If you think about what path integration involves, you realize, of course, path integration requires movement direction rather than head direction. And I'll come back and talk about that issue. But first, I want to talk about the speed coding and how it might relate to the grid cells.
So there's, again, many examples of mixed selectivity here. The grid cells often will show speed coding, where they change firing rate with running speed. The conjunctive grid by head direction cells will show this type of speed coding. The head direction cells will show this coding as well, as will what the Moser lab labeled speed cells, which are really just cells showing the same type of speed coding but not showing head direction or grid selectivity.
Now, we were very interested in theta rhythmicity, and so we actually described a different type of speed coding. In addition to the firing rate coded speed, there's a dissociable representation of running speed that involves the theta rhythmicity of the neurons in entorhinal cortex. And you can see this with an autocorrelogram.
If you plot the autocorrelogram for an 8 hertz firing, as you shift that spiking relative to itself, it'll peak at the period of the oscillation, 125 milliseconds, and then at 250 milliseconds, and so on.
So that's what's shown up here with these peaks at 125 and 250. And if you plot the autocorrelogram at different running speeds, those peaks get closer together, indicating a shortening of the period. And that is essentially a change in frequency from around 7.6 up to 8.4 hertz or so. And this occurs in a lot of different cell types.
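As a rough illustration of that analysis, here is a toy autocorrelogram on a synthetic 8 hertz spike train. The bin size and timing jitter are my own assumptions, not the lab's analysis parameters; the rhythmic peak lands near 125 milliseconds as described.

```python
import numpy as np

def spike_autocorrelogram(spike_times, max_lag=0.3, bin_size=0.005):
    """Histogram of all forward pairwise spike-time differences up to
    max_lag; a theta-rhythmic cell peaks near multiples of its period."""
    lags = []
    for i, t in enumerate(spike_times):
        dts = spike_times[i + 1:] - t
        lags.extend(dts[dts <= max_lag])
    bins = np.arange(0.0, max_lag + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins)
    return counts, bins[:-1]

# Synthetic 8 Hz spike train with a little timing jitter
rng = np.random.default_rng(0)
spikes = np.sort(np.arange(0, 60, 1 / 8.0) + rng.normal(0, 0.01, 480))
counts, lag_starts = spike_autocorrelogram(spikes)
idx = np.argmax(counts[10:]) + 10  # skip the first 50 ms around zero lag
print(f"first rhythmic peak near {lag_starts[idx] * 1000:.0f} ms "
      f"(~{1 / lag_starts[idx]:.1f} Hz)")
```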
But as I mentioned, they're dissociable. Here is a case where the inactivation of the medial septum is wiping out the periodicity entirely. It's very strongly periodic before, and then it wipes it out. So there's no rhythmic coding of running speed. But to our surprise, the actual firing rate code of running speed is better. It's actually more accurate. So that's kind of an interesting dissociation.
And here, where there's a cell that still has rhythmicity, it actually changes its characteristics from increasing frequency with running speed to decreasing frequency with running speed. So it still has a perturbation of that coding. So this, to us, suggested that maybe it's not the firing rate code of running speed but instead the theta rhythmic coding of running speed that's important for the generation of the grid cell spatial selectivity.
And I just wanted to mention that this loss of the grid cell spatial selectivity could also underlie the impairment in spatial memory tasks, such as the Morris water maze or the eight-arm radial maze that occurs with the same type of muscimol infusion.
So there was already a pretty extensive history of evidence that muscimol infusions in the medial septum would impair spatial memory behavior even before we described this loss of the grid cell spatial selectivity, though interestingly, place cell firing is still present when the medial septum is inactivated, which is kind of an interesting puzzle.
Now, further work we did suggested that this theta rhythmicity is also specifically dependent upon the visual input being present to the animal. So Holger Dannenberg compared darkness and light conditions, and he showed that the firing rate coding of running speed was still the same in light and dark. But the rhythmicity coding that was present in light, going from 7.6 up to 8.4 hertz, seems to be lost, so that in darkness the cells do not show this shift in rhythmicity, suggesting it's somehow being driven by the visual input. And you can see here the autocorrelogram is getting narrower in the light condition but not in the dark condition.
And consistent with that, the grid cell firing fields actually get, in Holger's experiment, somewhat messier in the dark conditions. So the gridness score is lower in dark than in light conditions. And other earlier experiments that were done, where they removed other sensory cues like auditory stimuli or the walls of the environment, caused an almost total loss of the gridness in the darkness. So at least in our experiments, it suggests that the visual input is driving this theta rhythmic coding of running speed that could be contributing to the gridness.
Now, I wanted to make a connection to a paper that just came out from a lab here at MIT, the Flavell lab, looking at C. elegans. I never would have expected a connection across these species, but they had a plot that looked exactly like a plot we had in our 2019 paper, where they showed that the coding of velocity of movement of C. elegans had different time scales. So here is one that's showing that when it's averaged over 22-second time constant, the activity of the neuron very closely follows the velocity of the C. elegans, the worm.
They show it for different time scales. And in Holger Dannenberg's work, we showed that on short time scales, the firing rate and the running speed did not always correlate. But at longer time scales, like 4 to 8 or 8 to 16 seconds, or even up to 16 to 32 seconds, you see a much better coding of running speed by the neuronal activity, suggesting that there's actually maybe a multi-scale representation of running speed, consistent with Marc Howard's models of this multi-scale coding of time, and that maybe this is a cross-species phenomenon.
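A toy version of that multi-scale comparison, assuming a hypothetical cell whose rate tracks a slow average of running speed plus fast noise (all numbers invented): the rate-speed correlation improves as both signals are smoothed over longer windows.

```python
import numpy as np

def boxcar(x, n):
    """Moving-average smoothing with an n-sample boxcar."""
    return np.convolve(x, np.ones(n) / n, mode='same')

rng = np.random.default_rng(1)
dt = 0.05
n_samples = 20000

# Toy running-speed trace: a random walk clipped at zero
speed = np.clip(np.cumsum(rng.normal(0, 0.02, n_samples)) + 0.3, 0, None)
# A cell whose rate follows a slow (~8 s) average of speed, plus fast noise
rate = 5 + 20 * boxcar(speed, int(8 / dt)) + rng.normal(0, 30, n_samples)

for window in [0.5, 4.0, 16.0]:  # seconds
    n = int(window / dt)
    r = np.corrcoef(boxcar(rate, n), boxcar(speed, n))[0, 1]
    print(f"window {window:5.1f} s: r = {r:.2f}")
```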
Now, finally, I mentioned earlier that the models of path integration require movement direction rather than head direction. And so this raised the question, well, do grid cells-- sorry, is movement direction being coded as an input to grid cells? The grid cell models are actually using movement direction for path integration. So here's the attractor model showing the grid cell firing fields when movement direction is the input.
If you give head direction as an input, you can't get accurate firing. You can't just average the head direction of an actual animal to get that. And so we thought, well, do the neurons actually code movement direction? So we looked at neurons in the entorhinal cortex when the animal's head direction was 30 degrees or more from the movement direction, and we found that the cells that were coding head direction were not coding movement direction.
So at least within the entorhinal cortex, it's head direction, rather than movement direction, that's being coded. There might be some separate movement direction input coming into the entorhinal. But at least in the entorhinal cortex, head direction seems dominant.
And that leads to the next complementary mechanism for grid cells, and that is this transformation from egocentric viewpoints to allocentric position. And if you think about that just on an intuitive level, you guys sitting out here in the room, you have a very clear sense of your position in the room just by looking at the walls of the room.
So looking at your egocentric view of the room, you know where you're positioned in the room. You don't even need to path integrate. You could be asleep and wake up and know where you are. So this is really quite a strong mechanism for coding allocentric position, and this leads me into the next part of the talk, which is talking about the evidence for the egocentric coding of boundaries.
So this is something that needs to be dealt with in anything involving visual influences on the representation. So the visual input is coming in in retinotopic coordinates. I'm showing it here being projected onto the image plane. This is the way it would be presented on a computer screen in a virtual task-- in this case, a virtual foraging task. It might also be in spherical coordinates for the retina.
But basically, the retinotopic coordinates would need to be transformed into egocentric coordinates, that is, the position of objects and boundaries relative to the animal (to its left, right, front, and back), and then transformed into allocentric coordinates, which are the coordinates of the grid cells that fire based on the north, south, east, west coordinates of the environment. So a grid cell is firing based on the position of the animal relative to the boundaries, not the position of the boundaries relative to the animal.
All right. So that's a pretty profound difference in coordinates, and we were intrigued to find that there seems to be representation of both coordinate systems. This has actually already been shown in parietal cortex earlier.
So I want to give you an example: a movie of just a bunch of dots on the walls of an environment and the floor of an environment. Just watching this movie, you can get a very clear sense of your position in the environment based on the optic flow on the ground plane or based on the features at the base of the walls or at the corners of the walls.
So both of those mechanisms can be used, and I'll talk about their potential role. Neil Burgess had modeled how an egocentric representation of the boundaries could be combined with head direction in the retrosplenial cortex to generate an allocentric coding of the boundaries that could then drive place cell responses in allocentric coordinates. And he had shown allocentric coding of the boundaries, but he had not shown egocentric coding of the boundaries.
And so we were very excited when we found, in recordings from retrosplenial cortex and dorsomedial striatum, that there was a clear egocentric coding of the boundaries. So that's what I'm going to show here in this sequence of slides, is an example of one of these egocentric boundary cells.
Jake Hinman initially found these in my lab in dorsomedial striatum, and then Andy Alexander showed them in retrosplenial cortex. And now Patrick LaChance has come to my lab, and he found them in the postrhinal cortex in work with Jeff Taube.
So when you plot an egocentric boundary cell in the allocentric coordinates, you can see here the animal's foraging in the open field environment, with the trajectory shown in gray. And the head direction of the animal when each spike occurs is shown by the color. So you can see when it's against the west wall, it's spiking when the head direction is towards the northeast; when it's on the south wall, it's spiking when the head direction is towards the northwest; and so on.
All right. And this made Jake realize that maybe it was more efficient to represent the firing in egocentric coordinates rather than allocentric coordinates. So what he did was to take-- for each spike, he would look at the head direction of the animal and the position of the animal, and he would plot in the egocentric polar coordinates. That is, to the left, right, back, and front of the animal, he would plot the position of the boundary for that one spike.
So this spike here is the same as the position of the boundary here for that one spike. This shows the position of the boundary for three spikes. And then this shows the position of the boundary averaged over hundreds of spikes, giving you a firing rate map that indicates that the boundary is essentially-- or the cell is essentially firing when the boundary is to the left or when the corner is to the left and behind the animal.
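Here is a minimal sketch of that construction for a hypothetical square arena, with invented bin sizes; a full analysis would also normalize each bin by its egocentric occupancy to get a firing rate rather than a spike count.

```python
import numpy as np

def egocentric_boundary_map(spike_xy, spike_hd, walls,
                            n_angle=36, n_dist=20, max_dist=0.5):
    """For each spike, express sampled wall positions in polar coordinates
    relative to the animal (bearing measured from its head direction) and
    accumulate them into an egocentric spike-count map."""
    counts = np.zeros((n_angle, n_dist))
    for (x, y), hd in zip(spike_xy, spike_hd):
        dx, dy = walls[:, 0] - x, walls[:, 1] - y
        dist = np.hypot(dx, dy)
        bearing = (np.arctan2(dy, dx) - hd) % (2 * np.pi)  # egocentric angle
        keep = dist < max_dist
        a_bin = (bearing[keep] / (2 * np.pi) * n_angle).astype(int)
        d_bin = (dist[keep] / max_dist * n_dist).astype(int)
        np.add.at(counts, (a_bin, d_bin), 1)
    return counts

# Toy 1 m square arena: sample points along the four walls
s = np.linspace(0, 1, 100)
walls = np.concatenate([np.c_[s, np.zeros(100)], np.c_[s, np.ones(100)],
                        np.c_[np.zeros(100), s], np.c_[np.ones(100), s]])
spike_xy = np.array([[0.1, 0.5], [0.15, 0.4]])  # spike positions
spike_hd = np.array([np.pi / 2, np.pi / 2])     # head pointing 'north'
print(egocentric_boundary_map(spike_xy, spike_hd, walls).sum())
```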
And you saw many of these types of very characteristic responses where there was a consistent position of the boundary relative to the animal. So here's more examples of that. Here's an example where the boundary-- the cell is firing when the boundary is close by on the left side of the animal. So when it's going to the east against the north wall or going south against the east wall and so on.
So this is a really nice example, but this one could be generated by the whiskers of the animal. But here, you can see this cell is firing when the animal is 20 to 30 centimeters away from the wall. All right. So this is well outside of the whisker range. And it's got the same nice, clear coding of the relative position of the boundary.
So he saw a variety of responses, but he tended to see responses that were strong to the left side of the animal when he recorded in the right hemisphere or to the right when he recorded in the left hemisphere. And they're across a range of different distances and angles, including well outside of whisker range.
He also looked at different shaped environments, like a circular environment versus a square, and showed the same egocentric responses. He turned the environment and showed that the response was based on the position of the wall relative to the animal, not the distal cues in the environment. In contrast, head direction cells were responding in the allocentric coordinates, so they'd be responding based on compass direction.
He changed the size of the environment and showed that, at least for the retrosplenial cells, they seem to follow the walls and respond based on the distance of the walls, though there are other neurons in postrhinal cortex that seem to respond more relative to the center of the environment. He also did novel and familiar environments and showed a similar response. So these egocentric responses seem to be a general code for barriers that translates across novel and familiar environments of different sizes and shapes.
We were excited that work in Chantal Stern's lab recently used fMRI, with the retrosplenial cortex and parahippocampal cortex as regions of interest, while a human was doing, in this case, a foraging task for hidden coins (in contrast to crushed up Fruit Loops) that would only appear when the person got to that position in the open field environment. And they showed coding of distance and angle of the boundaries of the environment. So it's very exciting that the retrosplenial cortex in humans shows a similar coding of the egocentric position of boundaries.
Now, we were very interested in how these egocentric responses could be generated from the retinotopic inputs that the animal is getting, so we collaborated with an Australian group-- Simon Williams, Yanbo Lian, Tony Burkitt-- who took the trajectories that Andy was recording in the open field and generated what the retinotopic input would look like, and then computed the simple cell and complex cell responses, and then applied the non-negative sparse coding algorithm of Bruno Olshausen.
And without really tweaking the model further, they generated these egocentric boundary cell responses very effectively. So here's an example of one of these cells where the simulated neuron is firing when the boundary is quite close on the left side of the animal. Here's one where it's responding at a greater distance from the walls. So the simulations-- again, without really having to fiddle with the parameters-- show a similar range of coding of distances from the walls of the environment, as well as this strong left or right direction of the walls.
In the simulation, they could test, then, whether or not the range of the rodent visual field would underlie this left-right response. And that's what they seem to see because if they narrowed the range of the visual field, it actually tended to make the neurons respond somewhat more towards boundaries towards the front of the animal.
I mentioned earlier that the retrosplenial seems to have a different way of coding than the postrhinal cortex. This is work that Patrick LaChance has been following up on in my lab. He's been doing simultaneous recording in retrosplenial cortex and postrhinal cortex and has shown that the retrosplenial cortex seems to respond to the visual features of the walls so that the head direction response is very discretized.
So if you plot just based on when the head direction is towards the southwest versus the west or the northwest, there's really no overlap in the spatial coding, and the head direction responses seem to show these very strong discrete responses to different walls, whereas in the postrhinal cortex, there's less of this selectivity based on features of the different walls.
There's more of a continuum of head direction responses that you can also see in the colors here and in the distribution here. And this might be because the postrhinal cortex is getting more of an optic flow input from the superior colliculus via the pulvinar nucleus, as opposed to the retrosplenial cortex that gets more of a visual cortical input.
So this is all supportive of the fact that egocentric coordinates seem to be coded in retrosplenial cortex as a potential input for generating the allocentric coordinates of grid cells. Now, I mentioned earlier that Neil Burgess's lab had simulated these transformations from egocentric to allocentric coordinates using what are called gain field transformations on a population level, but I've been working on an alternate model of how this egocentric to allocentric transformation might work.
I've been very interested in whether this transformation could be involving just single neuron computations. And so what I did was to take a mathematical representation of the egocentric position of features on, in this case, three walls of the environment when the animal is conveniently located at the origin here. All right. So here's an array of vectors representing the egocentric positions of these three features. Then, at a later time, it might be further into the environment.
And looking at these same three features, it has different vectors to these same three features, shown in this array. And I could have 100 features in this condition and in this condition. But no matter how many features I have, the relationship between those two arrays would be coded by a single affine transformation matrix, shown here, that has the components of a rotation matrix as well as a translation.
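Writing that out for the two-dimensional case in homogeneous coordinates (the standard form; the talk does not commit to a particular parameterization), a change in head direction contributes the rotation and a change in position contributes the translation:

```latex
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
=
\begin{pmatrix}
\cos\Delta\theta & -\sin\Delta\theta & t_x \\
\sin\Delta\theta & \phantom{-}\cos\Delta\theta & t_y \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
```

The same matrix applies no matter how many feature vectors there are, which is what would let a single neuron tuned to one such matrix signal a particular displacement between the current view and a stored view.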
And so my idea was that the cells in entorhinal cortex might have this array of affine transformation matrices represented by different neurons. And the one that most effectively maps the current egocentric input to the memory of an earlier egocentric position would be the one getting activated. And I've shown that that can work for coding place cells.
So if you have a single low-frequency cosine, you can actually generate a place cell response around the original position, or apply an affine translation and generate it at a different location. Or I could generate grid cell responses. If I have a phase code that's relative to a periodic theta rhythm oscillation in the network, then I can get periodic grid cell firing, and that has different spatial phases based on the translation.
And I can also generate the conjunctive grid by head direction cells that have the spatial selectivity but also the head direction selectivity. So I think this is kind of an interesting idea that instead of using a population-level transformation, it could be done at the level of individual neurons.
All right. So that was kind of the overview of the egocentric coding of boundaries. Just in the last few minutes, I'm going to talk about one other aspect of the theta rhythmic modulation from the medial septum, and this is the regulation of the dynamics of encoding and retrieval. So this involves exactly the same circuitry-- the medial septum sending the GABAergic inputs to generate rhythmicity in the hippocampus and the entorhinal cortex.
And the proposal is that different phases of theta could correspond to encoding versus retrieval dynamics. Now, probably many of you have become familiar with Hopfield nets or simple associative memory models by Kohonen and so on. This maps exactly to the dynamics used in those types of associative memory models.
In those models, during encoding, the activity is clamped onto the network, and the outer product of the stored vectors is stored with a Hebbian learning rule without allowing any retrieval to occur. That's just the nature of the original Hopfield network. And that's what I'm proposing is happening at the peak of the local field potential in stratum pyramidale: the entorhinal input is very strong, driving the activity in the network.
The synaptic modification is strong, but there's very little internal retrieval activity. And then, on a different phase, at the trough, the entorhinal input is weaker, but there's strong internal connectivity, for instance, in CA3 and from CA3 to CA1. But there's no LTP at this phase.
And this corresponds to what's been used in all of these associative memory models, but it's also consistent with the experimental data in the hippocampus. If you look at the current source density across cycles of theta rhythm, there's a phase when the entorhinal input is very strong, as shown by a current sink in stratum lacunosum-moleculare. And then at a different phase, which would be the retrieval phase, the CA3 input is very strong and the neurons actually are generating spikes, as shown by the sink in stratum pyramidale.
This would also require a change from long-term potentiation on one phase to no LTP, or even long-term depression, on another phase. And that's also consistent with the experimental data. It's somewhat counterintuitive, but a number of experiments have shown that stimulation can induce LTP at the peak of the local theta, when the synaptic transmission is actually weakest. And then at the trough, when the synaptic transmission is stronger, if you stimulate there, you're more likely to get long-term depression. And again, this corresponds to what can be used in the standard associative memory models.
So just to give you kind of an intuitive sense of why that would be important, and why it's implemented in all of these existing associative memory models: the reason is that if you let retrieval happen during encoding, you'll get severe interference. So here's a case where we have retrieval and encoding all happening at the same time.
And I give the example that, let's say, I'm at dinner with Matt this evening, and I happen to remember a conversation with Hector at lunch. If I have that memory appearing in my hippocampus at the same time that I'm experiencing these new events, I'm not going to know whether I had lunch with Matt or dinner with Hector and so on. You get these associations between the retrieved memory and the new memory in the form of all these additional Hebbian modifications.
And this is just the mathematical reality of these types of associative memory models. And that's why you have to have a separate retrieval phase. So if I have a retrieval about my memory of lunch, this would need to be on the retrieval phase when the internal activity is strong but there's no LTP. And then I can separately, just 60 milliseconds later-- it's going back and forth very rapidly-- take the input about my current experience and have the long-term potentiation form a new, non-overlapping memory of that representation.
And this just shows a simulation of this-- an animal running on a T maze. If you separate encoding and retrieval, you can store associations between adjacent locations. But if you allow the retrieval to happen during encoding, you get this massive buildup of interference with these undesired associations.
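A minimal sketch of that interference argument, in the style of a Hopfield/Kohonen associative memory with toy binary patterns (the threshold and pattern sizes are my own choices): storing while retrieved activity is present adds outer-product cross terms that bind the retrieved memory to the new one.

```python
import numpy as np

def hebbian_store(W, pattern):
    """Encoding phase: clamp the pattern and add its outer product."""
    return W + np.outer(pattern, pattern)

def retrieve(W, cue):
    """Retrieval phase: spread activity through the weights, no learning."""
    return (W @ cue > 0.5).astype(float)

n = 50
lunch = np.zeros(n); lunch[:5] = 1       # event 1 (e.g., lunch with Hector)
dinner = np.zeros(n); dinner[10:15] = 1  # event 2 (e.g., dinner with Matt)

# Separated phases: each event is stored alone, so retrieval stays clean
W = hebbian_store(hebbian_store(np.zeros((n, n)), lunch), dinner)
print("dinner cue recalls lunch units?", retrieve(W, dinner) @ lunch > 0)  # False

# Mixed phases: retrieved lunch activity is present while dinner is encoded,
# so the Hebbian update links the two events together (interference)
mixed = dinner + retrieve(W, lunch)
W_bad = hebbian_store(W, mixed)
print("dinner cue recalls lunch units?", retrieve(W_bad, dinner) @ lunch > 0)  # True
```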
And there's a wide range of experimental data supporting this. There's data showing spiking on different phases of theta for novel and familiar stimuli. Laura Colgin has shown gamma frequency input from entorhinal cortex on a different phase from input from CA3. Josh Siegle in Matt Wilson's lab showed that optogenetic manipulation of interneurons would on one phase enhance encoding and on a different phase enhance retrieval.
And then the last thing I'm going to show is some data from Mark Brandon and also from Loren Frank's lab, showing input and replay on different phases of theta. This actually relates to something called theta cycle skipping: if you look at the autocorrelogram, instead of peaking at 125 milliseconds, some cells show a larger peak at 250 milliseconds, suggesting that they're firing on one cycle, then not firing, then firing, then not firing. It's been shown in a number of experiments.
But Mark Brandon took one of these theta cycle skipping cells and computed a cross-correlogram with another cell, and showed that the cross-correlogram is 0 at lag 0 and high at 125 milliseconds, which is quite striking. And it shows this over many days. And this indicates that the cells are firing on alternating cycles of theta. They're not firing together on the same cycle.
And Ken Kay in Loren Frank's lab showed that exact same phenomenon, with the low cross-correlation and the alternating of spiking between different cycles. And in this experiment, it's very exciting because he shows that within a cycle, the cells are shifting from coding the current location-- that would be the encoding-- to retrieving the right arm, then switching back to current location, then the next cycle retrieving the left arm, then current location, right arm, current location, left arm, and so on.
So this is perfect for both the retrieval of different arms but also the separation of the encoding of current position from the retrieval of future position. And I'll actually conclude there and take questions.
[APPLAUSE]