Hippocampal mechanisms of memory and cognition

Date Posted:  August 21, 2020
Date Recorded:  August 20, 2020
CBMM Speaker(s):  Matt Wilson
Brains, Minds and Machines Summer Course 2020

GABRIEL KREIMAN: So it's a great pleasure now to introduce Professor Matt Wilson from the Department of Brain and Cognitive Sciences and Biology at MIT. He has made seminal contributions to our understanding of the hippocampus, the interactions between the hippocampus and neocortex, and the role of the hippocampus in navigation, memory, and cognition. So welcome, Matt. [INAUDIBLE]

MATT WILSON: Thanks, Gabriel. We'll get started. Well, welcome, everyone. As we were talking about in the beginning, it's unfortunate we're not all down at Woods Hole. But hopefully we can go through some of the stuff that I want to talk about and have a little bit of discussion on mechanisms of memory and cognition that are relevant for intelligence and artificial intelligence, that are really expressed in the brain, and that can be revealed through the kind of behavioral electrophysiology that I'm going to talk about.

So, recording in the brain, we'll see how the hippocampus addresses this problem of learning from experience, and the role of biological mechanisms, including coding and representation through spike activity, as well as the coordination of that activity through macroscopic rhythms, and how those rhythms can be used to incorporate additional coding mechanisms which perhaps haven't been as thoroughly incorporated into synthetic models. These would be mechanisms of temporal coding: maintaining and evaluating the temporal order of events. And so that's what I'd like to go through.

And if I can get to it, hopefully, I would like to talk about how some of those mechanisms of coding and representation then get engaged during offline processing, which occurs during sleep. And I think the idea of an interplay between the formation of memory, the re-evaluation of memory, and its use in constructing generalizable, generative models in the brain is certainly an important and interesting question.

So we'll just get started. The structure that I'm going to focus on is shown here, the hippocampus, which is deep in the medial portion of the temporal lobe. What you see in cross-section up at the top is the classic tri-synaptic circuit of the hippocampus. So the hippocampus is at the apex of a processing hierarchy, information coming in from sensory areas moving to higher-order areas. And I know you've had a lot of discussion and talks about the deep layer or multilayer, perhaps, hierarchical organization of high-level sensory coding systems like the visual system.

Well, as you move to higher and higher levels, at the top of that processing hierarchy in which you have convergence of high-level associative modality-specific processing like vision or audition, the top of that hierarchy are really two structures. One is the hippocampus. And the other would be the prefrontal cortex. And so they have very complementary roles. And as I'll discuss a little bit, not surprisingly, the hippocampus and the prefrontal cortex are connected and communicate during the formation and use of memory in the context of goal-directed behavior and tasks that require evaluation of likely future outcomes based on past experience.

So the hippocampus receives this convergent high-level information from across the brain and then connects back to those brain structures. So it's at the top, receiving information, but also propagates that information back down the hierarchy. And the questions of what the hippocampus is doing and how it is doing it were really brought to the forefront through this historically seminal work involving the patient HM, who underwent a surgical procedure involving the resection of the medial temporal lobes to treat epilepsy.

They went in and scooped out portions of the hippocampus and the adjacent cortex, the entorhinal cortex, which provides most of the input to the hippocampus. They scooped that out. HM, of course, lost the ability to form any new memories of events or experience. And so that focused attention on the hippocampus as a central structure in the formation of experiential or what's been referred to as episodic memory.

And the hippocampus, when you see it in the circuit, is a structurally and functionally old, or phylogenetically old, circuit. Unlike the neocortical circuits that you've been discussing-- for instance, high-level visual cortex-- the hippocampus represents an older cortex that has a simpler, three-layered structure. These are the structures that you see here, with the input coming in from the entorhinal cortex along this pathway, the perforant path.

It makes synapses here, along the primary subregions of the hippocampus. The dentate gyrus, shown here, makes connections in area CA3. CA3 makes connections to CA1, and CA1 to the subiculum. And then this goes back out again. So this is the classic tri-synaptic loop.

And if you unroll this-- it's rolled up like a breakfast roll-- what you find is that the primary cell layers form kind of a single-layered structure, with inputs that are very nicely organized along the axis of the dendrites of these cells. And it's a simpler cortex, something that's evolutionarily conserved-- that is, if you look at the hippocampus in a human or in a rodent or even in amphibians and reptiles, you find that it shares a very similar structure.

And just another note, an evolutionary biological note-- when you look at the organizational structure of the hippocampus being [INAUDIBLE] cortex, it's very similar to another old cortical structure, the olfactory cortex. The olfactory cortex is also a three-layered cortex. And chemosensation-- the ability to identify things in the environment, to identify where you are and where you need to go-- has, in an evolutionary sense, really been based initially on chemotaxis. That's the most primitive, the earliest form of environmental representation; other sensory systems evolved later.

So olfaction and the hippocampus share, structurally, a lot of features. And they represent an older basic functional unit. There are a lot of organizational principles that they share. So I think it's instructive to understand how this simple cortical architecture could implement the high-level or complicated functions that we think are essential for episodic memory processing. So that's the hippocampal structure.

Now, again, historically, John O'Keefe, in the early 1970s, placing electrodes deep into the hippocampus, recording the activity primarily of cells that are near the output side of the hippocampus-- the CA1, or Cornu Ammonis (Ammon's horn) 1 region-- was able to identify what at the time was a surprising characteristic of the hippocampus. And that is that individual cells seemed to exhibit spatial receptive field coding. That is, individual cells would fire whenever the animal is in a certain location in space. He termed these cells place cells, and the receptive fields of those cells place fields.

And that idea of a spatial representation in the hippocampus actually fit very well with some of the work that had been done, including lesion studies of the hippocampus in animals, which found that a primary deficit of hippocampal damage was a deficit in spatial navigation. So this fed into what seemed, at the time, to be a unifying theory of hippocampal function. And that was the theory of the hippocampus as a cognitive map.

And a very influential book was written at the time by John O'Keefe and Lynn Nadel, The Hippocampus as a Cognitive Map, which I strongly recommend reading and revisiting. It's still very prescient and timely. When you go back and see what the view of the hippocampus was at the time, you see that a lot of the early thinking has really held up to the advances that have been made, even in present times. They had a lot of good ideas and a lot of really deep insights into hippocampal function that are worth revisiting.

So the hippocampus-- damage to the hippocampus producing spatial deficits, hippocampal representations involving spatial receptive fields, and the hippocampus sitting anatomically and functionally at the top of this processing hierarchy-- suggested that there was this mechanism for representation of space. And of course, O'Keefe, several years ago, received the Nobel Prize for his work on spatial representation, along with the Mosers for the discovery of grid cells in the entorhinal cortex, which provides the input to the hippocampus. But when you pair that with the earlier historical human work with HM, demonstrating that damage or lesions to the hippocampus produced another kind of deficit-- deficits in episodic memory-- it raised the question: what's the connection between episodic memory in humans and spatial memory in animals, rodents in particular?

And so a simple working hypothesis that can reconcile those two apparently disparate functions and deficits is this: in navigation, you can think of the kind of memory that's formed as linking sequences of spatial locations in time-- evaluating trajectories in space, thinking about where you've been and where you need to go-- and episodic memory as having a similar kind of structure, linking together events as they occur in time. And so the one thing that they share is this critical dependence on maintaining information about temporal, sequential order. Sequence order is important, of course, if you're thinking about not just storing memory but using memory to try to understand causal interactions in the world. Not just what happened, but why did something happen?

And how could we use that information to make things happen? That is the what, the why, and then the how. What happened? Why did things happen-- the causal interactions? And then how could we achieve outcomes based upon those observations, insights, and understanding?

So-- oh, I see a question pop up. Maybe I'll jump on that: do the characteristics of the hippocampus enable one-shot learning? And I'll get to that. So there's this idea of the hippocampus in episodic memory. The thing about episodic memory is that, in principle, it's memory that's formed through a single exposure to an episode. You experience something once and then you're able to maintain that memory, go back and revisit it-- this is the idea of one-shot memory. And there's an important point that has to be made about the idea of episodic memory. And that's the idea that we actually have this veridical memory of experience.

And I always like to say that all memory is actually false memory. And that is, we think we remembered things as they happen. But what we actually do is we try to take our experience, what we do remember, and fit it into existing models. And that is that we remember things as they could have or should have happened as opposed to, necessarily, how they actually happened.

So it certainly is the case that we do form memories based on one shot or singular experience. But we don't necessarily form a memory, an accurate memory, of those events themselves, at least in the long term. So there's an idea that the hippocampus serves to form rapid, perhaps veridical memories of sequential experience for short periods of time, a short-term accurate memory store. But then, over longer time scales, that veridical sequential memory is turned into something that is actually more generalized into previous experientially formed models. And we'll revisit that question when we think about what happens after the formation of memory, the re-processing of information during sleep.

So this one-shot memory-- yes, there might be something rapid that goes on, capturing experience in an accurate way. And then after that experience, over time, that accurate, veridical sequential memory is transformed into something that, while not veridical or accurate with respect to what actually happened, is actually more useful-- something that's more consistent with previous models. You update existing models. And that's what actually persists. So rapid, one-shot capture, but then more persistent, gradual learning that goes on.

And so this is the difference from gradual learning in a lot of artificial systems, which you have to train with lots and lots of samples. One idea is that the hippocampus, on a short timescale, captures these samples and then trains another network-- perhaps neocortical networks-- on the individual samples or instances to form this more generalizable memory. That would be one theory. But the role of the hippocampus in capturing singular, one-shot temporal sequential memory events is going to be our basic working hypothesis.

So the basic strategy for trying to study and understand the hippocampus in this kind of memory will be the use of multiple-electrode extracellular behavioral electrophysiology. And that is putting tiny wires into the brains of rats and placing those electrodes-- the recording surfaces. These are very fine wires, each wire about 10 microns across, with the entire four-wire bundle about 35 microns across, and with insulated wire along the shaft cut to expose the conductive tips.

And what this allows is the recording of extracellular voltage traces-- that is, the electric fields produced by membrane currents generated in nearby cells. Those currents can be generated as the result of the communication between cells: action potentials, which form very small dipoles that are hard to pick up at a distance but can be picked up readily if you're nearby; and then the larger dipoles that are produced by synaptic currents that come in along dendrites. Those can be detected at much larger distances because the dipoles themselves are larger. And these make up what we think of as the local field potential.

So the kinds of signals that you get by listening in to the activity of populations of neurons include spiking activity from nearby neurons and then these macroscopic local field potentials generated by the coordinated, largely synaptic inputs that are received by larger populations of neurons. Having these wires monitored by portable, lightweight headstages that are placed on the animal's head allows these signals to be recorded as animals run around in a minimally constrained, tethered environment. So you can record this activity as animals are running around, basically.
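To make the two kinds of signals concrete, here is a minimal sketch of splitting one raw extracellular trace into a slow "LFP-like" band and a fast "spike" band, then thresholding the spike band. The sampling rate, the synthetic signal, the 5 ms smoothing window, and the 4-standard-deviation threshold are all illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np

fs = 30000                              # assumed sampling rate (samples/s)
t = np.arange(0, 1.0, 1.0 / fs)         # one second of synthetic data
raw = 0.5 * np.sin(2 * np.pi * 8 * t)   # slow, theta-like 8 Hz component
raw[::3000] += 1.0                      # 10 brief spike-like transients

# Crude low-pass: a 5 ms moving average keeps the slow (synaptic, LFP-like)
# components and smooths away single-sample transients
win = int(0.005 * fs)
lfp = np.convolve(raw, np.ones(win) / win, mode="same")

# Spike band: whatever the low-pass removed (the fast transients)
spike_band = raw - lfp

# Simple amplitude-threshold spike detection on the fast band
threshold = 4 * spike_band.std()
spike_idx = np.where(spike_band > threshold)[0]
```

Real systems use proper band-pass filters (roughly 600-6000 Hz for spikes, below a few hundred Hz for LFP), but the division of labor is the same: one filtered copy of the trace for spikes, one for the field potential.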

And so this is what one of these devices looks like. Very fine wire-- an electron microscope image of one of these four-wire bundles, or tetrodes, as we often refer to them. And then this is an electrode microdrive array that allows the adjustment of these fine wires after the animal has been surgically implanted and recovers. And then you can adjust these to place the tips right next to collections of cells.

And in the hippocampus, it's particularly effective because, as I mentioned, the hippocampus itself is a simpler three-layered structure, and the cell bodies are all organized in a single layer. It's a little bit more complicated than that-- there's a little bit more high-level structure. The single cell layer actually separates into slightly more superficial and slightly deeper cells.

But these wires can be positioned very precisely-- within a few tens of microns-- with respect to these layers. So it's possible to precisely interrogate cells in very targeted regions of the hippocampus, and of the brain in general. And you can put them into different places. So you can simultaneously record from cells in, let's say, multiple hippocampal regions like CA1 and CA3, or in the hippocampus and the prefrontal cortex. So the ability to record populations of cells at multiple sites at cellular resolution in a freely behaving animal-- that's the technology. That's the approach.

And so this is what the raw data looks like. I'm not going to go into too much detail-- just to point out that each one of these individual points represents a detected action potential. So we're looking at voltage traces, filtered at higher frequencies to see these rapid transitions, which correspond to the signals generated by spikes. And then we pick up those action potential signals at these four recording sites and use a spatial amplitude triangulation.

And that relies on just the properties of electric fields in space-- that is, they fall off as a function of distance. So an action potential picked up on this wire will be larger in amplitude if the source is closer to it, and this other wire will see the same signal with a lower amplitude. So you can use the relative amplitudes-- the same event detected at different locations-- as a way of identifying the source of those signals. And because you're nearby a whole bunch of cells, you can separate, or resolve, those multiple sources into independent units.

And here-- so this is the clustering of points in an amplitude space. This is just showing two dimensions of a four-dimensional amplitude recording. As you have single events that are recorded across four channels, you plot the amplitude over two channels, and you see here, for instance, the green cluster. These are action potentials which have a large amplitude on channel 1 and a small amplitude on channel 2.

And so from that you can deduce that the cell-- the dipole generating this unit-- must be close to channel 1 and far from channel 2. Conversely, if you look here at the yellow cluster, it's large on channel 2 and small on channel 1, so that cell must be over here: close to channel 2, far from channel 1. And so this clustering separates things out based on the relative-amplitude-inferred location. And then you measure the activity of those units during behavior.
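The separation step just described can be sketched as nearest-centroid assignment in the 4-D amplitude space: each detected spike yields a peak amplitude on each of the four tetrode channels, and events from the same cell cluster together. All the numbers below (amplitudes, centroids, the two-unit labeling) are made up for illustration; in practice the clusters are found by manual cutting or algorithms such as k-means or mixture models.

```python
import numpy as np

# Peak amplitudes (uV) on the four tetrode channels for six detected events
events = np.array([
    [120.0,  20.0, 15.0, 10.0],   # big on ch1 -> source near channel 1
    [115.0,  25.0, 12.0,  8.0],
    [ 18.0, 110.0, 22.0, 14.0],   # big on ch2 -> source near channel 2
    [ 22.0, 105.0, 18.0, 11.0],
    [119.0,  18.0, 14.0,  9.0],
    [ 20.0, 108.0, 20.0, 12.0],
])

# Hypothetical cluster centroids (e.g. found earlier by k-means on many events)
centroids = np.array([
    [118.0,  21.0, 14.0,  9.0],   # the "green" unit, near channel 1
    [ 20.0, 108.0, 20.0, 12.0],   # the "yellow" unit, near channel 2
])

# Assign each event to the nearest centroid in 4-D amplitude space
dists = np.linalg.norm(events[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)
```

The key point matches the slide: the same event appears on all four channels, and it is the *ratio* of amplitudes across channels, not any single channel, that identifies the source.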

And the behavior that I'll just illustrate here is the classic behavior that was first demonstrated by John O'Keefe. And that is navigation in an open field, or on a small tabletop. In this case, it's enclosed so the animal doesn't jump off, basically. It's enclosed. And then there are visual cues in there.

And this was work that I did way back, when I was first recording from these populations of cells in the hippocampus using this approach. What you see here is an illustration of about 80 cells simultaneously recorded as an animal explores for about 10 minutes in the box. And so what you see here is the basic property of the hippocampus, which is place cells. The color coding represents the firing rate, or the density of spiking, as a function of position with respect to the box. And each panel here is like a top-down view of the box that you saw in the previous slide.

And each panel is a cell. So this cell fires whenever the animal's in the middle of the box. This cell fires when the animal's on the right-hand side. This one when it's on the lower side, et cetera. And so what you find is that about 30% to 50% of the cells show some kind of spatial receptive-field-like properties. There are a handful of cells that seem to fire all over the environment. And then there are many cells that don't fire at all.
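A rate map like the panels described here is essentially spike counts divided by occupancy time in each spatial bin. The sketch below simulates a session with one idealized place cell and builds its map; the field location, firing rates, bin sizes, and 200 ms sampling interval are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated session: positions sampled every 200 ms in a 1 m x 1 m box
n = 3000                                           # ~10 minutes of samples
pos = rng.uniform(0.0, 1.0, size=(n, 2))           # (x, y) in meters

# An idealized "place cell": ~20 Hz within 15 cm of the box center,
# near-silent elsewhere
d = np.linalg.norm(pos - 0.5, axis=1)
rate = np.where(d < 0.15, 20.0, 0.5)               # Hz
spikes = rng.poisson(rate * 0.2)                   # counts per 200 ms sample

# Rate map = spikes per bin / seconds of occupancy per bin
bins = 10
spike_map, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=bins,
                                 range=[[0, 1], [0, 1]], weights=spikes)
occ_map, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=bins,
                               range=[[0, 1], [0, 1]])
occ_s = occ_map * 0.2                              # seconds in each bin
rate_map = np.divide(spike_map, occ_s, out=np.zeros_like(spike_map),
                     where=occ_s > 0)              # Hz, 0 where unvisited
```

The occupancy normalization matters: a bin the animal rarely visits can contain few spikes yet still have a high firing *rate*, which is what the color code in the panels shows.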

And so this is pretty typical of the hippocampus: you have a large population of principal cells, and these cells that fire all over are actually a different class of cells, the inhibitory interneurons, which are connected in sort of a local circuit configuration to inhibit the activity of the principal cells-- the pyramidal cells shown here. So you have principal cells with place fields; inhibitory cells with high firing rates and broad, distributed firing that still carries spatial information-- even though such a cell seems to fire all over the place, it's still carrying a lot of spatial information; and then you have a lot of cells that are silent.

Now, these silent cells are still place cells. If you take the animal out and put it in a different environment, these cells are perfectly capable of expressing place fields. It just happens that they were not engaged in this task. And the sampling seems to be somewhat random. That is, if you take this pool of 80 cells, put the animal into a new box, and ask which cells are going to fire, it's basically a random draw of about 30% to 50% of the cells that will have place fields.

And the locations of those fields are also random. That is, the fact that this cell fires in the middle of this box-- if you take the animal and put it into a different box, this cell might fire, or it might not. It's just a random draw. And the location of that receptive field is, again, also going to be random. So what it means is that the spatial code for environments is something that is distributed across the hippocampus.

If I were to record the entire hippocampus, I would find that about 30% to 50% of all hippocampal cells in any given environment are going to be active. So very distributed. And it seems to be relatively random. And that is, the code, while consistent for a given environment, is uncorrelated across environments.

Now, there has been a lot of work trying to ask the question: is it really random? Or is there some kind of structure that can be predicted? That's certainly something that we could discuss. That is, if you make environments that share characteristics, are there features of the hippocampal representation that might reflect that kind of similarity?

But to first order, unlike systems like the visual system, there's no real topography. That is, there's no way of saying-- unlike in the visual cortex, where if I put an electrode into a particular location I can pretty much predict what kind of stimuli those cells are going to fire to, and where in visual space those cells are going to fire, because there's a predictable structure to the organization of cells in sensory areas.

That kind of structure is not present in the hippocampus-- except when you step out to a larger scale and think about organization along the long hippocampal axis, for instance. The activity you're seeing here is recorded from a more restricted region of the hippocampus, the dorsal hippocampus. Think about the hippocampus as sort of like a banana-- it has this curved shape, with an upper part and a lower part in the rodent, [INAUDIBLE] dorsal and ventral hippocampus.

Well, in the dorsal and ventral hippocampus, while the circuit looks very similar, they actually get inputs from different regions of the entorhinal cortex. Different parts of the entorhinal cortex get input from different higher-order brain areas that converge on it-- in particular, inputs that come from the so-called dorsal and ventral streams. The information coming along the dorsal stream-- all the spatial information-- converges on the more medial, dorsomedial portions of the entorhinal cortex, which then provide the input into the dorsal part of the hippocampus. So those are more spatial. You get more place fields there. Damage to the dorsal hippocampus produces spatial deficits.

The more ventral part of the hippocampus gets inputs from regions of the entorhinal cortex that are not as spatial. They get input from the amygdala. They connect with the prefrontal cortex. And damage to that part of the hippocampus, the ventral hippocampus, seems to impact things like non-spatial memory and social memory. So it's a similar function. And it's interesting to think about.

So thinking about temporal sequential encoding of space-- that's the basic mechanism. What about temporal sequential encoding in social interactions or in other non-spatial contexts? What would that look like? And so that starts to connect together this idea of the hippocampus performing a common computation on different kinds of information that themselves have some kind of common computational constraint or imperative.

It's like, you still have to encode sequences of events, whether they're spatial or non-spatial, and then try to make sense of the causal interactions that are embedded in those events. And then also try to figure out, well, how can we understand those causal interactions so that we can use them to achieve some subsequent objective when we want to use that information? That is, in planning and goal-directed behavior. So that's just a long-winded way of saying there's some organization in hippocampus. And you can see that basic organization of place cells here, in this illustration of activity just recorded over 10 minutes as an animal explores a box.

But there's another property of these hippocampal cells that comes out when you record activity not while the animal's wandering around in a two-dimensional environment but on a constrained, one-dimensional environment-- a linear track. And so here, this is about a 2- to 3-meter track. It's a C-shaped track. The animal gets a reward, or food, at each end. And it just runs back and forth from one end to the other, alternating back and forth along these paths.

And when animals run on constrained tracks like this, what you find is that cells don't just fire as a function of the animal's location-- that is, they don't just have spatial receptive fields as a function of position. They also express this property of what's referred to as directionality. For instance, this yellow cell will fire when the animal runs through this location in this direction, but not in this direction. The red cell fires here when the animal goes through this way, but not this way. The blue cell fires when the animal goes through in this direction.

So there's a set of place fields-- a unique spatial code-- for each direction. And what that means is that not only do cells fire uniquely for position and direction, but if you think about what happens as the animal runs along a trajectory, there's going to be a unique sequence of place cells that are active along each path. So each trajectory-- going, let's say, from top to bottom and from bottom to top-- gets its own unique sequential code as a result of this directionality.

Now, this question that came up about the nature of the spatial information the cells are capturing-- that's something that we'll get into in these sorts of environments. And so one thing that's important-- I showed you that environment. You've got a box. You've got visual cues. And there's been a lot of work trying to understand, well, what actually controls these place fields? Are they just responding to the configuration of, for instance, visual cues?

There are theories of hippocampal function-- for instance, the relational or configural theories-- that say the reason you get firing in this location is because, when the animal's at this location, there's a unique configuration of, for instance, what the animal is seeing or hearing. So a unique configuration of environmental cues. And that is true, to some extent. The cells will respond to things like visual cues. And if you move a visual cue around a little bit, you can influence the place fields.

But one important point is, if you look at these fields and then you just turn off the lights so the animal sees nothing-- there are no visual cues-- these receptive fields will persist. The place fields persist in the absence of any kind of external sensory information. And that means self-motion information alone is sufficient to drive the spatial representation. The animals are keeping track of where they are internally based on self-motion cues. So this is the idea that there's some sort of internal path integration-- mechanisms for monitoring odometry that keep track of how the animal is moving: integrating linear and angular velocity and acceleration, counting footfalls, motor efference. So the animals are keeping track of what they're doing and then updating an internal model, not just relying on external cues.

And a very influential theory of these kinds of spatial representations posits a combination of these two streams of information: an internal updating, or path integration, mechanism, combined with some kind of external updating, or validation, of that internally updated model using external cues. So you path integrate and keep track of where you think you are. And then every once in a while you look at the configuration of cues and see whether they're consistent with the internal estimate. If they are, you're fine. If they're not, then you may have to update your internal positional estimate.

And this use of external sensory information plus internal odometry has been used in models of synthetic, robotic navigation-- the very influential SLAM models, for instance. Simultaneous localization and mapping uses a similar kind of principle and strategy: keep track of an internal map based on self-motion. But that's going to be error prone, as your path integrator is going to accumulate errors. So you've got to correct or update it every once in a while.

And so you use the evaluation of what you expect to see at certain locations-- the configuration of cues you expect-- to update that estimate. You have an internal model of what you expect to see when you're at a given location. That is, position predicts expected visual cues. But also, external visual cues can be used to predict position, as position is a function of cues and cues a function of position.
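The two-stream idea above can be sketched as dead reckoning with occasional landmark fixes. This is a toy illustration, not a real SLAM implementation: the odometry noise level, the every-100-steps fix schedule, and the 0.8 correction weight are all arbitrary assumptions, and the "landmark fix" is idealized as a noisy observation of the true position.

```python
import numpy as np

rng = np.random.default_rng(1)

true_pos = np.zeros(2)     # where the agent actually is
est_pos = np.zeros(2)      # internal, path-integrated estimate

drift = []
for step in range(1000):
    move = rng.normal(0.0, 1.0, 2)             # true self-motion this step
    true_pos = true_pos + move
    # Odometry is noisy, so the internal estimate accumulates error
    est_pos = est_pos + move + rng.normal(0.0, 0.05, 2)
    # Every 100 steps, a recognized landmark yields a noisy position fix,
    # and the estimate is pulled strongly toward it
    if step % 100 == 99:
        landmark_fix = true_pos + rng.normal(0.0, 0.1, 2)
        est_pos = 0.2 * est_pos + 0.8 * landmark_fix
    drift.append(np.linalg.norm(est_pos - true_pos))

drift = np.array(drift)
# On average, drift grows between fixes and shrinks right after each fix
```

The qualitative behavior is the point: pure path integration drifts without bound, and even infrequent, noisy external fixes keep the estimate anchored, which is the role proposed for the configuration of visual cues.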

So this idea of directionality is really important. Because if you're just thinking about representations in a static sense, you say, oh, the animal is at a location; the cell fires as a function of position and direction. So there's some 1D, 1 and 1/2 D, 2D representation. But if you think about it in time, you realize that directionality gives you the capacity to encode temporal sequences. And so how would you encode these temporal sequences?

I'm just going to show a little movie of place cells. This is just raw data. It'll illustrate how pronounced this spatial receptive field property is-- it doesn't require a lot of processing. You can essentially see it in the raw data.

And so here, this is going to have some audio as well. And so we're recording raw activity, spiking activity. You're seeing, in the left panels, those spikes as they're being detected.

[CRACKLING]

And there's this color coding, again, based on the amplitude profile. The axes here are channel 1, channel 2 amplitude.

And so you see, for instance, the blue cell fires when the animal's here. The light blue cell fires here. This is the [INAUDIBLE] in the raw data. [INAUDIBLE] you see is that when the animal stops-- if you just listen briefly here, the animal stops. You see-- hear that burst of activity? You heard and saw this burst of activity.

It's another important property of the hippocampus, and that is the spatial [INAUDIBLE] as it relates [INAUDIBLE] but really expressed as they're moving. When they're moving, activity is coordinated by this macroscopic rhythm, the theta rhythm, at about 10 Hertz [INAUDIBLE]. So this is something you can hear. [INAUDIBLE] hear it.

And so what this is, it's a temporal organization to the population activity which you can hear. And that seems to be tied in some way to the expression of spatial correlate. And when the animal stops, that rhythm goes away and you seem to lose the spatial coding properties of the cells.

But we'll see when we look at it more closely that the spatial coding property doesn't go away. It's just expressed in a different form. When the animal is moving, it is both representing and expressing a temporal sequence code of what the animal is doing. When the animal stops, it expresses a temporal sequence code of things that it has done and could do. That is, it re-expresses spatial sequences that are not necessarily tied to its present state, but might express previous or future states. It's reactivating or replaying spatial memory sequences.

And just to illustrate how we can take that raw population code of hippocampal place cells and convert it-- decode it-- into an explicit spatial code: here we apply a simple Bayesian decoding algorithm. In the previous slide, we were illustrating the probability of firing as a function of the animal's location. But you can invert that-- instead of asking the probability of firing given location, you can ask, what's the probability of the animal being at a given location given the firing? That is, you estimate position from the probability distributions you generate by recording place cells. So if you see, for instance, the blue place cell firing, what's the likelihood that the animal is at any given location on the track?

And if you apply that, in this case, every 200 milliseconds-- that is, every fraction of a second you ask which cells are firing and then estimate the probability, based on that firing, that the animal's at any given location on the track. And then you represent that probability as a triangle. The size of the triangle represents the magnitude of the probability, and the direction of the triangle represents the direction, in addition to the position-- that's incorporating the directionality, the direction the animal would likely be facing in order to get that activity.
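
To make the inversion concrete, here is a minimal sketch of that kind of Bayesian decoder-- not the lab's actual code, and with invented Gaussian tuning curves-- using a Poisson spike-count likelihood and a flat prior over track positions:

```python
import numpy as np

def decode_position(spike_counts, tuning_curves, dt=0.2):
    """Bayesian (Poisson) decoder: posterior P(position | spikes) for one bin.

    spike_counts  : (n_cells,) spikes observed in one time bin
    tuning_curves : (n_cells, n_positions) mean firing rate (Hz) per position
    dt            : bin width in seconds (0.2 s, as in the talk)
    """
    expected = tuning_curves * dt  # expected spike counts per bin at each position
    # Log Poisson likelihood with constant terms dropped; flat prior over positions.
    log_like = (spike_counts[:, None] * np.log(expected + 1e-12) - expected).sum(axis=0)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

# Toy example: 3 cells with Gaussian place fields on a 100-bin track.
positions = np.arange(100)
centers = np.array([20, 50, 80])
tuning = 15.0 * np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / 5.0) ** 2)

counts = np.array([0, 3, 0])  # only the middle cell fired in this bin
posterior = decode_position(counts, tuning)
print(positions[np.argmax(posterior)])  # posterior peaks at that cell's field center
```

The same decoder run at 20-millisecond bins instead of 200 gives the finer-grained, within-theta-cycle estimates discussed later in the talk.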

So when you do that-- again, the green circle just represents where the animal actually is, and here you're decoding the probability. And this just says, yeah, there really is a place code. Because look, the ongoing activity when the animal's moving corresponds to the animal's correct location. But the interesting thing is that when the animal stops, the hippocampal code no longer decodes the animal's current location. Instead you see triangles, and they seem to jump around the track. And you can actually slow that down and look at it in more detail, and the triangles are not being expressed at random locations along the track. They're being expressed as sequences of locations along the track.

And that occurs not only when the animal stops, but also when the animal goes to sleep. So here you can see an example in real time [INAUDIBLE] decoding-- for each episode of about half a second of activity, you're getting the expression of a sequence. You follow the triangle sequence and see them running along this path here.

And they're also expressing another property of that reactivated sequential memory that's interesting, and that is, the sequence is being replayed in reverse time order. And so you can ask: what would the computational benefit be of having the capacity to re-examine temporally structured events in reverse time order?

And one working hypothesis is that you can actually use this to solve, with limited samples, reinforcement learning problems involving temporal credit assignment-- figuring out exactly how to follow the gradient of expected reward in order to achieve some sort of goal. That can be facilitated by evaluating sequential events in forward and reverse time order.
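
As a toy illustration of that hypothesis-- a standard TD(0) sketch, not a model from the talk, with all parameters invented-- replaying the experienced transitions in reverse temporal order lets a single sweep of updates carry the reward signal back along the whole trajectory, whereas the same sweep in forward order only updates the step adjacent to the reward:

```python
# Toy TD(0) sketch of why reverse replay helps temporal credit assignment.

def td_updates(values, transitions, alpha=1.0, gamma=0.9):
    """Apply one TD(0) update per (state, next_state, reward) transition."""
    v = values[:]
    for s, nxt, r in transitions:
        v[s] += alpha * (r + gamma * v[nxt] - v[s])
    return v

# Experienced trajectory: states 0 -> 1 -> 2 -> 3 -> 4, reward on the last step.
transitions = [(0, 1, 0.0), (1, 2, 0.0), (2, 3, 0.0), (3, 4, 1.0)]
v0 = [0.0] * 5

v_fwd = td_updates(v0, transitions)        # forward replay: one step of progress
v_rev = td_updates(v0, transitions[::-1])  # reverse replay: full backward sweep

print(v_fwd)  # only state 3, next to the reward, has learned anything
print(v_rev)  # reward information has propagated all the way to the start
```

With limited samples-- here, a single pass over one trajectory-- the reverse ordering is what makes the value gradient back to the start state available immediately.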

And you can also imagine that evaluating sequences in forward and reverse order might allow you to infer not just expected reward, but causal interactions. Did A really cause B? Well, if A caused B, then B should be contingent on A occurring before it, not after it. So evaluating both forward and reverse order lets you use this not just to evaluate reward, but to evaluate causal interactions that you can use for planning.

And the expression of those reactivated events-- the transition from the theta rhythmic to the non-theta rhythmic state-- you can see here in the macroscopic field potentials. These are recordings made by [INAUDIBLE] years ago. And what you see during locomotion, when animals are walking, is this coordinated rhythm, expressed as a local field potential, which reflects the synaptic inputs coming into large populations of cells.

So we're not looking at spiking activity here. So there's the rhythm. And then the animal stops, and very rapidly-- within about half a second-- you get a transition. The rhythm goes away.

And now you see these so-called sharp waves, that is, these big deflections in the local field potential. They're big deflections because they represent highly synchronized bursts of synaptic input. A lot of synaptic input coming into a lot of cells at about the same time gives you these sharp waves.

And if you could zoom in on one of these, you would see that on top of these sharp waves there's a high-frequency oscillation known as a ripple. So that's the characteristic of what I refer to as offline behavior, when an animal is either asleep, or it's awake but inattentive-- engaged in behaviors that don't require it to evaluate the external world.

And that could include things like eating, grooming, copulating. All these things are referred to as consummatory behavior, which is behavior that satisfies an internal state as opposed to an external one. You don't have to worry about what's going on in the outside world. You just have to worry about what's going on in the inside world.

And so that's this inattentive state. That's when you get these sharp wave ripple events. And of course, sleep is the extreme manifestation of that. That's when you're really not attending to the outside world and it's all internally driven.

And so here is an illustration of the combination of the local field potentials and the spiking activity. This is just about a second and a half of activity-- in red, recorded from the hippocampus, and in black, recorded from another structure, the prefrontal cortex, as I mentioned.

So you see a very pronounced theta rhythm in the local field potentials shown in red here-- this is the hippocampal theta rhythm. And then in black you see the prefrontal cortex, and you don't see much of a rhythm. And so there's been a lot of debate over the theta rhythm itself, whether it's really that relevant, because you only see it in, let's say, the hippocampus or maybe associated structures. So how widespread is the theta rhythm? Maybe it's just something idiosyncratic, not really that important.

But look at the spiking activity in the prefrontal cortex-- the black ticks here, recordings from cells in the prefrontal cortex. The dotted lines show alignment to the peaks of the theta oscillation recorded in the hippocampus. And what you see is that spikes generated by cells in the hippocampus, the red ticks, have a preferred phase.

And that is not surprising: the theta rhythm reflects the coordinated synaptic input to the hippocampus, and cells spike at certain points in that time-varying synaptic input. They fire at the peak, generally. But cells in the prefrontal cortex are also locked to that. That is, they're phase locked to the hippocampal theta rhythm.

So even though you can't see the rhythm in the local field potential-- and that could result from a lot of things; the prefrontal cortex, unlike the hippocampus, is a six-layered structure, so the dipoles are more complicated and you might not see it in the field potential-- you can see in the spike timing that the prefrontal cortex clearly knows about and cares about the theta rhythm and the timing of activity with respect to it. So there's something important about the timing of spikes with respect to the theta rhythm, both in the hippocampus and the prefrontal cortex.

And this raw data illustrates another phenomenon, one that John O'Keefe discovered in the early 1990s. Look at the timing of the spikes here-- at low firing rate, one spike generated here-- and this is in time. You can imagine the animal actually running into the place field of this cell as the firing rate gradually increases.

Here it's entering the field; here it's nearing the center of the field. And the timing of the first spike here occurs at the peak, and then here, earlier and earlier relative to the peak. That is, there's a relationship between the phase of firing and the firing rate, or the position of the animal within the receptive field.

And that property is illustrated here, in this cartoon, which overlays, in red, the presumed time-varying or theta-rhythmic modulation of excitability, reflected in time-varying inhibition, and then the spatially-graded excitation as the animal moves through the field. And again-- there was a question about sensory input and how it influences hippocampal representation-- you imagine that the sensory input coming from, for instance, configurations of visual cues increases in magnitude as the animal gets closer to those cues.

So you imagine that excitation has this spatial component and inhibition has this temporal component. Then you apply a simple biophysical rule that says spikes will be generated when excitation exceeds inhibition, and you ask, when would those spikes be generated in this simple model? When excitation is low, you have to wait until inhibition drops all the way down to here-- with low levels of excitation, you're going to have late-phase firing. And as excitation increases, the cells can fire earlier and earlier. So that posits a relationship between the magnitude of input and the timing, or relative phase, of the subsequent output.
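
A minimal numerical sketch of that threshold rule (all waveforms here are invented for illustration): cosine-shaped inhibition across one theta cycle, a fixed level of excitation, and a "spike" at the first phase where excitation exceeds inhibition. As excitation grows, the spike phase gets earlier:

```python
import numpy as np

# Threshold-crossing sketch of phase precession: theta-modulated inhibition
# plus a level of excitation; a spike fires at the first phase in the cycle
# where excitation exceeds inhibition.

phases = np.linspace(0, 2 * np.pi, 1000)
inhibition = 0.5 * (1 + np.cos(phases))  # maximal at phase 0, minimal at pi

def first_spike_phase(excitation):
    """Phase of the first point in the cycle where excitation > inhibition."""
    idx = np.argmax(excitation > inhibition)  # index of the first True
    return phases[idx]

# As the animal moves deeper into the field, excitation increases...
for exc in [0.05, 0.3, 0.6, 0.9]:
    print(exc, first_spike_phase(exc))
# ...and the spike phase comes earlier and earlier in the cycle: precession.
```

Nothing here depends on the exact shapes; any periodic inhibition plus monotonically ramping excitation yields the same magnitude-to-phase mapping.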

So this is a cartoon-- a proposed model for how you would generate what's illustrated here in the actual data, and that is the property known as phase precession. What John O'Keefe discovered is that the timing of spikes in the hippocampus relative to the theta rhythm changes as a function of position. And this relationship was so pronounced that O'Keefe, when he discovered it, argued that the actual spatial code in the hippocampus is not carried in the firing rate, but rather in the phase.

That is, there's a temporal code for space. If I want to know whether the animal is in, let's say, the region of space between 210 and 280 centimeters, I can deduce that from the overall firing rate: the cell is firing here, so there's some probability that the animal is in this location. But if I want to know exactly where the animal is within this field, I can simply look at the phase. If I see firing here in the late phase, I can say, not only is the animal in this receptive field, but it's in this relative position with respect to the receptive field-- it's in the left half of it as opposed to the right half.

Now, while that idea of temporal coding seemed consistent with this property of phase precession, if you actually look at phase precession empirically, what you see is that here, in the late phase-- that is, in the early part of the spatial receptive field-- the linear correlation between phase and position is pretty good. But in the second half of the field, that relationship breaks down.

That is, forget about phase for a moment and just look at the place field-- the second half of the place field versus the first half. In the second half, you see lots of spikes, but they have a very poor phase correlation. In the first half of the field, you have few spikes, but they have a very good temporal correlation.

So this suggests that you can actually split the place field into two parts: a high-firing-rate but low-temporal-code region, and a low-firing-rate but high-temporal-code region. That is, not only is there a phase-by-position code, but there's a phase-by-rate code and a phase-by-temporal code-- different coding regimes. Different phases-- early phase versus late phase-- carry different kinds of information. The late phase carries sequential information; the early phase carries rate information.

And so following up on that idea-- we're getting close to the end, so I'm probably not going to get to the offline sleep stuff. That's OK. I think getting these temporal coding issues down is important.

Think about how phase precession-- the temporal coding portion of phase precession-- would be expressed not just in single cells, but across a population of cells. So this is like two place cells with place fields one before the other. As an animal moves through these place fields, activating place cell 1 and then place cell 2, and you apply this phase precession or temporal code, what you see is that the spatial order is transformed into sequential order-- phase order within the theta rhythm.

As the animal moves through place fields 1 and 2 on this behavioral timescale-- it might take 100 milliseconds, it might take a second; it really depends on how fast the animal is moving-- this variable, long-timescale sequential place field activation is transformed into a very consistent, short, biophysical-timescale spike sequence.

So you capture spatially-ordered events and transform them into temporally-ordered events in each and every theta cycle. As the animal's going through these fields, in each and every theta cycle these cells will fire 1, 2, 1, 2, 1, 2. That is, you preserve and express, in compressed form, sequentially ordered events through this temporal or phase code with respect to the theta rhythm. So this very simple biophysical mechanism can transform a spatial code into a temporal code, and also transform it from a variable behavioral timescale into a fixed biophysical timescale, which might be useful for things like spike-timing-dependent plasticity downstream.
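
The compression step above can be sketched by extending the threshold-crossing idea to two cells (again, all parameters invented): two place cells whose fields the animal traverses seconds apart still fire in the same order, tens of milliseconds apart, within every theta cycle.

```python
import numpy as np

# Sketch of theta-sequence compression: behavioral-timescale field order
# re-expressed as a fixed within-cycle spike-timing order.

theta_hz = 10.0               # ~10 Hz theta, as in the talk
cycle_ms = 1000.0 / theta_hz  # 100 ms per cycle

def spike_time_in_cycle(excitation, cycle_start_ms):
    """First time in a cycle where excitation exceeds cosine-shaped inhibition."""
    t = np.linspace(0.0, cycle_ms, 1000)
    inhibition = 0.5 * (1 + np.cos(2 * np.pi * t / cycle_ms))
    return cycle_start_ms + t[np.argmax(excitation > inhibition)]

# Between the two fields, cell 1 (entered earlier, deeper into its field) is
# more strongly driven than cell 2 (whose field lies just ahead).
for cycle in range(3):
    start = cycle * cycle_ms
    t1 = spike_time_in_cycle(0.8, start)  # cell 1: strong drive, early phase
    t2 = spike_time_in_cycle(0.2, start)  # cell 2: weak drive, late phase
    print(cycle, round(t1 - start, 1), round(t2 - start, 1))
# Cell 1 fires before cell 2 in every single cycle: the 1, 2, 1, 2, 1, 2
# pattern, with a spike-timing offset on a biophysical (tens of ms) timescale.
```

The within-cycle offset is fixed by the excitation levels, not by running speed, which is what makes the compressed sequence a consistent substrate for spike-timing-dependent plasticity.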

And so that's the cartoon. Does this actually happen? I'll just illustrate some of the raw data here showing populations of cells preserving this order. Here are individual theta cycles, and you can see that within individual theta cycles you do indeed get these little sequences. But you can also do this using Bayesian decoding instead of looking at spike sequences. You can ask, what's the spatial information that is expressed during individual theta cycles?

So here, now, we're decoding hippocampal spiking activity not at 200 milliseconds but at 20 milliseconds. When you decode at 20 milliseconds, the positional probability code is illustrated here in gray-- the probability that the animal is at a given location given the population spiking in each of these successive 20-millisecond bins.

And what you see is, the dotted line is where the animal actually is, and the gray shows the decoded positional code. The decoded positional code sweeps, repeatedly, every theta cycle, from a position just behind the animal to just in front of the animal. And if you then look at the average relative positional code across theta cycles-- you take the relative decoded position as a function of theta phase and represent it here-- this is the actual hippocampal code for space. It's a sequential code that sweeps from just behind the animal to just in front of the animal every theta cycle.

And so you see this theta-modulated temporal code. Is it meaningful? Is it really a code? Because a code is more than a representation-- it's not just that there's information there. A code posits that somebody is actually decoding it, that it has some influence downstream. Does anybody downstream care about it?

So here, you can look at the hippocampus and prefrontal cortex. We're running short on time, so I'll just quickly point out: the hippocampus and prefrontal cortex are both at the top of the processing hierarchy. Damage to either will produce high-level cognitive deficits, like deficits in spatial navigation, perhaps for different reasons. And in a simple task like this T maze, damage to either of them will give you deficits.

They communicate with one another. And in primates, the prefrontal cortex has been talked about as having the property of working memory-- that is, the prefrontal cortex uses immediate task-relevant information to direct task-relevant behavior. And that can include holding in memory information about recent experience, so-called working memory.

If we record in the hippocampus and prefrontal cortex in rodents, again, you see this property of spike phase locking-- prefrontal cells lock to a certain phase of the hippocampal theta rhythm. We record field potentials and spikes in both structures during a simple task, where the animal starts from one of these two locations, runs down a central arm, and then has to choose an arm based on where it came from. So if it starts in the upper arm and runs down here, it has to go to the upper arm.

So it has to remember where it came from and then transform that into the rule: if I start on the upper one, I go to the upper one. That's the spatial working memory component. And you imagine that spatial working memory is executed here, in the central arm, where the animal has to quickly remember, where did I come from? And what am I supposed to do based on that?

And so we can look at activity in this region and compare it with activity when the animal's basically doing the same thing in the other direction: it turns around, comes back down this arm, goes down here. But in that direction, instead of the animal having to remember where it came from and choosing, we have a little door that just forces it into one of these arms. So the task is behaviorally symmetric in terms of what the animal does, but cognitively asymmetric in why it does it. In one direction, the animal chooses; in the other, we choose.

And so we can compare phase locking to the hippocampal theta rhythm during this task under a couple of conditions. We can ask, do prefrontal spikes care about hippocampal phase, and where do they care about it? And what you find is that they are phase locked in that central region when the animal is moving in the choice direction and gets the choice correct.

If the animal is going in the choice direction but gets it incorrect, you don't see phase locking. And if it's going in the opposite, forced direction, where there's no working memory demand-- again, no phase locking. So theta phase locking seems to correlate with the accurate use of working memory, and that is, presumably, effective communication between the hippocampus and prefrontal cortex.

And you don't see any difference in the hippocampus-- hippocampal spikes are phase locked regardless of whether the animal's choosing or not choosing, getting it right or getting it wrong. It's only the hippocampal-prefrontal interaction that seems to be selective, and it's selectively engaged just in that central region, just before the animal chooses, and just when the animal actually gets it right. So it's an indication that the theta rhythm can be transiently used to synchronize or coordinate communication between structures that need to communicate in a task-dependent fashion.

And you can also see this not in the spikes, per se, but in the field potentials-- the local field potentials show this selective coherence. They tend to phase lock under conditions when the animal is choosing-- that's in red, the correct choice behavior-- and in gray is the forced direction. So you see a selective coherence in the theta band, this [INAUDIBLE].

So there was a question about event-specific rate remapping. I think it's a great question. I didn't talk very much about remapping-- that is, why would you want to change the code as a function of experience? One idea is that you want to be able to separate representations, separate memories, based on things that might reflect distinct experiences.

Like the map itself: every time the animal goes back to a particular location, you get the same basic activity. So you could say, well, that's so you can actually recognize previous events. There's this idea that recognition is a capacity that requires [INAUDIBLE]-- yes, I want to remember or recognize something that I've already seen before. But recall-- being able to recall specific experiences-- is different: every experience may be distinct or unique, and so I need to have a separate code for that.

And so there are a couple of ways in which you could have, simultaneously, something that is stationary or consistent across revisitations-- every time you go back, you get the same activity-- but also distinct, so that every time you go back, you can capture or encode the unique experience. And that can come from the distinction between the temporal and the rate code. You can have a common rate code-- the same rate activity at a given location-- but a distinct temporal code, that is, a different sequence at that location.

So that's thinking about experiential coding. But it may also be the case that you want to separate apparently similar locations or environments or contexts, so that when I go back to a location-- oh, I recognize that even though it looks the same, this is actually not room A, this is room B. And that is the idea of completely remapping the representations, or what's referred to as global remapping.

So in the first case, you subtly change the sequences while keeping a common rate code: where the cells fire is similar, but how they fire might be different, and you could use that for episodic encoding. There's also so-called rate remapping-- changing the firing rate over time, or based on subtle changes in the environment. And then global remapping is completely reconfiguring the representation.

And so I think the question here, event-specific remapping, is really about these different coding regimes-- rate and global remapping-- as well as the triggers for those remapping events. Exactly when you trigger rate remapping, or remapping of the global configuration, would require a bit more time to go into. But the idea is that rate and global remapping are subject to these triggering constraints-- there are different ways in which you could trigger that.

We have a paper, actually, just recently published with Honi Sanders, a post-doc in my lab who had worked with John Lisman, done along with Sam Gershman. It posits a sort of Bayesian inferential framework that can predict when you should remap: when cues and conditions suggest that information is being drawn not from a single distribution, but from multiple distributions. That is, there's not just one thing-- there are actually two predictable, distinct things that can and need to be distinguished.
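
A toy sketch in the spirit of that framework-- not the paper's actual model, and with all numbers invented-- compares the likelihood that a set of cue observations came from one underlying context versus two distinct contexts; remapping is predicted when the two-context account wins:

```python
import math

def gauss_loglik(xs, mu, sigma=1.0):
    """Log likelihood of observations xs under a Gaussian(mu, sigma)."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in xs)

cues = [0.1, -0.2, 0.0, 5.9, 6.1, 6.0]  # cue values observed across visits

# Model A: a single context (one distribution centered on the overall mean).
mean_all = sum(cues) / len(cues)
one_context = gauss_loglik(cues, mean_all)

# Model B: two contexts (cluster the visits and fit each cluster separately).
low = [c for c in cues if c < 3]
high = [c for c in cues if c >= 3]
two_contexts = (gauss_loglik(low, sum(low) / len(low))
                + gauss_loglik(high, sum(high) / len(high)))

print(two_contexts > one_context)  # the cues support two distinct contexts,
                                   # the condition under which remapping is predicted
```

A fuller treatment would penalize the extra parameters of the two-context model (for instance with a prior over the number of contexts), but the core inference-- one distribution or several?-- is the same.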

So, next question: hippocampal-entorhinal semantic space navigation-- yes. There's a lot of recent work on this. And I've always thought of the hippocampus in relation to things like language, like semantic navigation-- the hippocampus provides a framework for spatial navigation.

But the same kind of computations can be applied in non-spatial contexts. There's [INAUDIBLE] work by Aronov and Tank looking at this. Dmitriy Aronov was a graduate student in Brain and Cognitive Sciences working in Michael Fee's lab on songbird coding-- so, similar sequential encoding, but in a sensory system.

And he applied that to the hippocampus, looking at animals performing a frequency detection task-- they have to respond when they hear a particular tone. But he did an interesting thing: he presented these tones not just as discrete tones but as frequency sweeps. So very much like the ramps where I showed you sequences of locations, he gave them sequences of sound with variable frequency.

And he found that when you do that, the hippocampus creates sound-receptive fields-- like place fields, but in a sound space-- and that cells that had place fields could also have these sound fields. So the idea is that when you have predictable sequences, be they spatial, visual sequences or sensory, auditory sequences, the hippocampus is engaged. And it's not a big leap to go from coding sound sequences to the complex auditory sequences that might be present, for instance, in speech or in other contexts. And how might the constraints of spatial navigation carry over-- properties like remapping across different contexts, how you encode short trajectories, the rules you have to apply to predict how you would move in space? Could you apply the same rules for navigating in a semantic space? I think those are all very interesting and deeply insightful questions.

Yeah, closing statements. The message of the rest of the talk would have been: phase matters. And these sequences get re-expressed during sleep, suggesting that this code is not only something that you can see, but something that the system is actually capturing and leveraging.

So sequences clearly matter. And thinking about sequential encoding, and the generalization of sequential encoding across different kinds of tasks-- for instance, spatial and non-spatial tasks-- is, I think, deeply important. It's one of the areas in which current machine learning and artificial intelligence have not yet caught up with the biology. There's very little in the way of really structured sequential processing, temporal coding, in synthetic networks.

Time and time order are incorporated, but not really in the way they're captured in the biology. And so I think it's really important and useful to think about how biology has leveraged internal state representations as a function of time in its processing. Time's important. The hippocampus figured it out.