Multidisciplinarity in Systems Neuroscience (26:03)
Date Posted:
January 6, 2017
Date Recorded:
November 11, 2016
CBMM Speaker(s):
Carmen Varela
Description:
Carmen Varela, a research scientist at MIT, highlights the importance of interdisciplinarity in the study of intelligence, focusing on the hippocampus and its role in the formation of new memories and encoding of spatial information to support navigation. Dr. Varela demonstrates how place cells in the hippocampus of rats encode spatial location as they move through an environment, and briefly describes her work on the role of the thalamus in gating the transfer of information from the hippocampus to the neocortex.
CARMEN VARELA: Hi, everybody. Thank you for coming. We're moving now into a talk that will take us into the other side of the story, which is the biological systems and how they learn. And so I am a neuroscientist. I'm interested in how neural networks, networks of cells inside your brain, encode information and how they implement behavior.
How do they implement memory, learning? How do they change you from going to sleep to being awake? And neuroscience is a field that benefits from and contributes to a wide variety of fields, and you can see some of them there. And we neuroscientists see these in very different ways.
We often, for example, take methods from many of these disciplines, and, in my opinion, one of the reasons that this multidisciplinarity can be so impactful is that, at an abstract level, all of these fields think about similar problems, but they come up with very different solutions. And so by talking to each other, we can exchange ideas. And this is one of the things that, as Julian put very nicely, is happening these days between artificial intelligence and neuroscience.
And starting with this picture, which was also mentioned in the previous talk-- so I like this picture. It's quite interesting because, first of all, it gives you a very good example of some of the very complex functions that the brain can perform. You can see this Go player. He's very, very focused on the task in front of him.
He's processing very complex visual information, all of the stones placed on the board right now. He has to remember the rules of the game. He also has to keep in his memory the moves that he's considering for his next step, and so he has to process all of this information and come up with something meaningful.
And another reason why this picture is interesting is the story behind it. Like Julian said earlier, so this is coming from one of the games between Lee Sedol and AlphaGo just earlier this year. And so the adversary here is not another human. It's a computer program that is facing exactly the same challenges that Lee's brain is facing, and it's coming up with solutions that are just as effective. In fact, the computer won four out of the five games that they played.
And when computer scientists think about how to build machines that learn, they end up thinking about some of the challenges that biological systems also face. If you think about it, how would you do it if you had to write a computer program that learned how to play Go or how to play some other game? Well, the first challenge that you're going to find is, how do you even represent a board in some sort of language that the computer can understand?
And you could start with relatively simple math, actually. You could use what you already know about linear algebra. Take a matrix of zeros. Put a one where the stone is currently on the board, and that would be one way in which you can represent a particular move or a particular position in that game.
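To make that concrete, here is a minimal sketch in Python/NumPy of the board-as-matrix idea; the 19-by-19 size and the specific coordinates are just illustrative, not moves from any real game.

```python
import numpy as np

# A 19x19 Go board as a matrix: 0 = empty, 1 = our stone, -1 = the opponent's stone.
board = np.zeros((19, 19), dtype=int)

# Hypothetical moves: the opponent just played at row 3, column 15,
# and we are considering a reply at row 16, column 3.
board[3, 15] = -1   # opponent's stone
board[16, 3] = 1    # our candidate stone

print(int(board.sum()))  # sanity check: net stone count (here 0)
```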
And now for the next step, now you want to make your computer program learn. How do you do that? Well, you could also do it like humans do. You could go by trial and error and consider all possibilities.
And you know, Julian already said that this is not a very efficient way to do that. But for the sake of reasoning, and to get us started playing with these ideas, you could say, OK, so given that my opponent put his stone in this particular position, I'm going to see what happens if I put one stone in this position. And so you could play a number of games and see how many of them you win. How effective is it to occupy that particular position early on in the game?
And then you could do the same with these other positions. So what happens if I do it here, if I start by putting my stone here, given that the opponent put his stone here? You could play a number of games and see how many you win.
So essentially, what you're doing here is calculating probabilities. You're calculating the probability that each of these positions leads you to winning the game, and you can use that as a learning process. You can use that to update what you know about the game.
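A toy sketch of that trial-and-error idea might look like the following; the simulate_game function here is a made-up placeholder (a biased coin flip), not a real Go engine, and the candidate moves and win rates are invented purely for illustration.

```python
import random

def simulate_game(first_move):
    """Stand-in for playing one full game after 'first_move'.
    A real system would roll out an actual game; here we just flip
    a biased coin so the sketch runs end to end."""
    assumed_win_rate = {"A": 0.55, "B": 0.48, "C": 0.40}  # made-up values
    return random.random() < assumed_win_rate[first_move]

def estimate_win_probability(move, n_games=1000):
    wins = sum(simulate_game(move) for _ in range(n_games))
    return wins / n_games

# Estimate the value of each candidate opening move by trial and error.
values = {move: estimate_win_probability(move) for move in ["A", "B", "C"]}
print(values)  # low-value moves could be dropped from further consideration
```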
So some positions maybe you won't consider any further because they don't seem to be so helpful, and others you will keep using them because they seem very helpful. And really, the point is that by trying to build something that learns, you're bumping into some of the basic mechanisms of learning. You can identify some of the principles of learning.
And so, for example, some of those would be: you have to represent information. You have to store information, obviously. You have to assign value to those representations to know what's useful and what's not, and you have to be able to update the value of those representations.
And Julian explained this very nicely. The way that machine learning algorithms try to solve these problems, these challenges of learning, is by setting up interconnected layers of computational units that are connected by weights that can change. The strength of the connections between these units can change. And by doing so, they can learn how to, given a particular input, provide a meaningful output. And so this is very similar to, and actually borrowed from, what we know about neuroscience.
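As a rough illustration of the weights-that-change idea (not the specific algorithm used by AlphaGo or described in the previous talk), a tiny two-layer network with a single gradient-style weight update might look like this; the sizes, input, and target are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of units connected by weights that can change.
W1 = rng.normal(size=(4, 3))   # input (3 units) -> hidden (4 units)
W2 = rng.normal(size=(1, 4))   # hidden (4 units) -> output (1 unit)

def forward(x):
    h = np.tanh(W1 @ x)        # hidden-layer activity
    return W2 @ h, h

x = np.array([0.5, -1.0, 0.2]) # some input pattern
target = np.array([1.0])       # the output we would like to see

# One update step: nudge the weights so the output moves a little
# closer to the target (learning = changing connection strengths).
y, h = forward(x)
error = y - target
lr = 0.1
W2 -= lr * np.outer(error, h)
W1 -= lr * np.outer((W2.T @ error) * (1 - h**2), x)

print(forward(x)[0])           # output is now closer to the target
```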
So here's a diagram of one of my favorite brain regions. This is a diagram of the hippocampus, and this is a brain structure that, like many other brain structures, like all brain structures, is formed by interconnected networks of cells, and those cells are connected with different strengths. And by changing the strength of their connections, we think that this system can learn. And so changing the strength of connections between networks of cells is one of the basic mechanisms of learning.
And so for this second talk, we are going to focus on this system on the biological side of things, and we are going to try to figure out how it's capable of learning. We're going to talk about how we represent memory traces, and how do we keep them? How do we store them so we can use them for a long period of time?
All right, and so we can look at this at very different levels. And in neuroscience, you will see all of them. You could study molecules. You could study what compounds within the brain are important for learning and memory.
You could look at synapses. How are the weights actually changing? The synapses are the points of connections between cells in the brain, and we think that the update of weights, the update of the connectivity between cell networks, happens at this level.
We could also look at the cells themselves. We could think about them as computational units, and we could study how they perform computations. And we in systems neuroscience are interested in explaining what happens in the brain at the level of networks, at the level of interconnected cells, and so we often study them in groups.
And one feature that is very important for us is that cells are electrically active. They can produce impulses. We call them action potentials or spikes. They are very brief. They last about one millisecond, and they are very important because we think that this is the language that the brain is using, that the cells are using, to represent and exchange information, to process information in the brain.
All right, and so we're going to do this as a little exercise on a learning experience that we all have in common. I think for many of you this was the first time that you were in this convention center. It was the same for me. And you know, in the last couple of days, we've been able to figure out our way around, navigate this convention center with no problem, just with a few examples of you walking around here and there.
Now we can do pretty well. I mean, this is quite amazing. This is quite a complex problem that we had in there.
And so think about the first time that you went into the exhibitor's hall. So you go in, and you don't know anything, so you just kind of walk around randomly. And so you may turn to the right. You may turn to the left. You may turn to the right again.
And you just kind of browse around and stop in some places that you may find interesting. But let's say that now you bump into something that you really find quite interesting, like the CBMM booth in 308, and so now you may want to remember how to get there, right? And this trajectory led you to something that was important for you for whatever reason, and so you may want to not only remember how you got there, but you may want to improve.
And so the second time that you try to get there, you may still get lost. You are not quite sure where you saw it. But then eventually, you can be very effective and generate a navigational path that takes you there very efficiently.
So how did this happen? What was going on in our brains all these days? How is our brain representing this information in a meaningful way, and how are we learning to go from something like this to something that goes like that?
And if we are going to look into the brain, where do we even start to look? And a good place to start would be to simply go through the literature and figure out if there is any evidence out there, any scientific evidence, that suggests that any brain region is important for memory, and for spatial navigation in particular, since that's the problem that we are dealing with. And one clinical case that you are for sure going to come across if you do that is the case of Henry Molaison. Probably many of you are familiar with it.
So Henry had epilepsy as a consequence of a childhood accident, and to treat his epilepsy, he went through experimental surgery back in the '50s. They removed his hippocampus. And after the surgery, he could no longer form new memories. He had a severe case of anterograde amnesia.
You would walk into the room, talk with him for a while. You would leave the room, come back a couple of minutes later. He had no idea who you were or what had happened just a couple of minutes ago. And so, OK, so the hippocampus sounds like it might be a structure that we want to look into given that there is some evidence that it is important for memory.
There is also evidence that it's important for spatial navigation, and some of you may also be familiar with this study. This was done a bunch of years ago now in London, and so they were looking here at the different structures. They were imaging the brain at different points in time as people prepared to get their taxi driver license.
So it turns out that London has a very complicated map of streets. There are like 25,000 streets there, and it's very, very difficult to prepare to get the license to be a taxi driver there, and so people take years, literally, to prepare for this exam. And Eleanor Maguire, she was a student at the time at UCL.
She was a graduate student there, and she decided, OK, something must be changing in the brain of these people, and I'm going to find out what it is. And sure enough, what she saw is that the hippocampus of these people, the people that actually passed the exam, was getting larger as they prepared for the exam. And the people that did not pass the exam, interestingly, did not have this enlarged hippocampus. And so these are just a couple of examples of what the hippocampus may be doing, but there is plenty of evidence right now that the hippocampus is involved both in memory and, in particular, in types of memory that involve spatial navigation, that involve moving around in the environment.
All right, so now we have a candidate region that we can explore a little farther, and we wanted to understand this navigational memory challenge by looking at the cells in this structure. So how are hippocampal cells encoding information? And remember that we said that cells in the brain are electrical devices. They can produce impulses, and so we want to understand-- our question essentially becomes, how are hippocampal cells representing spatial information using spikes, using the language that they know about?
And so one technique that we can use to answer that question is electrophysiology. We use this a lot at MIT in our lab, and what we do is we put very thin, very tiny electrodes that we bring very close to the cells so we can literally listen to the brain activity, to the spikes produced by individual cells inside the brain. And so we're actually going to do this now. We're going to look into one of the experiments that was run in Matt Wilson's lab at MIT.
This was an experiment that Fabian Kloosterman, who was a postdoc there a few years ago, ran. He also put together this very nice video that we're going to see in a moment. And so essentially, what's going to happen here is you're going to be looking at one of our subjects, which is a rat, that is going to walk all the way from this start point all the way down to these other points.
So the camera is in the ceiling of the room, so you're going to be seeing everything from above, and the animal just has to do that. When they get-- if they get to the end, they get some chocolate there, so they really like it, and they are very motivated to walk around. And so what you're going to see on the other side, this plot is always going to be there, and this plot is going to show you the spikes of cells in the hippocampus as they happen. And so each dot here represents a spike produced by a cell in the hippocampus.
So we are looking at seven cells in the hippocampus. They are color-coded, the different cells, and we're going to see them as the animal moves around. And so what I want you to do is take the place of the person that was doing the experiment. Remember, you're after this question of, how is the hippocampus, how are the spikes of hippocampal cells, representing the environment? And see if you can find out anything interesting.
And I'm just going to play it if it loads. OK. Sound--
[STATIC SOUNDS]
So that sound is the sound of the spikes.
[STATIC SOUNDS]
OK, cool, so it's a pretty cool video. I don't know. Did you guys notice anything? There are a bunch of things to be noticed in this video.
I like our junior students to look at it because you can learn a lot if you read about hippocampal physiology. Just by looking at this video alone, you can hear the theta rhythm that is typical of the hippocampus. You can hear ripples, so there are a bunch of things that can be learned just out of this video.
But the one thing that I was hoping that you guys noticed is how spikes from particular cells happen at particular locations in the environment. So it's not that these cells are producing spikes randomly all over the place, no matter what the animal is doing. The cell in blue produces most of its spikes when the animal is in this particular turn.
And there was a cell-- the yellow cell, I believe, was only firing spikes in a very restricted area of the environment. And so hippocampal cells are place cells. They encode space.
And if I can get out of the video now, let's go to the next slide. All right, so they encode space and location in the environment, and this is another way of representing it. So what I did here is I-- so this is a diagram. I'm representing spikes as vertical lines now, and so what happens is that if the animal is in this area of the environment, this cell in green is more likely to fire spikes.
And instead, if the animal moves to this other region of the environment, the cell in purple is more likely to fire spikes. And so these are the place fields of these two cells. And now it gets even more interesting when you start thinking about, OK, so what if I had multiple cells? Right? What if I record from a bunch of cells that may represent a particular area of the environment, like this one here?
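As a rough sketch of how a place field like these could be estimated from tracking and spike data, one can bin the track, count spikes per position bin, and normalize by the time the animal spent in each bin; the bin size, track length, and toy data below are all illustrative assumptions, not values from the actual experiment.

```python
import numpy as np

def place_field(spike_positions, track_positions, dt=1/30, n_bins=50, track_length=200.0):
    """Estimate firing rate (spikes/s) as a function of position on a linear track.
    spike_positions: the animal's position (cm) at each spike time
    track_positions: the animal's position (cm) at each video frame (one frame every dt seconds)
    """
    bins = np.linspace(0, track_length, n_bins + 1)
    spike_counts, _ = np.histogram(spike_positions, bins=bins)
    occupancy, _ = np.histogram(track_positions, bins=bins)
    occupancy_s = occupancy * dt                        # seconds spent in each bin
    with np.errstate(divide="ignore", invalid="ignore"):
        rate = np.where(occupancy_s > 0, spike_counts / occupancy_s, 0.0)
    return bins, rate

# Toy data: a cell that fires mostly around 120 cm on a 200 cm track.
rng = np.random.default_rng(1)
track = np.tile(np.linspace(0, 200, 600), 20)           # 20 passes along the track
spikes = rng.normal(120, 8, size=300)                   # spike positions clustered near 120 cm
bins, rate = place_field(spikes, track)
print(f"peak firing near {bins[np.argmax(rate)]:.0f} cm")
```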
So now we have overlapping place fields in this particular region of the environment. So what's going to happen now is quite cool. When the animal moves from left to right, he's activating those cells as he walks around, so the cells are going to be activated in a particular sequence.
Even more interesting is that when the animal then stops, and when he goes to sleep, the same cells that were activated when he was moving around through particular spatial trajectories, they become activated again in exactly the same sequence, in the same order, as when he was moving around. And so we think that this reactivation of a previous behavioral experience is a representation of a memory trace. Because also, if you interfere with them experimentally-- people have run that type of experiment in which they mess this up experimentally, and they find that they can interfere with memory.
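One common way to quantify this kind of sequence reactivation is a rank-order comparison between the firing order during behavior and the firing order during a candidate sleep event. The sketch below uses made-up spike times for seven cells, just to illustrate the idea; it is not the analysis from any particular study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical first-spike times (s) for 7 cells during one run along the track...
run_order = np.array([0.1, 0.4, 0.8, 1.1, 1.5, 1.9, 2.3])
# ...and during one candidate replay event in sleep (time-compressed).
sleep_order = np.array([0.02, 0.05, 0.09, 0.11, 0.16, 0.18, 0.22])

rho, p = spearmanr(run_order, sleep_order)
print(f"rank-order correlation = {rho:.2f} (p = {p:.3f})")
# A correlation near +1 suggests the sleep event reactivated the cells
# in the same order as during behavior.
```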
OK, so now we have a way to interpret our behavior, our spatial learning behavior in this convention center, in terms of the spiking of cells in the hippocampus. So by now, all of us have hippocampal cells that map the environment that we've been exploring these last few days. We have cells that represent this area. We have cells that represent every single spot that we went over in these last few days.
And so when we were going through this particular trajectory, a specific sequence of cells in the hippocampus was being activated that represents that particular trajectory. And when we move through this other trajectory, another group of cells, another ensemble of cells activated in a particular sequence, can represent this other trajectory. Cool, so how do we make sure we don't lose that information? How do we store these memories?
And this brings me back to the case of Henry Molaison because another observation that scientists made when they talked to him was that it was very clear that he had this anterograde amnesia problem. That was very, very strong. But interestingly, when they asked him about things that had happened before the surgery, he had no problem remembering those.
He could remember long-term memories. He just could not form new memories. Not all of the memory function was gone, and so that suggested to scientists that the hippocampus may just have a temporary role, and that over time other brain regions take over for the storage of memories, and we think that the neocortex is a big candidate for the storage of long-term memories. And in my research, we've also been investigating the possibility that a third brain region called the thalamus is also important for the storage of long-term memories.
And this is a structure that is in the center of your brain. It's connected with the neocortex, strongly connected with the neocortex, and some parts of the thalamus are also connected to the hippocampus. And so one of the first things that I did when I came to MIT was to study the wiring between these three brain regions-- the hippocampus, the neocortex, and the thalamus-- in more detail, and we found that in the thalamus there are cells that have branches that project both to the hippocampus and to the neocortex.
And so this is an example from one of our brain sections from a rat. The cells in green are cells that project to the hippocampus, the cells in red are cells that project to the neocortex, and the ones pointed out with the arrows, which look a little orange, have collateral branches that go to the two structures. And this suggested, or gave some anatomical substrate to, the possibility that these cells could be influencing, or perhaps forcing, a coordination, a temporal coordination, between the hippocampus and the neocortex.
And so we are still investigating that possibility, that the thalamus may be important in coordinating the exchange of information between the other two regions, and now I'm using electrophysiology. So I record the electrical activity, the spikes of cells, in these three regions as animals behave and as animals sleep. And one of the things that we are finding, which is quite interesting and was quite unexpected, is that the spikes in some thalamic cells drop.
So you don't have as many spikes in the thalamus when the hippocampus is reactivating a potential memory trace, and so that's what these plots indicate. These ticks here are spikes in the thalamus. They are aligned to the occurrence of memory reactivation, potential memory reactivation, in the hippocampus, and this is just a histogram that adds up the information in this raster.
And essentially what you see is that there is a gap here, so a decrease in the number of spikes in the thalamus, when the hippocampus is reactivating a memory. And so a new hypothesis that we are testing now is the possibility that maybe the thalamus is gating the transfer of information. It could be that the thalamus is shutting down so there is some more reliable transmission of information between the hippocampus and the neocortex. You don't have extra spikes in there that could be interfering with something that you need to make sure that you're copying correctly.
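A raster and histogram like these come from aligning spike times to the reactivation events. A minimal sketch of such a peri-event time histogram is shown below, with simulated spike and event times standing in for the real recordings; the window, bin size, and the artificial dip built into the toy data are all assumptions for illustration.

```python
import numpy as np

def peri_event_histogram(spike_times, event_times, window=0.5, bin_size=0.02):
    """Average firing rate (spikes/s) in bins around each event: a simple
    peri-event time histogram. All times are in seconds."""
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        rel = spike_times - t                           # spike times relative to this event
        counts += np.histogram(rel, bins=edges)[0]
    rate = counts / (len(event_times) * bin_size)       # average over events, convert to spikes/s
    return edges[:-1] + bin_size / 2, rate

# Toy data: thalamic spikes with a brief dip around hippocampal reactivation events.
rng = np.random.default_rng(2)
events = np.sort(rng.uniform(10, 590, size=200))        # 200 reactivation event times
spikes = np.sort(rng.uniform(0, 600, size=6000))        # ~10 Hz background firing
spikes = spikes[~np.any(np.abs(spikes[:, None] - events[None, :]) < 0.05, axis=1)]
centers, rate = peri_event_histogram(spikes, events)
print(rate[np.abs(centers) < 0.05], rate[np.abs(centers) > 0.3].mean())  # dip near the event vs. baseline
```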
And so that's all I had to say, so I'm going to just quickly summarize what we saw, how representations of space and spatial trajectories occur in the hippocampus, in place cells in the hippocampus. And we saw that the hippocampus has to coordinate activity with other brain regions like the neocortex and the thalamus to make sure that that information is going to stay there for a long time. And so, obviously, this is just a couple of things just to give you a flavor of current research and of the questions and challenges that we like to think about.
We haven't talked at all, for example, about how we discriminate between the different trajectories. How do we assign value to these trajectories to then distinguish them from these other trajectories? Is this one more valuable because it gets you there faster, or is this other one more valuable because you get to see other things that are happening in the convention center?
And also, how do we change? If we determine that this one is more valuable, how do we change from going from this trajectory to this other trajectory? How come we are not just going over and over again through the first trajectory that got us there? Right? That could be the case. And in fact, that happens in some psychiatric disorders, but there is something that is making the change, and that's learning as well, so I'll leave you guys with those questions.
But just to go back also to the idea of interaction between fields, between neuroscience and artificial intelligence, something that we hope comes out of answering and studying all of these questions is that we will understand this biological system better. But like Julian said, understanding the biological system is going to help us understand and come up with better machine learning algorithms, and in the case of the hippocampus, with better navigation algorithms, which are becoming very popular as well in artificial intelligence. And in the same way, we neuroscientists can learn a lot about the challenges and the principles behind learning by reading and keeping in touch with people who work in artificial intelligence, because these interactions can provide us clues on how we can even start to understand this very, very complex system that is the brain.
And well, OK, so we're going to be at the CBMM booth this afternoon. Now you know how to get there, and I'll take any questions if you have them. Thank you.
[APPLAUSE]