Building and evaluating multi-system functional brain models
December 5, 2022
November 4, 2022
Guangyu Robert Yang
Advances in the quest to understand intelligence
Robert Guangyu Yang - MIT BCS, MIT EECS, MIT Quest, MIT CBMM
PRESENTER: So let me now introduce our next speaker. And it's going to be Robert Yang. And Robert joined the MIT faculty in 2021 with appointments in BCS and EECS. In fact, if you remember Dan Huttenlocher mentioned we had two shared hires with the College of Computing.
And Robert was actually the first hire, and is leading the way there on our interfacing across the College of Computing to BCS. His group develops integrated computational models of elemental cognitive functions. And it's kind of an example of the interdisciplinary approach that we're taking overall here in the Quest. So go for it, Robert.
ROBERT YANG: Thank you, Jim. I'm really excited to be here, both at Quest, at this event, but also at MIT. And I'll show you why. I'm kind of new here. I've been here a year. So today I won't tell you a ton of stuff that we have done, but I'll show you a vision of what we're interested in doing, and some results as well.
So like Jim said, we want to build and evaluate multi-system functional brain models. And I'll explain what that means. And you will see that a lot of this echoes what, for example, Leslie, Nancy, and others have already said, but from more of a neuroscience angle.
So how can artificial neural networks help us understand the brain and mind? A lot of people have already talked about that, but I'll share one thing that is personally important to me. Besides helping us build models that can, for example, explain complex behavior and complex activity observed in the brain, artificial neural networks also allow us to think about the brain from an optimization perspective. They allow us to ask why the brain is the way it is, not just what it is.
And if you think about the most prominent optimization perspective in science, it's evolutionary theory. It tells us why we have the diversity of biological organisms in the world. Even if we knew everything about their appearance, we wouldn't necessarily know why they came about.
So that is an argument for using artificial neural networks. But today, what I really want to tell you is that we need to integrate neural networks for multiple systems. If we look at the history of computational neuroscience-- I'm a computational neuroscientist, and we build models of the brain-- people used to build models of a single neuron. Then they started to put some neurons together. And only in the last decade or so have people started building models of individual systems, led by a lot of people here at MIT.
So Jim already talked about this. We can have models of, for example, the ventral visual stream, which is really important for the first 200 milliseconds of vision. And now, I argue that we should-- again, echoing what Leslie and others have said-- try to put these systems together by studying multi-system models.
Now, like I already mentioned, these multi-system integrative models will incorporate models of individual systems, so that we can have a single model that can potentially capture everything that the brain can do. And you already saw this picture earlier, I think in Leslie's presentation.
Now, why should we study these integrative models? There is a clear point from the embodied intelligence argument. But even for neuroscientists studying seemingly simple tasks, integrative models can still be important. I'll give you an example.
These folks from Cold Spring Harbor, in a study led by Anne Churchland, recorded activity from the brain of a mouse while the mouse performed a very, very simple visual decision-making task. The mouse sees a flash on either the left or the right, and has to look to the left or to the right accordingly.
What you saw here is a video of 20-something areas viewed from the top of the mouse brain. And you see that while the mouse is doing this task, almost every area is activated at different points in time. So even a very simple task requires integrative computation across a large part of the mouse brain.
And besides trying to explain these data, there are many, many interesting questions that emerge when we study multi-system models that we don't get to study when we focus on individual systems. For example, say you have a model component for a little bit of everything you want to build: vision, cognition, everything. How do you put them together? That's not straightforward. Sometimes they don't even share the same interface, so you can't simply hook them up together. And I'll show you some more examples later.
And another thing is, how do you constrain the huge design space of these models? Let's say you have five ways of building a vision model and five ways of building a cognition model. Suddenly, you have 25 ways to combine a vision model with a cognition model. How do we constrain that?
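The combinatorics here can be made concrete. A minimal sketch in Python, where the component names are purely illustrative (none of them come from the talk):

```python
import itertools

# Hypothetical component choices -- these names are illustrative only.
vision_models = ["convnet", "vision_transformer", "recurrent_cnn",
                 "sparse_coding", "predictive_coding"]
cognition_models = ["lstm", "gru", "vanilla_rnn",
                    "attractor_network", "transformer"]

# Every pairing of one vision component with one cognition component.
design_space = list(itertools.product(vision_models, cognition_models))
print(len(design_space))  # 5 * 5 = 25 combined architectures
```

And this is before counting the different ways of wiring the two components together or the different tasks each could be trained on, each of which multiplies the space further.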
So how do we even evaluate such models against diverse brain data? Once again, if you're putting vision and cognition models together, what data should you compare to? Should you compare only to the visual data, only to the cognition data, or to both?
And finally, hopefully we can find parsimonious design principles across these systems, so that the model is not just a combination of 20 components. Because if you look at the cortex, it has a more or less unified architecture with some parametric variation across areas. And hopefully, we can find some principles that describe, for example, the entire neocortex.
So this is one reason why we should study these models. But the timing is very important. Why should we study them now, and not 10 years ago or 10 years from now? One thing people have already talked about-- I think Mehrdad already showed this-- is that multi-system neural recordings are becoming widely available.
The number of neurons that neuroscientists can record from the brain simultaneously has been doubling every six years. But this has been happening for the last 60 years. So why should we invest in multi-system models now?
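That growth rate compounds quickly. A quick sanity check on the arithmetic (the six-year doubling time is the figure from the talk; the rest is just illustration):

```python
# Doubling every 6 years over 60 years gives 2**(60/6) = 2**10 doublings.
years = 60
doubling_period_years = 6
fold_increase = 2 ** (years / doubling_period_years)
print(fold_increase)  # 1024.0, i.e. roughly a thousandfold increase
```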
What really happened in just the last three or four years is that the technology has advanced to the point where we can record from tons of areas at once. For example, this paper reported neural activity from 42 areas. Not all simultaneously recorded, but they can record from many, many areas together. And Mehrdad can do that in his lab as well, as can other people.
So that's one reason: we're starting to have multi-system neural data. Another reason to study these models now is that we're starting to have reasonable models of individual systems. For example, you have already seen lots of plots like this. I've shown them. Jim showed them.
On the y-axis is similarity to brain data. On the x-axis are models. And we finally have models that are almost as close to the brain as, for example, one brain is to another. We have such models for the visual system, for some cognitive systems, for language systems, and for the hippocampal cognitive-map system. It's not that these models are perfect, but they've finally gotten to a point where they're reasonable, and people are fine with putting them together.
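One common way to put a number on "similarity to brain data" is representational similarity analysis (RSA): compare the pattern of pairwise dissimilarities between stimuli in the model's responses against the same pattern in the neural recordings. The talk doesn't specify which metric these plots use, so this is only a generic sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(responses):
    # Representational dissimilarity matrix: 1 - correlation between the
    # response patterns for each pair of stimuli (rows = stimuli).
    return 1.0 - np.corrcoef(responses)

def brain_similarity(model_responses, neural_responses):
    # Correlate the upper triangles of the two RDMs: how well does the
    # model's stimulus geometry match the brain's?
    n_stimuli = model_responses.shape[0]
    iu = np.triu_indices(n_stimuli, k=1)
    a = rdm(model_responses)[iu]
    b = rdm(neural_responses)[iu]
    return np.corrcoef(a, b)[0, 1]

# Synthetic example: 20 stimuli, 30 recorded neurons, 50 model units.
neural = rng.normal(size=(20, 30))
# A "model" that is a noisy linear readout of the same representation.
model = neural @ rng.normal(size=(30, 50)) + 0.1 * rng.normal(size=(20, 50))
print(brain_similarity(model, neural))  # high (near 1) for this matched pair
```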
Now, I want to make the case that it is important to use neuroscience to constrain this vast design space. We can learn from many things, but I want to show you why neuroscience in particular is important.
So like I said, there are many different ways to put such a system together. There are different ways you can build a sensory system and a cognitive system, many ways you can combine them, and then you can train each part on different tasks. So this is a really huge space.
Now, how do we constrain them? I'll show you one example where we can look at neural data that people have recorded from different parts of the brain-- from the visual system and the cognitive system-- and have analyzed. I won't really explain the analysis. You can build different models that do the cognitive task the monkeys are doing equally well, but that solve it in fundamentally different ways. And one way turns out to be a lot more similar to the brain than the other.
So this is just one example where we can use neural data to constrain the model space. But this one data set only constrains the design space so much. So how do we go forward? A simple idea is to scale this approach: rather than constraining on one data set, why don't we constrain on many more data sets?
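Scaling that idea amounts to a loop: one candidate model, many data sets, one similarity score per data set. A toy sketch, where the data-set names, field names, and the simple correlation-based score are all made up for illustration:

```python
import numpy as np

def similarity(pred, target):
    # Toy similarity score: correlation between flattened response matrices.
    return np.corrcoef(pred.ravel(), target.ravel())[0, 1]

def evaluate_across_datasets(model_fn, datasets):
    # One model, many data sets: one similarity score per data set.
    return {name: similarity(model_fn(d["stimuli"]), d["neural"])
            for name, d in datasets.items()}

# Two synthetic "data sets" standing in for real recordings.
rng = np.random.default_rng(1)
datasets = {
    "mouse_visual": {"stimuli": rng.normal(size=(10, 5))},
    "monkey_pfc": {"stimuli": rng.normal(size=(8, 5))},
}
W = rng.normal(size=(5, 7))  # pretend ground-truth stimulus-to-neuron map
for d in datasets.values():
    d["neural"] = d["stimuli"] @ W

def model_fn(stimuli):
    # A model that matches the synthetic "brain" up to a little noise.
    return stimuli @ W + 0.05 * rng.normal(size=(stimuli.shape[0], 7))

scores = evaluate_across_datasets(model_fn, datasets)
print(scores)
```

A single model that scores well across every data set in such a loop is far more constrained than one fit to any single data set.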
Again, I won't explain these, but here are about 15 data sets that our lab has been looking at in the past year since we started here. And we can pretty much explain the major findings across all of these data sets, recorded by many different labs across the world. If we can do that in one small lab in one year, the hope is that with help from Quest, CBMM, McGovern, ICoN-- all of these wonderful resources at MIT-- we can scale this in five years to above 100 data sets.
So our goal is to have a single model that can explain the major findings across 100 data sets recorded in animals. And if we can do that, then hopefully it would give the whole community the confidence, and a methodology, to hope to explain the main results from most data sets recorded in the commonly used animals-- specifically, mice and monkeys. So this is our hope for multi-system models. I'll take questions after. Thank you.