Quest | CBMM history and future
Date Posted:
December 2, 2022
Date Recorded:
November 4, 2022
CBMM Speaker(s):
Tomaso Poggio
Description:
Tomaso Poggio - MIT BCS, MIT CSAIL, MIT Quest, Co-Director of the Center for Brains, Minds, and Machines
Jim DiCarlo: So now I'd like to turn the floor over to my colleague and friend Tommy Poggio, Professor Tommy Poggio. And Tommy is the founding director and co-director of MIT's Center for Brains, Minds, and Machines. He is a mentor to many faculty, students, and researchers who are affiliated with CBMM, including a mentor to me. And he's really a giant in the field. And he'll share some of CBMM's background and accomplishments, and also some of our plans going forward for supercharging those efforts under the Quest. Tommy.
TOMASO POGGIO: Thanks, Jim. Yes. I will not really speak about CBMM history, its past, and its future, because CBMM was the past of the Quest, and the Quest is the future of CBMM. So that's it for the history. And you've heard it from Jim. I will speak a bit about the vision that we had, and how this has changed over the last 10 years, in terms of the science and engineering of intelligence, which was our goal at the beginning of CBMM in 2015.
And I will also speak briefly about the possible role for foundational theories of intelligence. And so when we started CBMM with Josh and many other people here, we were making a bet. We were betting that understanding the brain is, first, important in itself, and second, a path to AI. Now this was before the enthusiasm, the fashion, of machine learning and deep learning started.
So as I will explain more later, about the first part of the bet there is no discussion: the science of intelligence is, I would personally argue, the most important open problem in science. But the bet about AI is a different one. If the only goal were to realize intelligent machines, then it is quite a legitimate question to ask: is it better to first understand how the brain works in order to build intelligent machines, or to forget about the brain, go for the engineering, and then possibly understand, with the help of machines, how the brain works?
And the latter approach is more or less what, given the circumstances, companies like DeepMind, which is part of Google, have been following and are following now. So the reason why I thought, we thought, that it was a good idea, even if you are only interested in AI, to have an understanding of the brain, is because history told us that it was successful. As Jim already said, the success stories over the last 10 years in AI include Mobileye's vision-based autonomous driving systems. It was the first one that Tesla used until we [? find ?] Musk.
And the other is the AlphaGo success of Demis Hassabis at DeepMind. Those success stories, autonomous driving and playing games better than humans, were based on two key algorithms which are still key in machine learning today. One is reinforcement learning, and the other one is deep learning. And they both came from neuroscience.
So that was what history told us. And the inference was that some of the next breakthroughs would come from neuroscience as well. Now as it turned out, in the last four years several of the more recent success stories don't really have their roots in neuroscience. Of course, they use neural network models and reinforcement learning models, which have, as I said, their roots in neuroscience.
But success stories like transformers, the ability to compose sentences, or even programs at the level of the average human programmer, those architectures don't directly come from neuroscience. So there was a discussion about this, already two years ago, at the Advisory Committee meeting of CBMM, in particular with Demis Hassabis, who has been one of our advisors over the last 10 years.
And the observation there was that yes, engineering approaches by themselves seem to have a higher probability of success without neuroscience than they had 10 years ago. However, 50% is still pretty good. And in fact, we as the Quest are now doubling down on this approach, based not only on engineering, but on science being at the core of the engineering. And this was also the advice from Demis: we should go all in. And there are several reasons for this.
The first is, as I said, that the goal of doing great science, solving the greatest problem in science, is still completely valid. And you need to do computational neuroscience in order to do that. The second, I think, is just a risk-reward computation that any investor or hedge fund could easily make. There are many billions of dollars that are being invested, or have been invested in the last few years, in AI and in related companies and startups.
So if you have just one billion and not a few hundred, and you want to make a bet that may have an impact, you have to be contrarian. You have to invest in an area where few other people are investing, not Google, not Microsoft, and so on. And that's the area of neuroscience-based or neuroscience-motivated AI.
And a couple more words about this. Let's call it the contrarian bet, betting on neuroscience-based AI instead of engineering-based AI. As Demis said, the engineering-only approach may hit a wall, and then what do we do? Maybe neuroscience will provide the answer, a way out.
I believe there are an infinite number of intelligences, not just one. Human intelligence is one of them. It's very unlikely that engineering, just by chance, will hit on solutions that are similar or equal to human intelligence. So the only way to get a system that is like our brain and mind is to complement engineering with neuroscience. And then there are all kinds of great things that you can get, and Jim spoke about this, if we understand the brain and the mind, and not just have intelligent machines.
It will be great to have intelligent machines. But we can have both. And by the way, if we want at some point to develop the next generation of us and the next generation of machines, interfaces between our brains and computers, interfaces between our brains and the future internet, we need to know where to put the plugs, how the brain works.
It's not enough to have the intelligent machines. And this, by the way, I think is a much more practical way than others that people are pursuing to fly to the stars, or to live forever, or to make yourself much smarter, which, by the way, was my dream when I started admiring Einstein as a teenager.
What about, instead of attacking these space and time problems directly, solving the problem of intelligence, and then solving all the other problems? Yeah, I want to make a final point in the last five minutes, which is that it's important to explore models of the brain and models of intelligence, but it's also important to understand why these things work. This is, again, back to that paragraph from Patrick Winston.
If you study how birds fly, you may understand the science of aerodynamics, which allows you not only to understand the flight of birds, but also to design airplanes. But if you just try to replicate, to have models of how birds fly, you may end up with a little thing that flies like a bird and you don't understand why.
And so that's fine. Especially if this little thing is understanding the brain, that would be good, that would be great. But it's still less than understanding the principles that make these things fly. I think it's important to consider that we may never find the equivalent of the double helix for the brain. But maybe there is one. Maybe there are some important principles underlying intelligence in brains and machines.
So an example is the following. In the meantime, and this is, again, something that came up in the last three or four years, there is no longer only one deep learning architecture that works well. Until four years ago, I would say it was only convolutional networks, which effectively came from studies of visual cortex. But in the meantime, there are architectures like MLP-Mixers and especially transformers.
And so the question is, is there a common principle across these different architectures that makes them work? Why do they work? We don't really know the answer to this question. And maybe the answer will then explain why deep networks work independently of these various architectures, perhaps even how human intelligence works.
And as an example of one of these principles, here is a conjecture. I don't know whether it's going to turn out to be true and useful. It is actually a conjecture about the world: that all the tasks and functions that we learn, or that we're interested in, are probably sparse compositions.
In other words, they are made up of other functions, they are compositions of functions. Each node here is a function. And as you can see, each node in these graphs has only a relatively small number of inputs. So I think this is the structure of the world, and that's a big axiom. And if this is true, then what deep networks can do is reflect this graph structure. And it is the fact that there is a small number of inputs, simple modules, that makes approximation work.
We have a theorem, and a more recent one that shows that generalization also works under these conditions. So for instance, if you want to predict the market, not tomorrow because it's Saturday, but Monday, you may have to have a model that learns from many inputs, which could be thousands of stocks and other financial markets around the world. If you have a graph of the underlying function that puts these things together, you will have a much better chance of using machine learning to learn what's going on.
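To make the conjecture concrete, here is a minimal schematic sketch (the particular constituent functions and the quoted rates are illustrative, in the spirit of the compositional approximation results, not statements from this talk). A sparse compositional function of eight variables with small fan-in could be written as

$$
f(x_1,\dots,x_8) \;=\; g_3\Big(g_{21}\big(g_{11}(x_1,x_2),\,g_{12}(x_3,x_4)\big),\; g_{22}\big(g_{13}(x_5,x_6),\,g_{14}(x_7,x_8)\big)\Big),
$$

where every constituent function $g_{ij}$ depends on only two inputs. Roughly, results in this line of work suggest that approximating a generic smooth function of $d$ variables to accuracy $\varepsilon$ can require on the order of $\varepsilon^{-d/m}$ parameters for a shallow network, while a deep network whose layers mirror a binary-tree composition of two-variable functions needs only on the order of $d\,\varepsilon^{-2/m}$, with $m$ the smoothness of the constituent functions; the exponential dependence on $d$ disappears once the sparse graph is exploited.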
And I think this assumption may also explain two situations: the older one, convolutional networks, where you know the graph of the function and you have a network that reflects it; and transformers, where the underlying graph, which variables are important at each stage, is actually selected flexibly at runtime by attention modules.
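As an illustrative toy sketch of that contrast (hypothetical code, not from the talk), one can compare a mixing step whose sparse dependency graph is hard-wired at design time, as in a convolution, with one whose dependencies are chosen at runtime from the data, as in attention:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))  # 8 positions, 16 features each

# Convolution-like mixing: the sparse graph is fixed in advance;
# each output depends only on a small local neighborhood of inputs.
def local_mix(x, width=3):
    out = np.zeros_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - width // 2), min(len(x), i + width // 2 + 1)
        out[i] = x[lo:hi].mean(axis=0)
    return out

# Attention-like mixing: which inputs matter for each output is
# decided at runtime by a softmax over data-dependent scores.
def attention_mix(x):
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

print(local_mix(x).shape, attention_mix(x).shape)  # both (8, 16)
```

In the first function the dependency graph is chosen by the designer; in the second, the softmax weights effectively select, per position, which other inputs matter, which is the sense in which the graph is picked flexibly at runtime.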
Anyway, this is just an example of how a principle that could be generally applicable across different architectures may be important. I don't necessarily mean this one, but something like it. And with this, let me go back to our main slide about the Quest. Thank you.