What would it mean to understand intelligence?
Date Posted:
December 2, 2022
Date Recorded:
November 4, 2022
CBMM Speaker(s):
James DiCarlo
Advances in the quest to understand intelligence
Description:
James DiCarlo, Director of the MIT Quest for Intelligence, Co-Director of the Center for Brains, Minds, and Machines
JAMES DICARLO: Nergis, thank you for those inspiring remarks. And really, thank you for all your support. So, folks, friends, my job now is, first, to remind you of why we're all here. And Nergis already did an inspiring job of that.
I titled my talk What Would It Mean to Understand Intelligence? But if there are only two things you need to remember from my talk, they're these. The first is that an understanding of human intelligence is the greatest open scientific question of all time. It's right up there with the origin of the universe and the origin of life. That's in line with what Nergis just said. This is a longstanding question.
The second thing you need to remember is that the MIT Quest for Intelligence, with the Center for Brains, Minds and Machines as its science core, is the organization at MIT pointed directly at this question. It's the only organization at MIT pointed directly at this question.
So now let's come back to earth, if you will. This is the goal, and we're excited about it. But let's talk about it. Could we actually do it? What would it mean to understand intelligence?
Now, this question of what it would mean can be read in two ways. What would it mean in terms of impact? Why would we do it? And what would constitute an understanding? So let's try to unpack that a bit.
So if you just think about what we know about intelligence, maybe we already understand intelligence. Well, one easy way to check is to ask what we know about today's AI. How good is today's AI?
And there have been amazing breakthroughs. You're probably reading about AI or hearing about AI every day. But these systems, even with their power, are still quite limited. And you can see this with your own eyes and ears. Think of your frustrating interactions with Siri or Alexa. Or the fact that we don't have robots helping us in the kitchen or caring for loved ones. For some reason, those don't yet exist. Why is that?
And maybe what you don't know-- these are just examples-- is that the systems we do have take incredible amounts of data and power, things that are not sustainable on their current paths. So these are technological problems that reflect the fact that we don't really have an understanding of intelligence. And I like to quote Feynman: "What I cannot create, I do not understand." In other words, if we can't yet build these technologies, we must be missing some kind of understanding.
Now, on the other side, you can just ask more directly, what do brain and cognitive scientists know about intelligence? What's really known about human intelligence?
And I think I can speak to this with some authority, having served as head of the Department of Brain and Cognitive Sciences here for nearly a decade. Even though we have a great deal of knowledge, many papers, many textbooks, and we know the brain regions involved and a lot of the elemental parts, we really don't yet have much of an understanding of this phenomenon.
And part of the reason I can say that is not just my authority as department head. Again, look around at what we're not yet able to impact. Think about brain disorders, whether neuropsychiatric or neurodevelopmental. Think about difficulties with learning, or learning disabilities, or even just social conflict among individuals. These are things, I would submit to you, that if we had a scientific understanding of intelligence, we would have much more impact on than we currently do.
Now, the quote you might use here is: what I cannot repair, or whose limits of repair or enhancement I do not know, I do not yet understand in the same sense. We don't have that scientific understanding. Both of these inabilities to make an impact are due to an underlying scientific weakness, which is that we humans don't yet have a scientific understanding of intelligence. And that's what the Quest is about. It's a community on a quest to understand intelligence.
Now, let's unpack that a little more. By intelligence, I especially mean human intelligence; of course, we use animal models and others to guide our work. By understand, I mean in engineering terms, and that's important. And by community, I mean faculty, staff, students, engineers, scientists, and so forth, and our supporters.
Now, why are we talking about human intelligence? Well, as brain and cognitive scientists, we know this system is quite amazing. We study that intelligence system every day, in both humans and animal models. And really, the machine behind your eyes that is your brain, when coupled with your body, can do amazing things: navigate new situations, learn with minimal instruction, infer what others believe, use language to communicate, write poetry to express how it feels, collaborate to build bridges and devices. And together, collectively, we built civilizations from nothing.
So in short, this is an amazing system. And we can do all this with very little data and power. Somehow, these are things that we can do that current AI systems, for example, cannot. It's just a remarkable feat.
Now let's talk about this. OK, human intelligence is impressive. We grant that. But what does it mean, in engineering terms, to understand it? What would that even look like?
So I like to start with this quote here. This is a quote from Francis Crick. "You, your joys, your sorrows, your memories, your ambitions, your sense of personal identity and free will, are, in fact, no more than the behavior of a vast assembly of nerve cells and their associated molecules."
He called that the astonishing hypothesis. It's astonishing because it's a statement that the mind is an emergent property of the brain, that intelligence is the product of a machine. Now, this is astonishing to some people. But it's the working hypothesis of our field of brain and cognitive sciences that this could be understood as a machine.
To fix ideas by analogy, here's another machine that many of us have in our pocket, or some version of it. And there's a similar quote: "Your remarkable favorite app, all its performance and amazing user interface, are, in fact, no more than the behavior of a vast assembly of transistors." That is, of course, also true. It's a statement that this is a machine. But unlike the machine on the left, how those transistors are assembled to give rise to the emergent phenomena is understood in engineering terms, whereas on the brain side it is not.
So the upshot, what I'm trying to tell you, is that intelligent human minds are the products of machines and thus can be understood in engineering terms. For instance, you could ask how the mind works in engineering terms, just as you could ask how my phone app works in engineering terms. That does not mean your mind, your human intelligence, works like your phone. But it points to the kind of understanding we seek.
You could ask, how does perception work? How do movement and planning work? How does language work? How does memory work? These are all questions that we can ask and answer in engineering terms.
If you think about the phone app, of course, the answers don't look like a direct map from transistors to apps. Rather, they consist of an understanding of all the intervening levels that allow those transistors to be assembled to give rise to the emergent property of the remarkable app.
Now, similarly, if you think about emergent intelligent behaviors as reflected in overt behavior, you can think about the underlying cognitive processes, represented as states and algorithms, models of what's going on, revealed as what we refer to as cognition. These are then implemented somehow in brain regions, implemented by neural circuits, which are ultimately implemented by billions of neurons and trillions of connections. And an ideal engineering-level scientific understanding would allow us to bridge down through all these levels.
That doesn't mean we need to get all of these levels right from day one. But the goal is to make all those connections eventually, grounding in biophysics, so we can say we've linked the science from one level to the other. That's the long-term view of what this kind of understanding might look like.
So really, on this quest to understand intelligence, the applications are things like new AI possibilities, new ways to teach our children and ourselves, and new avenues to treat brain disorders and augment our brains. We'll see new ways to intervene that are different from the ways we think about today, and we'll more deeply understand ourselves, how we interact with each other, and how that might improve our interactions with others.
So let's talk about our community here. As I mentioned, it's a broad community. MIT has been working to prepare for this quest. This community didn't just emerge last week; it has been built up over time. These are faculty hired over the last 10 years or so at the interface of natural and artificial intelligence. Many of them are part of the Quest, and you'll hear from many of them today. As department head, I played a role in hiring many of these faculty. And there are many others who aren't even listed here. So MIT has been preparing at the faculty level.
An important preparation for this quest has been an effort over the last 10 years, the Center for Brains, Minds, and Machines, which was started in 2013 by my colleagues Tommy Poggio and Josh Tenenbaum. You'll hear from both of them next. The Center for Brains, Minds, and Machines, as an NSF center, has allowed us to build up an intellectual community, again, a broad community of faculty, postdocs, students, staff, and supporters. Here's an early version of the CBMM community, which has been growing each year, getting bigger and bigger, and extending well outside of MIT. That's in part through our training programs and our summer training programs, and you'll hear later today about how we're trying to grow that community to fuel this field.
We built teaching and outreach programs, one of which I just referred to. And we've been training the next generation of students even at the undergraduate level. As department head, I was involved in helping to start a new MIT undergraduate major, Computation and Cognition. These are the numbers of students at MIT in that major. And you can see, this is actually one of the fastest-growing majors at MIT, reflecting enthusiasm for working at this interface and helping us nurture what the future will require to make these kinds of advances.
Now, just organizationally, the MIT Quest for Intelligence and its science core, CBMM, sit as a research arm within the Schwarzman College of Computing. The goal of the College of Computing, and it's called a college, not a school, is to integrate across many disciplines: especially, in this case, the natural sciences and the engineering sciences, but also where it brushes up against the humanities and social sciences and the School of Management, for example.
And the Quest serves the unique role of infusing computing, especially into the natural sciences. But importantly, as we assemble these ideas of how human intelligence works, that will change the way computing happens in the future. So it feeds back to computing as well.
We have some provisional space to work in already, and the new version of this space will exist right here. This is the Brain and Cognitive Sciences building. We're all sitting right about at that location; this is looking down from the top. There's the new College of Computing building coming up right next to us. If you'd like to see it when you go out for lunch, you can look out this window.
The Quest will have space within this new building, with a bridge to the building we're sitting in now, the Brain and Cognitive Sciences building, which is really the premier brain and cognitive sciences building in the world. Here's the bridge going up a few weeks ago; it's actually more progressed than that now. So this is both a physical and an intellectual connection between computing, as represented by the college, and brain and cognitive sciences, as reflected especially in Building 46, but broadly across MIT.
So we're trying to bring these things together, not just philosophically, but in a real way, to make real progress. The way we do that is the strategy we've adopted. The idea is that we take measurements, discoveries, and results from neuroscience, cognitive science, and other natural sciences fields. And we bring them up against theory and model creation and synthesis, often driven by fields like computer science and robotics, and bring these together into what we refer to as computational models, or integrated computational models, of intelligence.
These models serve two purposes simultaneously. They serve as new hypotheses for the mechanisms of natural intelligence, which drive further refined experiments. They also serve as new computing and engineering possibilities, again, feeding back to computing. This is the cycle that we're aiming to supercharge.
These models can then be understood at deeper levels through more theoretical analysis, to understand both their limits and where they can be applied in new ways. The goal of all of this, these integrated cycles, is to build, over time, a science of intelligence, as Nergis outlined.
So why do we believe in the strategy I just laid out? Why do we believe it can even work? Because we've seen it work in the past. Many of us, myself included, have witnessed firsthand how this strategy has paid off in the area of visual processing.
So for instance, you may recognize this person. You may know, first of all, that it's a woman's face. You may know it's Marie Curie. You did that very quickly. How did you do that?
For many decades, it was not clear how that worked. Brain and cognitive scientists were busy measuring the behavior of humans and other animals. And many of us were toiling away, measuring neurons and their connections and the multiple layers within the brain, and how individual neurons might be participating in the computations that give rise to this amazing behavior of just saying, oh, that's a person, and I know who that is. That ability was quite mysterious. Even though we had a lot of data and results, we weren't able to say exactly how it was happening in engineering terms.
Similarly, computer science and other fields were busy trying to build models that could do this. And they, too, struggled for many decades to get it to work. At some point, researchers said, hey, let's learn from each other. Let's start building models that are inspired by, or even anatomically very close to, the kinds of things you see in the brain sciences. Let's optimize them with ideas from cognitive science, amplified by optimization techniques from engineering.
This led to new models, which are now the new intelligent model systems, which led to key advances in both science and engineering. Let me unpack that on this slide.
The natural sciences brought in the kinds of data and measurements I referred to. Together with engineering, we built these integrated computational models of fast sensory intelligence. You now know these as deep architectures. They were originally inspired by vision, and they are now the deep architectures used for deep learning and what people generally refer to as AI today. Of course, that's not all AI should be, but it's often what AI is today. And it was inspired by just this early sensory processing.
And these are not just technology payoffs, by the way. They are now the leading hypotheses for the mechanisms of the first 200 milliseconds of visual processing. They're not perfect hypotheses. In fact, the ongoing work is to refine these hypotheses ever more, feeding back to better models, which feed into new engineering, which in turn helps feed back to better models. And both of these things are helping both sides, the engineering and computing side and the natural sciences.
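[As an aside for readers of this transcript: the deep architectures described here stack simple operations, filtering, a nonlinearity, and pooling, in layers loosely analogous to stages of early visual processing. The sketch below is an illustration only, not any particular model from the talk; the kernel sizes, layer shapes, and random weights are arbitrary assumptions.]

```python
import numpy as np

def conv2d(image, kernels):
    """Valid cross-correlation of a 2-D image with a bank of kernels."""
    kh, kw = kernels.shape[1], kernels.shape[2]
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((kernels.shape[0], oh, ow))
    for k in range(kernels.shape[0]):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    """Rectification: keep only positive responses, like a firing-rate floor."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample each feature map by taking local maxima."""
    c, h, w = x.shape
    h2, w2 = h // size, w // size
    return x[:, :h2 * size, :w2 * size].reshape(c, h2, size, w2, size).max(axis=(2, 4))

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))     # toy "retinal" input
kernels = rng.standard_normal((4, 3, 3))  # 4 local-filter kernels (random here)
features = max_pool(relu(conv2d(image, kernels)))
print(features.shape)  # (4, 7, 7): four feature maps, spatially downsampled
```

In real deep learning systems the kernel values are learned by optimization over large datasets; random kernels are enough here to show how one layer of filtering, rectification, and pooling transforms an image into feature maps, and how such layers can be stacked.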
We've seen this strategy work in the natural sciences, although it took decades. So what are the lessons? If it took decades, what lessons can we take to make it go faster? Lesson 1-- integrate efforts from natural science and computing and engineering. If we do that, it yields big payoffs in both disciplines that neither could achieve on its own.
Lesson 2-- don't just wait for bottom-up approaches to somehow self-assemble. Being bold about building and testing integrated models has these kinds of payoffs. And it gives a better understanding of the underlying components that would not have been achieved by studying those components in isolation.
We saw that in the visual system, where we were studying the neurons. We didn't know how they individually worked. But when we started to assemble them into what the whole should do, that gave us deeper insight into how it actually works.
Lesson 3-- we do not need a perfect understanding of all aspects of the brain's components to meaningfully understand intelligence. The models we use today are approximations of what neurons, for instance, actually do. But they are already producing meaningful movement in the technology space and meaningful impact in the science space as well.
Lesson 4-- the first 200 milliseconds of visual perception is certainly not all of human intelligence. And by comparison, the models that come out of it, even though they're powerful, are already visibly unable to do many of the things you would like intelligent systems to do. You'll hear more about those limits, and how we're trying to go beyond them, later today.
So how do we take those lessons and supercharge this strategy? We're enabling teams of people to make big integrative bets that individual groups and labs could not otherwise make, learning from those lessons. This requires new organizational approaches, which we refer to as missions, and significant engineering resources and personnel to enable them.
Those enablers come in two types. One is things like benchmarks and platforms for evaluating models, for bringing natural sciences data up against models and alternative hypotheses. You'll hear about that from Katherine Fairchild.
The other is engineering platforms that allow us to build new types of models and scale them up. You'll hear about one of our bets on that from Vikash Mansinghka later today. So you're going to hear about our big integrative bets broadly, but also the elemental bets that we keep fueling, which fuel all of the underlying research as well. The picture you should have in your head is going from where we are today to a future with a science of intelligence.
So let's think about that future. To step back and think about what the future would look like, I want to point to a quote from the late Patrick Winston, who was the director of the Artificial Intelligence Laboratory here at MIT for 25 years. This quote is still inspiring to me today, and I hope it is inspiring to many of you.
He wrote, "Imagine a world where human intelligence is truly understood. Instead of useful but narrow systems, such as Alexa and Siri, imagine systems as smart as we are that would change the world." Now, that sounds great. But let's think about this a bit. Change is great, possibly. But change could also be scary and bad. So the goal of understanding human intelligence has risks to society that must also be considered.
For example, many of these risks are shared with AI technology today. Some of them are things you might already be thinking about or hearing about: privacy invasion, bias and discrimination, possible job loss, and social or physical weaponization of the technologies that result from these kinds of scientific gains.
The way we think about this in the Quest is that we have at least three ways of approaching these kinds of issues. We consider them through the lens of our unique scientific approach, that is, trying to understand human intelligence, which leads us to think about the pros and cons under this kind of understanding. All forms of scientific understanding tend to have pros and cons, and this one does too.
For example, the ability to read minds might emerge from a deeper understanding of how human intelligence works. That poses a potential risk to society, but it's also what enables compassionate care, understanding what people need. Understanding intelligence more deeply might also allow us to augment our own intelligence, which in some cases could have negative connotations. But that's the same kind of understanding that could allow us to ameliorate learning disabilities. Again, pros and cons.
Human jobs might be replaced, especially since the things humans are good at are exactly what we expect this understanding to give rise to. But on the flip side, there are many jobs that people really don't want to do, or that we need people to do, so relief from unrewarding work is a potential positive.
So this is how we in the Quest think about these things. We stay aware of these pros and cons as our research goes forward, so we can discuss and guide policy around possible technological uses of the scientific understanding we seek.
The Quest also sits within a university, which allows us to leverage, in this regard, the social and ethical responsibilities of computing effort from the College of Computing. And our work is fundamentally about trying to understand human intelligence, which leads to a strategy that already wants to understand human ethics and how it arises, rather than needing to bolt on ethics, as a standard AI approach might.
To close things up, I'd like to return to that quote from Patrick. Now, he said this, and I took you down this path of asking what "change the world" really means, because these risks are something we need to pay attention to. But the quote is still inspiring to me.
So let's consider the rest of Patrick's quote. With this understanding, instead of just knowing what works in K through 12 education, researchers and educators would know why it works. We could revolutionize the education of people with special needs and provide compassionate care for the aged and challenged.
And systems that recognize how culture influences thinking could help avoid social conflict. And mental health could be understood at a deeper level, to find better ways to intervene. This, again, is all from Patrick Winston. I find all of these things still inspiring today.
For the rest of the day, you'll hear about many of our bets and the research going on. My colleague Josh Tenenbaum will unpack some of this for you later. What I would like to end with is to have you try to imagine a world where human intelligence is truly understood. Thank you.
[APPLAUSE]