CBMM10: CBMM after 10 years — the Quest now
October 31, 2023
October 6, 2023
Boris Katz, MIT; Andrei Barbu, MIT; James DiCarlo, MIT
TOMASO POGGIO: We are here, as I said, because it has been 10 years of NSF funding. And so I want to tell you a little bit about where we come from, the history of CBMM, and where we are going. And I want also to remember and thank all the students, postdocs, staff, and faculty who have powered our scientific adventure over the last 10 years.
CBMM started with a vision that I share with many of my colleagues, a vision that has shaped my scientific career. I have always believed that the problem of intelligence, and especially human intelligence, is not only one of the great problems in science but, I personally think, the greatest of all. We can argue why, but that's my personal view.
And so CBMM was an embodiment of this vision. Ten years ago, with Josh Tenenbaum and others, we realized that things were starting to change, and we managed to get funding to try to understand intelligence by leveraging new contributions from the areas of cognitive science, computer science, machine learning, and neuroscience.
And we wanted to create a new field, the science and engineering of intelligence, dedicated to developing a computational understanding of human intelligence. Now, I want to tell you about the people who were involved in starting this. The trigger was Rafael Reif, who was then Provost at MIT, who came to our department, BCS, and challenged us to come up with a new research vision for BCS.
And then Mriganka Sur, who was the chair of the department, suggested that Josh Tenenbaum and I try to meet the challenge. So together we started what we called the Intelligence Initiative. We had a very successful workshop at the American Academy with faculty from across MIT, and we were impressed by how many were interested in the fundamental problem of intelligence. After that, we distributed some funding that Marc Kastner, the Dean of Science at that time, made available.
But then, in 2011, came MIT's 150th birthday, and there were five symposia organized for it. We organized one of the five, and we called it "Brains, Minds, and Machines."
And this was the proposal we had for that symposium, accepted in 2011. It was very successful. These are images from the past that I'll let run as a slideshow; several of you may recognize yourselves, maybe a bit aged. It was impressive how many people came and were enthusiastic about it. The first day, which we called the golden age, included Marvin Minsky and Noam Chomsky, meeting together on a podium for the first time.
And other people, like Sydney Brenner and Patrick Winston and Emilio Bizzi. The last day was a marketplace for intelligence, in which we had IBM and Google and Microsoft, and Demis Hassabis, Amnon Shashua, Coby Richter, and so on. There was enormous interest. And so, on the heels of this, we made a proposal to NSF and convinced them that it was the time to do something.
And one of the people who was really instrumental in this part of the story, in getting funding from NSF and formulating the principles of CBMM, was Patrick Winston, who died, unfortunately, a couple of years ago. He was a great man, and we really miss him, but I wanted to acknowledge his contribution.
Now, we have had a lot of great students and postdocs and faculty over the last 10 years. There are people from NSF-- you have seen them, John Cozzens and Dragana-- and our wonderful advisory committee and faculty. Don't try to memorize everything, because it will run again during the break, so you can check it and reinforce your memories if you want. But our research was turbocharged by all the great people we had.
And we ended up with what Philip referred to: quite a trail of published papers. We had collaborations with a number of institutions, and also with non-academic outfits, companies like DeepMind and Intel through Mobileye, and so on. The next slide shows some of our research. It's difficult to speak about 10 years of research among so many different great people, but as Philip mentioned, we have about 700 or 800 papers.
And it's not so much the number; many of them are very important milestones, results, and progress in the science of intelligence. Our research plan in the first five years was really about trying to make progress on some of the broad questions you can ask about intelligence. How does it develop? What is the hardware of the mind, the circuits, so to speak-- the equivalent of transistors and gates?
What is the connection between vision and symbols? We also added a theory thrust, thrust 6, trying to think of a theoretical explanation that would glue these different but related projects together. And there was a set of seed projects in audition, decision-making, and face recognition that were important for injecting new blood into the center and opening up new areas of study on different aspects of intelligence.
The development effort has in the meantime become a separate major project, led by Josh, thanks to the generosity of David Siegel, who will join us later. Now, for the second five years, we tried to address an amazing ability of human intelligence: the fact that we can answer an infinite number of questions about the surrounding visual environment without any pre-training.
This is closely related to what is really a very useful illusion, one that I would call the visual equivalent of cogito ergo sum: video ergo sum. It's something like the visual consciousness you experience when you are in an environment and can move around it and, as I said, answer any question about it.
And some of this comes from a neglected aspect of human vision, which the next slide gives a glimpse of: we really see, at any one time, only a small part of the image. What we think we perceive of our surroundings is a computation, an illusion fabricated by our brain-- of course, a very useful one.
In the last five years, we pushed an architecture for vision; the next slide shows how one could address the different parts that this kind of visual consciousness would require. Here is essentially a list of papers, just to give a feeling for the many topics we have worked on and the results we got.
And the next slide shows the different modules; in the previous slide, you saw some of their papers. On the science of human intelligence, we have shown that some deep neural network systems are surprisingly accurate models of the mechanisms of the visual cortex in primates.
And on the theory side, we have developed a theory of invariance in visual object recognition, and also shown that deep networks, but not shallow ones, can escape the curse of dimensionality and lead to feasible computations. And in the science of human development, we produced the first steps toward computational models of infants' core common-sense knowledge.
And on the engineering side, CBMM investigators demonstrated that compositional neural networks enable robots to generalize better and more efficiently to new scenarios. So this is the past; now the future. Where do we go from here? I want to reflect for a moment on the past and on ideas about the future.
So in the past years, we expected progress in the science of intelligence to also have an effect on engineering. And this happened. You can ask why. Consider some of the success stories of the last 10 years-- take, for instance, DeepMind's AlphaGo, playing Go better than human champions, and its successors in chess and other games.
And Mobileye, which is Amnon Shashua's company, where the problem is building a system that is an intelligent driver, like humans can be. Both of these success stories were based on algorithms-- reinforcement learning and deep learning-- that have their roots in neuroscience or cognitive science.
Reinforcement learning goes back to work by Hebb and others, and deep learning is really rooted in the work of David Hubel and Torsten Wiesel, recording from the visual cortex of cats and monkeys at Harvard. They provided the basic hierarchical architecture that deep networks are still using today. So, in fact, we should continue to try to understand human intelligence. And the main reason is, of course, what I said already: it is the greatest problem in science.
But then there are also long-term applications you can think of. The one listed here is kind of science fiction, but not so much: we may want, and may be able, to expand our intelligence and our memory at some point by interfacing more directly with machines. And of course, in order to do that, you need to know the protocols of communication, where to put the plugs, and so on and so forth.
But there is something else, looking around at the situation today: for the first time in the history of mankind, we have systems in addition to us that pass a Turing test. Thinking about GPT-4 and the like, we can discuss how intelligent they really are, but I think the Turing test, as conceived by Turing, is passed by these systems.
And so there is a wonderful opportunity. These are some of the systems: we have the human brain, of course, but then we have transformers, and maybe some of these others that are reaching levels sufficient to be helpful to humans-- in programming, for instance, or in writing. In general, I think they are, in some vague sense, Turing intelligent.
Now we can study and compare our human intelligence with these systems; we can look for similarities and differences. I think it's a wonderful opportunity for a comparative study of intelligence. Are there common principles or not? I personally believe there are. The question is whether there will be many, like in physics-- like conservation of energy or of mass-- or just a few, like in biology-- like the helical structure of DNA. That's to be found out.
But I have a number of reasons to believe that there are fundamental principles, fundamental in the sense of physics. Because of time, I don't want to bore you with one such potential principle; I will just say that this is a wonderful opportunity that we have.
And this suggests a strategy for research in this era of AI systems: comparing them at the cognitive level, with each other and with human intelligence, in terms of what they can or cannot do; potentially going into more detail, trying to understand the differences between the circuits-- the logical circuits and simpler instantiations of the behavior-- by looking inside transformers and inside brains; and ideally coming up with some fundamental theoretical principles.
So you can ask, why do I need a theory? OK, here I want to tell a little story about Alessandro Volta. He was a professor in Pavia, and he was made a count by Napoleon in 1800 because of his discovery of the pila, the battery. It was an accidental discovery, and he made it for a typical academic motivation: he wanted to show that a colleague of his, Galvani, was wrong.
But the point is that this was the first time there was a source of continuous electricity in the world. Until then, electricity was sparks. With Volta, suddenly there was continuous electricity, so people could study it. And within a few years, the whole of electrochemistry was developed. There were electric generators. There were electric motors. Volta himself designed a telegraph line between Pavia and Milan, 30 kilometers away-- it was not built, but he designed it.
Information until then traveled at the speed of a horse; after Volta, at the speed of light. So it was very important. And there were a lot of applications, even if people did not understand how electricity worked. There was no theory-- that's the point I want to make. It was only 60 years later that Maxwell appeared.
At that point, there was a theory of electromagnetism. And of course, the evolution of electricity grew exponentially, in terms of computers and the internet and AI and everything in between. This is just to say that that's one reason we want a theory. The next slide shows other reasons. For instance, if we understood how transformers really work in detail, we would have a real explanation of how they work; the lack of explainability of large language models is one of the big problems these days for applications.
So this would be a side effect of a theory of these intelligent machines. I just want to say here that there are many fascinating questions, but I think that in the race for intelligence, we as academic researchers should push for science rather than short-term engineering. It's fine that companies do the engineering. It's important. It's useful.
But I think developing a fundamental theory of learning and intelligence is a compelling and urgent need in this panorama of other intelligences appearing around us. I don't think we want them to tell us how our brain works, which is not impossible. Now, let me pass to the next step: we know that we have genes, and we have memes, ideas.
And the evolution of mankind is due to the evolution of memes as much as to the evolution of genes. As Richard Dawkins said, the common aspect of genes and memes is that they replicate: they mutate and replicate. Memes replicate differently from DNA, though; the main engine there, especially in the old days and maybe even today, has been educational institutions. Big universities had a key role in replicating and spreading memes.
And I think CBMM has been trying to do the same. The research part is mutating the ideas, improving them. And then there is the part of dissemination: education, outreach, tech transfer. So let's turn now to three of these other important things, in addition to research, that CBMM has done.
One is the education effort. Ellen Hildreth is not with us today. Susan, who took up the duties from Ellen, is here-- thank you, Susan. There is a great education hub that Ellen and Susan have developed, with online courses and coordination among various institutions for teaching courses on the science of intelligence.
And then the outreach effort, which Mandana was in charge of. She could not be here, but she has sent a video that we can show now.
MANDANA SASSANFAR: Good afternoon, everyone. My name is Mandana Sassanfar. I'd like to give you an update on what we have been doing over the last 10 years, trying to diversify the science of intelligence and the field of computational neuroscience. We wanted to increase the number of women and minorities in the field, and we needed to bring into the field students who are usually not exposed to, or don't have access to, computational neuroscience. So what we decided to do was to hold a quantitative methods workshop to expose students to the skills that they needed.
And then the next step was to bring those students to MIT for 10 weeks, place them in CBMM labs, and really have them do research. We also had academic partnerships with some minority-serving institutions, and that worked very well. We had a collaboration with MITx: in 2014, the first year of CBMM, we created an online quantitative methods workshop, which is still offered to date and has about 200 or 300 students taking the class every semester.
Over the past 10 years, the workshop has had 726 participants in total from 15 participating institutions. Most of them were minority-serving institutions or urban schools. And 90% of the students who attended were female, members of underrepresented minorities, or both. And 121 of those went on to the next step, which was a summer program at MIT or in CBMM.
Over the past 10 years we had 144 funded slots, but of those, 124 were unique students, because some of them came back. Of those 124, 109 have now graduated from college. And of those, 59 are in a PhD program or have finished their PhD. Some did an MD-PhD. A few did master's degrees. Some are in medical school. And a number are also software engineers.
At least eight of them have received NSF graduate research fellowships. One was a Soros fellow. At least two have had Fulbrights. Four had Goldwaters, and these numbers are probably underestimates. We have 13 students here, and the reason I'm showing you these 13 students is that they are all in PhD tracks, all of them at MIT. And we can show you more. I said 59; there is not enough space for 59, but these are some of the students, just to give you an idea of the diversity of the students we have.
I also mentioned that a number of students decide to join the workforce, and the students that we have here have done quite well. For CBMM outreach to really work, it's important to have our students come back and teach in some of our workshops. This is really a way to build a community that gives back.
Just as important are the partners that we have at our partner institutions for broadening participation. We couldn't do this without having people on the ground who are as committed as we are on the other side. For example, this is the group at Hunter led by Susan Epstein; this has been a fantastic collaboration. Here is a faculty member, Dominique, who is at Howard. Dominique was not at Howard when CBMM started, but he's now the most important faculty member we work with for CBMM.
These are three faculty in Puerto Rico; again, we work with three of their campuses. Here we have a partner from the University of Central Florida and a great partner at Florida International University. And again, this is why our outreach works so well.
We also depend entirely on the faculty at CBMM. If we didn't have committed faculty who actually did what they needed to do to make this outreach successful, it would not work. So what do they do? They give lectures at the workshop. They give lectures during the summer. They host students. They hold roundtable discussions. They talk to students about their careers. And it's really an inspiration for the students.
Tommy had three summer students in his lab, and all three of them are currently graduate students. These are two students that Boris and Andrei mentored: one of them has already graduated with a PhD from MIT and is currently at Facebook, and the other one decided to go straight into industry. We also have, for example, two postdocs here who mentored a student in Bob Desimone's lab; she's now in a PhD program. And Diego here is also going to start his own lab.
Now I just want to shift gears, because the workshop with our partner institutions is very important. This is a way for us to really broaden participation. And we have really concentrated on Puerto Rico, because that's where we think the need is greatest. This is really what I want to focus on, because it's also what we hope to continue in the future with some money from the NSF supplement we have received. The idea is: you train some students, and then they train other students, and you basically expand from there.
We work with the teachers as well, because what we realized is that students are not prepared for math when they leave high school. So one of the new initiatives is to train high school teachers in teaching math and machine learning, to introduce them to this, and to make a really easy set of lectures and materials for them to use. And then you have to partner with the education directors in the country.
Finally, I want to thank Tommy and others for this opportunity to be part of CBMM. I'm a biologist and biochemist by training who didn't know anything about AI. I have learned so much in these 10 years, and it has been a pleasure to work with these people. I really want to thank Ellen Hildreth, who has done a phenomenal job, both at the workshop and with all the education curriculum that she developed.
And I want to also thank Joel Oppenheim, who has come to every single CBMM poster session at the end of the REU and talked one on one with the CBMM students to help them navigate their futures. And finally, I want to thank Kaitlin and Chris Brewer, who work behind the scenes. We have a great website thanks to Chris; I'm talking to you right now because Chris is videotaping me. And Kaitlin, who is behind the scenes, has made CBMM just a pleasure to work with. Thank you.
TOMASO POGGIO: OK, and now about the summer school, which has been one of our flagship initiatives since the beginning of CBMM. Boris?
BORIS KATZ: Well, good afternoon, everyone. It is wonderful to see so many friends here today who came for the 10th anniversary of the Brains, Minds, and Machines center. Over the next two days, you will hear about CBMM's progress toward the creation of the new field of the science and engineering of intelligence.
More importantly, we will explore the future. You will hear great ideas from the world's most brilliant scientists about where we are today and where we need to go tomorrow. But in order to achieve these remarkable goals, we need a continuous influx of young, brilliant scientists-- people to carry the torch, to build on the foundations that you created, and to take them into the future.
And 10 years ago, when CBMM had just started, we decided that the best way to do this is to educate young people from all over the world, to make them equally comfortable in neuroscience, cognitive science, and artificial intelligence. And to do that, we decided to create a summer school. We call it the Brains, Minds, and Machines summer school, and it takes place every year in Woods Hole, Massachusetts.
And it has been fantastically, remarkably successful. I would like to show you a short, 2-minute video of some of our students, past alumni of that school, talking about the summer school.
LEYLA ISIK: I had a really amazing experience at the summer school. It's just such an intellectually stimulating environment. I think the combination of the amazing course instructors, the great talks they give, and all the fantastic students being together in a summer camp environment in lovely Woods Hole really makes it a fantastic experience.
IGNACIO CASES: I am an autistic person, and I have to say that there were no barriers. The contents are incredible. The people are incredible. One of those few cases where I felt no barriers in my learning process. That was something very important in my life.
COLLIN CORNWELL: It was this world of people coming together with all sorts of interdisciplinary backgrounds, all there to address the question of how we could use machines to better understand the mind and, vice versa, use the mind to better build machines.
IGNACIO CASES: It definitely changed completely the way I look at research, and it also changed my career in a very, very positive sense.
LEYLA ISIK: This very directly led to new research projects, one of which even resulted in a Journal of Neuroscience paper. But even at a bigger-picture level, being in the summer course really changed my thinking, and hearing from faculty that I didn't directly work with, who worked on different areas than I did, really shaped my future research program.
GEMMA ROIG NOGUERA: I think it was a wonderful experience for the students themselves to have this hands-on experience, but also for me. With one of them in particular, we continued the project, and we published the result at one of the most prestigious conferences in computational neuroscience and machine learning, NeurIPS. So I think it was just a great success.
COLLIN CORNWELL: Almost single-handedly, CBMM summer school opened the door to the vast majority of the opportunities I consider open to me now.
IGNACIO CASES: Oh, my. If somebody asks me about going to the summer course, you cannot miss that. This is a life-changing experience. And you learn a lot, and it's absolutely fascinating. You get new friends. It's unique. It's incredible. It has a very strong and positive impact on the research life of every student that has gone to the summer courses. It's just that good.
BORIS KATZ: All right. Thank you. Well, many of you in this room actually teach at our summer school, and I'm very grateful to you for coming to Woods Hole every summer, for giving lectures and interacting with students. I'm especially grateful to Gabriel Kreiman for his leadership.
But in addition, our school has an important secret ingredient, one that makes it different from other summer schools: the teaching assistants. They spend an enormous amount of time-- pretty much all waking hours, almost 24/7-- with the students, working on their projects.
And they eat breakfast, lunch, and dinner together with the students. They go on biking expeditions together. Much of the learning in the school happens during these informal interactions between students and TAs. And now I would like to introduce Andrei Barbu. Andrei is the head TA of our school. He is the person who makes it run year after year, and he will tell you more about the school's past and its future.
ANDREI BARBU: Yeah. So the school has been going on for 10 years, and we've had 400 absolutely amazing students. Nine of those years were in person; one was the unfortunate COVID year. But even that year, we used the opportunity to try something completely new: we put the school online, we didn't have admissions, anybody could enter, and we really had hundreds of people join.
And even later, in the three sessions after the COVID one, there were people who said they had been to the online version and came to the in-person school because of what they saw there. Like Boris said, the main focus of the school is the project. About one third of the school, at the beginning, is basic tutorials, just getting people up to speed.
Because people come from neuroscience, cognitive science, and computer science, you can't assume that they have any kind of shared background. You have to spend about a week lecturing, in 12-hour days or so, to get people to the point where they can even understand the talks in other areas. Then for about a week or so, there are lectures. And in the last week, the focus is really on the project.
And we've had absolutely amazing projects. For example, we have projects in neuroscience that try to bring it closer to computer science, projects that attempt to use ideas from theory to understand neuroscience better, and projects in cognitive science. Very often these projects, like you heard in the videos, are things that might not have happened without the summer school.
Many of these projects ended up being supervised by multiple TAs who otherwise might not even talk to one another or know about one another. And I have to say that there are many people in this room, many people in this building, whom, being across the street in CSAIL, I wouldn't know and wouldn't be working with if it wasn't for the summer school. It's really a wonderful way to build a community.
Every year, we have about 35 students, about 13 TAs, and 30 faculty. As you can see, the TA-to-student ratio is very high, just because it takes so much work to supervise someone doing a project completely outside their area. We really encourage students-- and the vast majority do this-- to say: hey, if you're coming in as a computer scientist, do a project in cognitive science or neuroscience, and try to do something new.
So we've had about 400 students go through the summer school so far, not counting the folks who were there online during the COVID year, and about 60 TAs. Many students come back as TAs, so this is a multi-generational opportunity. In particular, many TAs come back when they are more senior students. So if you're a new TA, you might work with a more senior one in order to supervise projects and get your hands dirty. You get a feel for what it's like to work with students.
And that really builds confidence, in particular for URM students, for students who might need a leg up, a confidence boost, to see: oh, I can supervise students. This is something that I actually enjoy and can be productive at. And then you build your way up toward, for example, being a faculty member. We've had many TAs who have moved on to be faculty members.
We've had countless papers, postdocs, jobs, and collaborations come out of the summer school. Many people in this room are postdocs in part because they found their advisor through the summer school. And I think the most amazing part is that about 90% of the students who go to the summer school report that it was a formative experience for them, that it really changed the direction of their research in a fundamental way-- an astoundingly high number, far higher than we ever expected.
I want to give you an example of one such student. I think Mandana mentioned Candace. Candace came to us from Howard University, but even before that, she was at the Naval Academy and had some health issues. So she ended up going to Howard, and then she did a summer internship with Gabriel, and then another summer internship with us.
Fortunately, we won the competition, and she ended up working with us-- sorry, Gabriel. With us, she worked on machine learning and optical flow. But then, as a PhD student, she became interested in problems that have to do with language and vision. She went to the summer school as a student, and there she really tried out crazy ideas like: how does embodiment help you acquire language? Then, as a TA, she supervised a student working on social biases in vision-language models-- at the time, there were no papers on this; now it is a massive field.
And now she's a research scientist at Meta, working on exactly these kinds of ideas: how do we build fair NLP? I think this is a nice example of using the summer school to export the values of MIT, of the Quest, and of CBMM to the rest of the world. The students who go on to work at Meta change the direction of research there. If you don't have such students, you won't have that kind of research.
So this is the basic interaction between students, faculty, and TAs at the summer school. And there are a number of things we want to do in the coming years as part of the Quest. One is to double down on our DEI efforts. We make sure every year that we include people who are underrepresented-- women, et cetera-- as students, faculty, and TAs. We want to do much more of that.
One part of that that's always difficult is keeping the course free. We tried once to tell people that we would have need-blind admissions. It turns out that people immediately self-select: you end up getting applications only from people in the US, from well-funded labs, et cetera, and only from a few labs in Europe. You don't get applications from Africa or South America when you do this, no matter how many places you put it on the website.
So that's important, and we have to work on it in the future. We also want to give students an opportunity to spend time at MIT. As you heard, some of these projects are amazing, and we've published many papers with students at the summer school. But for a lot of them, as you can imagine, it's really hard to write an entire paper and send it off for publication when there are only three weeks at the summer school.
When that does happen, it is usually because individual labs have funding to bring those students back for a semester and really get those projects ironed out. We have, in the past, organized a few workshops at NeurIPS and other conferences, and I want to do more of that in the future. It would also be nice to take some of the projects we have at the summer school and package them up so that folks at home can try their hand at them.
And you heard from Mandana, we've made various course materials available. Chris has worked with us to publish many of the wonderful talks at the summer school, as well as many of the tutorials. And we've had corporate sponsorship for fellowships. But going forward, we want to expand the range of topics. I think it's very easy, these days in particular, to be very exuberant about the progress of AI and to forget about neuroscience or forget about cognitive science.
But those will play an important role in the future, particularly because, if you look at the scaling laws of models, in the coming years people will reach the practical limits of all the data that's available on the internet. So we want to make sure that we include neuroscience and we include cognitive science. And we want to bring in embodiment in future years-- robotics. And I think Jim may talk more about new faculty members who will be part of this effort.
And we also want to think of this entire enterprise as a way to build the quest community going forward between students, and as a way to onboard students into the quest as a whole. A different way to think about the summer school is that if you disseminate the ideas of the quest widely at many universities, you're going to see things like the quest, things like CBMM, begin to appear in other places. So we can export this idea, not just keep it at MIT and at Harvard. So with this, I want to turn it over to the leader of the quest taking us forward, Jim DiCarlo.
JAMES DICARLO: I want to add my welcome too to Tommy's and Dan's. Thank you all for coming. This is an important time for us, 10 years of CBMM-- important time to celebrate what we've done and also think about where we're going. And I hope that's what you're going to experience over the next day and a half or so. I am going to now say a bit about the future. But before I do that, I want to motivate why I'm even here standing before you.
So as part of the CBMM history, the NSF encouraged us to be thinking about this wisely. So it's like, well, you're going to end in 10 years, as Phil said. And what's coming next? So what's the future beyond this center grant that's now at its end? How do we maintain those programs you heard about-- for example, from Mandana, and just now from Boris and Andrei? How do we take that forward? How do we take the vision forward on the research and keep the DNA of that going in the future?
And that's what we've been working on here. And I'm excited to share that I think we have a path forward. And that is really a credit to both the NSF and all of CBMM in making that happen. It has permanently changed MIT. If you don't feel that, I just want you to know that it's obvious to those who were around 10 years ago and are here at 10 years now: it has permanently changed MIT.
And you've heard a little bit about this from Dan, and I'm going to highlight some of the things that maybe you do or do not know about. So one of the critical things that's changed is that we've hired a bunch of new faculty, basically at this cognitive neuroscience-computer science interface-- the brains, minds, machines interface.
These are some of their faces, just some of them. And I don't want to toot my own horn too much, but I was department head of BCS for 10 years during this time. And a big part of what I saw was an opportunity that we could seize and that we needed to seize. And so for many of these folks, I am proud to say that I had at least some minor role in helping to bring them here. And they have already helped transform where we were into where we are now and where we're going in the future.
We built up, as you heard from a number of people, a new intellectual community of faculty and students, through the summer school, through the research going on by the faculty who are here, and through these new faculty that we've brought in. You heard briefly from Dan that there's a new undergraduate 6-9 major. 6 is EECS at MIT; 9 is BCS at MIT. Hence the so-called 6-9 undergraduate major.
I would say this is like-- here I could toot my own horn and say this is what I did as department head, except I'd be wrong in saying that. All I did was ask Michael Fee, who might be in the room here and who's now our current department head: could you help make this happen? And Michael went to work with our EECS colleagues to build up the curriculum, and with CBMM faculty to design courses to be part of that curriculum.
And as you can see in terms of student interest-- these are the course 9 students, BCS students-- this was, at least for some time, one of the fastest growing majors at MIT, easily outstripping the number of students in course 9 and reflecting the interest in this space. And to us, it reflects the excitement that we are training people who will be the future scientists in this area, in the science of intelligence that we're trying to build.
We've built new training and outreach programs. And you heard about those over the last 15 minutes or so from Mandana, Boris, and Andrei. And I think one of the most important changes, at least organizationally at MIT, you've already heard about: MIT in its wisdom decided to launch a DLC called the MIT Quest for Intelligence. DLCs-- Departments, Labs, and Centers-- are essentially almost permanent structures at MIT, and this one is trying to carry forward and expand these efforts.
It's called the MIT Quest for Intelligence, and you've heard us talk about the quest. And now it's essentially inherited the vision of CBMM. And that is the way we're carrying this forward, both on the research front and these program fronts. So I'm going to tell you a bit more about that going forward. Tommy kind of said this. I'm going to show you graphically.
We had an intelligence initiative funded by our then provost, Rafael Reif, that really got us started. Tommy and Josh Tenenbaum, to their credit, were the ones who really got this going around 10 years ago, at the start of this center, with this vision that we can bring these fields together and integrate them around brains, minds, and machines. And as Phil said, it was very hard to get those grants, and they were successful in doing so. And that nucleated all this effort.
MIT launched the quest for intelligence around 2018. And then soon after, as Dan mentioned-- and Dan is now the dean of our College of Computing-- College of Computing was announced. And the building is going up right next door. I'll show that in a minute. And then soon after that, the quest was able to focus its vision around the intersection of brains, minds, and machines to essentially make that core of CBMM what the quest is really all about.
And it also launched some missions that I'll tell you about, and it began to scale those missions about a year and a half ago. And so this is now where we are: we have a DLC called the Quest for Intelligence-- brains, minds, and machines-- as a research center within the Schwarzman College of Computing. So that's an organizational overview of how it's changed MIT.
Now, this is something that has physical infrastructure associated with it, too, here at MIT. We're all sitting in this building, building 46-- top-down view, we're actually right about here-- that houses most of the Department of Brain and Cognitive Sciences, a lot of the empirical sciences around neuroscience and cognitive science. And there's the College of Computing building, as I mentioned, coming up right here.
It's, I think, no accident that it's right next to us. And one of the other things I tried to do with the department was to push for a bridge from this building to that building, because it's going to be a really impactful thing. And in MIT's great wisdom, they decided that was a good idea. And we are actually the only building connected to the College of Computing.
It's nice that we have EECS, computer science, robotics, and others right across the street here. And the quest will have space within this building. This is the future physical home of the quest for intelligence, brains, minds, and machines. And so it's not just a physical bridge and physical space but, I would say more importantly, an intellectual bridge between the science and the engineering of intelligence, in the original vision of CBMM: brains, minds, and machines.
This is how I think about that vision operationally-- how things are actually unfolding on the research front. And I'm going to say a bit about this. Many of you have seen me show this slide, but I want to say, this is about more than just these three things; it's about the way they interact together in a productive manner. The natural sciences, especially cognitive science and neuroscience-- a big part of what this building and other parts of MIT are about-- are gathering discoveries, data, measurements, and findings about the brain and the mind.
Those things inform ideas of how we should be thinking about intelligence. They have to be coupled with ideas-- theories and principles of how an intelligence might work-- and with builds of computational systems that are meant to embody those ideas, bringing these two things together into what I refer to as integrated computational models of intelligence. And that's the comparison: we build out those ideas and then compare them with what we're seeing in the natural sciences.
And you heard a bit of this in Tommy's slides. These things do two things at once. They serve as new hypotheses about the mechanisms of human intelligence. Like all models, or all understanding in human history, they're not ever going to be perfect on their first round. But they serve as hypotheses of what might be going on in various domains of intelligence.
Those then drive new experiments to say, well, maybe these models aren't quite right here, and here are other places we can test that. And we can iterate that back in to further improve these models, essentially running the pure scientific loop here to understand. These are also formal models-- they're not just vague ideas. They are built systems.
They can be used in the near term to drive computing and engineering possibilities. And so keeping these grounded in the idea that they can drive technologies is an important part of this overall activity, and of how these things inform each other-- because those builds can also then be analyzed, as Tommy pointed out, for deeper theoretical ideas. We have formal models whose ideas can then sharpen the builds that we do over here, which then again inform, for instance, the hypotheses of how aspects of intelligence work.
So I hope you can see, this is the virtuous loop that we are excited to execute-- in fact, have been executing. And I want to say the goal is to build a science of intelligence by running this kind of process, and to do this in domains of intelligence that I'll tell you about in a minute. So you might turn to me and say, well, Jim, that's a nice picture. Why do you even believe that could work?
And the reason I believe it could work is because I've been part of seeing that work in an area that I know well. My lab works on visual processing in humans and non-human primates. And I could say, for decades we had a lot of measurements of what might be going on in certain areas of the brain, neurons measured across a bunch of visual areas, behavior that we can measure-- a rich set of phenomena from the natural sciences.
But we didn't quite have a formal understanding of what was actually going on. What changed is that the parallel development of models-- again, think of built systems that could serve as hypotheses for what might be going on-- started to mature, in part informed by these ideas. Think of the original work of Hubel and Wiesel and others that Tommy mentioned informing these model builds, with new engineering to optimize the parts of those models that neuroscience couldn't quite measure. That led us to this interplay between models that now serve as some of our leading hypotheses about at least ventral visual processing in primates.
And these also served as some of the leading models for computer vision applications. And so this is an example of how these fields interacted with each other. And this, I would say, has really transformed at least how we think about visual processing in non-human primates. And this is just one domain of intelligence. Now, that's an example of how it could work, mapped out for you in just this one area that I know well.
But of course, our ambitions are far grander than just vision. And so that's where we're headed next. So, just to map that for you: integrated computational models of fast sensory intelligence came out of this and extended to other sensory domains, like Josh McDermott's work in audition. These kinds of deep architectures became, as I mentioned, some of the leading approaches in computer vision; they were generalized with deep learning to apply to other types of data, became deep learning in general, and became much of what we currently think of as AI over the last decade or so. At least, I'm going to say, "AI."
And they are, as I said, some of the leading hypotheses about at least the first 200 milliseconds of human visual intelligence. OK, and I'm going to come back to that in a minute. And this loop continues today and continues to drive this understanding forward. But there are lessons here. This is an ongoing process, but there are some lessons that we take from it and are trying to carry forward as we take this work ahead. So what are they?
So the first one is that if you can integrate efforts from the natural sciences and computing and engineering, that can yield payoffs back to both fields that, I would argue, neither field could have achieved on its own. And so that's bringing the fields together to the benefit of both. And we saw that in the example that I just showed you. And we think that will happen again in other examples. It is happening again in other aspects of intelligence.
The other thing is, we can't just wait around, building bottom-up from measurements of the elements of the system and hoping that they will self-assemble. What changed was being bold and building and testing integrated models that were meant to capture entire domains of intelligence-- properly scaled. We didn't try to build a human and get it to stand up and walk. We got it to do aspects of intelligence that were challenging but possibly doable. And that led to a better understanding of the integrated components and how they work together, and a better understanding of the components themselves-- studying the system by that essentially top-down approach, with guidance from what you're measuring in it. And again, vision is a great example of that, but it's happening in other parts of the field, as well.
Also, it turned out we didn't need a perfect understanding of every aspect of the biology. We'll talk about this on our panel tomorrow, and about how much that might evolve going forward. But even with approximate models of the neural elements, we gained a more meaningful understanding of aspects of, for instance, visual cognition.
And we'll talk about this on a panel tomorrow and see how people feel about that lesson. And probably the most important lesson here, one I've already alluded to: the first 200 milliseconds of visual processing is, of course, not all of intelligence. All the models that I'm talking about here are powerful in explaining vision, and they're actually powerful in the AI space when generalized, as I said. But we still see that they're limited in lots of ways.
And so trying to go beyond today's models, which were motivated in that way, to tomorrow's models, which will explain much richer sets of data and be much more powerful in the AI space, is what we're most excited about here. So that's a big lesson, as well. OK. So this is, I hope, motivating to all of you about why we think about these things. But now, what are we actually going to do?
Well, we have to take some of these lessons and supercharge them to make this real. And so going forward, we need to enable teams of people to make big integrative bets-- to say, don't just wait and do the thing you usually do, but take integrative bets across fields, building systems that can span entire domains of intelligence, bets that individual groups and labs could not otherwise make.
So we're trying to make integrative bets. And this requires organization-- we're doing that around something we call mission teams. It also requires significant engineering resources to enable and support that: for instance, to build platforms that allow us to compare and contrast how current models relate to current data, at both the behavioral and the neural levels-- something that Tommy also mentioned as an important forward-going direction.
That's just one example of where we need to deploy, and are deploying, engineering resources. And what we're trying to do is take the DNA that we started with together, all of us here at CBMM, and amplify it with these kinds of approaches to reach this future of a science of intelligence. Now, we are not starting from zero. We've had ideas of where we should start. And Tommy showed this slide a little while ago. These were the original thrusts of CBMM.
They still stand, importantly, as the motivating thrusts of what we're doing. And in fact, here is how we are taking these things and the work that's gone on in them-- you heard about all the research and the papers that have come out of that-- and organizing what we're learning into mission areas. These were thought out in the context of the different thrusts that we started in CBMM.
I'm going to list them for you. These are the core mission areas right now where we're trying to make integrative bets. One is development of intelligence: reverse engineering the common sense core knowledge and its learning algorithms. Another is embodied intelligence: trying to understand how physical agents can understand and interact with the world. Another is the language mission, as we call it: understanding the relationship between human language and, really, human cognition.
That's a deep question in cognitive science, and current models are allowing us to engage with that question. Another is collective intelligence. We often forget that our intelligence does not arise just from the individual but from how we work together as groups. How do we do that in ways that make us more powerful as a group than as individuals? And another mission, called scaling inference, is developing platforms for building the kind of next-generation models that we think are not only going to possibly transform AI, but will transform how we think about how brain processing works, as well.
And we believe, therefore, they will enable human-scale inference. Now, I'm listing these as if they're almost separate things, but of course, that's just scoping things to make progress as we get started. We believe these things are set up to also interact with each other in meaningful ways going forward.
Of course, we need theory to wrap all of this. And Tommy mentioned this, as well. If we want a deep understanding of the principles of intelligence, that will touch all of these different areas. So this is how we're going forward from where we've been. And now, again, the goal is to try to take this from where we are to where we need to go in the future.
I should emphasize that every one of those missions involves multiple PIs from multiple areas. It involves multiple CBMM PIs, and it's really driven off of multiple CBMM thrusts, as I mentioned, and it's now imbued with engineering resources to help support that, as I also mentioned. So again, this is our goal; this is how we're working toward it and how we're organized. Today is not the day that I'm going to talk about all of the missions and what they're doing in detail. We will have events later where you'll get to hear about that.
Today I want to also just remind us in the short time I have of why we're actually here. And this is really hearkening back to the original vision, and Tommy articulated this at the start-- that natural intelligence and understanding its mechanisms is really one of the greatest questions of all time, the greatest scientific question.
As my colleague Nancy Kanwisher would say-- and there she is in the audience-- this is right up there with understanding our position, our role, in the universe, and understanding the origin of life on Earth and the evolution of life on Earth. It's one of those great scientific questions. But beyond being a great scientific question, it will change the world as we get successes and victories along the way. Those successes and victories are changing and will change the world.
So here's a way to think about this-- I like this quote, and I'll get to its source in a minute. Imagine a world where the mechanisms of human intelligence are truly understood. Try to imagine that world. Now, this is not my quote. This quote is actually from the late Patrick Winston, who was the director of the MIT AI lab for a number of years.
And paraphrasing Patrick, he went a little bit further. Imagine that world, as he would say: instead of useful but narrow systems such as ChatGPT, imagine systems that are as smart as we are-- systems that could change the world. Instead of just knowing what works in K through 12, researchers would know why it works, and that would allow them to do things even better.
Systems that recognize culture, how culture influences thinking, could help avoid social conflict. Again, think about that collective intelligence and how we can work collectively, and also how that can go awry, leading to social problems in our culture and how we might detect and possibly avoid those. The one that's near and dear to my heart as an MD-- mental health could be understood on a deeper level. And we might see new ways to intervene.
We're already seeing the vision models showing us new ways to think about intervention that we would have never been able to think about without these models. And we think that will be impactful even more as we get these other aspects of cognition modeled at these more formal levels. Again, Patrick saw the future in his statements here. And I'm just here as an ambassador to channel back his original vision.
Now, these are, again, great ideas. This is very aspirational. But walking the walk is hard. Actually taking the steps toward that world where the mechanisms of human intelligence are understood in engineering terms-- that is hard. And what does it take? It takes a lot of things. First of all, it takes supporters-- the NSF, again: $50 million has been huge in making this happen. And Phil, thank you for being here. Thank you, and your colleagues, for all your support. You've brought us from where we started to where we are now.
Other supporters are in the room; Tommy mentioned David Siegel and others who have supported us along the way. It takes a community of supporters, financially, to get this done. Now, dollars are one thing, but it also takes people. This has also been mentioned-- not just today's people, but future people. It takes a community of people who want to walk the walk. I know many of you are in this room. This is why you're here.
We are trying to grow that community into more people who want to do this, because this is not an individual lab project. This is essentially a human-level project that we're trying to lead here. Those are at least two of the important things it takes, and they've both been highlighted. But another thing it takes is people with vision and leadership to help us go this way.
And here, I want to highlight my colleague and friend, Tommy Poggio, who was really the one who got this all started for us here with Brains, Minds, and Machines. Thank you, Tommy, for everything you've done to bring us from A to B and point us toward the future. I would like to give Tommy a round of applause, if it's not going to embarrass him.
So hopefully, I've whetted your appetite for where we're thinking of heading. The rest of this time-- both today's panel and later sessions-- will be discussions about where the field stands and where it might be going. And those will be fun discussions. So now that you're oriented about where things sit at MIT, we're going to have a break-- it's now 4:00-- for about 25 minutes. And I think we're back here for a panel at 4:30. So thank you all. See you at 4:30.