Explore MIT Today: Reverse Engineering the Mind: Brain and Cognitive Sciences (1:21:19)
Date Posted:
June 8, 2018
Date Recorded:
June 8, 2018
CBMM Speaker(s):
James DiCarlo, Josh McDermott, Joshua Tenenbaum
Speaker(s):
Leslie Kaelbling, Antonio Torralba
Description:
James DiCarlo, Head of the MIT Brain and Cognitive Sciences Department, leads a panel discussion and audience Q&A related to how the study of the mind and brain, merged with the creation of engineered systems, enables a deeper understanding of human intelligence. This wide-ranging discussion touches on topics such as analogies between neural processing in the brain and deep learning networks, innate knowledge and how infants learn, the challenges of building general purpose robots, human-brain interfaces, technological advances in the treatment of brain disorders and sensory deficits, and ethical issues that arise in brain science and AI. Panelists are MIT faculty Antonio Torralba, Josh Tenenbaum, Josh McDermott, and Leslie Kaelbling. This event was hosted by the MIT Brain and Cognitive Sciences Department and the MIT Alumni Association.
JAMES DICARLO: OK, folks. Good morning. Or good afternoon, I guess. How is everybody doing? Great. Hi. My name's Jim DiCarlo. I'm the head of Brain and Cognitive Sciences. I just want to give you a heads up that this is being recorded. I think I'm supposed to tell you that, since you're in the room.
I also want to say that although I'm the head of Brain and Cognitive Sciences, this is an event that's broader than brain and cognitive sciences. You're in the Brain and Cognitive Sciences complex, but this is really an event about intelligence and efforts at MIT broadly, including both science and engineering. You'll hear some of that from my colleagues. But I get the chance of introducing the event.
And so I'm going to just try to give you a 10-minute introduction about what MIT is doing at the intersection of science and engineering of intelligence, and then we're going to have a panel here where, really, the experts in the field-- and we have four great ones here today-- are going to be telling you a bit about their work, but mostly answering your questions about what MIT is doing in this space.
So again, welcome to all the alumni. This really should be a fun time for you, I hope. Thank you for coming to join us. I'd like to start by pointing out that we in brain and cognitive sciences, our community, are really on a quest, even here in BCS, a quest to understand human intelligence.
And this quest is really one of the greatest problems in science, and it's aligned with the engineering goal of developing artificial intelligence systems. This is, as I mentioned, a unique time in MIT's history. Many of you may have heard about the MIT Quest for Intelligence that was recently launched, because this is a moment for bringing together the science of intelligence and the engineering of intelligence to make new progress. And I'm going to try to set that up for you here today.
So everybody's heard about AI. AI is a big thing in the news right now. AI is taking over the world. But rest assured, as you'll hear from my colleagues, we don't really have real AI yet. We have a lot of AI technologies, but real AI is a very, very hard problem. And you'll hear that better from my colleagues.
But we know, as brain and cognitive scientists, that some notion of very advanced intelligence is possible, because all of us are operating it behind our eyes right now, with a 20-watt machine that is doing amazing things. Things that current machines cannot yet do: write poetry, communicate, do math, create civilizations. Machines don't even have the common sense of an 18-month-old, as you'll hear from some of my colleagues.
So as brain and cognitive scientists, even though we don't really have AI yet, we know that some really advanced intelligence is possible, because we study that system every day. And the brain, as you may know, is really a remarkable device. Some people find this an astonishing hypothesis: that this 20-watt machine that lives behind your eyes is really responsible for who you are. Not just your intelligence, but really who you are.
And I think that's best captured by this quote from the late Francis Crick. "You, your joys, your sorrows, your memories, your ambitions, your sense of personal identity and free will, are, in fact, no more than the behavior of a vast assembly of nerve cells and their associated molecules." So somehow, the assembly of these billions of neurons gives rise to intelligence, and really, you.
And we like to, in this building, often contrast that with current machine systems. And here's one that's not nearly as intelligent as we are, but it's pretty cool in the stuff it does. And this is a quote. Anonymous is actually me. So I wrote this quote.
[LAUGHTER]
"You, your remarkable favorite app, all its performance and amazing user interface is, in fact, no more than the behavior of a vast assembly of transistors." And so of course you know this is true and this is true, but the magic lives in between. Right? So you can think about that here as like, this device is ultimately built on transistors, and it gives rise to an amazing user interface, and somehow, those are assembled in interesting ways, and hardware, and there's software layers that are really engineered in very clever ways to give rise to something amazing.
Now, that's still not fully human intelligence, but it's quite amazing. And we as brain scientists often draw analogies from our colleagues in engineering and from current engineered systems. We know the elements of this machine, the billions of neurons and their trillions of connections, and we study the complex, remarkably intelligent behavior and cognition of these systems, human and animal, every day.
So we know the parts, and we can study the behavior. But what goes on in between is where most of the science in this building happens: how do these elements assemble into circuits, living in brain regions, running algorithms that underlie what we see as intelligent behavior. That's really the mission of this building.
What I want to tell you now is that there have been a lot of advances at the intersection of ideas from engineering and ideas from science. And that's really the foundation of what the MIT Quest for Intelligence core is about.
So here's a picture of the Quest for Intelligence core and how I like to think about it. What we as brain and cognitive scientists have been doing is making discoveries and measurements, building an understanding of what's going on in the mind and the brain, call that the science side, that informs the building of models or algorithms, intelligence algorithms or attempts at them. These are ways of assembling the parts we find in the brain into different motifs, to ask: if I assemble things this way, does it give rise to interesting, intelligent behavior that looks like the intelligent behavior we observe?
We spend a lot of our time in this building trying to do this. And what's really exciting right now is that we have one of the best computer science departments and engineering schools in the world essentially trying to do the same thing: assemble parts in various ways to give rise to intelligent behavior. Both fields have made a lot of new connections, especially around certain styles of neural networks, which I'll show you in a minute. And there are lots of other potential synergies.
So we're united around the idea of building things that serve both as models for the science and as technologies for the engineering. Both fields are contributing, and the payoffs go in both directions. That's what the Quest core is all about. And the hope is that by doing these things together, we will replenish the well of AI algorithms that can be disseminated through the MIT Quest for Intelligence Bridge, which is another aspect of this larger project.
The group here today are all people working in areas of the core, and we're going to talk about how the core is going to proceed.
Now, this all sounds cool and looks pretty, but to show you that it's actually real, I want to give you a little bit of history. There have been a lot of advances in AI recently, and many of you may have heard of a thing called deep learning. I want to give you a brief sense of where that came from.
So does everybody know who this is? That's Rafael Reif. Right? That's our current president. And how did you do that so quickly and effortlessly? That ability to recognize a face was a very hard, long-standing problem in both computer science and brain and cognitive sciences. For many decades, brain and cognitive scientists spent time measuring the brain areas schematized here: a series of neurons in a so-called deep neural network, millions of neurons that process this image up to behavior, to say, ah, that's Rafael Reif.
It turned out that by making a lot of these kinds of measurements, we gathered a lot of data, but we didn't have working algorithms for how this took place. In parallel, engineers were doing computer vision, trying to recognize faces, and they were largely working with approaches that were distant from what the brain does, though some were brain-driven.
What happened very recently is that some scientists began taking an intersectional approach between science and engineering, building networks that look like this one here, which are now called deep convolutional neural networks. These were styled after the brain's neural network and used learning rules derived from cognitive science. By great engineering coupled with constraints from brain and cognitive scientists, they got to what are modern deep neural networks for doing things like computer vision. And you'll hear about those especially from my colleague Antonio Torralba, who's here.
And that is really quite remarkable. Then others, including my lab in particular, showed that the internals of these networks look a lot like the internals of this brain system here. So this is an example of an engineered system that is now informing our understanding of how the brain works, but which was itself driven by studies of the brain. These matches can be found at all levels, shown by these areas here, and that's the ongoing science that my lab and others are doing right now.
You can also see that this system has only these forward arrows; it doesn't have the feedback and recurrent arrows, things that we know the brain has. This, we think, is the frontier: building deeper models that have a deeper sense of seeing and understanding. But that's the edge of this visual science at the moment.
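To make that concrete, here is a minimal sketch, in PyTorch, of a deep convolutional network of the kind described above. The layer sizes are illustrative assumptions, not the architecture of any particular published model, and note that the computation is strictly feedforward, with none of the feedback or recurrence just mentioned.

```python
# A minimal deep convolutional network sketch: stacked convolution ->
# nonlinearity -> pooling stages, loosely analogous to successive visual
# areas, followed by a readout to behavior. Layer sizes are illustrative.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2),   # early, "V1-like" filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # intermediate features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Readout from the last feature stage to, say, face identities.
        self.classifier = nn.Linear(64 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)                   # strictly feedforward:
        return self.classifier(x.flatten(1))   # no feedback, no recurrence

net = TinyConvNet()
scores = net(torch.randn(1, 3, 224, 224))  # one 224x224 RGB image
print(scores.shape)                        # torch.Size([1, 10])
```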
Zooming out a bit, what I want you to take from this is that it was the intersection of science and engineering that led to great advances in networks for computer vision, these deep computer vision networks. That then exploded into something broader, referred to as deep learning, which is now being applied to a range of fields. So when you hear AI today, what people mostly mean is applications of deep learning, which is just one aspect of AI. It's not all of AI. It's just a very exciting area at the moment.
And again, it's even a small part of what the brain does, as I'll show you in a minute. But this is a great example of a success of these fields merging together. And so that's why we think that we could do even better than this as we go to other aspects of intelligence that I'll mention. Indeed, right here.
So this quest for intelligence algorithms about human intelligence that I introduced you with, I've just been telling you about what's called core recognition. Your ability to say that was Rafael Reif in about a couple hundred milliseconds. That's really, say, about-- we don't know exactly, but let's say about 5% of what your brain does. Right?
So there's a whole bunch of other things that your brain does, even just in seeing and understanding, let alone bringing in audition, which you'll hear about from my colleague Josh McDermott. We still don't have systems that deeply understand a visual scene, let alone have broad common-sense intelligence or the ability to move around effortlessly in the world and plan activities in it.
So there's this big, dark well of things that we know something about from the science side, and we've got a lot of great engineers working on them. Leslie Kaelbling, one of my colleagues here, will speak to some of those things, and Josh Tenenbaum, my other colleague joining us, works on them too. So we know something about them, but there's a huge opportunity to build something even more advanced than deep neural networks, today's AI, which is really just stylized after a very small portion of the brain.
And so we as a group are quite excited about this intersection between science and engineering, organized around what we refer to as moonshot projects. We talked a little about recognition, which is really our first moonshot project, where there's been a lot of progress. More generally, there's perception, learning, meaning, creativity. I'm not going to ask you to read all this, other than to say these all involve science and engineering, and you'll hear from my colleagues which of these they're most excited about.
One of the most important things about this slide is that we are still forming some of these moonshots; there are ongoing discussions about which problems we work on next. But we know that the joint science and engineering approach is what's been so productive in the past, and it's what we are pushing as we go forward.
I want to note that these ideas existed in a broader field even decades ago: the idea of intersecting the science of human intelligence with engineering. What's quite exciting right now is that the science has progressed enormously in its ability to measure behavior and to measure neurons, and the engineering has progressed to new algorithm styles, which you'll hear about from some of my colleagues, as well as to enough computational power to run these things.
But what I'm personally most excited about is that both engineers and scientists, when you go to conferences, are building with neural networks stylized after the brain but driven by engineering approaches. So we're working in a kind of common model or hypothesis space, and you get a synergy between these fields that hadn't happened in either field for the last couple of decades but is now renewed.
I want to end by saying that we've been talking about humans and the science and engineering of intelligence, with an eye to how this might inform real AI and real engineering and technology. But if we can understand the brain in engineering terms, that has huge payoffs broadly: for education, for new ways to treat brain disorders, and for understanding who we are, what the billions of neurons in your head make, what makes you you.
What's exciting is that we're doing this not just in science; we can't do it without our engineering partners. So engineering has a chance to help us do these things that are classically the missions of brain science but that, I think, will come out of this science and engineering intersection.
So I'm going to stop my introductory remarks about the core and MIT's efforts of intelligence right there. And I'm going to ask my colleagues here that are listed on this slide to come join me here on the stage. So let's give them all a round of applause. And their names are here.
So before I ask them to start, I want to point out-- oh, that slide went away. If that slide could come back briefly-- but forget it, Chris. There are logos on that slide, and I want to say again: you're sitting in the BCS building, but this is way bigger than BCS. This is--
AUDIENCE: Sorry, should I get up?
JAMES DICARLO: No, just stay. So forget the slide. There are folks here from engineering, from electrical engineering and computer science, folks representing computer science and AI, the [INAUDIBLE] cell. There are folks representing the Center for Brains, Minds, and Machines. And this is just a small portion of a much broader effort across MIT that you're hearing from. It's not just a BCS effort. And again, you can see that in some of my colleagues here today.
So I'm going to start by giving each of them a chance to say a few words about their research and maybe how it connects to intelligence. They can also tell you which parts of what I said were wrong and which parts they disagree with. This is meant to be an open, fun discussion for you to hear from experts in the field about what's going on on the ground. So let's start with Antonio. Antonio, you want to give us a couple minutes of your thoughts?
ANTONIO TORRALBA: Sure. Is this working? Hello? Hello? [INAUDIBLE].
JOSH TENENBAUM: It's kind of working.
ANTONIO TORRALBA: Yes? No? I'm kind of loud. It's probably not recording then, huh?
JAMES DICARLO: While Antonio is trying to figure that-- is it working, Chris? So can you give your titles? Because they were on this slide, but-- you go ahead.
ANTONIO TORRALBA: Now it's working. Now it's working. OK. Well, hi. Hello. I'm Antonio Torralba. I am a professor in the other building, the crazy one, the Computer Science and Artificial Intelligence Lab. I'm from Spain. You cannot tell, probably, because of my perfect American accent. I grew up in Mallorca, a little island in the Mediterranean, very warm and nice. And then I decided to come to cold Boston to work.
So I've been working on artificial intelligence for a long time. And I've loved it since I was a kid. And I-- at the beginning, I wanted to work on general intelligence, which is one of the goals of this quest. But that was too much. I didn't know anything at the time. So I decided to work on something smaller, vision.
Trying to understand how we recognize the world through our eyes. That seemed easy, but of course it turned out to be just as hard as everything else. So it's still not working. But the hard part was actually trying to explain to my parents what I was doing. Because seeing seems so easy to us. You know, you see a picture of a chair, and you know that's a chair. How is it that the machine cannot tell that it's a chair? It seems silly. How do you explain to your parents: no, no, no, I'm really working really hard, but it's still not working? They just couldn't get it.
So one of the interesting parts is that computer vision has evolved a lot over the years. Just very recently, there have been a number of breakthroughs through machine learning, some of the things that Jim was alluding to. And this is starting to work well. By well, I mean that you can actually put it in a product and sell it, and people will buy it, and it will kind of work. Which is always a good thing.
There are still many open problems. For instance, one issue is that machines can learn to see, but they learn to see in a very different way than humans do. You need to train them with a lot of data; it's a very costly and long process to teach a machine to recognize even a very simple object.
Humans just seem to do it so effortlessly. You don't have to make a big effort to teach your kid to recognize every single thing they see; they just get to know all those things. So we are now trying to understand how you can do this kind of unsupervised learning, where you can teach a machine to do something without having to do all that labeling work.
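As a hedged illustration of that contrast (using a common trick from the self-supervised learning literature, not a description of Torralba's own methods): instead of paying for human labels, a system can manufacture labels for free from the data itself, for example by rotating images and training a network to predict the rotation.

```python
# Self-supervised learning in miniature: the "labels" are rotation
# amounts generated from unlabeled images, so no human labeling is needed.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 4)  # predict which of 4 rotations was applied
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

images = torch.randn(32, 3, 32, 32)    # stand-in for unlabeled images
k = torch.randint(0, 4, (32,))         # free labels: 0, 90, 180, or 270 degrees
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])

logits = head(encoder(rotated))
loss = nn.functional.cross_entropy(logits, k)  # supervision without a human
opt.zero_grad(); loss.backward(); opt.step()
```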
Also how does vision relate to other senses? Now that computer vision is starting to work, what is the relationship between vision and audition or touch. So we have collaborations also with, like, Josh working on audition. So it's just a very interesting area of research right now.
And the bigger goals are OK, how do you connect vision with cognition at a more abstract level that doesn't necessarily correlate with the pixels that you see, but also try to interpret what the scene is about, what is happening in the world, can you make predictions about what is going to happen in the future, and so on. So that's my speech.
JOSH TENENBAUM: I can take this. So we have two Joshes here. I'm Josh Tenenbaum. This is Josh McDermott, and you'll hear from him, I guess, in a second. And it's cool. Actually, all of us work together in various ways on projects here. I'm interested in many things. I'm a cognitive scientist here in Brain and Cognitive Science. I'm also a member of CSAIL, so I spend most of my time in this building, but I go across the street to the other building a lot.
And to me, what most animates me is the idea of learning, and the idea that we might be able to understand how kids actually learn. Right? We have things that we call machine learning right now, but as Antonio was saying, not just in vision but in intelligence more generally, they learn in a very different way.
I would say they don't learn; they are trained. And what training means is that somebody, a human, comes up with a problem. And then they build a data set. They either find it or measure it or get their mother to label a bunch of things. For a while-- I think it was maybe a joke, but Antonio's parents always figured very prominently in his research. [LAUGHS] His mother was the main source of data in computer vision for a couple of years. [LAUGHS] Sitting on the island in New York. OK.
Or they generate it. These days, people often use, you know, graphics engine simulators or video game tools to generate data. But the key is that a human engineer or team of engineers comes up with the problem, designs a way to turn it into a machine learning problem, and generates a whole bunch of data and trains their system in a very, very painful way. But a child doesn't just learn to see, doesn't just learn how objects work, but learns everything they learn in a certain way for themselves.
So I'm really interested in trying to work on what is, in some ways, one of AI's oldest dreams: the idea that you could build a machine that actually learns like a child, that starts like a baby and learns like a child. Alan Turing talked about that. So did Marvin Minsky, the founder of the MIT AI Lab. Maybe some of you folks here in red blazers even took classes from him or worked with him or some of his students. Yeah. And it's really a great idea.
But it's one that the field of AI hasn't really been able to take seriously, I think, until recently. And I think that's mostly because we haven't had a serious science of how babies think, how their brains work, and how children actually learn. Well, we've had one for a couple of decades, but AI as a field hasn't really known about it. You know, fields tend to pursue their own efforts, and when things are going along so great, as they have been in AI, you don't really worry about what anybody else is doing; you just go do your thing.
But in parallel, there have been great developments, many of them here at MIT, in this building, but also from colleagues of ours; we have some great colleagues in developmental psychology at Harvard that we work with. People who study the minds and the brains of babies. And it's remarkable: basically the youngest babies you can study scientifically, two or three-month-olds, already have, in their brains and their minds, much of the basic stuff of common-sense understanding of the world that Antonio was talking about, kind of built in.
So we're trying to understand, what form is it built in? How is it built? Through some combination of evolution and genes and early stages of development? And then what changes? A three-month-old already has a bunch of common sense. They already understand, for example, the notion of object permanence.
Like here's an object. Right? You're hearing a lot about objects. By object permanence, what I mean is this: right now you see this object. And then I'm going to put it, say, somewhere where most of you can't see it, but you still know it's there. Right? Or if I were to put it behind this thing here or put it inside this-- well, here, I'll do it this way. Yes. OK. If I were to remove this, and the thing wasn't there, that would be a pretty decent magic trick.
And we know that babies, even young babies, understand this, because we do those magic tricks with them. And just like you, if I were a better magician, if I remove this, and the object is gone, the infants stare at it. They look longer than if the normal, expected thing happens. So from experiments like that, we've started to understand something about what three-month-olds understand about the world and what six-month-olds understand that three-month-olds don't, and what eight and nine-month-olds understand that six-month-olds don't, and so on.
So in the work that I do, we try to build computational models where we use the tools of engineering, machine learning, computer vision, and AI to understand how these babies are seeing the world and how they learn from their experience to get smarter over those first few months and years of life. And then we use those tools to build smarter, more human-like kinds of machine intelligence. And it's really very much in the spirit of that core that Jim talked about, science and engineering mutually informing each other and hopefully taking each other to bigger and better heights. Thanks.
JOSH MCDERMOTT: So I'm the other Josh, Josh M. He's Josh T. We have a lot of this in the department. I study perception. Right now, I mostly study how people hear, but I was trained as a vision scientist a couple of decades ago, and I have pretty broad interests in perception. What I'm most passionate about is understanding how people manage to solve these problems that have traditionally been very, very difficult to engineer machine systems to solve.
And I started my lab here about five years ago. And there were-- there were two things that happened right around that time that really profoundly changed the trajectory of my research. One was the big explosion of AI that we're talking about today.
And that has really changed the way that I do science, in part because, at least in certain limited domains, like the ones that Jim and Antonio have been talking about, we now have systems that can solve what have traditionally been really difficult problems. That's given us a very exciting way to develop new models of what the brain does, and it's become an interesting direction that we have pursued.
But the other thing that happened to me is that I had kids. And I have two kids now. They're 3 and 5. And watching them learn to perceive the world around them has really changed the way that I think about perception. And even simple things, like the fact that they both went through this developmental stage where they're-- one of the main things they would do is grab objects and whack them together or throw them on the floor. And so I've become very interested in how it is that children develop an understanding of the physical world.
And so if I take this bottle, I drop it on the table, you know that was pretty heavy. You can tell it was a plastic bottle. You know an awful lot about what this is made of. OK? And so of course, you're looking at it, and you get some information from your eyes. But you're also listening to that. And you could tell from the sound that it was pretty heavy. And you could hear this crunching sound that tells you about plastic and so forth. Right?
And so that's gotten me really interested in the idea that Antonio alluded to, that we really need to be studying perception as a whole rather than studying the sensory systems in isolation, as has been traditionally done in the course of perception research. Because when you're walking around in the world, and you perceive things, what you're interested in is what's in the world, the objects. And so you use all the information at your disposal. And as a child developing, you grab things, and you can feel them, and you can hear the sound that they make, and you look at them.
And so one of the long-term goals of this whole larger project that we're talking about is to be able to develop machine systems that can learn in that same way. That could interact with the world, and through the observations that come in through different kinds of sensory receptors, could learn what the world is made of. And so that's a long-term project that we're all interested in, and that's, again, something that is a new thing for me that was really inspired by watching my kids.
Now, another direction that I'm very passionate about that has also really been enabled in some interesting ways by some of the new technology developments is in trying to address the fact that hearing is among the most fragile of our senses. So on the one hand, we're incredibly good at understanding speech in a noisy environment. But many of you will know that particularly as we age, many people suffer from hearing impairment. And in particular, the situations that are most challenging for someone are the ones that are, in some ways, the most computationally interesting.
So you walk into a restaurant, and although your hearing aid may work fine in a room like this, where there's not a whole lot of noise, in that restaurant people often just take their hearing aids out, because they actually make things worse. All right? So there's a lot of interest right now in understanding why that is and what we could do to build a device that would actually help people.
So all of you are now carrying around in your pocket pretty powerful computers. Jim put a picture up of one of those. And so I think that what's going to happen over the next few decades is we're all going to essentially have these personal assistants that will help us in all kinds of different ways. And definitely one of the major things that we hope we can do is build systems that will be able to process the sound coming into your ears such that as you age, and your hearing deteriorates, you can actually preserve a lot of these abilities that you have when you're young.
And one of the exciting approaches that has been enabled by these technological developments is that we can now, in some limited domains, build systems that match human abilities at, say, speech recognition in noisy environments. All right? So we can ask questions like: if we force that system to work with a model of the human ear, and if we model the impairment that actually happens when people lose their hearing, what can we then do at the front end of that system to help the model do better? The idea is that might be something we could then put into a hearing aid and potentially help somebody hear better.
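A hedged sketch of that experimental logic, not the lab's actual code: every component below is a hypothetical stand-in (a crude "impaired cochlea", a frozen random readout standing in for a pretrained recognizer, and fake audio), and only the hearing-aid front end is optimized.

```python
# Optimize a front-end processor so that sound passed through a simulated
# impaired ear is easier for a fixed recognizer to classify. All pieces
# here are toy stand-ins for the real models described in the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

front_end = nn.Conv1d(1, 1, kernel_size=65, padding=32, bias=False)  # learnable "hearing aid"
opt = torch.optim.Adam(front_end.parameters(), lr=1e-3)
readout = torch.randn(1, 10)  # frozen stand-in recognizer over 10 word classes

def impaired_cochlea(audio):
    # Hypothetical stand-in; a real model would simulate cochlear
    # filtering plus the reduced sensitivity of hearing loss.
    return torch.tanh(0.1 * audio)

audio = torch.randn(8, 1, 16000)       # fake batch: 1-second clips at 16 kHz
target = torch.randint(0, 10, (8,))    # fake word labels

for step in range(100):
    heard = impaired_cochlea(front_end(audio))  # what the impaired ear passes on
    scores = heard.mean(dim=-1) @ readout       # frozen recognizer's guesses
    loss = F.cross_entropy(scores, target)
    opt.zero_grad()
    loss.backward()                             # gradients adjust only the front end
    opt.step()
```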
And so what's very exciting to me is that, again, in these limited domains, we can now have systems that can do some of the things that people can do. And you have complete access to the system. There are all kinds of experiments and games you can play with such a system that you would never be able to do with an actual human, because you don't have complete access to the wiring diagram of their brain. So that's just a little taste of the kind of stuff I'm interested in.
LESLIE KAELBLING: Hi, I'm Leslie Kaelbling. I'm also with Antonio in CSAIL. I work in-- robotics is my main application and area within AI. I started out-- my undergraduate degree is in philosophy. So I worked my way from philosophy to logic to computer science to statistical stuff and AI. So that's a little bit of an unusual trajectory.
But so I work on robots. And if you think about robots right now, if you think of the robots that actually are good for something, mostly either they weld things in a factory, which means there's no variability in the world, and they're very precise, but they just keep doing the thing. If an elephant were to come by, they would weld that too. It doesn't really matter. So no real connection to the world.
The other thing, which many of you may have, is like Roombas or something, right? So those things are actually, in some sense, very sensitive to the environment that they're in. But then they don't do anything very sophisticated, right? So they bump around, and maybe they get most of the dirt. Or I don't know.
I actually only got one recently, which is kind of odd for a robotics person. And most of the time, it just calls for help because it got stuck somewhere. So I spend a lot of time rescuing the robot. And it's supposed to be working for me. So I'm not so sure about that.
So but what I really want to understand is how to make a really general purpose robot. Partly because I think it would be awesome to have a really general purpose robot, but also because I think in order to do that, we'll have to arrive at a general purpose understanding of intelligence. I think it might be that we as engineers come up with a moderately different way of doing it than brains do, at least at some-- clearly, at the physical level, it will be different.
And I think a really interesting and important question to me in the science of intelligence is at what level do these things have to converge. Right? So in some sense, my robot is going to have to do the same information processing kinds of things that a human has to do in terms of going from sensing to acting, but maybe there are a lot of solutions. I mean, maybe evolution found one, but maybe there are other ones that are easier for engineers to find. So I'm interested in trying to think about this whole space of possible solutions to the problem of intelligence.
So I work also on all the pieces. If you look at AI right at the moment, the practice of AI, you might see other people who do vision, people who do manipulation, people who work on reasoning, people who work on learning, people work on estimating. But I think that the critical juncture right at the moment is figuring out how in the world you get all those pieces to work together.
So the details of how the vision works should affect how I integrate information over time, which should affect how I reason about the world, which will affect the details of even how I decide to do some kinds of jobs. So although I'm really, like my head is in the engineering view, like, I just want to make my robot work. I want to make a robot that could just come to your house and know what to do and help you out. Make dinner, clean up stuff, right?
But the thing is, I'm really interested in collaborating and talking to people and bringing in cognitive science because I really need some ideas about constraints. Learning is very important for us, as it is for everybody. And what we know, again, is that learning with no constraints on the process means that you need an enormous amount of data.
But what we also know, this is the stuff that Josh mentioned, is that humans are born, and other animals are born, with an enormous amount of inbuilt stuff. So we as engineers have that job. Our job is to do for robots what evolution did for us. And I don't know what that job is. I'm trying to figure that out. And then figuring out how that can enable learning.
Because experience of a robot in the world is very expensive compared to experience that you can get by downloading images from Google. Because it takes work. It grinds the gears. It breaks stuff. Somebody has to watch. I mean, there's just all these reasons why for us, it's really critical to be able to learn from not too much experience. So I'm hoping to do general purpose robots, but with some amount of inspiration from human intelligence.
JAMES DICARLO: OK. Thank you all. Thank you all. I have lots of questions I could ask, but mostly I want to give you guys an opportunity to ask questions. So let's look. I see a hand here first. Yes.
AUDIENCE: It's great what you're doing. It's very exciting. I come from-- I'm a physician, so I take care of the kids. And we look at learning as such a multisensory process that it's integrated with output, with the motor output, and modified incredibly by the emotional content that goes on around it. So it's like all pieces of the brain working together in a coordinated way.
And if it does it well, the end result comes out as something that we might expect, or it pushes us forward in some response. So how do you bring in that humanness piece of it, the emotional content? We know that learning is much more solidified in children if it's done within the context of a relationship and the experience of what you're doing.
And something as simple-- you can show a picture of a ball, but that doesn't help a robot. A robot needs to know what happens when the ball's coming at him. How do you get in all those pieces of learning that come together, so that the next time you go out, if the ball has hit you, you know that you just have to step back a little bit more and judge it? All those pieces are things we can do and fine-tune as humans, and it must be very difficult to go beyond just what a robot sees to what a robot does with it.
JOSH TENENBAUM: I could try to answer part of that, both from some of the work that we do and some colleagues of mine who aren't here, but who we work with. It's a great question. Or it's really a set of questions there, I think. Because you're asking about both emotions and their role in learning and intelligence, and you're asking about relationships and goals. And those are all really important.
And they're all things that, I think, we've come to understand recently. You might think that if you view the mind and the brain as a computer or a machine, that necessarily leaves out, for example, emotion and relationships. But we would say no. On the contrary, those are exactly the kinds of things that are the objects of computation. Right?
In some sense, just as Jim said in his overview talk, you could say your mind is "nothing but," but your mind is all of these things. And it has to compute those things along with all the other stuff that computer scientists have traditionally talked about. So I would just point to, for example, some work that's being done here by Laura Schulz. She's our main person who studies children's learning, and she's very interested in emotion and relationships.
She had a paper in Science Magazine that got a lot of attention in the press, with her just recently graduated PhD student, as in like half an hour ago graduated PhD student, Julia Leonard. So if you want to look it up, it's a paper by Julia Leonard and Laura Schulz. The way it was framed in press coverage, which is a little bit misleading, was as a kind of study of grit.
Grit is a key topic in a lot of discussion, right? But what they looked at is how kids learn about persistence, how hard it's worth working to try to solve a problem. And they showed that, in the spirit of what a bunch of us have said here, kids could do very rapid learning from an adult they were having a social interaction with.
They basically gave both the adults and the kids these kinds of puzzle boxes, little things to figure out. And if the adult worked hard to try to solve a hard problem and at the end solved it, as opposed to other cases where the adult solved it really easily and quickly, the kids learned from the harder work, and they actually worked harder themselves.
If the adult gave up, of course, they were also more likely to give up. So they learned this in a way that transferred: it wasn't just about this object. It was more generally about how much to value hard work in solving problems, because the kid had a different puzzle to solve, one a lot easier than the adult's.
That was just one study of this kind. And it was done, broadly, in the framework of some computational modeling work that we've done together with Laura Schulz and Rebecca Saxe and other faculty members here, where we study what Laura calls the naive utility calculus. It's basically the way young kids calculate utilities, the costs and benefits, of their own effort and of other people's efforts and goals.
And we have a whole bunch of studies in which we've shown, really, really quite intriguingly to us and a few other people at least, ways in which even babies-- not just kids who are solving puzzles-- but even babies are making calculations about the costs and benefits of action, a lot by seeing other people. Role models for them. Understanding how you pick up on what your parent values. Either right now, in the moment, what's worth doing, or more generally across your lifetime.
So these are things we're starting to be able to capture in the kinds of mathematical and computational models. The same ones that we're starting to use to capture how kids might learn about objects from just one or a few examples. That same kind of math is starting to help us understand these other aspects of more the emotional side of learning.
JAMES DICARLO: Let's see. Wow, we've got lots of questions. Let's see. I think this gentleman had [INAUDIBLE].
AUDIENCE: I have a question for the two Joshes. Your implication, a little bit, was that there is a path to learning. That it's, I don't want to say uniform, but it's a known path. Yet we all talk about different ways of learning. Some of us learn orally, some visually; maybe if you're blind or deaf, tactilely. Do you see evidence for that, and does it adjust how you study these things? And secondly, are there impediments to our own education of our own children that you might point out to us?
JOSH TENENBAUM: Go on. Take it.
JOSH MCDERMOTT: That's a tough question. I mean, it's a great question. Yeah. I mean, I would say I don't really have any evidence for that apart from anecdotal evidence at this point. It's like, we're not really-- I mean, I would say the science of that kind of thing is not quite to the point where we're studying individual differences on a very large scale. I mean, we'd like it to get to the point where that's what happens.
I mean, there's a lot of interest around here in developmental disorders and children who might have difficulty learning in various ways, which is an extreme variant of the kind of thing that you're talking about. And so there's, in fact, widespread efforts in this building to understand what happens when a child is autistic, for instance. But as far as the more fine-grained things that might happen in a normally developing kid, for those kinds of differences in learning, that's not something I know a whole lot about. Do you have anything you want to say?
JOSH TENENBAUM: Well, yeah. I mean, as the other Josh said, for the most part our field hasn't studied much beyond normally developing kids, and part of it is that if you want to study individual differences, you need a lot more kids. And normally, the way you study them is you bring them into the lab, and it takes a long time.
So another thing that Laura Schulz's lab has built-- this is Kim Scott, another recently graduated student-- she's built this tool that's called Lookit. Actually, if any of you have kids or grandkids, especially in the baby age, you could check this out. Just go to lookit, like one word, lookit.mit.edu. And it's basically this online developmental laboratory, where you can participate in experiments with your kid, infant, up to age five, over the web, using webcams. And it's secure and privacy respecting, and it's very well constructed.
And it's a tool that actually is designed to be used by the whole broad community. So this should allow us as a field to get a lot more data from different kinds of kids over longer periods of time. It should also allow you to study the same kids over months and years. So stay tuned on that. But it's a really exciting project. Yeah.
PRESENTER: I think the gentleman with the black shirt was next.
AUDIENCE: Thanks. Al Mink, '78, Computer Science. I took an elective when I was here, at Sloan. It was a market research study. At the time, a major firm was looking at regulation that would decouple the sale of eyewear from medically licensed folks, basically so you could buy glasses without buying them from a medical doctor.
So fast forward. Since then, there's been a lot of innovation, and we all know we can buy eyeglasses fairly cheap. When I took over the elder care for my parents over the last few years, I was astounded at the cost and lack of innovation in this area. So Josh M, given the research you're doing, how much of an impact do you think changing regulation, kind of decoupling the specialists from selling things that deal with the ear, would have in promoting investment, entrepreneurship, and research in the areas you were talking about? What should we be thinking about here as MIT alums?
JOSH MCDERMOTT: Yeah, no, I feel your pain. I mean, I have parents that are also looking into getting hearing aids, and you look at the price tag on these things, and it's kind of astonishing. I mean, there's definitely-- I mean, there are a handful of people in industry that are really interested in low-cost devices, but I guess-- I mean, my sense of this is that the key to all of this is that the devices that we have just don't work that well.
And I think if we can get a next generation of devices that are actually effective and-- I mean, to be clear, they help people in some limited circumstances. Again, if you have someone who's very severely hearing impaired, and they're in a quiet room, the device can amplify sound, and they'll be able to hear more than they would be normally. But they don't help people in the situations that are really difficult, like if you walk out into the atrium, and there's a whole bunch of people talking, and there's reverberation. They just don't work well in that kind of a situation.
And as a consequence, there are just not nearly as many people who use them as should use them. And I think if we get to the point where the devices work, and the adoption is more widespread, I think some of the things that you're talking about will just happen. I mean, if you think about glasses, I mean, we all have glasses because they work. Right?
And I think if the hearing aid industry gets to that same point, where they can deliver something that everybody wants to use-- I mean, I could even use one. I mean, I don't hear as well as I did when I was in college. You know? And I think eventually, we'd like to be in a point where everybody basically can have these things. And some of that stuff will just happen, is my sense.
PRESENTER: How about some questions for Leslie or Antonio? They have not gotten--
ANTONIO TORRALBA: [INAUDIBLE] his answer to your question: I'm class of 2003, and one of my classmates has a startup company called Audicus. What he's trying to do is sell cheaper hearing products, and you can take the tests online. So there is some entrepreneurship going on.
AUDIENCE: And I was thinking more about the innovation than the cost, really, but they're both applicable.
PRESENTER: Yes, the gentleman in the green. Yes, please.
AUDIENCE: I have a question for Leslie. You mentioned, and I liked the thought of, not following computational models based on human [INAUDIBLE] directly. The example I'd like to use is the concern that the aerospace industry was set back for generations by trying to model their machines on the way birds fly, instead of understanding the concepts of flight so that they could build machines to do the same things birds do.
The question is how you avoid that pitfall of waiting generations until we understand better, and instead build machines now that are based on human learning but don't necessarily learn the same way humans do.
LESLIE KAELBLING: Right. That's a good analogy. I think we're not in much danger of that pitfall at the moment. The danger right now, if you look at what goes on in AI, if you read the proceedings and so on, is that most of what's happening is not that brain inspired. Or maybe the inspiration is at a fairly abstract level. Right?
And what's interesting is that even for the deep learning that's going on now, there is one way to tell the story that's inspired by brains, and that inspiration really dates from a long time ago. But you could also approach exactly those same machines as an applied mathematician and say: I'm doing function approximation with a giant class of parametric functions, and I'm doing some kind of stochastic gradient descent to optimize the parameters.
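Here is a minimal sketch of that applied-mathematician's framing, on a made-up one-dimensional problem: choose a parametric family of functions and run stochastic gradient descent on a loss. A deep network is the same recipe with a vastly larger family.

```python
# Stochastic gradient descent fitting y = a*x + b to noisy synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 1000)  # synthetic "world"

a, b = 0.0, 0.0   # parameters of our function class
lr = 0.1          # learning rate

for step in range(2000):
    i = rng.integers(len(x))        # "stochastic": one random sample per step
    err = (a * x[i] + b) - y[i]     # prediction error on that sample
    a -= lr * err * x[i]            # gradient of 0.5 * err**2 w.r.t. a
    b -= lr * err                   # gradient w.r.t. b

print(f"a ~ {a:.2f}, b ~ {b:.2f}")  # should approach 3.0 and 0.5
```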
So I think for all kinds of things, there are many stories to tell about the same thing. And what's interesting and important is, as you say, to understand what's the gist of that story. What is it that's the same about birds and airplanes? That's the aerodynamics. So I'd like to understand the aerodynamics of intelligence, and I think we're really far from understanding that.
JAMES DICARLO: OK. How about the gentleman way in the back? Right. Stand up. Next. Yes.
AUDIENCE: Several of you, in your opening comments, alluded to young children having some kind of default learning, or coming in with some innate ability. I was wondering what leads you to the conclusion that there are some built-in capabilities, versus just some rapid, accelerated, kind of punctuated learning taking place that we're not understanding.
JOSH TENENBAUM: So maybe I can take that. Well, it is true. It's hard to know. Like, if you study a two-month-old baby, it's hard to know, do they know what they know because of something that's genetically programmed, or do they know what they know because of what happened in months 0 to 2, or maybe what happened inside the womb. Is that the question that you're asking? Yeah.
I mean, the thing is, if you look at two-month-old babies, they don't do very much. They really don't. In fact, if you compare humans with other animals, the first three months of life is effectively just gestation outside the womb. We have to be born premature compared to other animals, because our heads are too big, because our brains are too big, presumably because they're very powerful. OK.
So I mean, it's possible that there's something that's going on there. But when you look at-- if you do the right kind of studies, like the kind of looking time studies I talked about, you can show that two and three-month-old babies, especially three-month-olds, already understand about objects, even when they can't see them. They also understand about people's goals. They understand that if somebody reaches for something, it's because they want it, and they move in efficient ways.
So this basic understanding of goals and actions is there even before those kids themselves can pick anything up. Right? A three-month-old is really interested in objects, and they kind of reach for them-- a three-month-old might see something like this and try to do this and maybe do that. Right? But they don't have much experience, really, manipulating objects in any powerful way.
That's one way you know: how much their knowledge seems to go beyond their experience. The other is that you can actually look inside their brains. Rebecca Saxe, here, has been running a really exciting research program of functional brain imaging, studying the networks of brain areas that seem to be selectively involved in different kinds of high-level perception, like how your brain perceives other people or objects or faces or bodies or places.
She's published one study in four to six-month-olds, and now she's studying three-month-olds and younger. And she's found that these very young babies already have the sort of functional profile adults have: which parts of the brain seem to do which kinds of things looks very similar. Not the same, they're less specific, so her research suggests there's definitely some kind of tuning that goes on. Right?
But much of the large-scale connectivity, and the function that goes along with it, already seems to be in place as early as you can look. Stay tuned; that's a developing story. She has some beautiful photographs of this, actually, that were published in the Smithsonian and elsewhere.
On the neural network side, this is something I've been thinking about a lot, and Antonio, I'm wondering if you think about this, or Jim. People have found that you can set up a deep neural network, like the kind of models that Jim talked about and Antonio has been a leader in for a while: you set up the architecture of the network, and then you just initialize the weights to small, random values.
And it's not as good as when you train it, but it does many of the same things. Like with Alex Kell, your student, or with Dan Yamins, right? The profile of behavior is very similar even when the weights aren't trained. And then training definitely makes it a lot better. It tunes it.
But it's amazing how much we're also seeing on the machine learning side that the architecture you wire the thing up with builds in a lot of the basic functionality. Trying to understand how design and learning, architecture and experience, interact is a really interesting project. And the more we understand on the science and engineering sides, the more we see that both of those are part of the picture.
LESLIE KAELBLING: Well, and brains are really not homogeneous, right? I don't know that much about brains, but I do know enough to know that it's not all the same kind of setup. Right? So when we talk about knowledge or something that might be innate, it could be all structural, in a sense. But there's definitely innate structure in brains.
JAMES DICARLO: Antonio, do you want to add to that?
ANTONIO TORRALBA: Well, yeah. Connecting with what Josh was saying: it's true that even if you just have the structure, the structure could be the constraint that you need in order to learn all these other properties that seem to have emerged in an innate form. It could be that, just by observing the world, the structure itself constrains the solution enough that the only possibility is to actually learn those concepts.
JAMES DICARLO: OK. So let's see. I think this gentleman on this side was next, and I'll come back over here. Yes.
AUDIENCE: This is a question about brain disorders. You mentioned autism briefly. Where is AI, currently, on the convergence with Alzheimer's, autism, applying AI to some of those brain disorder issues?
JAMES DICARLO: Well, I would say those are two different things. My lab doesn't work on Alzheimer's, but one way machine learning techniques are being used right now is simply to mine existing data sets-- genomic data, and now sometimes more phenotypic data-- to try to find patterns. That's using current tools to do sophisticated data analysis, if you have large enough data sets. Some of that's going on.
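As a hedged illustration of that kind of pattern-mining (a sketch on entirely synthetic stand-in data, assuming numpy and scikit-learn-- not any real genomic pipeline):

```python
# Sketch: unsupervised pattern-finding in a large data set
# (synthetic stand-in data; assumes numpy and scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 50))  # 200 "patients" x 50 "measurements"

# Look for subgroups of patients with similar measurement profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
print(np.bincount(labels))  # number of patients in each discovered subgroup
```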
The deeper view that we hope for is this. Alzheimer's has big effects on memory, among other things, and we don't really know how memory works yet in an engineering sense. If you can understand how memory works, you may come up with other ways to intervene in the system. That's the engineering view of what we hope it will be.
Imagine bringing your car to a mechanic who doesn't know about combustion engines but is supposed to fix your car. Right? I'm an MD-PhD, and that's what it was like in medical school as a doctor. You really have limited power, because we have very limited understanding, in an engineering sense, of how the systems are actually functioning.
There are things that can be done to ameliorate, but that's why I dedicated my life to research: it's a long road to the understanding that will open those gates. In the short term, people are doing what they can. There's some exciting work from Li-Huei Tsai's lab here in the building about modulating gamma frequencies, which shows effects in a mouse model of Alzheimer's. We don't know if that will work in humans, and they're testing that now. They initially thought they had to do something invasive, and then they figured out they could do it externally.
I often think that some of these engineering understandings of how things work-- and Josh mentioned this-- may give us new concepts for how to intervene in the system. The way we played the sounds back, in his example. Or in my lab, we're trying to say: now that we have a model of the visual system, we think that with the right image we can set the brain into a particular state, because we have knowledge of the internals of the system, inferred from studying it. We do that in normal subjects right now, but we think it could connect to ideas about when something is disordered.
But again, the deeper issue there is understanding, and it's still a long road. So I hear you: that's really a big core of what this building is trying to do in the long run. There's a long-game approach, and there are some short-game approaches being played. I don't know if anybody wants to add to that. No? OK. So let's see. I said I'd go over here, so let's be fair. The lady in the back. Yes.
AUDIENCE: Hi. My name is Jeannie McKayla. I was wondering if any of you could speak to the connection between the enteric nervous system and the brain?
JOSH TENENBAUM: Is that the gut, or--
AUDIENCE: Gut.
JOSH TENENBAUM: Yeah.
PRESENTER: This is not probably the best panel for that question.
JOSH TENENBAUM: So you may not know, [INAUDIBLE] that we have a lot of-- a lot of our nervous [INAUDIBLE]. Yeah, that's what she was asking: what's the connection between the brain and the gut? It's an active area of neuroscience research.
JOSH MCDERMOTT: [INAUDIBLE].
JOSH TENENBAUM: What?
JOSH MCDERMOTT: We should punt on this.
JOSH TENENBAUM: You don't want me to just bullshit something? Sorry. OK.
[LAUGHTER]
But-- no, but people work on it. Right? I mean, some serious--
JOSH MCDERMOTT: It's a very hot area, just none of us really know much about it.
JOSH TENENBAUM: Yeah. And to be fair, it's not something that MIT researchers are currently working on.
JAMES DICARLO: Well, there are a couple, but they're not in this room. So.
JOSH TENENBAUM: Who's working-- someone-- who's working on this? Do you want to mention them?
JAMES DICARLO: I think Feng Zhang is interested in that area, for instance.
JOSH TENENBAUM: OK. Good question.
JAMES DICARLO: Let's see. I'm sorry. I'll start going back this way. So the gentleman in the back.
AUDIENCE: OK. My name's Lee Lintecum. My academic background is primarily in linguistics and philosophy of science. So the question is: human beings have been writing about cognition for centuries. Have you all gone back and looked at any of the historical writings on human cognition, and if so, have you found anything that might be useful in guiding how you proceed with the modern, more technical investigations?
ANTONIO TORRALBA: So I might say something. We haven't gone that far into the past, but we are moving in that direction. One of the things that happened is that while nothing was working, everybody was extremely focused on whatever was happening at that precise moment. But things have started to work now, so students are starting to look at the old papers that contain really interesting ideas. Because in the past, if you had to write a paper, you could only write about great ideas-- you couldn't do any experiments. Nothing was working.
Now it's the contrary. Now you don't need to have a great idea: experiments work, so you can fill your paper with results. So if you go into the past, you get all these really interesting insights into how things work, and hypotheses, and a number of things that are extremely exciting to put to the test right now. People are actively working backwards like that. It just takes a while to convince students that they need to read a paper from 20 or 40 years ago.
AUDIENCE: Immanuel Kant.
ANTONIO TORRALBA: That's the next generation.
JOSH TENENBAUM: Well, but in cognitive science, it is funny. What Antonio is referring to is that nowadays things are moving so fast that students will refer to "classic" papers published in 2015. I'm not making that up. But yeah-- in our work, we're very inspired by the Western philosophical tradition, and interested in other traditions too.
And the view we were just talking about, in response to some of the questions, is basically Kant's view, right? That the mind is built from the beginning to structure experience-- in space and time, with causality, with certain categories of experience. I think that's playing out right now in machine learning, in some of the stuff we've talked about, but also in robotics, and in machines that are trying to play games, for example.
I know there are a few papers under review right now at NIPS with more or less the same message. It turns out that if you build machines that structure their experience the way Kant suggested-- to perceive objects from the beginning-- they do a lot better. The systems in those previous "classic" papers from 2015 might have learned a video game to human level only after hundreds or thousands of hours, where a human reaches that level in maybe 10 or 15 minutes. Now you can get much closer to that, using what is basically Kant's insight to structure the representations of these systems.
So in our lab, we do have some old books. But it's just as Antonio said: it's a struggle. Students have so many pressures and so many opportunities that if I find a student who actually wants to read something published 20 years ago, or even 200 years ago, I think that's great. It's a great skill to develop.
And it's very inspiring to see how not just Kant but people like Boole, for example-- you've heard of Boolean logic. He's mostly known as a mathematician, the inventor of a logic. But his book was called The Laws of Thought, and when you read it, he's talking about human cognition.
So many people over the years have been inspired by these same great questions. We're lucky right now to have the tools to actually make some things work. But I think all the good ideas have been had before. And the more we understand what we're doing as part of a really long tradition-- the one I think you're referring to-- the richer everything gets, and that's going to drive some of the more, quote, "creative" innovation.
JAMES DICARLO: OK. Let's-- who hasn't had a chance yet? Sorry, this gentleman. And I'll come back to you. Yes.
AUDIENCE: I was going to say, there was a book a while back called Thinking, Fast and Slow that appeared to explain not only how we do some things remarkably well, but how we do some things remarkably badly-- including people voting against their own interests, and ways people can be manipulated.
And I find it interesting that humanity was essentially being defined as both really good at some stuff, but really bad in certain ways. And I'm wondering whether that duality or something like it impacts any current research.
LESLIE KAELBLING: I'll take this, because it actually affects what I think about in a funny way. It turns out that all the problems we try to solve in AI, or that humans try to solve, are, in a fundamental computational sense, intractable. Right? We know it would be impossible-- effectively impossible-- to find the optimal strategy.
So sometimes I say that practical AI research is about figuring out which corners it's OK to cut. We can't fit into a computer a program that will optimally solve the problem of controlling a robot to cook dinner at your house. Can't do that. So it's going to have to be bad at some stuff and good at some other stuff.
And another thing I'm interested in looking at humans for is exactly that question: what is it OK to be not so good at? Because evolution, I think, sort of helped figure out where to sit in the trade-off space. You can pick different places to be in the trade-off space, but you can't be optimal at everything. So.
JAMES DICARLO: Yes. [INAUDIBLE].
AUDIENCE: Yes. I wanted to ask the panel in general about their feeling of how measurement of the brain, or interfaces with the brain, will have [INAUDIBLE]. We always think of the brain as kind of enacting human experience, but can we electrically or biochemically interface with the brain in a way that will let us learn more, or motivate something grander than just what we're doing now?
JAMES DICARLO: Well, I guess I'm the only neurophysiologist on the panel-- my lab works in neurophysiology, so we actually make a lot of measurements inside the brain. And I mentioned the deep networks. Some of what's exciting to us is that there are close connections between the internals of those networks and the internals of what we measure in the non-human primate brain, which is a model of the human visual system.
And so I completely agree with the idea that those constraints can be used to shape the algorithms that are developed. That's actually how my lab is proceeding. We're trying to walk the line between feathers and wings, in effect: we don't want an exact biomimetic copy, but we'd like to capture the intelligent aspects of the system for solving vision.
And I think you can reverse that flow. You mentioned brain-machine interfaces, and that's what I briefly alluded to. There are ways to do brain-machine interface that might not be direct. Consider the world of images: you'll never, ever see the same image twice, and image space is so big that there are parts of it you're never going to explore.
So models might even tell us: here's a way to choose images that will do something really interesting to the brain, something we haven't yet discovered-- or a combination of sound and vision that acts as an external inducer of brain states. This is a kind of percolating idea in my lab. It's not clear whether it's going to go anywhere.
AUDIENCE: [INAUDIBLE].
JAMES DICARLO: No, I don't think that's-- the hard part of interfaces is that there are a lot of materials problems to be solved. We can do this in animal models, and it's been done somewhat in humans-- paraplegic humans. There's been progress there, you may know about it, for robotic control: if you've lost control through your spinal cord, you can still drive artificial limbs from implanted arrays of electrodes in the brain, et cetera.
But there are multiple labs here at MIT working on less invasive ways to interface with the brain-- Ed Boyden's lab is one of them. Those tools will keep being developed. And MIT is the kind of place where that engineering happens: not just algorithms for intelligence, but the tools and interfaces themselves, and what's going to happen with them.
JOSH TENENBAUM: We know people, right? Like is it Polina, right?
JAMES DICARLO: Polina Anikeeva. There are a number of folks for whom this is their life. That stuff is happening, and new things are always coming out. But you still want to know what you're interfacing with. So again, we're talking a lot here about what's going on inside the internal combustion engine. If you had new ways to interface with it, all kinds of new opportunities would open up, both clinically and for healthy people. Right? That will happen. It's just a question of when, and of exactly what the technology will look like. It's not really happening today, except for patients in very special cases.
ANTONIO TORRALBA: I just want to throw out one thing. I think that in the future, when we are able to interface with the brain, there are a lot of really cool things that could happen. We always talk about how great human learning is compared to machines-- that humans learn in an unsupervised way, from very few examples. But machines have another way of learning that is even more impressive than ours.
Like if I want to learn Chinese, I have to study Chinese like crazy. Probably I will never learn it. Well, maybe some of you already speak Chinese. Could you pass me your knowledge and implant it into my brain, so that I just know Chinese? And that's something that machines can do easily, you know?
If a machine already knows how to solve a task, you just transfer the weights to the other machine, and that's it. You don't need to train the second machine from a few training examples-- you don't need to train it at all. Just copy the code. That's something that is great about machines, and humans cannot do it.
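A minimal sketch of what "just copy the weights" means in practice, assuming PyTorch (the tiny model and the file name here are hypothetical):

```python
# Sketch: knowledge transfer between machines by copying weights (PyTorch).
import torch
import torch.nn as nn

def make_model():
    # Hypothetical small network; any architecture would work the same way.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

teacher = make_model()
# ... imagine `teacher` has been trained for weeks on some task ...

torch.save(teacher.state_dict(), "knowledge.pt")  # serialize what it knows

student = make_model()  # a brand-new, untrained copy of the architecture
student.load_state_dict(torch.load("knowledge.pt"))
# `student` now behaves identically to `teacher`, with zero training examples.
```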
JAMES DICARLO: That might be a hard interface problem. Yes, so. Let's see. Yes, this gentleman here. Then I've got you two guys next. Yes.
AUDIENCE: So with your quest to replicate human intelligence, do you have a gut feel for what time period we might be looking at to get there? How many decades [INAUDIBLE], for instance? And in that journey, are there important markers you think of, in terms of unsolved hard problems that would represent major advances? And are there enabling technological factors on your radar-- just as 2012 was referenced in a write-up as the year when Moore's Law essentially caught up with the demands of the field? Are there other similar enablers that you wish for and can measure?
ANTONIO TORRALBA: Well, just-- one of the things that has made progress possible in the last few years is not just models. It's also the fact that suddenly there was access to a lot of data. And that allowed us to make progress.
In order to build human intelligence, I think one of the key factors we'll need is embodied systems: systems that are acting in the world physically, built inside a robot, so that the robot can interact with the world. Those robots do not exist yet. A robot that is here with us, engaging with the world the way we are, with the flexibility we have-- that doesn't exist. It might take us another 10 or 15 years to get there.
And after that, I think it's very hard to predict what is going to happen, because I think there will be an explosion. As soon as we start collecting data from embodied systems that can interact with the world, we'll be in a setting where we again have access to a lot of data-- this time about the physical world-- and we'll be able to move really, really fast.
There might be models that are already good that we just don't know about, where simply having the right data could make them work. So for me, that's the missing step that is still not allowing [INAUDIBLE].
JOSH TENENBAUM: Can I just add another one, which I think most people here would agree with: power consumption-- energy efficiency-- and also just size. Right now, we run the kinds of networks that you run, or that you run, or that all of us might run, typically on a machine that's somewhere between the size of this table and half of this room, if we're lucky, depending on where we work.
But we run a lot of our models on a supercomputer cluster that's in Western Massachusetts, because it's more efficient to put the big computers there and to have the high energy power lines going there. Power is cheaper. The real estate is cheaper. And we're simulating a small fraction of the brain in terms of both hardware and intelligence. A very teeny, tiny fraction, on a machine that's a lot bigger and sucks up orders of magnitude more energy.
So I think all of us are concerned about energy. For any kind of mobile intelligence-- whether it's a robot or a self-driving car or a phone-- a key challenge everyone's working on is how to pack in more computational density without becoming a major cause of climate change.
LESLIE KAELBLING: But you can also relax the model of computation. Right? So I mean, another thing is people have been doing a lot of work, in particular for neural network models. It turns out that you don't have to be so exact in your calculations. You don't need 64-bit numbers to do this. Sometimes you can do it in a very small number of bits.
Sometimes you can do it with hardware that gets the wrong answer some of the time. In fact, we put noise in on purpose when we do training. Now, we in the field have tried this prematurely several times before: people did try to make their own [INAUDIBLE] machines a long time ago, and people did try to make logical inference machines a long time ago. I don't think we're quite ready yet. But when we have a better idea of the kinds of computations we want to do, it'll change the kind of actual computational machines we want.
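Both relaxations are easy to sketch. Here is a minimal illustration, assuming PyTorch (an example, not any particular hardware design):

```python
# Sketch: two ways to relax the model of computation (assumes PyTorch).
import torch

weights = torch.randn(4, 4)  # stand-in for a layer's 64-bit weights

# 1. Quantize to 4 bits: 16 levels instead of full floating point.
levels = 2 ** 4
scale = weights.abs().max()
q = torch.round(weights / scale * (levels / 2 - 1))  # integers in [-7, 7]
approx = q * scale / (levels / 2 - 1)                # back to real values
print((weights - approx).abs().max())                # small rounding error

# 2. Deliberate noise injection, as already done on purpose during training.
noisy = weights + 0.01 * torch.randn_like(weights)
```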
JOSH TENENBAUM: There's an MIT alum named Joe Bates. I don't know if any of you know him. He has a startup just a few blocks from here called Singular Computing. And he's just one example of someone who's doing really cool approximate computing, where they basically have designed very simple kinds of-- they're like 1980s CPUs, but where the floating point unit is just 99% accurate.
And as a result, you could build something about the size of this table that has a billion cores and runs on a reasonable amount of energy. So that's just one example of people trying to rethink the hardware-software stack for very, very approximate computing. And I think that's very brain-like, also.
Just on your question of timeline: first of all, I wouldn't speak for everybody here, or for anybody other than myself. But I don't think we're trying to replicate human intelligence. We're human-inspired in some ways. Some of us are trying to make more human-like, or human-level, machines. But just to be clear, we're not trying to-- I don't think. Unless you are. Maybe. There's some kind of interesting dream that animates a bunch of us.
But I think it's important to say that we're not trying to build artificial humans. We're trying to build smarter machines-- smarter in more human-like ways-- as part of how we're understanding intelligence in the brain and mind. As far as timelines, these are things we think about a lot. When I talk about babies-- three months, six months-- that, for me, is a great place to start. If I could build a machine that has even the commonsense, general intelligence of a three-month-old, that would be more than we have so far, and it would be simpler than trying to start with an adult. Even matching just the first two to three months of development would be an advance-- that's just one example.
But it's really important, and it's something we think about, especially as we organize our larger activities around this quest. It's how do we measure-- how do we set realistic goals and realistic, practical, useful milestones and measurements like that. The science is really helpful for that.
JAMES DICARLO: The gentleman in the red shirt had his hand up about five times, and he still does.
AUDIENCE: [INAUDIBLE] the previous--
JAMES DICARLO: Oh, OK. But there was someone here. The-- yes.
AUDIENCE: Thank you. So I work in venture capital, and I'm investing in companies that are applying AI. Deep learning is very popular now, and it's being used widely. My question is around explainability.
A lot of the companies we talk to are using AI, but in certain domains-- like self-driving cars, where safety is a big concern, or security-- being able to articulate how the system makes decisions is very hard, and it carries a lot of liability. So I was curious how you think about explainability as we develop these more sophisticated algorithms, when it's somewhat of a black box how they make decisions.
ANTONIO TORRALBA: OK. Well, I can say something about that, because I work in that particular area too. It's a really interesting area of research, and there are a lot of groups working on that question. One of the first things to say is that these machine learning algorithms are big black boxes: sometimes they give you an output where it's very hard to tell what the rationale behind that particular decision was.
One advantage, though, is that despite being black boxes, you can actually look inside and tell what they are doing. This is not like a brain, which is a real black box: you cannot open it, and as soon as you open it, it stops working. Here, the box is one you can open.
JAMES DICARLO: You can open the brain, too.
JOSH TENENBAUM: Jim spent 12 hours yesterday opening up a brain.
[LAUGHTER]
ANTONIO TORRALBA: It's not the same kind of thing. It's not the same kind of opening. And when you open that one, you really don't want to put your hands inside. So. It's a different beast.
So there is a lot of progress being made here. There are algorithms built so that the way they compute the output already produces a rationale along with it. In particular, in autonomous driving, the algorithms some of the companies are pursuing try to build a representation of the input with components you can clearly identify-- detecting cars on the road, deciding what the 3D structure of the scene is, predicting the actions the different participants on the road are going to perform-- and then make decisions on top of that.
So in the end, a decision is based on a number of things you can instantiate, and you can provide a description and an explanation. There are other cases where it's not so clear: if you want an algorithm that recognizes the content of a picture, it might not be so easy to tell exactly what in the image produced a particular output. But we are working on that, and this is an area where there is a lot of progress. So I don't think it's totally true anymore that these things are black boxes.
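One concrete version of "looking inside the box" is a gradient-based saliency map. Here is a minimal sketch, assuming PyTorch and torchvision (an illustration of one common technique, not any company's system):

```python
# Sketch: gradient-based saliency, a common explainability technique.
# It asks which input pixels most affected the network's decision.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # any image classifier; untrained here
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in photo

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()  # gradient of winning score w.r.t. pixels

# Pixels with large-magnitude gradients most influenced this decision.
saliency = image.grad.abs().max(dim=1)[0]  # collapse the color channels
print(saliency.shape)  # (1, 224, 224): one importance value per pixel
```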
JAMES DICARLO: We have time for a couple more questions, and then we're going to have to wrap it--
AUDIENCE: [INAUDIBLE].
JAMES DICARLO: Oh, yes.
AUDIENCE: [INAUDIBLE]. Have you ever done or thought of programming such [INAUDIBLE], such sociopathically or psychopathically [INAUDIBLE]?
[INTERPOSING VOICES]
JOSH TENENBAUM: [INAUDIBLE] serial networks. Are you talking about, like, generative sociopathic networks? That'll be next year's NIPS, yeah. Sorry.
JAMES DICARLO: OK.
AUDIENCE: That's not [INAUDIBLE].
JAMES DICARLO: The gentleman in the jacket, yes.
AUDIENCE: Yeah, with emotion-- it seems to me AI doesn't study the sense of smell that much. And didn't the brain and the gut evolve from the olfactory bulb? And the other question is, with respect to mathematical theorem proving, is AI getting pretty good? Are mathematicians going to be unemployed?
JOSH TENENBAUM: Well, on the last one, on the contrary: mathematicians are actually working on AI. They're really interested in it. Some of our computer science colleagues, like Adam Chlipala-- people in programming languages-- are working on new kinds of programming systems that do really impressive kinds of automatic proving and analysis of logical structures. And it's great that mathematicians are excited about AI. I think they see it as a tool that can enable what they do, and they want to understand how it works. They're not so worried about being out of a job.
JOSH MCDERMOTT: You know, I'll just say that artificial noses are an area a lot of people are excited about. So there is action there-- just not particularly around here, but--
ANTONIO TORRALBA: Yeah, actually, a couple of my students wanted to work on that, on olfaction. But it's really hard to get hardware to take any kind of measurement. These artificial noses are not small like a real nose. They're more like a big lab bench, where you have to put the sample inside, and then it gives you the chemistry-- the chemical components of that material. So the hardware really isn't very good yet.
In airports, you can see cameras all over the place, but they still use dogs to smell things. So yes-- for some reason, probably there just aren't enough people working on it. I don't know.
JOSH MCDERMOTT: I mean, it will change, though.
ANTONIO TORRALBA: Yeah, yeah. It's going to change very rapidly, probably. Yeah.
JAMES DICARLO: OK. Maybe. Yes.
AUDIENCE: That was my question all along, [INAUDIBLE] is the ethics evolving? And [INAUDIBLE] lawyers [INAUDIBLE]. And I'm sure you all do. And I wondered, do you have courses you have to take on the ethics of what you're doing? And why do we have to invent a machine brain?
JOSH TENENBAUM: Oh boy.
AUDIENCE: What is the line?
ANTONIO TORRALBA: Why do we need to invent machines?
AUDIENCE: Brain.
[INTERPOSING VOICES]
JOSH TENENBAUM: Those are different questions. I mean, like--
AUDIENCE: Why. Because there's a lot of people out there in the world that probably don't agree and wonder why you think you need to--
JOSH TENENBAUM: You could say the same about any machines, and quite reasonably, right? Like, why do we need computers?
AUDIENCE: I thought we were sensitive about that.
JOSH TENENBAUM: So the ethical issues are important. And right now there are different cultures: what you're seeing here are people coming from different departments and fields and cultures. In the neurosciences and cognitive sciences, there is a strong ethical tradition that comes from having learned the hard way that if you don't train researchers in ethics, they-- maybe intentionally, maybe just unintentionally-- do bad things.
So all of our grad students have to spend a solid week-- and that's probably not enough-- just in an ethics course, and then they have ethics training all along the way in different places. I think we should do more, but at least it's part of the culture. I don't know what it's like in computer science; my understanding is that there isn't actually anything comparable-- yeah. So I think we're all starting to confront the issues that you're raising and thinking that we should do that.
LESLIE KAELBLING: And ethics is enormous-- there are a bunch of different ways that ethics is an issue, right? One thing people are worrying about a fair amount now, in the context of autonomous systems, is how the system makes the ethical choices it might face. The classic one is: if you're a driving car, to what degree do you protect your driver versus a person who's crossing the street?
And I've had some interesting conversations, for instance, with ethicists in the military who were thinking about autonomous weapons: at what point should you draw the line and say, no, a human has to have control here-- and what would that mean? The question of what it means for a human to have control is complicated. It's complicated right now, with the systems we already have.
So there is a huge set of issues, and people are concerned about them. On the computer science side, it's a different set of ethical questions, in the sense that it's the ethics of what the systems we build will do when they're out in the world. And people are thinking about it-- probably not yet with great sophistication, but beginning to try.
ANTONIO TORRALBA: But it's clearly a very important aspect. Like any science or any technology, it will bring good things and bad things; it's a mix. As you make progress, the door opens to more bad things too. And ethics is about putting the line: deciding which things are OK and which things are not OK.
But the same science that will allow you to have a machine that helps a doctor make a better diagnosis can be used in the military to take--
JOSH TENENBAUM: We should say that, as part of the Quest for Intelligence, it's an important part of the way we define things: we are committed to integrating ethics into all of our big projects and discussions. And we're still trying to figure out how to do that, because, to be honest, it hasn't been a big part of our communities. But I think we need to.
Personally, I don't think we should just outsource that to other people; we should engage, whether with lawyers or philosophers or the broader public. Some of our colleagues in the Media Lab, like Leslie was saying, are really interested in this, and we're working with some of them-- people like Iyad Rahwan, for example, who studies this actively on a large scale. As part of the MIT-IBM collaboration, we even have some grants for that. So it's something we're beginning to grapple with, but need to grapple with more.
ANTONIO TORRALBA: And also: what is the impact of intelligent machines on the workforce? What effect is that going to have? All of those are things we are actually studying.
JAMES DICARLO: OK. So maybe time for any-- I think I'm being told we're out of time. So I know many of you have other events you need to go to. If anybody wants to stay, maybe our panelists will stay up a little bit longer. Please give them a round of applause.
And thank you all so much for coming to this event. And enjoy the rest of the day. We've got great weather. So we'll hang around for a bit. So thank you.