Building Machines that Learn and Think Like People (28:21)
Date Posted:
July 26, 2017
Date Recorded:
July 26, 2017
CBMM Speaker(s):
Samuel Gershman, CBMM Summer Lecture Series
Description:
Sam Gershman, Professor of Psychology at Harvard, discusses how we might build machines that learn and think like people by combining insights from cognitive science, artificial intelligence, and computational neuroscience. Dr. Gershman elaborates on five key ingredients for a machine that learns and thinks like a person - intuitive physics, intuitive psychology, compositionality, learning-to-learn, and causality - reflecting on the importance of building more human-like machines and how close contemporary AI systems are to reaching this standard.
SAM GERSHMAN: So I'm going to talk about the intersection between cognitive science, artificial intelligence, and computational neuroscience. And my goal here is not really to give comprehensive coverage of those topics, but to provoke discussion. So with that in mind, I'm going to talk for about 20 minutes or so. And then, we'll just open it up for discussion.
So this presentation is based on a fairly long article that I wrote with Brenden Lake, Tomer Ullman, and Josh Tenenbaum. It's forthcoming in the journal Behavioral and Brain Sciences. And when it gets published, it's going to be published along with 50 or 60 pages of commentary from other authors, and then a response by us to the commentary. It's quite interesting, if you find that this is up your alley. OK.
So if we think about the flow of ideas from artificial intelligence into computational neuroscience, we can see a bunch of examples where there's been a really significant sea change in neuroscience due to the influx of ideas from AI. So I'll give you two examples. One is reinforcement learning. So the neurophysiologists in the 90s who were measuring dopamine neurons in the midbrain noticed some perplexing properties of these neurons.
They noticed, for example, that these neurons would fire when an animal is rewarded. But interestingly, if that reward was completely anticipated by the animal, then the dopamine neurons would no longer fire. So that seemed inconsistent with this idea that dopamine was reporting reward, and it seemed much more consistent with the idea that it was reporting something like a reward prediction error, or unexpected reward.
And it turned out that that signal conformed very closely to the learning signal posited by what's known as the temporal difference learning algorithm, which is one of the cornerstones of reinforcement learning theory and was developed by computer scientists over several decades. So that was an example where people noticed some puzzling properties in physiological data and reinterpreted them in terms of an algorithm that came out of AI.
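To make that connection concrete, here is a minimal sketch of a TD(0) update on a toy two-step episode (a cue followed by a reward). The states, rewards, and parameters are illustrative, not taken from the talk; the point is just that the prediction error is large the first time the reward arrives and shrinks toward zero once the reward is fully predicted, mirroring the dopamine findings.

```python
# Minimal sketch of temporal-difference (TD) learning on a toy two-step
# episode: a cue is followed by a reward. Early on, the prediction error
# at the time of reward is large (the reward is unexpected); once the
# reward is fully predicted, that error shrinks toward zero.
# States and parameters are illustrative, not from the talk.

GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate
V = {"cue": 0.0, "pre_reward": 0.0, "terminal": 0.0}  # learned value estimates

def td_update(state, reward, next_state):
    """One TD(0) update; returns the reward prediction error."""
    delta = reward + GAMMA * V[next_state] - V[state]
    V[state] += ALPHA * delta
    return delta

errors_at_reward = []
for episode in range(200):
    td_update("cue", 0.0, "pre_reward")                               # cue, no reward yet
    errors_at_reward.append(td_update("pre_reward", 1.0, "terminal"))  # reward arrives

print("prediction error at reward, first episode:", round(errors_at_reward[0], 3))
print("prediction error at reward, last episode: ", round(errors_at_reward[-1], 3))
```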
Here's another example: object recognition. There's a long history of people trying to understand the computations of the ventral visual stream and, in particular, how the culmination of that stream, the inferior temporal cortex, can support object recognition.
And the most recent incarnations of that-- for example, the work by Dan Yamins and Jim DiCarlo in this department-- showed how you could build these deep neural networks to very closely match the computations being performed by the inferior temporal cortex and even achieve human level object recognition. So that's an example where ideas from deep learning influenced computational neuroscience.
Now, it's interesting that if you go back further into history, you'll see that the originators of some of these computational ideas in computer science were actually inspired by biology. So, for example, Sutton and Barto, who developed the temporal difference learning algorithm, were actually keenly interested in Pavlovian conditioning and what kinds of algorithms would support animal learning in general. So they were directly inspired in their development of the theory by biology and behavior. And then, it kind of came full circle when people started analyzing dopamine.
And likewise, actually, some of the earliest deep neural network architectures for object recognition and visual perception generally were developed by Fukushima, who was very explicitly interested in modeling the hierarchical computations of the visual cortex. So that just gives you a sense of the kind of flow of ideas, the bidirectional flow of ideas, between computational neuroscience and AI.
Now, it's actually kind of a tricky question. What exactly do we mean by bidirectional flow of ideas? So many computer scientists will say that they are inspired by biology in the sense that we can build neural networks, for example, that look vaguely like biological neural networks. But, of course, if you scrutinize them closely, they depart in many different ways. So any biologist can list lots of different ways in which artificial neural networks differ from biological neural networks.
And so the question is, if we're studying biology, what do we hope to gain? What insight do we hope to gain that will actually help us build better AI systems? That's looking at it purely from the engineering perspective. I'm an engineer. I want to build better AI systems. Can I actually learn anything from studying the brain?
And there's a kind of counterargument here, which was made by Russell and Norvig in their very influential textbook on artificial intelligence. And they said that "the quest for artificial flight succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics."
In other words, the early history of trying to build flying machines was dominated by a kind of imitative fallacy: people thought that they could build flying machines by imitating the design of birds. And it was only when the Wright brothers basically liberated themselves from that way of thinking that they started being able to build machines that could fly in ways that were really different from how birds flew.
So this would be the argument to say that we shouldn't slavishly imitate the brain. We should try to find sensible engineering solutions. And if they look like the brain, that's great. But we shouldn't be too concerned about that.
Now, on the other hand, the counterargument to that counterargument is that there are lots of things that humans can do that computers can't. And since the human brain is our one proof of concept that an intelligent computing device exists, we should probably be able to learn something from it. The question is, how do you go about doing that? What exactly are the lessons that you should glean from studying the brain, as well as psychology?
So let me come back to this question of brain-inspired computation. I said that at a very abstract level, artificial neural networks look like biological neural networks, in the sense that a unit in an artificial network will take some inputs, apply some transformation, and deliver an output, somewhat reminiscent of the way that dendrites take synaptic input, translate it into an action potential that gets transmitted down the axon, and then culminates in the release of neurotransmitters at the synaptic cleft.
And likewise, you could say at the systems level that the hierarchical organization of the brain, in particular the cortex, looks something like the hierarchical organization of neural networks. And even things like convolution are operations that we're pretty confident the brain performs, at least in the early stages of the visual system.
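As a rough picture of the abstraction being described, here is a minimal sketch of an artificial unit and a small two-layer stack. The ReLU nonlinearity, the random weights, and the layer sizes are arbitrary illustrative choices, not a claim about any particular biological circuit or AI system.

```python
# Minimal sketch of the analogy: an artificial unit takes inputs, applies a
# weighted sum and a nonlinearity, and passes its output on; stacking such
# units gives the hierarchy loosely compared to cortex.
import numpy as np

def relu(x):
    """A common artificial 'firing' nonlinearity (illustrative choice)."""
    return np.maximum(0.0, x)

def unit(inputs, weights, bias):
    """One artificial unit: weighted sum of inputs, then a nonlinearity,
    loosely analogous to synaptic input -> action potential -> output."""
    return relu(np.dot(weights, inputs) + bias)

x = np.array([0.2, -1.0, 0.5])                               # 'synaptic' inputs
print("one unit:", unit(x, np.array([0.5, -0.3, 0.8]), 0.1))

rng = np.random.default_rng(0)
h = relu(rng.normal(size=(4, 3)) @ x)   # a small hidden layer (4 units)
y = relu(rng.normal(size=(2, 4)) @ h)   # a second layer stacked on top
print("two-layer output:", y)
```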
Now, you can take that and look at it side by side with this observation that putting together these sorts of vaguely brain-like components can yield state of the art AI systems. So we have systems that achieve human level object recognition, human level game playing, human level speech recognition.
And I should just insert a caveat here that when I say human level, we have to be really careful what we mean by that because human level just means that there's some metric at which humans are performing that is achieved by the system. It doesn't necessarily mean that the system is actually computing in human-like ways or arrived at that point through human-like learning algorithms.
None of those things are really verified in the way that these systems are evaluated. And, in fact, one of the things that my lab has been doing is studying these systems in a way similar to how we might study a human subject and asking, do they actually learn in human-like ways? That's a whole other topic of conversation.
OK, so let's put these two things together. We have human level performance on important tasks. And we have brain inspired computational architectures. Maybe these two things put together is a kind of recipe for building human-like AI.
Computer scientists are somewhat more cautious about this, but plenty of journalists have made this leap already. So you'll see headlines like "This AI Learned Atari Games Like Humans Do-- And Now It Beats Them," or "Game-playing software holds lessons for neuroscience."
Or "Go figure! Game victory seen as artificial intelligence milestone," or "How Google's AlphaGo Imitates Human Intuition," right? So there's this presupposition that what AlphaGo is doing is human-like in some important, critical way, that there's something about how AlphaGo works that-- something from human psychology that is critical to the success of AlphaGo.
So in the rest of this talk, I want to try to sketch out the answers to three different questions. One is, what does it mean for a machine to learn and think like a person? The second is, how close are contemporary AI systems to reaching this standard? And the third is, should AI researchers care about building more human-like machines? So this goes back to Russell and Norvig's concern.
And provisionally, here are the answers. I'm just going to zoom through this; if you want more details, I'll refer you to that paper. So I'm going to list five key ingredients with strong empirical support from cognitive science. These are things that we know from many decades of studying human behavior.
It's less focused on brain mechanisms because, in fact, for a lot of these things, we don't know that much about the underlying brain mechanisms, even though we've known a lot about the behavior for many years. And so we think that studying these ingredients from a behavioral perspective, though really it's about the underlying cognitive processes, will aid us in building better AI systems.
Now, if we look at current AI systems, in particular, deep learning systems that are currently state of the art, we can see that many of them haven't incorporated these five ingredients. And certainly, they almost never incorporate all of them into a single architecture. So there's still a gap between what we think of as human-like intelligence and artificial intelligence.
And finally, should AI researchers care about this at all? And we think yes because there are many domains, many naturalistic domains, in which people are still much better than current machines. And so we should really think about what it is that humans are doing that makes them better. OK.
So here are the key ingredients. I'll just list them here, and then, I'll go through them one by one. The first two are really about representations of the world, or sort of models of the world, intuitive theories. So we have an intuitive theory of physics, an intuitive theory of psychology. And these aren't the only intuitive theories. We have lots of different intuitive theories. We have intuitive theories of biology, for example.
But these are two particularly important intuitive theories, which are often missing from AI systems. And what I mean by intuitive theory is an appeal to what we think of as scientific theories, in the sense that a scientific theory is a set of ideas or hypotheses about the world or a particular domain in the world that are organized into a set of interdependent relationships that make empirical predictions.
And so we can think about theory development in humans much in the same way that we can think about theory development in science, that people develop hypotheses. They go out and do experiments in the world to test them. They revise their beliefs about those hypotheses.
The third, fourth, and fifth parts are really more about kind of operations that happen on those intuitive theories, or sort of elements of those intuitive theories. So one is compositionality, how we can understand complex phenomena by breaking them down into simpler parts. Learning-to-learn. How is it that we don't just learn a task, but we actually learn how to learn whole sets of tasks more quickly? So we're learning at kind of multiple levels of abstraction. And then, finally, causality. So we think that causality is a really important property of how we represent the world and reason about our data.
OK. So here's some examples of why intuitive physics is important. When we look at these pictures, we don't just see piles of rocks. We get this eerie feeling that this pile of rocks is somehow unstable or that we can make a judgment about whether this is a good or a bad block to remove in order to prevent the tower from falling, or whether this is a good or bad position to knock that ball into the hole, right? So we represent these pictures in terms of both their surface features, but also, these hidden physical features, like force and mass, stability, and so on.
And we know quite a bit about this developmentally, so we know that at least some of the intuitive theory of physics exists even within the first few months of life. So children expect objects to have certain properties, like cohesion, that objects will move as connected and bounded wholes; continuity, that objects move on connected paths; and contact, that objects don't interact at a distance. So these are all things that we can see, even in the very first months of life.
And then, by their first birthday, children develop a whole very sophisticated repertoire of physical principles, things like inertia, support, containment, and collision. And this was really the work of Elizabeth Spelke, Renee Baillargeon, and their colleagues.
So why should you represent intuitive physics? Again, let's put on our engineering hats and say, if we want to build an intelligent machine, why is it useful to represent physics? And one of the key ideas is the notion of reduced sample complexity. So sample complexity is a technical term from computer science, which is basically how much training data you need in order to achieve a reliable estimate of some quantity.
So the way to think about this is intuitive physics is like a very structured kind of dimensionality reduction. If we can infer the physical properties of the world, those become kind of sufficient statistics for making all sorts of different predictions. And it's much lower dimensional than thinking about learning in the pixel space or some other kind of high dimensional representation. And so the fewer things that we have to learn, the more we can make use of data.
So it permits kind of deep and broad generalizations from a small amount of data. And we think that is the key property of human cognition and human intelligence, that we can learn from very few examples, and we can make very strong generalizations on the basis of those few examples, and very flexible generalizations. So we can apply our knowledge across many different domains that might look superficially very different from the domains in which we receive training.
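As a toy illustration of reduced sample complexity (invented for this write-up, not from the talk): the same made-up "does the tower fall?" problem can be learned from a hypothetical two-dimensional physical representation, or from a 256-dimensional pixel-like encoding of the same scenes, using only 20 training examples. The stability rule, the fake rendering, and all numbers are fabricated for illustration; the point is only that the low-dimensional physical representation typically generalizes much better from the same few examples.

```python
# Toy illustration of reduced sample complexity (all details invented):
# learn the same prediction problem from 2 "physical" features vs. a
# 256-D pixel-like encoding, with only 20 training scenes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=256), rng.normal(size=256)   # fixed fake "rendering"

def sample_scenes(n):
    offset = rng.uniform(0.0, 1.0, size=n)    # top block's offset from centre
    mass = rng.uniform(0.5, 2.0, size=n)      # top block's mass (irrelevant here)
    falls = (offset > 0.5).astype(int)        # made-up stability rule
    physical = np.column_stack([offset, mass])
    pixels = np.tanh(np.outer(offset, W1) + np.outer(mass, W2)
                     + 0.5 * rng.normal(size=(n, 256)))   # noisy high-dim encoding
    return physical, pixels, falls

phys_tr, pix_tr, y_tr = sample_scenes(20)     # tiny training set
phys_te, pix_te, y_te = sample_scenes(2000)   # large test set

for name, X_tr, X_te in [("physical (2-D)", phys_tr, phys_te),
                         ("pixels (256-D)", pix_tr, pix_te)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, "test accuracy:", round(model.score(X_te, y_te), 2))
```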
So here's an example of intuitive psychology, as we move on to the next intuitive theory. When we look at scenes with other agents, we don't just see objects moving around. We see agents with particular mental states. And those mental states, namely beliefs and desires, are guiding those agents' behaviors.
And young infants can make all sorts of quite sophisticated inferences, if you think about it. So you can show a child this agent jumping over this barrier. And after showing the child this display, the child will be quite surprised if it then sees this agent jumping over empty space, because it knows that agents are going to jump over obstacles in order to get around them, and they won't just spontaneously jump, because that's physically effortful.
And by the same token, they will expect an agent to jump over a smaller block, even one that looks quite superficially different from the block that they were exposed to initially, just because the principle at work here is that they're going to expend physical effort to get around an object blocking their path.
Here's an even cooler example of this. So Kiley Hamlin presented infants with this display where there's an agent pushing another agent up a hill and compared that to a case where there's an agent trying to push this other agent down the hill, even though that agent is struggling to get up to the top of the hill. And infants will recognize that this is a helper agent and this is a hinderer agent.
And that's actually quite sophisticated, if you think about it, because what this entails is that we're not just representing the utility functions of these agents. We're actually representing the fact that this helper agent has a utility function defined on the utility function of another agent.
In other words, when this agent's utility function goes up, this agent's utility function is also going to go up, whereas when this agent's utility function goes up, this agent's utility function is going to go down. So there's a kind of compositional structure of utility functions, which is quite amazing if we're thinking about six-month-olds.
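One minimal way to sketch that compositional structure in code is to write the helper's and hinderer's utilities as functions of the climber's utility. The functions and numbers below are purely illustrative, not from the experiments.

```python
# Illustrative sketch of utilities defined over another agent's utility
# (all functions and numbers invented for illustration).

def climber_utility(height):
    """The climber simply prefers being higher on the hill."""
    return height

def helper_utility(height):
    """Helper: better off when the climber is better off."""
    return climber_utility(height)

def hinderer_utility(height):
    """Hinderer: better off when the climber is worse off."""
    return -climber_utility(height)

for outcome, height in [("pushed up the hill", 1.0), ("pushed down the hill", -1.0)]:
    print(outcome,
          "| climber:", climber_utility(height),
          "| helper:", helper_utility(height),
          "| hinderer:", hinderer_utility(height))
```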
OK. Third ingredient, compositionality. This is the basic idea, again, that we can construct new representations by recombining old representations. We can understand complexity by breaking it down into simpler components. And anyone who's ever programmed a computer is, of course, familiar with this, because we don't write programs from scratch. We use old components, and we compose them together in useful ways. That's exactly the point of having a library and some operators that can connect the different parts of the library.
And we think that the human mind works much the same way, that both for language and for thought more generally, we can think an infinite number of thoughts and utter or understand an infinite number of sentences, despite the fact that we only have a finite representational capacity. This is what Wilhelm von Humboldt called the infinite use of finite means. OK.
And then, we see this also in vision. So new objects can be represented as novel combinations of parts and relations. This was the basis of an important theory of high-level vision developed by Irving Biederman called recognition by components, where he posited that there are a bunch of representational primitives that can be composed into rather complex volumetric representations.
So what are the benefits of compositionality? Well, it's much the same as what we saw before, which is reduced sample complexity and deeper generalizations. Again, if you can represent a complex domain in terms of a smaller number of components, then there's fewer things that you need to learn. So you can learn more efficiently, and you can generalize much more broadly because you can take those same different components and combine them in a new way to understand a new domain.
So here's an example of what we might think of as the usefulness of compositionality in an AI context. This is based on a paper by Tejas Kulkarni. So there's this Atari game, Montezuma's Revenge, which turned out to be extremely difficult for deep reinforcement learning algorithms, like the deep Q-learning network, or DQN.
And the reason is because the rewards are extremely sparse, and actually, you have to chain together a whole bunch of actions basically perfectly to get any reward at all. So if you look at the original DQN paper, which was published in Nature, they basically get zero points on this game.
This was in the context of a paper that claimed to have achieved human level game playing ability. But humans can actually learn this reasonably well. And one thing that we think people are doing is basically breaking it down to a bunch of subgoals and trying to pursue these subgoals and chaining them together to achieve the reward.
And so what Tejas Kulkarni did was basically build a hierarchical version of the DQN that could identify these subgoals, and then, pursue them independently such that they could be chained together to actually win the game. And he showed that that worked fairly well.
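Here is a much-simplified sketch of that hierarchical structure: a meta-controller picks subgoals, a low-level controller is trained with intrinsic reward for reaching the chosen subgoal, and the meta-controller learns from the sparse extrinsic reward. The toy corridor task (grab a "key", then reach a "door"), the tabular Q-learning, and all parameters are illustrative stand-ins, not a reimplementation of the deep networks in the paper.

```python
# Simplified sketch of the hierarchical idea: a meta-controller chooses
# subgoals; a controller gets intrinsic reward for reaching the chosen
# subgoal; the meta-controller learns from sparse extrinsic reward.
import random
from collections import defaultdict

N, KEY, DOOR, START = 8, 0, 7, 2      # toy corridor: key at 0, door at 7
SUBGOALS = [KEY, DOOR]
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.2

q_meta = defaultdict(float)   # (has_key, subgoal) -> value
q_ctrl = defaultdict(float)   # (position, subgoal, action) -> value

def eps_greedy(value_fn, choices):
    """Random choice with probability EPS, otherwise the highest-valued one."""
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=value_fn)

random.seed(0)
for episode in range(2000):
    pos, has_key, done = START, False, False
    while not done:
        # Meta-controller: choose a subgoal given the abstract state.
        subgoal = eps_greedy(lambda g: q_meta[(has_key, g)], SUBGOALS)
        start_key, extrinsic_return, steps = has_key, 0.0, 0

        # Controller: act until the subgoal is reached (or it gives up).
        while pos != subgoal and not done and steps < 50:
            action = eps_greedy(lambda a: q_ctrl[(pos, subgoal, a)], [-1, +1])
            new_pos = min(max(pos + action, 0), N - 1)
            has_key = has_key or (new_pos == KEY)
            done = (new_pos == DOOR and has_key)          # sparse extrinsic goal
            extrinsic_return += 1.0 if done else 0.0
            intrinsic = 1.0 if new_pos == subgoal else 0.0
            best_next = 0.0 if done else max(q_ctrl[(new_pos, subgoal, a)] for a in (-1, +1))
            q_ctrl[(pos, subgoal, action)] += ALPHA * (
                intrinsic + GAMMA * best_next - q_ctrl[(pos, subgoal, action)])
            pos, steps = new_pos, steps + 1

        # Meta-controller: learn from the extrinsic reward gathered under this subgoal.
        best_next_goal = 0.0 if done else max(q_meta[(has_key, g)] for g in SUBGOALS)
        q_meta[(start_key, subgoal)] += ALPHA * (
            extrinsic_return + GAMMA * best_next_goal - q_meta[(start_key, subgoal)])
        if steps >= 50:   # controller gave up; end this toy episode
            break

print({("has key" if k else "no key", "key" if g == KEY else "door"): round(v, 2)
       for (k, g), v in q_meta.items()})
```

With this structure, the meta-controller ends up valuing "get the key first, then head for the door", which is the kind of subgoal chaining the talk describes.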
All right. The fourth ingredient is learning to learn. And this is the idea that learning new tasks is accelerated due to previous learning. So we already know, actually, that deep learning utilizes learning to learn in several ways, for example, sharing features between classes or reusing previously learned knowledge to help perform new tasks. But we think that humans are doing learning to learn in a more sophisticated way and, in particular, in a way that harnesses compositionality.
So the idea here is that if you can take wholes and break them down into their parts, then these same parts can allow you to learn new tasks more quickly. In other words, you're learning not just representations of individual objects, but the vocabulary with which to describe objects. And that kind of compositional representation will supply you with the mechanisms needed for learning a new task faster, where you can reuse the same vocabulary.
All right. And the last ingredient is causality. So, if you think about it, many deep learning systems and other kinds of modern AI architectures are really organized around a kind of pattern-recognition motif: you're learning a very sophisticated function approximator to identify patterns, for classification or unsupervised dimensionality reduction, or what have you.
A different kind of approach is to think about the data in terms of causality. What we see, for example, when we look at handwritten digits is not a pattern, but rather a causal process that generated that image. And that's literally true for handwritten digits, because there was actually a motor program that got executed to causally produce that image.
And this was something-- this was an insight that Brenden Lake had, and he developed a Bayesian Program learner that would actually take the images and infer the underlying motor program that generated them. And he showed that this could work really well for recognizing digits or symbols from new vocabularies given a very small amount of data.
So again, we see this same theme coming back again where you can achieve really favorable sample complexity and fast generalization if you have the right kind of representational primitives. And causality is part of that. If we can represent the underlying causal structure of our data, then we can learn more efficiently.
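To give a flavor of that generate-and-explain approach, here is a toy sketch in the spirit of, but far simpler than, Lake's model: each candidate character is a small "motor program" of pen positions, a new image is scored by how well each program would have generated it under a simple noise model, and classification picks the best explanation. The programs, renderer, and noise model are invented for illustration.

```python
# Toy sketch of the "causal program" idea: classify an image by which candidate
# motor program best explains (most probably generated) it.
import numpy as np

SIZE = 5

def render(strokes):
    """Execute a motor program: mark each visited (row, col) pen position."""
    img = np.zeros((SIZE, SIZE))
    for r, c in strokes:
        img[r, c] = 1.0
    return img

# Two hypothetical character programs: a vertical bar and an L shape.
PROGRAMS = {
    "bar": [(r, 2) for r in range(SIZE)],
    "L":   [(r, 1) for r in range(SIZE)] + [(SIZE - 1, c) for c in range(2, SIZE)],
}

def log_likelihood(image, program, flip_prob=0.1):
    """Log-probability of the image given that this program drew it,
    under an independent pixel-flip noise model."""
    template = render(program)
    agree = (image == template)
    return float(np.sum(np.where(agree, np.log(1 - flip_prob), np.log(flip_prob))))

def classify(image):
    """Pick the program that best explains the observed image."""
    scores = {name: log_likelihood(image, prog) for name, prog in PROGRAMS.items()}
    return max(scores, key=scores.get), scores

observed = render(PROGRAMS["L"])
observed[0, 1] = 0.0   # simulate a bit of noise in the drawing
label, scores = classify(observed)
print("best explanation:", label, {k: round(v, 1) for k, v in scores.items()})
```

A single noisy example is enough here because the generative programs, not raw pixel statistics, carry the burden of explanation, which is the sample-complexity point being made.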
One place where we think that this could be useful is tasks like caption generation. So there are now caption generation systems that you can even use on the web. You can give it an image, and it will produce captions. So we tried this on a few images.
And here's what we got out of it. Admittedly, this is a system from 2015, and there have been 20 more systems developed since then. But in our anecdotal experience, this is not an isolated occurrence. Basically, the problem is that the system can often recognize the underlying objects, but it gets the deeper meaning completely wrong. Yes, there are a group of people standing on a beach, but that's not really what the image is about. That's not how you would describe the image to another person.
And so we think that if these systems have a deeper causal understanding that appeals to underlying intuitive theories of different domains, intuitive theories of psychology, to recognize that these people have emotions, they're in some state of distress, intuitive physics, that this house is collapsing, then you can actually produce much more realistic, human-like descriptions of images.
OK. So coming back full circle, I started with this kind of seductive hypothesis that we have neural networks that are vaguely brain inspired, and they achieve human level performance on important tasks, and that maybe the two of these things together will produce the necessary ingredients for human-like intelligence. And we think that that's not entirely the whole story, that there is a bunch of things still missing from these systems, namely the core ingredients that I just mentioned to you. OK.
And I think that the AI community is already rather receptive to the notion that there are important psychological principles that could be incorporated into AI systems. So you already see this happening. For example, the development of selective attention mechanisms, experience replay, external working memories, these are all things that are, at least at an abstract level, inspired from the study of human psychology and neuroscience.
And what we're really advocating is pulling in more kind of higher level cognitive ingredients than these relatively low level things. And we think that this will have a wide range of applications to things like scene understanding, autonomous driving, creative design, autonomous agents, and intelligent devices.
Now, some of you, the more biology oriented of you here, might wonder, well, what about biological plausibility, right? Can we really expect to build a human intelligence without kind of emulating some important properties of the underlying biology? And that may or may not be true.
But I think it is important to be careful about prematurely knocking down computational ideas because of their apparent biological impossibility, because that's often what neuroscientists do when they hear about particular psychological ideas: well, how would we ever get a neural network to do that?
I think the question that they have to ask is, how cognitively plausible is their neural architecture? Because we have a set of ideas that have been calcified into the textbooks about how neurons work, but, in fact, many of these are disputable. So, for example, any of you who took a neuroscience course were probably taught that long-term potentiation is the mechanism of memory formation, kind of the fundamental mechanism of memory formation at the cellular level.
But people like Randy Gallistel have made quite persuasive arguments that actually, long-term potentiation lacks the properties needed to produce many of the behavioral characteristics that we know are true of animal learning and memory. So if that's true, then in what sense is LTP actually relevant for understanding behavior, for understanding the actual phenomena of memory that it sets out to explain?
It's not denying that LTP exists. It's denying the postulate that LTP is important for producing important behavioral phenomena. And so this is just one example where we can't get too stuck on the biological details, and we need to consider cognitive plausibility.
And I'll just close by saying that I'm not arguing against neural networks in any way because, at some level, the brain is a neural network, so it has to be the case that everything that we're doing, everything that goes on in our minds, is implementable by a neural network. And, of course, we have theoretical proofs that say that certain classes of neural networks are universal function approximators, so it's almost true in a trivial sense.
But I think the interesting engineering problem here, and also the problem for computational neuroscience, is what kind of network: maybe the neural networks that are going to achieve human-like intelligence will look very different from the kinds of artificial neural networks that we are developing today. And I think that's an important question to ask ourselves. All right. Thank you.
[APPLAUSE]