Panel Discussion: Is there anything special about human intelligence? (vs. non-human animals, vs. machines)
Date Posted:
August 17, 2020
Date Recorded:
August 14, 2020
CBMM Speaker(s):
Laura Schulz,
Matt Wilson,
Nicholas Roy
Speaker(s):
Venkatesh Murthy
All Captioned Videos Brains, Minds and Machines Summer Course 2020
PRESENTER: Wilson, so Matt Wilson, also from Brain and Cognitive Sciences, and Nicholas Roy from CSAIL at MIT.
So the topic for the panel discussion today is, is there anything special about human intelligence? So I was envisioning that this question may take a number of different directions and formats, including what's particularly amazing about biological intelligence versus what machines can do these days, but also discussions about, how do we go about studying intelligence in non-human animals, and what are the differences between non-human animals and human intelligence as well?
And then generally speaking, what are the directions that we should take in AI to learn from the incredible resources and opportunities that we have from studying both cognition in humans as well as detailed neural circuits in animal models? So without further ado.
NICHOLAS ROY: I pondered the question, anything special about human intelligence. And my first thought was that special is kind of a funny word. And I wasn't quite sure how to interpret special. So one thing that is clear is that there's lots of things that human intelligence can do that machines can't do at all, or at least can't easily do.
And so I thought through a few of these things, and this is by no means an exhaustive list, and it's easy to argue about whether or not machines can do them or how easily they can do them. But certainly, humans seem to be much better at learning from very, very small numbers of examples, whereas robots and computers seem to require a lot of data.
Another thing is that machines, at least our best performing machines, seem to be hobbled in their ability to generalize outside the training set. There are certainly some systems that can generalize outside the training set, but they don't tend to be our best performing systems right now at a lot of the tasks we care about, perception, et cetera.
Humans certainly can reason at multiple levels of abstraction. And we actually can see in the brain, we can actually physically see the meat, if you will, that reasons at multiple levels of abstraction. And my robots do reason at different levels of abstraction. But man, it's an effort to get them to do that. And certainly, I think as far as embodied intelligence is concerned, there is no principled way of articulating those different levels of abstraction, and actually writing down an objective function.
And one thing is that robots are phenomenally bad at recognizing risks to themselves and others and choosing actions accordingly. You can encode risk in the reward function, but that's very different from recognizing that an action you might take might end your existence. Robots, and other machine intelligences, have no concept of the end of their existence and what that might mean in terms of their decision-making.
And the flipside of that is they can't choose actions that have no apparent value. So I and a couple of other folks have a project to actually try to understand curiosity in embodied intelligence. And it's really hard to understand, what is it that motivates us to learn things that have no apparent value, but oftentimes end up telling us something useful for the future? So these are some ways in which they're at least different.
But there's no reason why machine intelligence couldn't do these things at a fundamental level. So learning from teeny numbers of examples and generalizing outside the training set, that might very well just be a function of the inductive biases that we put inside our system. So part of what makes neural networks really good is that they're able to use all of the representational power to capture what's inside the training set. But that's the result of an inductive bias.
And so maybe we just need to think harder about what the inductive bias is that's in human intelligence. Reasoning at multiple levels of abstraction could just be, again, a different internal representation that we just don't have, at least my robots don't have the right representation.
And then the last thing is that choosing actions and exhibiting curiosity, they appear to just be different loss functions. Maybe fatal risk is somehow different from a loss function. But maybe the reason our robots and our machine intelligence don't have these properties is because we just haven't gotten the loss function right, and we need to understand what it is about human intelligence, in terms of loss functions, that is giving us this sense.
So I don't know that I can answer the question of whether or not human intelligence is special compared to machine intelligence. But I think there are some clear differences where human intelligence has desirable properties that machine intelligence doesn't have. And there's some ideas about how we might get to those. And then I'm happy to answer questions about this throughout the rest of this panel.
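A minimal sketch of the idea Nick raises here, that curiosity and fatal risk might, or might not, reduce to terms in a single reward or loss function. This is not any actual robot's objective; the weight on the curiosity bonus and the size of the catastrophe penalty are made-up illustrative values.

```python
# A minimal sketch (not any actual robot's objective) of folding curiosity
# and fatal risk into one reward function. BETA and the catastrophe penalty
# are hypothetical, purely illustrative choices.

def shaped_reward(extrinsic_reward: float,
                  prediction_error: float,
                  is_catastrophic: bool,
                  beta: float = 0.1,
                  catastrophe_penalty: float = 1e6) -> float:
    """Combine task reward, a curiosity bonus, and a fatal-risk penalty."""
    # Curiosity: reward visiting states the agent's model predicts poorly,
    # even when they have no apparent task value.
    intrinsic = beta * prediction_error

    # Fatal risk: a state that ends the agent's existence is treated here
    # as just a very large negative number, which is exactly the move the
    # panel questions: is that really the same as understanding that no
    # future reward is possible afterwards?
    penalty = -catastrophe_penalty if is_catastrophic else 0.0

    return extrinsic_reward + intrinsic + penalty


if __name__ == "__main__":
    print(shaped_reward(1.0, prediction_error=0.5, is_catastrophic=False))
    print(shaped_reward(1.0, prediction_error=0.0, is_catastrophic=True))
```

The open question in the discussion is whether a large negative constant really captures what it means for an action to end the agent's existence, or whether that calls for a different kind of representation altogether.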
PRESENTER: Very good. Thank you very much, Nick. I just want to pretty quickly introduce our fourth panelist, Professor Venky Murthy, who is the Director of the Center for Brain Science at Harvard. And now I suggest that we go with Laura and then Matt and then Venky, and then you answer questions. So I will shut up and now the four of you can lead the rest of the panel discussion.
LAURA SCHULZ: Great. I didn't prepare any slides here. Can everyone hear me?
NICHOLAS ROY: Yes.
LAURA SCHULZ: OK, good. I didn't prepare any slides, because I was coming in to talk about this perennially fascinating question. And I think we maybe do ourselves a disservice by thinking just in terms of humans and machines, because we share the planet with vast numbers of organisms with very different kinds of intelligence, each of which is fairly unique and distinctive. And there are lots of creatures that can do lots of things that we can do quite effortlessly, and things we can do that no other creature can.
So I think one of the ways that it's often fruitful to think about machine intelligence is what kinds of intelligence is it exhibiting? And are the processes by which machines are learning, are they most similar to humans? Because even when they're not similar to humans, they may be capturing kinds of intelligence that are present and emergent in other species.
And so, I think the broad range of intelligence is important to consider here. And of course, even the very simplest machines have long been able to do things humans can't do. Our calculators are much better than us at doing rapid calculations, for instance. That's why we invented the machines. So even a simple pocket calculator is in some ways much smarter than humans, in a very, very limited way.
I would agree with basically everything Nicholas said about human intelligence, but I think it goes a lot beyond that. It's not just that we are curious about useless things, or that we're bad at calculating fatal risks. It's that we habitually invent new problems for ourselves. We create problems we don't really have.
And then we go about trying to solve them. And when we do solve them, we create new problems. And those problems themselves act as bootstraps into learning. They constrain the kinds of searches we organize, they constrain the ways we frame and think about our ideas. And we think about the value of our problems ourselves.
I think one thing that might be distinctive about humans, and this is an idea I've been playing with a lot recently is that most animals have a very restricted set of utilities, largely governed by their survival and reproductive ends. And humans can really choose almost any arbitrary reward and incur almost any cost, including cost to life, limb, and health.
And that flexibility in utility functions is what makes us able to create all these innumerable problems, in part because every kind of goal we set and every cost we're willing to incur creates a different problem space. And those, again, impose different inductive biases, different constraints for different avenues of learning.
So I think that's very interesting. I think one of the things that is probably not desirable in our machines is that we allow them to set their own utility functions. There's a lot of reasons why we want our machines to have our utilities at stake and not invent their own.
But it might be critical to human intelligence that we do have that flexibility, that we can decide what to want, that we can decide, even in very idiosyncratic ways, what goals to pursue. So I think I'll stop there and pass it on.
MATTHEW WILSON: Yeah, great. [LAUGHS] I also didn't prepare any slides, but fortunately, both Nick and Laura brought up the points that I was actually thinking about discussing. And that is this idea of animal intelligence versus human intelligence. I think the reason we focus on human intelligence is that we sort of think of humans as representing a collective niche. We all share and understand the way humans think.
But as Laura points out, the thing that really strikes me about animal intelligence, and brains in general is that there are so many of them, that intelligence is manifest in so many different ways, both structurally and in terms of the environmental and behavioral constraints that drive it. But in the end, as Laura points out, it really is all about how do you survive, how do you thrive?
And I don't fundamentally think that humans are really any different in that way. I think the idea is that, well, humans have some way to sort of transcend the immediate constraints and immediate objective functions. We create problems to solve.
It brings up an interesting point that I think Nick pointed out. And it was this question of risk; I was also going to bring that up. One thing that I like to think about in just observing human behavior is not how people are really so intelligent. I'm never really that struck by how intelligent people are. I am very struck by how stupid people are when they do things.
It's like, why is that person [LAUGHS] doing that stupid thing? Why do people take risks? And I think we fail to appreciate that intelligence, when manifest in individuals, is not just about individuals optimizing their outcomes, it's about the collective group. We need people to take risks. We need individuals to try and fail so that the group in general can succeed. We need people to take risks and we need others not to take risks.
And so I think what's difficult in embedding the kind of intelligence of organisms that work within groups, and that manifest intelligence in terms of the optimal outcome of the group, is that we need robots, as Nick said, not to do stupid things on an individual basis. We can't have robots take risks unless we have armies of robots that are able to collectively solve problems.
And it's that conflict. How do we get robots to embody the best parts of intelligence when, really, objectively, when you look at animals and humans, while we can point to individual instances of it, intelligence in general is kind of a distributed function? We need diversity of problem-solving abilities. And this comes up in studies of problem-solving in groups: you think, what's the best group to really tackle a problem?
Well, you think, oh, you just get all the smartest people together. That's how you do it. Well, actually, it turns out no, [CHUCKLES] it's about having a heterogeneous group. You need people or individuals or intelligences that span that diversity of problem-solving approaches, not just a narrow definition of what we think of as really intelligent, really focused, optimally solving a problem.
And so thinking about animals, risk-taking, diversity of intelligence, and then the idea of individual versus collective or group intelligence, and how we incorporate what ultimately is going to be a new kind of intelligence into machines. And that is an intelligence that does not serve them, but an intelligence that serves others. And that serves those that would like that intelligence to somehow augment their ability to survive and thrive.
And animals are smart. They do survive and they do thrive.
VENKATESH MURTHY: So I think one of the advantages of coming in last is that people have made the important points already, so you don't have to speak very much. But it's also obviously a disadvantage.
Great points, everybody. I want to actually think about two points. One of them I've been thinking about overnight, to the point that I even tweeted out something related to it an hour ago, which is this: [INAUDIBLE] intelligence, the whole concept, is that like a scalar analog variable, or is it something digital? Or maybe this is another way of phrasing what other people have said: is human intelligence somewhere on the number line of other kinds of intelligence? Or is it somehow digital, that maybe it crossed some threshold, and therefore it's somehow unique and different?
And I personally don't think so. I think for almost everything that we can think of as intelligent behavior, we can probably find analogs and metaphors in animals. So I think it's not clear that there's something that we have where we somehow digitally crossed a threshold and it's just completely different from everything else. So maybe that's the one thing that I don't know the answer to, but I've been pondering about.
Language is just an obvious thing, where we think somehow we're just spectacularly different. But is that true? Can we think of other communications that string together sequences of elements resembling something like language, whether wildly fluent or more stereotyped? And I think that maybe there are. So perhaps we should think about whether we're completely different, versus on the scale of everybody else, or whether maybe we created a new axis, I don't know, a new axis of intelligence along some dimension of that scale.
Related to that, in my own research I work on animals, obviously. So I always think about the sense of smell, which I think about quite a lot, as something that we humans simply don't seem to have very good intuitions about. Is that any form of intelligence? Are animals that are extraordinarily good at sensing chemicals, navigating using them, identifying them, and figuring that out, exhibiting a form of intelligence?
And if so, how do we even think about it? Machines, of course, robots, we're talking about them. There's no robot like that. It's all about sound and vision, right? There are no robots right now, no embodied intelligences other than animals, that go about looking for chemicals or being intelligent about chemicals.
So I think those are the two things: one, is it analog or digital, and two, is it something that we just don't have a good sense of, because our imagination is limited by what we feel comfortable with out and about, and that's what we call intelligence? So that's all I have to say.
PRESENTER: OK, thank you very much to all the panelists. So I want to mostly shut up here and let all four of you lead the discussion.
NICHOLAS ROY: Sounds good. Here's one from [INAUDIBLE]. All animals probably have a good grasp of intuitive physics, and obviously some sort of intelligent communication. Do you think that the abilities to grasp mathematics and language at the level of humans are correlated? Can there then be a unifying framework or software implementation that manifests as these two separate forms of intelligence?
LAURA SCHULZ: I'm happy to kick that off a little bit, because it's been a question that people have talked about a lot, especially in the developmental literature where I work, because human babies obviously start off as the sort of organisms that have some grasp of intuitive physics, and some forms of intelligent communication, but don't start off with mathematics and language, but they do end up with it. And so one of the questions has been, how much do they start by sharing capacities that we share with a wide range of other organisms? And quite a number of animals understand some basic things about mathematics. So very many animals have approximate number representations; they can tell roughly how many berries are in this bush, how many prey animals there are in this field.
And quite a few animals are also good at exact numbers. They can tell how many eggs are in their nest, and they will kick out an egg if you try to sneak an extra egg in there. So they can subitize very small numbers, and they can represent approximate numbers. But no other animal that we know of has a count list that can actually [AUDIO OUT] that the difference between 9 and 10 is the same as the difference between 21 and 22. No other animal can really bridge these systems to do the kinds of representations and computations that humans can do.
And the argument has been exactly that the symbolic abilities that are associated with language help to bridge core systems of mathematics that are present in other animals, but that in humans, turn into the ability to manipulate symbols very, very richly, and turn into things like the formal mathematics that we all benefit from here. So I don't know if that entirely answers the question, but yes, I think there is an idea that there is, perhaps a unifying framework or software that gets culturally transmitted in humans that doesn't build mathematics from scratch, but that takes these core representations and lets us use them in much richer, much more flexible ways.
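A minimal sketch of the approximate-number systems Laura describes: in the standard Weber-fraction model used in numerical cognition, each numerosity is represented as a noisy magnitude whose imprecision grows with the number, so discrimination depends on the ratio rather than the absolute difference. The Weber fraction used below is an assumed, illustrative value, not a fitted one.

```python
import math

def p_pick_larger(n1: int, n2: int, w: float = 0.2) -> float:
    """Probability of correctly judging which set is larger under a standard
    approximate-number (Weber-fraction) model: each numerosity is represented
    as a Gaussian with mean n and standard deviation w * n."""
    diff = abs(n1 - n2)
    noise = w * math.sqrt(n1 ** 2 + n2 ** 2)
    # Phi(diff / noise), computed via the error function
    return 0.5 * (1.0 + math.erf(diff / (noise * math.sqrt(2))))

# Discrimination depends on the ratio, not the absolute difference:
print(round(p_pick_larger(2, 3), 2))     # easy: difference of 1, favorable ratio
print(round(p_pick_larger(9, 10), 2))    # harder: same difference, ratio near 1
print(round(p_pick_larger(21, 22), 2))   # near chance without a count list
```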
NICHOLAS ROY: I completely agree with everything Laura said. The one thing that I would say is that at a meta level, they might be the same thing, and that there's common operations between math and language. So there are ways in which mathematical symbols and linguistic symbols correspond to things that we perceive and we can do.
They're essentially both forms of abstraction. Those abstractions have well-defined composition operators. You can assemble mathematical symbols together to express functions, et cetera. You can assemble linguistic symbols together. There's a well-defined parse structure; you can have a nonsensical mathematical equation, just as you can have a nonsensical linguistic sentence.
And then you can abstract further. And the more and more abstract you get, the harder and harder it is for somebody who's not well-versed in that abstraction to follow. I think that the differences might be in the instantiations of those operations. So the actual symbol grounding operation between language and math to the physical perception actions, I think are not the same. I think the compositional framework is different.
But the way in which you compose these things is similar. The actual abstraction operators are different. But the fact that you have abstraction operators is shared. So they may have a common template for abstraction and composition and hierarchy, but the actual instantiation of templates may differ.
MATTHEW WILSON: Yeah, I would agree with all of that. And then there's the question of, do animals have this capacity? And I think Nick really hit it on the head. So the idea is having a system of compositional operations that can be applied to what we might think of as symbolic representations, symbolic to the extent that they can be generalized and are subject to these kinds of common compositional operations.
And that is, you take one thing, you apply the operator, and it gives you another thing. And do animals have that? I think they do have that capacity. So I think they have this sort of common substrate for symbolic thinking. I think the idea of abstraction and the ability to generalize and to take these compositional operators, move them from one domain into another domain, it's more complicated. I think they have the primitive structures for that.
And I like to think of the basic, evolutionarily conserved brain structures that might subserve those functions, the ones that are kind of common to maybe language and mathematics, and also to general animal problem-solving: the hippocampus, the parietal cortex, frontal cortex. They're contributing to those functions.
And so you can think of animals, they can do these things, but they just don't do them very well. So you can think of animals, if you want to think of intelligence as the capacity to perform those functions. And animals have that. If you think of the scope of generalization, abstraction of those operations as a measure of, as Venky said, degree of intelligence, then yes, I think we are more intelligent in that sense. And that there are individuals that have a greater capacity for that generalization, as Nick pointed out, that may be something that can actually be acquired through learning through training.
And I think a question of whether animals can be trained to do this is an open question. I think sort of Laura's point that oh, animals, they can't generalize between 9 and 10, 20, and 21, the equivalence of those, well, humans don't do that very well, either. Humans are really terrible at doing that. In a sense they know, but behaviorally, when you actually query them, they don't really express that. They have all kinds of screwed-up scaling, temporal scaling, magnitude scaling.
So you could say, yeah, they understand the abstract concepts. But when you actually probe them behaviorally, they don't really express that very well. The problem with animals is we don't have good tests for that. We don't have good ways of actually measuring that beyond the rudimentary behavioral tests. Which raises the question: if we did this with a human, would we conclude they really have mathematical concepts?
And if you actually do the scaling: look, I'll give you $1 today versus $100 two weeks from now. You know the difference between 1 and 100. And they take the dollar. You say, wait a minute. [LAUGHS] I don't understand. You don't seem to understand that 100 is worth more than 1. But if you actually scale that, you introduce sort of hyperbolic temporal scaling. They say, well, you know, $1 today is worth more than $100 a year from now.
In the same way, the difference between $10 and $20 is not the same as the difference between, you know, $999 and $1,099. People don't see the difference when the relative difference is small. So you think $20 is $20, but that's not how people perceive it.
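A minimal sketch of the hyperbolic temporal discounting Matt is alluding to, where the subjective value of a delayed reward falls off as 1 / (1 + k * delay). The discount rate k here is a purely illustrative value; fitted values in real experiments vary widely across people and tasks.

```python
def present_value(amount: float, delay_days: float, k: float) -> float:
    """Hyperbolic discounting: subjective value falls off as 1 / (1 + k * delay)."""
    return amount / (1.0 + k * delay_days)

# With a hypothetical, fairly steep discount rate, the larger-later option
# can lose to the smaller-sooner one once the delay is long enough.
k = 0.3  # per day; purely illustrative
print(present_value(1.0, 0, k))       # $1 now            -> 1.00
print(present_value(100.0, 14, k))    # $100 in two weeks -> ~19.2, still wins
print(present_value(100.0, 365, k))   # $100 in a year    -> ~0.91, now loses to $1 today
```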
LAURA SCHULZ: But Matt, [LAUGHS] let's [INAUDIBLE] a bit. Because we should keep two things distinct. So temporal discounting and decision-making, for sure. But if I give you a choice right now, between a pile with two grapes and a pile with three grapes, and you like grapes, you'll choose the pile with three grapes, and so will many other creatures.
Between 22 grapes and 23 grapes, you can do something other animals can't, which is you can count those, and you can decide that you want the 23 grapes. And most other creatures cannot. And that is a representation of number that most humans do have once they've learned a count list, and that I think really is distinctive.
Even though you're certainly right, that there are lots of behavioral measures where we wouldn't display it. There are lots of cases where this cognitive technology that humans have is widely accessible and surprisingly distinctive.
MATTHEW WILSON: Yeah, I think of, for instance, the experiments that have been done with animals that seemingly are more intelligent, like the corvid experiments and the scrub jay experiments, where they do seem to be able to both understand and express this concept. Are they really smarter, or are they simply more capable of demonstrating, using the sorts of behavioral assays that we have, that they have that capacity?
Are scrub jays really smarter than rats? Mm, I think probably not. Are they smarter than chickens? Chickens, they get kind of a bum rap. If you keep chickens and you watch chickens, [LAUGHS] they have very complex behavior, and you think, they can't be that stupid.
It's just that they seem stupid because, A, we're not looking at intelligence in the same way. We're measuring by our scale of intelligence rather than theirs. But I think that's a different question. I think the idea that there are different levels of intelligent capacity, as reflected in the ability to abstract and generalize, is perfectly reasonable. I don't think that we're as smart as chickens, or as stupid. One or the other.
VENKATESH MURTHY: I'm going to just say one point just so I don't forget. I think Nicholas said this, and I'm not sure I'm convinced by this analogy between sort of abstract math and language intelligence. Math, formally, is this idea that you have a set of axioms and you derive theorems based on those axioms. And I'm just not convinced, and obviously you can correct me, that almost anything we do is anything more than approximation. I'm not sure that we really go through the axioms.
When we manipulate anything, we kind of follow rules, even in language. So I think the question perhaps is, have we internalized some of these axioms just from all the experience, axioms that we maybe don't even know explicitly, but we just have enough training examples and we just kind of go through it?
And what looks overtly like we're manipulating some fundamental axioms to derive theorems is really just instantiation of lots of different combinations that come from experience. And now again, I'm happy to be corrected. But it's more of an approximation than a real [INAUDIBLE] theorem.
NICHOLAS ROY: So that's an interesting point, and I think I largely agree. And what I would say is that most of us are operating with a relatively low level of abstraction that is absolutely approximate in the way that you describe. And I think as you get higher and higher in the abstraction, then the edges of the symbols become more and more precise.
And fewer and fewer people, I think, are actually executing at that level. I also think the same thing is true of language to a large extent, that even highly rarefied fiction is not as accessible as pulp fiction, as it were. And I think for the reasons you're pointing at.
VENKATESH MURTHY: Yeah, but I think maybe your argument, just to put it on your side, is that the existence proof of some people who do operate at this very abstract, very high level says that there's something about human brains, perhaps. I wonder, too. Music is another thing, right? I enjoy all kinds of music.
And I really like mid-20th century music at an abstract level. There's all kinds of amazing formalism. It's just not enjoyable, but you know there's lots of formalisms. And it's like, what is that? Is that axioms too?
NICHOLAS ROY: Like Schoenberg and Alban Berg?
VENKATESH MURTHY: Yes, indeed.
NICHOLAS ROY: And that's interesting, because a lot of the music is derived from mathematical procedures. And yes. I appreciate that you appreciate it. I don't have as much of an appetite for it.
LAURA SCHULZ: I think we should be careful here, and maybe it's because I think about how children learn fundamental concepts. But one of the things that's really striking is that something as simple as two or five is very abstract. Because when you think of five, and you're a human, and you really know what five is, which you do when you're about three years old, you know that it's not just five grapes or five M&M's or five cookies.
It can be five minutes or five days or five utterances or five claps. It can apply to everything. So that doesn't have to be modern contemporary music with deep abstract structure. That doesn't have to be post-modernist fiction. That's just five. But it's abstract and it's symbolic. And it's a concept. And it is very powerful.
VENKATESH MURTHY: Sorry to interrupt, but do you think that it's been shown that animals don't have that, that they don't transfer from five grapes to five pellets to five other creatures?
LAURA SCHULZ: For the reasons Matt suggested, you never want to bet against animals, because there's always a possibility that maybe we aren't asking in the right way, maybe we haven't come up with the simplest method. But from a lot of work by a lot of folks, you have some existence proofs in gray parrots. Alex, that parrot who could. But very many animals cannot. And those failures are surprising and shocking.
And they are not that surprising and shocking, because you can also get really surprising, shocking failures from human children. So one of the very famous, classic demonstrations is, if we put one cookie in a box in front of a kid and two cookies in another box, they'll choose the two cookies over one. If you put two cookies in one box and three cookies in another, they'll choose the three cookies over two. But if you put four cookies in one box and one cookie in another, they're at chance.
And if you put four cookies in one box and two cookies in another, they're at chance. It's not about the ratio, it's about the absolute number. Until kids can count, they cannot track four objects, any more than you can track four moving dots on a screen. They just drop it out entirely.
And those are two-year-old kids. And it's not that they don't know or want more cookies. It's just that they can only do it with 1, 2, and 3. They can represent those exact numbers as object files. At 4, they can no longer do it, because they can't count. So that's a human child who's going to learn to count, and they can't do it either. So there are lots of pretty stunning instances of failures that you don't get until you have these abstract representations.
MATTHEW WILSON: And one point I just want to make, and this relates to these measures of animal intelligence, and to another point that Nick brought up that I think is really important. And it's this idea of exploration. That is, the idea of evaluating outcomes that may not actually be optimal or may not actually reflect the problem that's being solved. And how important exploration is for intelligence, particularly as manifest in groups.
If you only solve the problem that you think you need to solve, you're not going to be able to respond adaptively when conditions change. And that really is how intelligence in an evolutionary sense is driven. That is, the ability to express adaptive behavior under changing contexts. And this idea of exploration, both behaviorally, but also when you think of the neural substrates, how do we actually think, how does intelligence emerge?
I think the idea is that it involves, in some way, some kind of deep search, that we search some kind of problem space, and we come to understand through that search how the underlying representations might have some utility in optimizing that search. And that is the idea of symbolic representations and abstraction. And I think all of this comes from applying this process, these generalizable, compositional operators, in the context of some larger search beyond immediate problem-solving. So animals are trying to solve a bigger problem than the one you confront them with.
And this often comes up even in a simple task. So you give an animal a simple task, like a two-arm maze. Food on one side, no food on the other side. Just solve this simple task. And a lot of people point out, well, you know, you give an animal this task. And monkeys, maybe they do like 90%. A rat, they'll do maybe 70%. So 70% of the time they go to the arm with food. And then 30% of the time, they go into the arm that doesn't have any food, and you think, how stupid are these rats? Don't they get it? Food's on the right, not on the left.
But now, that's this two-dimensional thinking. You're thinking, what they're trying to solve is the problem you gave them, instead of they need to explore. They need to determine, has the problem changed? If I don't check out the left side, I'll never know whether perhaps the left side is actually now optimal. And so they're trying to solve the broader problem, which is how do I adapt my behavior given the expectation that the problem will change?
And so thinking about oh, intelligence is when you express the optimal response. You solve the problem well. I think animals often get shortchanged because they seem to be suboptimal in behavior, but only in the narrow task domain for which you actually put them. And I think primates and humans, one thing that they're good at is sort of focusing on narrow problems. You ask them to solve a problem, they will solve that problem.
Maybe kids are the same way. You want them to do one thing, they want to do other things. They're exploring, they're playing, they're trying to solve a broader problem. And so you think, well, that lack of focus is somehow diminished intelligence. When it may actually be the other way around.
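A minimal sketch of the exploration point made above: in a two-arm task whose rewarded side can change, an agent that only solves the problem it was given never notices the switch, while one that spends a fraction of its choices checking the apparently wrong arm recovers. This toy epsilon-greedy agent is not a model of actual rat behavior; the switch point, step size, and exploration rate are all made-up illustrative values.

```python
import random

def run(epsilon: float, steps: int = 1000, switch_at: int = 500,
        alpha: float = 0.1, seed: int = 1) -> float:
    """Two-arm task whose rewarded arm switches halfway through.
    epsilon is the fraction of 'stupid-looking' exploratory choices."""
    rng = random.Random(seed)
    value = [0.0, 0.0]          # recency-weighted estimate of each arm's payoff
    total = 0.0
    for t in range(steps):
        rewarded_arm = 0 if t < switch_at else 1      # the world changes
        if rng.random() < epsilon:
            arm = rng.randrange(2)                    # check the "wrong" arm anyway
        else:
            arm = 0 if value[0] >= value[1] else 1    # exploit current belief
        reward = 1.0 if arm == rewarded_arm else 0.0
        value[arm] += alpha * (reward - value[arm])   # track a changing world
        total += reward
    return total / steps

print(run(epsilon=0.0))   # purely greedy: never notices the switch
print(run(epsilon=0.3))   # a rat-like 70/30 policy: recovers after the switch
```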
VENKATESH MURTHY: And this goes to what I think Laura mentioned very early on. There's this old joke that you can always recover the optimization function by looking at where the solution is and integrating backwards. The utility function is something that you can cook up to be whatever you want.
So I think we have one particular function. So I think that's a fair point. I also want to point out, one of the commenters asked, should we move on to other questions? I don't know what people feel the time constraint is, and whether there are very specific questions that Chris or Gabriel want us to think about.
NICHOLAS ROY: So we've got one, not voted, submitted by anonymous. Non-human primates have complex interactions and social structures, which presumably require complex models of others' minds and communication. What do we think stops non-human animals from developing complex languages like us?
MATTHEW WILSON: Language is always a tough one, right? Because there's both sort of the internal symbolic thinking, and that's necessary, but not sufficient. Language requires that we be able to communicate that. And so distinguishing the thinking from the communication, and I think do animals have that ability? If they do, it's very limited. The combination of the two. I think the underlying symbolic thinking, the substrate for language, is probably there.
VENKATESH MURTHY: I think Laura probably should answer that. But I thought that one of the [INAUDIBLE] physical constraints was really the structure of our larynx. There really is some evolutionary development that is quite different from other non-human primates, the way our epiglottis is or isn't positioned, such that we really can create the output. I wonder how much that really is a constraint on expressing what one is thinking?
MATTHEW WILSON: People sign. It doesn't have to be vocal.
VENKATESH MURTHY: I'm not sure if sign would have come before the vocal language. Maybe that's a theoretical question. [LAUGHS]
LAURA SCHULZ: I think language is, again, like intelligence itself, one of the perennial mysteries. Why do we have it? Why do other [AUDIO OUT] I think one of the interesting questions is, how much is it the internal structural features of language, recursion or compositionality, versus how much is it closer to what we think of as motivational things, like an interest in communicating with our conspecifics, and a desire to cooperate with them, and a representation of their minds as potentially containing content that is different from ours.
So if you assume that other members of your community know exactly what you know, there isn't a strong incentive to communicate anything. You usually communicate things because you think there's something in your mind that might not be present in the mind of another agent. Animals do in a limited way communicate. Actually in a quite rich way.
This is my territory, go away. There's a predator nearby, they alarm about that. But you need a comparatively rich sense of the possible things you might think about and might communicate about even to want to express and communicate this. And I think it's an open question how much the absence of language and rich communicative systems in other animals is a constraint on the representation of the kind of knowledge you might have and lack and when it might be present or absent in other creatures' minds, versus the ability to manipulate a complex, communicative system itself.
NICHOLAS ROY: I largely agree with everything everybody's said. The one thing that I wonder is, humans appear to have the ability to manipulate their environment more than most other animals. In some cases, animals exhibit tremendously good skills with tools, primates and even birds, et cetera. But I think overall, we obviously manipulate our environment considerably more than any other animal.
That act of manipulating the environment and seeing the effects drives a lot of concepts that we wouldn't otherwise have or need to have. And so Jacob Bronowski, the anthropologist, has this wonderful turn of phrase, that the hand is the cutting edge of the mind. And I wonder if one of the things that we have to our advantage is the morphology that lets us manipulate the environment a lot, which exposes us to a lot more concepts and drives the need for representation and a communication system. But I'm talking out of my hat here. I'm speculating wildly as to whether or not that's really true.
MATTHEW WILSON: I think what Laura pointed out, at first I thought she was arguing that animals do have that capacity, because the need to communicate sophisticated concepts, clearly I think is a part of animal communication. What seems to be lacking is the infinite generative capacity that you could say distinguishes language, and then the ability to communicate that.
So I think of it as, if there's some sort of equivalence in the substrates for language, it's the infinite generative capacity for, let's say, symbolic introspection. And so I think of navigation as being equivalent to that. Navigation is like that. There's an infinite generative capacity for navigational trajectories.
You give me a path from point A to point B. There's going to be another route that you can think of that will get to that point. So you can think of language as symbolic navigation. And so animals, you can say they have that capacity. They can think about and plan, solve problems of space.
What they can't do is communicate that. They have very limited capacity. You can think about bees and their ability to communicate. That kind of communication is perhaps communicating discrete waypoints, but the other aspects of that underlying infinite generative capacity for sequential symbolic evaluation, they just can't communicate.
So perhaps that's something that we've gained, the ability to take that and communicate it, and that's what distinguishes the language capacity, not the language substrate, or necessarily, let's say, the social communicative imperative, which we all share.
Animals, they need to get along. They need to communicate complex state, complex motivations. They do that, they just don't do it with let's say that very specific, and perhaps human limited capacity.
NICHOLAS ROY: This is interesting. So Laura and Matt, you both pointed, and the question itself pointed, to the fact that there are rich internal concepts. They're just not being communicated. And so the question is, you might fall into the trap of thinking that animals, non-human primates, can't generate the concept, but clearly they can. You both have given examples. So what is preventing that from being translated into communication? That's a really interesting question; I have no idea.
VENKATESH MURTHY: Again, just to hark back to the question of degrees, I was just thinking a little bit about the waggle dance of bees. Yes, it's very clear that there's communication of the orientation and the angle and the distance and the source more [INAUDIBLE]. I'm curious now, how much variability is there?
And is there some way of looking at generative capabilities rather than just stereotyped, like there's only two bits of information, three bits of information versus something more? I'm playing devil's advocate a little bit. But it's also a little bit of the interpretability going back to the overt expression of the intention, essentially. That's all we see in animals.
So I think it comes down to capacity. Do they have it and can't express it, and I'm just rephrasing what all of you said, or do they just not have it, and therefore they don't express it?
MATTHEW WILSON: I think there's a limited capacity to construct and evaluate the kind of constructions that underlie language. Now are they really equivalent? I don't know. Because we don't have any way of actually measuring them, I think that's not really a meaningful distinction.
I think the fact that animals do not actually communicate that, that's something that can be measured. And as Venky points out, [INAUDIBLE]. These are things that you can sort of operationally define, to measure how much information they can actually communicate about, let's say, complex navigational solutions or strategies.
We can measure that, and it's limited. And you could say, when we do this in humans, it's seemingly unlimited. We have this ability, as we're demonstrating here, to speak forever on end. You take a simple concept and you just go on forever. And animals just don't do that.
Does that make us smarter? I don't know.
NICHOLAS ROY: I think we're all thinking about colleagues that we know who go on forever. [LAUGHS]
MATTHEW WILSON: I think there's value to being concise, that's right, in animals.
PRESENTER: Jim [INAUDIBLE] asks, one thing that's missing in current AI is open-ended pondering. Most nets are feed forward. Why is it so? And he has a little clarification: I meant the question to be about pondering, refining thoughts by investing more time and computation at the individual level.
NICHOLAS ROY: So there's a bunch of ways in which robots actually can do that. When you have a size, weight, and power limited computer, you often prefer anytime algorithms. And there's several ways in which robots sometimes control their own speed as a function of the available computation and range, et cetera.
You wouldn't necessarily call that pondering. But certainly, to your comment about refining thought by investing more time in computation, a lot of fielded systems do that all the time. Maybe we should call them pondering, I don't know. But I actually don't think that that is missing in AI. It might be missing from neural networks, but it's not missing from modular embodied intelligence.
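A minimal sketch of the anytime idea Nick describes, where a system refines its answer for as long as its compute budget allows and returns the best estimate so far when time runs out. The example and its function name are generic illustrations, not any particular fielded system.

```python
import random
import time

def ponder_pi(budget_seconds: float, seed: int = 0) -> float:
    """Anytime Monte Carlo estimate of pi: keep sampling until the compute
    budget runs out, then return the best estimate so far. Any budget yields
    an answer; a larger budget yields a more refined one."""
    rng = random.Random(seed)
    deadline = time.monotonic() + budget_seconds
    inside = 0
    total = 0
    while time.monotonic() < deadline:
        x, y = rng.random(), rng.random()
        inside += (x * x + y * y) <= 1.0   # point falls inside the quarter circle
        total += 1
    return 4.0 * inside / total if total else float("nan")

# The same question, pondered for different amounts of time.
print(ponder_pi(0.001))   # a rough answer from a tiny compute budget
print(ponder_pi(0.5))     # more "pondering" gives a sharper estimate
```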
MATTHEW WILSON: I agree. And there was a question I noticed here where Patrick Winston, he liked to think about our ability to tell stories as somehow being fundamental to intelligence. And he often asked the question, do rats tell stories? And I used to say to him, I believe the rats do tell stories. They just tell stories to themselves.
And again, back to this language point. I think one of the abilities we gained, we all tell stories, rats and humans. We tell stories to ourselves, and that is so we construct our internal narrative to explain and understand experience based upon some underlying model of the world. We take facts, we put them together so that they make sense. And what humans have gained is the ability to take those stories and to communicate, to tell stories, to tell stories to others.
And so I think again, this pondering. So I think of pondering as this internal evaluation, internal storytelling. Thinking about trying to put things together.
NICHOLAS ROY: And so, Matt, from your work with replay, it's telling a story, but about the past. And robots a lot of the time do forward simulation. And so that's telling a story about the future.
MATTHEW WILSON: I think of it as both. It is the storytelling. It's constructing a story that explains the past in order to predict the future. So once you have a story that works to explain the past, now you can use it to try to predict and guide the future.
NICHOLAS ROY: And that's why fixed wing aircraft are so hard, is because they can't stop and ponder. Rotary wing aircraft can stop and think.
VENKATESH MURTHY: I think Nick kind of alluded to the neural net. [INAUDIBLE] I thought that one of the points of the, what was it, the Helmholtz machine from [INAUDIBLE] Geoff Hinton, was the idea that you have this generative part, that you ponder and generate something and then you use that in the feedforward circuit, anyway.
PRESENTER: So the next one is from Sophia. She's curious about what you all think about animal consciousness. In line with what Matt raised about having a correct estimate of animal intelligence, is there a good test for self-awareness in animals? For example, most dogs cannot recognize themselves in the mirror, but they can recognize their own scent. Is the scent test a more appropriate test? And how can we test awareness in [INAUDIBLE]?
MATTHEW WILSON: Wow. I'm going to stay away from the consciousness question. [INAUDIBLE]
VENKATESH MURTHY: Oh boy. This whole self-awareness [INAUDIBLE] thing is crazy.
MATTHEW WILSON: Well self-awareness, I think it sort of points out this question, I think you have to operationally define it before we can start talking about it. And that's, I think the problem. And I think that the questioner gave you good examples of how you might actually operationally define it in a way that might allow you to demonstrate it in certain animal capacities.
I think that we kind of think about consciousness in terms of our ability to express through language what we're thinking. And obviously, if that's what it's about, then animals don't have that. They can't tell us their stories. But if they could demonstrate, let's say, something like self-awareness, which has a much narrower definition, and they could do it through some very particular behavior, then I think you could argue, yes, they have that. They clearly have awareness of self, right? They have models and they think about themselves in the world.
LAURA SCHULZ: I think it's helpful to think about it as lots of models of the world. They have models of their own bodies in space. And then they might have models of their ability to make models. So they may have some metacognition, models of their own representations. And again, even in human children, you get mirror self-recognition around 18 months, but you don't have autobiographical memory at 18 months. So children have some kind of self-awareness, but they don't have a continuous representation of themselves through time until much older. So there's degrees of representation, degrees to which we represent ourselves to ourselves. And those stories get elaborated.
So I think it's an interesting question. What do we mean by machine self-awareness? I think having machines that model their own models doesn't seem that far off. To what extent we would want to call that consciousness versus just one more layer of an internal representation, I don't know. At some point, presumably they will scale into each other.
MATTHEW WILSON: I agree with that.
NICHOLAS ROY: I also like models upon models. The thing that struck me about what you said, Laura, that's interesting, is that from 18 months to, I guess, do you get autobiographical memory at 3, is that right?
LAURA SCHULZ: On average, most people's first reported memory will be around 3.
NICHOLAS ROY: So the size of the brain is growing substantially. And the available computation is much more. So I wonder if there is a relationship between the amount of computation a device has and its ability to model itself and model its own models. Because from a purely in silicon perspective, that's a very expensive operation. And that's the reason why a lot of robots don't have it, is because it's hard to employ. In fact, most silicon intelligences don't have it, because it's really, really hard to do. And it's really, really computing expensive.
LAURA SCHULZ: Who knows [INAUDIBLE]. Again, children show self-conscious emotions much before that. They can be proud and they can be embarrassed. They can recognize themselves in mirrors long before they have autobiographical memory. So representing yourself through time, I think, may be a particularly late development.
And we're so energy efficient compared to machines. Children seem to run on Cheerios, compared to what it takes to run [INAUDIBLE] these days. So I think it's hard to know what will be computationally expensive when instantiated in human brains.
VENKATESH MURTHY: But also, the development of the human brain is pretty heterogeneous, right? It's not that the entire brain is developing uniformly; some parts are pretty mature at five and some parts are completely not mature. So I think it seems a little bit tricky to say that it's some overall connectivity matrix versus more specific regional things.
MATTHEW WILSON: Yeah, and I also think that the concept of autobiographical memory, we go back to storytelling. I always like to say all memory is false memory. We think that we actually have this capacity to remember what happened, but what we really do is we have the ability to actually tell stories about what has happened.
And we sort of build a model that's sufficiently consistent that it can be applied to explain things in the past. But often those explanations are inaccurate, or consistent but inaccurate. And so I think one of the difficulties that we have in evaluating autobiographical memory in, let's say, young children is basically that the model has changed. And so we simply don't have a model of a three-year-old.
By the time we can actually interrogate the memories that might have been present and the model might have been consistent, the stories might have been there and being told, it's just that we don't understand how to tell those stories, the story of a three-year-old when we're six. And so I think that capacity does exist at that time, just like with animals. We don't have the ability to interrogate it because they can't articulate what they're actually thinking about.
And then by the time they can articulate it, the model has changed sufficiently, they've gained all these other abilities, language. The way in which you think about things just no longer has the ability to express that. So you just can't tell that story anymore.
But to Nick's point, the basic substrates are there. They do develop, and you think about the hippocampus as being central to this ability to both capture, express, and evaluate these episodic events, the autobiographical component of memory being the thing that has that essential hippocampal component. Well, it's there, it does continue to develop. But it's still there. And so I wouldn't say, oh yeah, you're not conscious until you're three or four or five.
Just like I wouldn't say, rats are not conscious.
VENKATESH MURTHY: But to the extent, Matt, that we buy the, hypothesis may be too strong a word, the idea that hippocampal PFC, prefrontal interactions are somehow particularly important for doing all these calculations, and that develops much later. So for humans it's possible that yeah sure, you can do [INAUDIBLE]
MATTHEW WILSON: It develops. It becomes more robust. It's not that it's not there. It's there, it just becomes more robust. And so yeah, there's a greater capacity. And just as there is a greater capacity in, let's say, humans relative to, let's say, rodents. Yes, rodents have the same structures, the substrates, the connections. There's just not as much of it. So it's this capacity issue that Nick raised. Yeah, I think that there is a difference in capacity. But unless you want to use some sort of scale or threshold for consciousness. Yeah, if you have, you know, whatever you want to call it, g, so much of it.
OK, you cross the threshold, yes, you have it. Below the threshold, no, you don't have it. That doesn't seem that useful. It might make us feel better, but I don't think it's useful.
Largely because I think if you think about it in the continuum, it's not so much falling below that. It's what happens when you actually go beyond that, when you extend that. And so I think it's better to think of an expanded consciousness. That is, what does it mean to be able to incorporate a much larger scope of models into your framework of abstraction?
That's I think what we sort of hope and imagine or fear that synthetic AI is going to be capable of.
VENKATESH MURTHY: And to go back to, I think I was just browsing through the questions, to go back to, I think, maybe Laura's point, somebody pointed out, why are humans able to, and why is it useful to, distinguish 22 versus 23? And is this some epiphenomenon of some other thing that we developed? Or is it really that we really need these kinds of counting abilities?
And as somebody pointed out, for a goat eating leaves, it doesn't matter if it eats three leaves or a few leaves. So that got me curious. Do you think that this is just sort of a side thing, or some other, larger symbol-manipulation thing?
LAURA SCHULZ: Well, I think we can return to Matt's point. Obviously, the ability to build richer and richer models of the world; there's probably no human ability that's more powerful than the abilities that start with the ability to distinguish 22 and 23. Mathematical models of the world have been what has given human beings the capacity to manipulate [INAUDIBLE] beyond the capacity of any other organism.
And right now they're allowing us to create and ask questions about other forms of intelligence and artificial intelligence. Part of what is interesting about humans is, again, what's useful? Obviously animals survive and thrive and have for millennia without any need for this. We have the ability to develop capacities and create problems that go far beyond what might seem like the simple evolutionary requirements, and those just turned out to be incredibly powerful for our species and let us fill a very distinctive adaptive niche. And that is the remarkable signature of human intelligence: how much richer our models are than they would seem to need to be merely to eat and survive.
VENKATESH MURTHY: Part of me, I was just thinking, maybe this is a false analogy, but there's this issue of resolution. If you think, for example, of visual resolution, just having finer and finer visual angle resolution is good for an eagle that's hunting, right? There's a very specific need. So I'm wondering if [INAUDIBLE] thing is just somehow a higher resolution for some symbolic manipulation. Not just some sort of sensory perception thing, but some other, more abstract thing, where we're getting finer and finer resolution. Maybe it's an imperfect analogy.
PRESENTER: Let me thank all the panelists. I think this was a very fun discussion. And also all the attendees as well. I wish everybody a very nice weekend. And we will continue on Monday at noon. OK, thank you very much.