Panel Discussion: Hilbert questions in AI
Date Posted: August 11, 2020
Date Recorded: August 10, 2020
CBMM Speaker(s): Gabriel Kreiman, Tomaso Poggio
Speaker(s): Stefanie Tellex
Brains, Minds and Machines Summer Course 2020
GABRIEL KREIMAN: Welcome back, everyone. So now we'll have a panel discussion about Hilbert questions in AI. For those of you who are not familiar with the term Hilbert questions: in the year 1900, a very famous mathematician, David Hilbert, proposed a series of unsolved problems in mathematics that played a critical role in inspiring generations of mathematicians during the 20th century. Continuing in that spirit, we'd like to discuss today Hilbert questions in AI. And we have three panelists: myself, Tommy Poggio, whom you've heard already, and Stefanie Tellex.
So the plan is that we'll have a very short introduction by each of us on what we think are fundamental challenges, and then we'll open it up for discussion among us, but also to address your comments and questions. I also want to thank everyone who put questions in the Google Doc. There are a lot of really fascinating questions there. I'm not sure we'll have time to do justice to all of them, but we'll do our best.
So to get started, I'd like to introduce Stefanie Tellex, who hasn't spoken here yet. She's a professor in computer science and robotics at Brown University, and she has made seminal contributions to studying language and to studying actions and intentions in robots. So I'd like to invite her to share her screen, if she wants, and tell us what are the most fundamental challenges for robots, for AI, for her, for us, for the field.
STEFANIE TELLEX: Cool. So I was super excited to be invited to serve on this panel, because totally independently, I was telling George, my colleague at Brown, that we needed to think about what the Hilbert questions in AI are. So I spent a little bit of time today reading about David Hilbert and how he came to pose these questions. It was at a meeting of mathematicians in 1900, at the turn of the century. And I thought it was really interesting that one of the questions he posed was to show that there exists a consistent set of axioms for mathematics. And of course, that question turned out not to be resolvable the way he hoped. That is, there is no set of axioms powerful enough to include arithmetic in which all true statements have proofs.
Either you have to choose axioms under which everything has a proof, but then some statements have contradictions; or you accept that there are true statements with no proofs. This is Gödel's incompleteness theorem. So in a sense, when he was posing these questions, Hilbert was trying to ask really fundamental, big, encompassing questions to establish the foundations of mathematics. And in some sense, one of the most important questions turned out to be almost ill-posed: you cannot prove that these axioms are consistent. It's not possible.
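For reference, a compact modern paraphrase of the incompleteness result Stefanie is describing (the notation here is ours, not Hilbert's or Gödel's):

```latex
% First incompleteness theorem, informal modern paraphrase:
% any consistent, effectively axiomatized theory T that includes
% basic arithmetic has a sentence G_T that T can neither prove
% nor refute.
\big(\text{$T$ consistent, effectively axiomatized, } T \supseteq \text{arithmetic}\big)
\;\Longrightarrow\;
\exists\, G_T \;\big(\, T \nvdash G_T \ \wedge\ T \nvdash \neg G_T \,\big)
```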
So in AI, I think for the Hilbert questions, these fundamental foundational questions, you have to start with the Turing test, partly because it came first. It was the first such question posed by an AI researcher, Alan Turing. And partly because I think it really does say something about what it means to be AI complete. The idea of AI completeness is: if I can solve this problem, then I can solve all the rest of AI too, right? So chess, let's say, is not AI complete. We have programs that can beat human players at chess, but that hasn't solved all of AI, right?
So Alan Turing tried to write down an AI-complete problem when he wrote down this test: an autonomous agent, over a typewritten interface and using language, has to fool a human interlocutor, a human who's asking questions of the agent. And specifically, the human is trying to figure out whether the agent is an AI or not. Now, there are a lot of people running Turing chatbot competitions. But in almost all of those competitions, the human is not allowed to actually try to figure out if it's an AI. They're supposed to go along with the agent and talk about Shakespeare or something, and not ask the tough questions that Turing puts in his examples of things you could ask if you were really trying to figure out whether it's an AI or not. And by doing that, he opens up the whole domain of human experience to talk to the agent about.
But I am a roboticist. So for me, AI always comes back to a robot: a perception system, an actuation system, and a compute element working together to carry out complex tasks in the world. So the task I thought of, the kind that motivated me to think of Hilbert problems, was pick and place: the idea that we should make a mobile manipulator robot that can drive into any indoor environment, even if there's lots of clutter and stuff, and a human can say in words any object they want that robot to pick up. And the object may not even be in the same room. The robot has to be able to drive around, find the object, maybe open drawers and cupboards and push things around, and pick it up and deliver it to the person.
And I like this problem because it captures a lot about robots, but it's also kind of a fundamental thing that a robot in the home would need to do. So I think if we nailed it, there would be a lot of real-world impact. But Hilbert, of course, had 24 problems. So the other thing is, we don't get just one; we get to pick a lot of them. And I'm really excited to hear what you all have to say about what else is out there and whether these are good ones.
GABRIEL KREIMAN: OK, very good. Thank you very much, Stefanie. That sounds very interesting. Can I ask you very quickly, before we get to me: can you give us a quick idea of where we are right now? What can a robot do in the pick-and-place task that you just alluded to? How far are we?
STEFANIE TELLEX: I like that. I mean, one reason I like that task is I think my group is close to it. So we have a lot of different pieces right now. Trevor Darrell's group at Berkeley published a really nice paper that takes images of scenes and natural language descriptions, and then segments out the object in the scene that matches the description. So you say, "the big gray car," and it gives you a mask around the big gray car, with deep learning. So that's pretty cool.
It doesn't handle the case where there's no big gray car in the scene very well. And it doesn't handle looking for the big gray car if it's outside the camera's field of view-- I'm making a camera field of view with my hands here-- if it's not already in the image. But that's one piece. My student, Arthur Wandzel, and my PhD student, Kaiyu Zheng, made systems where we abstracted away the detector. We just said: assume we have a detector for the thing, and assume a fan-shaped field of view for the camera. And we had the robot infer where it should go to find the object. It could take information about where the object was from the language, and information from its sensors, and systematically search environments until the object was detected, until it was found.
So that's kind of the detection piece. And then my student, [INAUDIBLE], recently submitted a paper on an end-to-end push-and-grasp system. If there's a clutter of objects on the table, it learns to push objects aside so that it can clear them and pick up an object. And she's just extended it so it can call its shot. The original system just clears the table, but with the extension, you can give it a mask, and it will push stuff aside to pick up the particular object that's been masked out.
So I'm kind of excited to put those three things together. I mean, I don't know how well it'll work end to end, because robots are robots. But with all of those pieces, we should be able to do, for the first time, generalized language-based pick and place, where you can give a description of an object the robot's never seen, the robot can find it, push stuff out of the way, and pick the damn thing up.
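For concreteness, here is a minimal sketch of how those three pieces might compose, with hypothetical class and method names standing in for the real systems (none of these names come from the actual papers):

```python
# A sketch of the three-stage language-based pick-and-place pipeline
# Stefanie describes. All interfaces here are hypothetical.

class LanguageGroundedDetector:
    """Stage 1: segment the object matching a description, if visible."""
    def detect(self, image, description):
        # Return a pixel mask for the described object,
        # or None if nothing in view matches.
        ...

class ObjectSearchPolicy:
    """Stage 2: infer where to look when the object is out of view."""
    def next_viewpoint(self, map_so_far, description):
        # Combine language cues ("it's in the kitchen") with the
        # robot's sensor history to pick the next camera pose.
        ...

class PushGraspController:
    """Stage 3: clear clutter and grasp the masked object."""
    def pick(self, image, mask):
        # Push occluding objects aside, then execute a grasp.
        ...

def language_pick_and_place(robot, description):
    detector = LanguageGroundedDetector()
    search = ObjectSearchPolicy()
    grasp = PushGraspController()
    while True:
        image = robot.camera()
        mask = detector.detect(image, description)
        if mask is not None:
            return grasp.pick(image, mask)   # found it: clear and grab
        # Not in view yet: move somewhere more promising and look again.
        robot.move_to(search.next_viewpoint(robot.map(), description))
```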
GABRIEL KREIMAN: That sounds fascinating. Thank you very much. Hopefully, we'll come back to the robots. Tommy, what are the Hilbert questions in AI?
TOMASO POGGIO: So there were 23 Hilbert problems that he presented in 1900 in Paris. This was at the World Exhibition, at the largest congress of mathematicians of the time-- there were as many as 200 mathematicians. In fact, he presented only some of them in the lecture itself; the full list was published later. But as Gabriel mentioned, it had a big impact for many decades to come, even today.
And hopefully, our Hilbert problems will have a similar impact. But here I go. My personal dream would be to answer the following question: how can circuits of neurons compute programs or routines-- the kind of programs that must underlie a number of typical human activities, such as language? The other part of the question is, of course, how evolution discovered that, and where in the brain, and whether we can ever record from such neurons while they are creating programs and routines.
We are far away, I think, from answering this kind of question. Apart from the neuroscience [INAUDIBLE] of some of them, there is just the question in principle of how to have a neural network that can do that. There is, of course-- we know that you can essentially embed a program in a digital network-- in a computer-- and run it. But I really want to know how a more biologically plausible network could do that, and in fact how neurons, [? mirror ?] neurons, could do that. So here it is. I'll stop here.
GABRIEL KREIMAN: OK, very good. Thank you, thank you, Tommy. Because I organized this, I gave rather vague instructions to the panelists, and I didn't specify that it had to be a single question. So here are a handful of questions that I think are quite fundamental. We can divide them into 5-year questions, or 10-year questions, or 50-year questions. Some of these, I think, are exciting frontier questions that are amenable to current research, and that I think will transform both neuroscience and AI in the years to come.
So I'm going to list these six questions very briefly, and then come up with one example that I think relates to the questions that Tommy and Stefanie were raising, which I'm particularly intrigued about. The first question is: how do we go from connections to computations and back? For those of you who don't know, there has been a major revolution in neuroscience over the last decade. We now have the ability to interrogate neural circuits with unprecedented resolution. We are beginning to have very detailed wiring diagrams of who talks to whom and when, in terms of neurons in different parts of the cortex.
We don't quite have a circuit diagram of the human brain yet. We have a circuit diagram of the nematode C. elegans, and we are going to have, very soon, complete wiring diagrams of other species-- probably the fly that Tommy was talking about earlier today. What we don't have yet is how to go from those connections to computations. How do we go from connections to function, to be able to infer computation, and vice versa? Given a particular cognitive function or a particular computation that we're interested in, how can we go about finding it in a wiring diagram?
Another way to pose this question is: imagine that you are a Martian and you come to Earth, and I give you the wiring diagram of a computer, and you have to figure out what the computer does, purely based on the wiring diagram.
The second one, I think, relates very closely to what Tommy was alluding to, and it has to do with how we can link cognitive behaviors to neural circuits. There's been about a century of exciting discoveries in psychology and in studying behavior, and it's been very, very challenging to move from that phenomenological description of behavior to neural-circuit-level computational models. I think this is also an urgent and exciting question to pursue.
The third one is perhaps not directly a Hilbert question in AI, but I think a very fundamental corollary of understanding brain function is how to fix brains when they malfunction. Brains are the most precious devices that we have on earth; they are also the most expensive devices that we have, in terms of the cost to our health system. So if we truly understood brain function, not only could we educate computers to function more like humans, but we might also be able to fix brains. I think that's also an urgent question.
In terms of directly linking to computer science and AI, we need to search for adequate learning rules and loss functions. I think this is a critical question: how do we train our algorithms? And how well do these algorithms generalize to out-of-distribution problems? We've been getting quite good at certain problems, like achieving high performance on a particular data set, like ImageNet. But many of those algorithms still struggle to work in real-world scenarios. So this is about broad generalization, which also has implications for issues of bias in computational algorithms.
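To make the out-of-distribution point concrete, here is a toy sketch (our construction, purely illustrative): a linear classifier trained with a logistic loss does well on data drawn from its training distribution and degrades badly when the test distribution shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes separated along the first feature;
    `shift` moves the whole test population off the training one."""
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, (n, 2))
    x[:, 0] += (2 * y - 1) + shift
    return x, y

# Train a linear classifier by gradient descent on the logistic loss.
x_tr, y_tr = sample(2000)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(x_tr @ w + b)))  # predicted P(y = 1)
    g = p - y_tr                               # gradient of loss w.r.t. logit
    w -= 0.1 * x_tr.T @ g / len(y_tr)
    b -= 0.1 * g.mean()

def accuracy(shift):
    x, y = sample(5000, shift)
    return (((x @ w + b) > 0) == y).mean()

print("in-distribution accuracy:", accuracy(0.0))  # around 0.84
print("shifted (OOD) accuracy:  ", accuracy(2.0))  # much closer to chance
```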
And, ultimately, we want to be able to incorporate world knowledge into these algorithms. With that, I want to quickly show just one image that many people, including ourselves, have used before to describe some of the problems and some of the exciting parts. Here's a picture where all of you, at a glance, can understand pretty well what's going on.
So as Jim pointed out in his talk, we're getting decently good now at understanding that this picture is indoors. We can detect faces-- in fact, any of your digital smartphones can detect faces and use that to focus on a face. We can also do face recognition and recognize that Obama is here. And we can do a lot of quite amazing things that were undreamed of merely a decade ago.
And yet, I would contend that we're still very, very far from understanding what's actually going on in this picture, and why it is a little bit funny. As you can see, Obama here is being playful. To be able to comprehend what's going on in this picture, you need to understand that humans are often self-conscious about their weight. You need to understand that this is a scale-- some of you are, perhaps, too young and have never seen a scale like this.
So this gentleman here is measuring his weight, and you need to understand that Obama is exerting a force here. You need to also grasp that the gentleman is unaware of what's going on, and that all of these people are smiling-- partly, perhaps, because he's Obama, and therefore they have to smile-- but also because he's being playful and exerting a force, and therefore changing this gentleman's weight reading.
So this goes way beyond our ability to count how many shoes or how many people there are in this picture, recognizing Obama, understanding that this is a mirror, and so on. There has to be an ability to spatially and temporally integrate all the different pieces of information, and to put this together with the basic knowledge and basic understanding that we have about the world. As Tommy alluded to-- abstract knowledge and abstract concepts. How are they encoded? How can they be brought to bear to understand an image like this one?
So that's all I wanted to say for now. So those are some of my Hilbert questions, and one specific concrete example of something that I find quite fascinating and quite mysterious. So I'm going to stop here.
So Stefanie, do you think that your robots can do this? Can they take this picture and understand whether it's funny or not?
STEFANIE TELLEX: Not yet. But I think that example gets at why robots are important. Because you're talking about the force that he's exerting, and being able to predict what's going to happen next-- that because he's exerting the force, the scale is going to give a wrong reading-- and the effects of the actions people are taking in the world.
My hope, and my guess, is that the right way to do that is to have an agent that can take its own actions, and then can reason about what actions to take, and then apply that reasoning to other agents in the world to try to understand what they're doing and why.
GABRIEL KREIMAN: OK, very good. I have several questions that people put forward in our Google Doc, and maybe I'll read some of them. A lot of these are really very good questions, very hard questions. I have no idea how to answer them, so I'll just delegate them to Tommy and Stefanie, who are way smarter than I am.
So one question is from Sasha [? Froehlich-- ?] I apologize if I'm mispronouncing your name, Sasha. The question is: can intelligence and creativity emerge in a deterministic system? What role might stochasticity-- that is, intrinsic randomness-- and/or chaotic behavior play in the emergence of intelligence and creativity?
TOMASO POGGIO: No idea.
GABRIEL KREIMAN: I could have predicted that you were going to say that. Stefanie, do you need randomness in robots?
STEFANIE TELLEX: I suspect you don't need it, but it probably is good. I think a lot of our randomized algorithms, when they use randomness, are not using real randomness. They're using pseudorandom number generators, which, of course, are completely deterministic, right? You start with the same random seed and you get the same thing out. And I suspect that's probably good enough-- you don't need cryptographically secure random number generation to get that behavior. But I don't know. Creativity is a big word. It's complicated. What is creativity, right? I feel like maybe after we have AI, we'll go back and say, oh, that's creativity-- that chunk, that module, all those things working together is what creativity is-- rather than trying to design it in advance.
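As a small illustration of the determinism Stefanie is describing-- the same seed reproduces exactly the same "random" sequence:

```python
import random

# Pseudorandom generators are deterministic: seeding with the same
# value reproduces exactly the same sequence of draws.
random.seed(42)
a = [random.random() for _ in range(3)]

random.seed(42)
b = [random.random() for _ in range(3)]

print(a == b)  # True: same seed, same "random" numbers
```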
GABRIEL KREIMAN: There have been a couple of attempts in AI recently to create art, in the form of visual art and in the form of music. So I'm intrigued by the notion that maybe creativity is nothing more and nothing less than a suitable cost function. In the case of art, that cost function may indicate: is this beautiful or not? Is this attractive or not?
For a given piece of art or a given piece of music, for example-- combined with pseudorandom exploration. So semi-intelligent exploration, combined with adequate machine learning to discriminate which of those directions are good and which ones are not. And maybe that's enough to define creativity. That's a simple definition of creativity, perhaps.
STEFANIE TELLEX: I mean, that makes me think about the Chinese room problem from Searle, and this question of: if you have a person inside a room, looking things up in a book, and somehow the room can speak Chinese, where is the thing that's actually understanding Chinese? And the answer, if you're a computational thinker, is that for that to happen-- for the system of the human plus the book to really be responding fast enough-- they just have to be really, really, really fast at looking things up in their book, to the point where it's like one of our computers.
So I think that your example is legit. If you had such a system, and it went fast enough, and the cost function was good enough, then yeah, it would work. But the devil, of course, is in the details. How do you make it go fast enough? What structure are you going to have for that cost function? How are you going to decide what to test, and how are you going to test fast enough to get that speed? That's what we really mean when we say that.
CHRIS: We do have a question in the Q&A, if you'd like.
GABRIEL KREIMAN: Yes. Go ahead, and then I'll read some more questions from the Google Doc. There are plenty in there, as well.
CHRIS: Sounds good.
So this one is: comparing this set of questions to the Hilbert questions in mathematics, it seems the questions in math are more concrete. Do you think that is the case? The pick-and-place question Stefanie put forward seems to be concrete enough, though. Is that the right amount of breadth a possible Hilbert question should have?
TOMASO POGGIO: Well, I'm not sure concrete is the right word, because they are rather mathematical questions-- abstract questions. They are more formally defined, yes. And at least some of them were almost conjectures that Hilbert had, which turned out, some to be right and some to be wrong.
So I think it is in the nature of what we are doing-- which is not strictly mathematics-- that we are, by necessity, not as formally precise as we could be. But this may be just an excuse. Like Stefanie, I don't feel like being Hilbert.
STEFANIE TELLEX: I mean, some of Hilbert's questions actually failed, in the sense that they're considered too vague today to say whether we've resolved them or not. And I think that's a failure on Hilbert's part-- which is not to diminish that he did a good job overall-- but it's something to avoid.
When we pose these questions, I think it is important that there's a possibility of consensus that we've resolved them-- that there's some check that is objective, because we're scientists. We should be able to know whether we've reached a milestone or not.
GABRIEL KREIMAN: But I do think it's a fair statement that, as Tommy pointed out, in neuroscience, AI, and cognition, we're not where mathematics was in 1900. And I would contend that mathematics had had several millennia of progress and success, whereas, depending on exactly how you count, neuroscience and AI are very young disciplines-- only a few decades old. So I think we can allow ourselves a little bit less concreteness right now in the definition of our Hilbert questions. But I think it's a perfectly fair comment.
OK, I'm going to read another question from the audience. This goes back to linking brains and machines, and it comes from Bobby Brown. There are actually two questions here. What sort of advances in electronics and technology are needed to build a machine that can think? How do you go about building a circuit that has the versatility of a neuron? And does a machine need to be based on the brain to be able to think?
STEFANIE TELLEX: I definitely think a machine does not need to be based on the brain to be able to think.
TOMASO POGGIO: I concur.
STEFANIE TELLEX: --with that. What?
TOMASO POGGIO: I agree.
GABRIEL KREIMAN: I agree.
STEFANIE TELLEX: Great. For me, the reason I think that goes back to Turing again, and the universality of computation. I think that what AI is, and what it will be, is a computer. And I know people say, well, we thought brains were steam engines back when steam engines were the cool thing.
But Turing gave us some really nice math that says that computation is universal. And I think there are a lot of reasons to think that what the brain-- what the neurons in the brain are doing-- is a form of computation. And by the universality of computation, that can run on any substrate. The devil, or the trick, is that it's got to run fast. And the brain, of course, is massively parallel, unlike our computers-- our GPUs are more parallel, but still much less parallel than the brain. And it may be that you need that level of parallelism, in some form, in order to make things work.
GABRIEL KREIMAN: So the way I usually describe the connection between brains and AI is that I think we can learn a lot from brains. Because brains are the products of millions of years of evolution, they solve problems in interesting ways, and we can learn from those tricks to build intelligent machines and AI. But it doesn't have to be that way.
As Stefanie pointed out, we can build algorithms on completely different hardware, using completely distinct principles that are completely unrelated to the brain, and they can still be considered intelligent, they can still be considered to be thinking, and they may outperform humans and other animals in many ways. At this stage, it's very, very clear that animals and humans still outperform the best machines on a wide number of tasks-- many, many different tasks. Not all of them.
Machines are much better at visual pattern recognition of barcodes in the supermarket; you certainly do not want humans solving that task. But there is a plethora of tasks-- from basic pick-and-place, to navigation, to understanding that this image of Obama is funny, to language communication and whatnot-- where humans are much better. So I think we can learn from neuroscience, but we don't necessarily need to have brains in the equation.
I think we can gain a lot from that conversation, from that dialogue, but it doesn't have to be that way. With respect to the first question, I'm curious to know what Stefanie thinks. My guess is that we don't really need new technology. New technology is great, and if we can get better GPUs and more parallel computing, that's going to accelerate research, for sure. That's always fantastic.
But I would argue that we need better ideas. We need more ideas and better algorithms-- not to just sit down and wait for better hardware.
STEFANIE TELLEX: I totally agree with you. My colleague George's advisor-- I forgot his name. The RL guy at UMass Amherst. The famous one. Andy?
GABRIEL KREIMAN: Barto?
STEFANIE TELLEX: Yeah, Andy Barto. I'm sorry. Forgetting his name. Thank you.
He said that he thinks there are only 50 papers between us and AI-- 50 papers that have to get written between us and strong AI. But the problem is, it's not like it's going to be just 50 papers; only later, when we trace back the line from now to then, will it look like 50 papers. It's this giant space, and we're searching randomly in that space, right?
So to get to that line of 50 papers, all of these other papers get written as we search. And the other thing that happens is that all of these other papers-- as you spread out in the search-- have this wonderful property that they solve important problems in the real world that people will pay you for. And that's actually really awesome and amazing.
So what happens is that instead of working on the 50 papers towards AI, people go off in other directions, and they work on wonderful, important problems with real world impact, which of course we want. But there's relatively few people working on the line that's focused on the AI thread, if you will.
And I think that one of the things that's exciting to me about this whole summer school-- and the whole group at MIT that's thinking about brains, minds, and machines-- is that it's a group of people that's like, yes, let's think about AI. Let's think about what could be on that path of 50 papers.
And at Brown-- me and George and Michael-- we think a lot about action, perception, manipulation, decision making, abstraction, and symbols, and combining all of that with learning to make a robot go. And one of the things I hoped the Hilbert problems could be-- or should be-- is milestones along the way. We may not know how to do this yet, but we think this is a milestone along the way. So we don't have to be searching for this thing that's 50 years out; we can find these mileposts and work on those along the path to AI.
GABRIEL KREIMAN: Tommy, do you want to say anything about this?
TOMASO POGGIO: No, I think it's important to realize that when we speak about intelligence, we tend to mislead ourselves. When we speak about intelligence, we really think about human intelligence.
I personally think that there is an infinite variety of intelligences, and evolution, of course, has converged on one. And this one depends strongly on a lot of constraints that evolution had to deal with, including properties of the senses and neurons and so on. So now, if we want to build machines that replicate this particular form of intelligence, then, of course, those constraints play a role.
I believe, as Stefanie correctly said, that a Turing machine is a universal computer: you can simulate everything on it. But, of course, if you have the right hardware, it's easier to do, faster to do, and faster to do experiments with. So having hardware similar to the brain's hardware-- and I don't think that's GPUs-- would make this task easier.
But in principle, hardware does not matter.
GABRIEL KREIMAN: OK, I'm going to put together two related questions that may be better left for someone who's speaking tomorrow, but let me just read them.
Mengmi Zhang asks: do machines have their own desires? And later on, Manuela [? Rouse ?] asks: what do you think is the importance of understanding consciousness in our quest to create human-level intelligence? Can a non-conscious AI pass the Turing test?
TOMASO POGGIO: I like that second question.
GABRIEL KREIMAN: OK, so it's all yours.
TOMASO POGGIO: Well, I like it because this is a debate that I've had multiple times with Christof-- Christof Koch, who was my first graduate student and was the advisor of Gabriel, so there are a lot of relations here. And Christof maintains that intelligence and consciousness are separate.
I believe otherwise-- and this is one of the situations where neither of us, right now, has good arguments to claim that he's right, or more right than the other one. But I think a Turing test for consciousness should look very similar to a Turing test for intelligence. And by the way, I think the Turing test is still the best definition of human intelligence that I know of.
But it's a very interesting discussion. If anybody can come up with a good story or example of why a Turing test for consciousness should be different from a Turing test for intelligence-- or of situations in which you could think that this thing you are interacting with is intelligent but not conscious, or conscious but not intelligent-- I'd love to hear it.
GABRIEL KREIMAN: So I think that tomorrow, in the first talk, Christof will have a different view on this question. Maybe I should have had Tommy and Christof debate here.
Stefanie, what do you think about consciousness? Are robots conscious? Will robots-- the Ex Machina scenario-- will that happen soon? And what's the relationship between consciousness and intelligence?
STEFANIE TELLEX: I mean, I think the only person I really know is conscious is me, because I get to see me from the inside. For all I can tell, all the rest of you are philosophical zombies, and I don't know whether you have conscious experiences or not, right? But from that perspective, I am willing to assign consciousness to things that are not me-- or at least the moral and ethical rights of autonomous agents, like me, who experience the world, like me.
I try to buy cage-free eggs because I think chickens have some kind of consciousness, and it sucks to stick them in small cages and break their legs and all that. So I'm willing to assign consciousness to our computational entities, as well. By the same logic, if something comes up and starts talking to me and acting like a conscious agent, I think it probably does have some amount of consciousness-- or intelligence, or something, I don't know. Some right to be treated as a moral entity. Dan Dennett uses the phrase the intentional stance: an intentional entity that I'm going to assume has its own goals, and that I'm going to reason about in that way-- that has moral worth, that has certain rights, and stuff like that.
I think our agents will have that, but I think we're probably a ways away from it. I think a chicken has more moral worth than our agents today. I don't know how that interacts with consciousness. I think Tommy asked a good question: what would the difference be between a test for intelligence and a test for consciousness? I don't know the answer.
Doug Hofstadter, in The Mind's I, talks about consciousness as being the thing that happens when you have an agent with enough computational weight to make models of itself. Then you get this effect of infinity mirrors looking at each other, and that infinite reflection is what consciousness is. So it's computationally grounded, with this kind of trick. And that's the most right-ish thing I've heard. But I really don't know.
I don't know if it matters that much. I think there are concrete research questions to work on now-- how can we make the agent do things it can't do right now?-- and that's enough for me.
GABRIEL KREIMAN: So I agree with these comments. I just want to relate one brief experience that we had in the summer course a few years ago. Marc Raibert, from Boston Dynamics, was showing off some of his amazing robots, which have a pretty amazing amount of stability. And in order to train these robots, they do all sorts of things. In one of their videos, which are available for everyone on YouTube-- you can just Google them-- they push these robots pretty violently, to train them and to see how stable they are.
STEFANIE TELLEX: I got to do it once when I visited Boston.
GABRIEL KREIMAN: Maybe Stefanie did it. So the entire audience-- an audience of amazing, very smart PhDs and whatnot-- was scandalized. They thought that the investigators, the humans, were being cruel to these machines.
So I don't know-- certainly I don't want to claim that this is anything even close to consciousness. But my guess is that as soon as there is some very, very rudimentary form of apparent volition, and some rudimentary form of communication in the form of language, humans will be very, very willing to ascribe consciousness to machines.
That doesn't mean consciousness in the way that I would describe it, or that Christof would describe as true consciousness. But just in terms of how people relate to machines, I think we don't need much. Basic notions of volition and basic notions of communication and language are sufficient for people to ascribe consciousness to machines.
STEFANIE TELLEX: I mean, people have pet rocks, right?
GABRIEL KREIMAN: They do. They do, exactly. So I think that if people have pet rocks, they will have pet machines that they are really endeared to in many ways. But this doesn't really get at the fundamental core question of whether consciousness and intelligence are the same thing or not.
STEFANIE TELLEX: The field of human-robot interaction has a lot of really, really interesting results. Half of HRI-- human-robot interaction-- is about studying how people relate to robots. And those researchers don't care about the robotics per se-- they totally teleoperate their robots. They're really cognitive scientists and psychologists studying how people relate to these autonomous systems.
And one of the coolest results: they made a weight-loss robot. All it was was a little head-- this was at MIT, in the Media Lab, from Cynthia Breazeal's lab. They made a little head robot with a face, and it looked at you. And then there was a keyboard-and-mouse interface to enter what your weight was every day and keep a weight-loss journal. And they had a non-robotic version. It turns out that just having this little head that looks at you-- that's all it did-- made people more likely and more willing to engage with the weight-loss program and write their weight down every day, and all that stuff.
So there are a lot of results of that nature that I like-- that somehow putting it on a little thing that moves, and as you say, it doesn't take much, really has a profound effect on people. And I think that comes back, a little bit, to the intentional stance. We as roboticists, I think, have a responsibility, when we create a robot, not to overclaim unintentionally-- even with its body language-- about what it does and what it can understand.
That's a failure mode of our robots. Some of the HRI studies say that if you act more competent than you are, then you fail. That's worse than being realistic about what you can and can't do, even if that's less competent than you could claim to be.
GABRIEL KREIMAN: Very good. There are a lot of excellent questions in the shared Google Doc, but I also want to give people in the audience the opportunity to ask questions. So Chris, if you're still here, maybe you've been able to look at the Q&A and can select some of those questions for discussion?
CHRIS: Sure thing. One that's risen to the top is, on the topic of hardware: can you comment on whether you think the development of deep learning-- or, in fact, AI-- could have been possible without GPUs, which we rely on every day?
STEFANIE TELLEX: I think no. We had neural nets around for a long time, and I think that GPUs crossed with lots of data was the change that made them start to work.
CHRIS: Great. I've got another one. Does-- it just moved-- [? Antonas ?] [? Stangaf ?] is asking, does Moravec's Paradox come into play for these questions?
STEFANIE TELLEX: I don't know what that is.
GABRIEL KREIMAN: So I think-- I'm not sure I'm going to phrase it correctly; maybe Tommy knows it better. But this is the idea that there is a pretty strong divide between what computers are good at and what humans are good at. Certain tasks that are extremely hard for humans require very little computation for machines, and other things that are apparently very, very easy for us are extremely challenging for machines.
So we have computers that can calculate the square root of 2 very well, recognize barcodes very well, but we're nowhere even close to having a computer that can play soccer like Lionel Messi, for example. And so on.
TOMASO POGGIO: And the question is?
GABRIEL KREIMAN: I'm not sure what the question is. I think it was asking whether this paradox comes into play here-- I'm not sure I follow. But in general, I think yes: sometimes there is this double dissociation between what's easy for machines and what's hard for humans, and so on.
TOMASO POGGIO: That's something that Marvin Minsky used to say-- what's easy for us is difficult for machines, and vice versa. And I think it's still true. If you think about where jobs will be lost to AI in the future, it will be in things like physicians, traders, airline pilots. Whereas, let's say, the plumber will be employed for a long, long time. The person who fixes what is broken in your house will be very difficult to replace for a long, long time.
So what is the answer to this? Well, evolution had literally millions of years to evolve visual and motor systems, and a much shorter time for language and mathematics. So that's one answer.
CHRIS: Great. We have another one here from [? Kwajoh ?] [INAUDIBLE]: what is your take on the free energy principle, as proposed by Dr. Friston, and on embodied cognition?
GABRIEL KREIMAN: I'm not sure about the free energy principle, but maybe Stefanie wants to say something about embodied cognition and--
STEFANIE TELLEX: Yeah, I don't know-- I also don't know what the free energy principle is.
TOMASO POGGIO: [INAUDIBLE]
STEFANIE TELLEX: What's that?
TOMASO POGGIO: You've never read papers by Karl Friston?
GABRIEL KREIMAN: But maybe you want to talk about-- you want to say something about embodied cognition, Stefanie?
STEFANIE TELLEX: Yeah, I think embodied cognition is super important. If you look at the things people are good at, like plumbing, what's hard about them is this interplay between action and sensing and planning. And, to me, that's exactly the essence of the unsolved problem in AI. And it's the essence of what a robot is.
So I think it's very important to be thinking about sensing, acting, and cognition together in order to make progress. And to tie it back to Hilbert problems, I would love to think about what those milestones, those goalposts along the way, are. I think what is lacking now in AI, broadly defined-- the computer vision community, the computational linguistics community, the cognitive science community, and the neuroscience community-- is this drive to put the pieces back together.
George talks about reintegrating AI, right? Putting these pieces back together in one system, and thinking about-- like Gabriel was saying-- the structures, the computational elements, and the algorithms that we need to put these pieces back together and make progress. And I think it really needs to be in an embodied setting, because if you're embodied, you can't just do computer vision-- you have to move, too. You have to plan, too. The embodied setting really forces you to think about those issues. That's why I work in it.
GABRIEL KREIMAN: Just to put another perspective on this-- I have to say that I come at this question from a different angle, though I'm slowly shifting towards Stefanie's view of the world. In general, I think that what we've been doing for many years now is trying to use a divide-and-conquer strategy, which is the opposite of integration.
So-- partly because I think it's easier-- if I want to study vision, I don't want to have to worry about motor cortex in the brain, and I don't want to have to build a system that does all the amazing and very complicated aspects of navigation that Stefanie was alluding to. What I want is basically to shut down everything else except for vision. That has been a strategy, I think, to simplify and to isolate problems. But I'm slowly--
STEFANIE TELLEX: I don't think it's wrong. It's a great strategy. We should keep doing it. That needs to happen. But also--
GABRIEL KREIMAN: Right, so that's why I think it's a little bit of both. I think there's a lot of power in isolating problems, because building a system that can do every possible intelligent task seems way overly ambitious to me right now. So I think it's good to have smaller tasks that we can grapple with and study.
But at the same time, there's this question of how we integrate all the systems. Once we understand a little bit and make progress on some of these pieces, how do you actually put them back together? How do you integrate them? I think that's very profound and very important. So I would contend that we need both.
OK, so we have a few more minutes. Maybe Chris, if you want to take some more questions from the audience.
CHRIS: OK, we've got a great one here from [? Quan ?] [? Wan. ?] Related to the topic of creativity: a lot of high-level abstract cognitive constructs, like creativity or beauty, might correspond to heterogeneous computations in neural circuits-- i.e., what we think we are doing might not be what the brain thinks it's doing. How can we identify which human-defined constructs are worthwhile or sensible to find the neural substrates or mechanisms for-- for things such as math and language?
GABRIEL KREIMAN: Tommy? Stefanie? So I think--
STEFANIE TELLEX: I don't know about neurons.
GABRIEL KREIMAN: So let me just say a few words here. I think this is a very important question, and the answer is: I don't know. It also brings up the notion that, as scientists, we often use our intuitions to guide us in searching for particular mechanisms. And we may very well be wrong, as you point out.
We may think that a particular task, or a particular solution, or particular [? pathways ?] are the way the brain is solving a problem, and we may be completely off. So I think it's important to go through the three levels of analysis that Tommy was alluding to in his talk earlier today.
We want to be able to define the problem at the computational level. We want to be able to quantify, at the behavioral level, whether the system-- a human, a robot, or an algorithm-- can solve it. And then we want to go into the hardware, into the neuronal circuits, and try to relate that to mechanisms.
TOMASO POGGIO: Yeah, I think this is partly related to my Hilbert problem, which was asking: what could be the circuits of neurons that implement the typically human abilities of language and reasoning? Which, I think, have to have the flavor of programs and routines, have to have the flavor of logic.
I think it's interesting, if we want to draw a connection between the beginning of our discussion and this question now, to note that Hilbert's program-- it's not directly one of his questions, but it's related to what he did as a mathematician-- was to axiomatize all of mathematics.
And actually the second problem, I think, was asking whether mathematics can prove that it is consistent and complete-- he may not have used exactly those words-- where completeness is the statement that any essential, meaningful statement can be proved true or false within mathematics.
And Gödel, 30 years later, disproved that. And this was the end of Hilbert's program: mathematics cannot be such a system; there are statements that cannot be proved within arithmetic [INAUDIBLE]. The interesting part that was born out of it is, literally, computer science. Because among the people who started to work on issues strictly related to this program-- [INAUDIBLE] and Gödel, [INAUDIBLE]-- was Alan Turing, working on the Entscheidungsproblem: deciding whether there is an algorithm that finishes in a finite time and can state whether a statement can be proved or not. And his conclusion was that you cannot do it. There is no program that finishes in a finite time that can tell you, for every statement, whether it can be proved or not.
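As an aside, the heart of Turing's argument can be sketched in a few lines of Python (a standard diagonalization sketch; the function names are ours):

```python
def halts(f) -> bool:
    """Hypothetical oracle: return True iff f() eventually halts.
    Turing's argument shows that no total, always-correct
    implementation of this function can exist."""
    raise NotImplementedError

def contrary():
    # Do the opposite of whatever the oracle predicts about us.
    if halts(contrary):
        while True:       # oracle said "halts" -> loop forever
            pass
    # oracle said "loops forever" -> halt immediately

# If halts were real, halts(contrary) could be neither True nor False
# without contradiction, so no such program exists. Turing used this
# idea to answer the Entscheidungsproblem in the negative.
```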
And the other person is John von Neumann. Both of them ended up playing a key role in defining computer science. And computer science itself was really born out of these questions about the foundations of logic and mathematics.
So it's interesting how Hilbert's program is related to computer science, and to this question now of what circuits in the brain can really run programs. In a sense, we did it for computers; we don't know how the brain does it.
GABRIEL KREIMAN: OK, so we're running out of time, but I want to quickly put one more question out there, from Mengmi Zhang in the audience, just to tie this back to the beginning. Maybe this is more for Tommy, but also for Stefanie if you have thoughts on it. About jobs: you mentioned that physicians are out and plumbers are in. How about mathematicians themselves? Do you think we'll have general AI that can solve all the Hilbert questions and all the problems in mathematics? When will we have AI that can prove theorems in mathematics?
TOMASO POGGIO: Well, this is strictly related to what we just discussed.
GABRIEL KREIMAN: Right.
TOMASO POGGIO: We have circuits of neurons that can prove theorems. And there are two ways to imagine how this could become possible in machines. One way is the optimistic way, which says: yes, it will be possible, and we can have machines and neural networks of some type that will do what mathematicians do.
The other one says: forget about proving theorems. We'll get computers powerful enough that they will just run simulations and-- forget about proofs.
STEFANIE TELLEX: I mean, one answer is we already do have programs that prove theorems. One of the things that's happening in math right now is formally encoding the steps of a theorem in a program and doing proof-checking and stuff.
TOMASO POGGIO: Yes.
STEFANIE TELLEX: And it's trivial to put some axioms in and start cranking out theorems-- they're just not very interesting theorems.
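For a flavor of what that looks like, here is a minimal sketch in Lean (an illustrative example of ours, not something from the discussion): a couple of axioms go in, and the proof checker certifies a trivial consequence.

```lean
-- A minimal theorem-proving sketch: declare some axioms,
-- then have the proof checker certify a (trivial) theorem.
axiom P : Prop
axiom Q : Prop
axiom hP  : P        -- assume P holds
axiom hPQ : P → Q    -- assume P implies Q

-- The "theorem": Q follows by applying hPQ to hP (modus ponens).
theorem q_holds : Q := hPQ hP
```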
TOMASO POGGIO: Right. It's not creative or original enough yet, but you can see how it could become so, right?
STEFANIE TELLEX: Yeah.
TOMASO POGGIO: Yeah, exactly. But the opposite is the grimmer view: that maybe mathematics will not be needed at all.
STEFANIE TELLEX: It reminds me of something I asked Jerry Sussman once: what happens when AI is solved? Will we not get to program anymore? I want to program. And Jerry said, no, I'm always going to program, because even if the AI comes and it can write all the programs for us, I want to write the program, because that's how I think and that's how I understand things-- because I wrote the program.
So I think that as long as there's people that want to understand it for themselves, there will always be programmers and theorem provers and mathematicians.
TOMASO POGGIO: Along those lines, computers are much better than humans at chess and Go. But people still continue to play chess and Go.
STEFANIE TELLEX: Exactly.
GABRIEL KREIMAN: Yes. OK, very good. So thank you very much. I want to thank both Tommy and Stefanie for being panelists in this exciting discussion about Hilbert questions. And I want to thank everybody for participating. We will continue tomorrow at 12:00 noon, Eastern time, with a very exciting talk by Christof Koch, whom we've mentioned before.
TOMASO POGGIO: And he will speak on?
GABRIEL KREIMAN: So I think he may talk a little bit about consciousness, but mostly he's going to talk about going from structure to function in the nervous system, and some of the work that he's been leading at the Allen Institute relating to two-photon imaging, and [INAUDIBLE], and computational models of the visual system.
TOMASO POGGIO: Thank you.
GABRIEL KREIMAN: Thank you. Thank you very much, everyone, and I'll see you all tomorrow. Thank you very much.
TOMASO POGGIO: Thank you.