BMM Virtual Summer Course 2020 - Introduction
Date Posted:
August 11, 2020
Date Recorded:
August 10, 2020
CBMM Speaker(s):
Gabriel Kreiman ,
Tomaso Poggio ,
Lizanne DeStefano ,
Boris Katz
Description:
Gabriel Kreiman, Tomaso Poggio, Lizanne DeStefano, Boris Katz and more welcome the students to the BMM Virtual Summer Course 2020. Prof. Poggio goes over the history of CBMM and the BMM Summer Course, how it was conceived, the questions that spurred it on, and some of the reasons behind it.
GABRIEL KREIMAN: OK, very good. So welcome, everyone. This is a very exciting new edition of our Virtual Brains, Minds, and Machines summer course for 2020. And I'm very excited to be here together with the other two course co-directors.
Tommy Poggio, from whom you will hear very soon, as well as Boris Katz, who is also going to be giving a talk in two days.
So welcome again, everyone. I want to introduce a few more people. Kathleen Sullivan is working behind the scenes as our Managing Director. Nothing here in CBMM, the Center for Brains, Minds and Machines-- or in the summer course-- would happen without her magic and her help.
Similarly, we have Kris Brewer, who's the Director of Technology. He's also the one making everything possible. If you have any questions about Zoom, or if there are any problems connecting, please contact him.
I also want to introduce Andrei Barbu and Mengmi Zhang. These are our head teaching assistants for the course. They are both wonderful people, wonderful researchers, and a pleasure to work with. Andrei has been the head TA for the course since the very beginning. Many, many years ago, Mengmi was a student in this course, and now she's one of the head TAs. They will be helping us field questions, and they're both looking forward to interacting with you, and potentially forming new friendships as well as new collaborations.
I also want to introduce Lizanne DeStefano, who's our External Evaluator. This is the first time that we're doing this course in a virtual fashion. Many of you probably know that we usually have a residential course in Woods Hole, Massachusetts, where we have a much smaller number of students and a very, very different flavor of the course. So I'm very excited about this new virtual edition.
In some sense, I have to warn you, you are our guinea pigs, and we really want to get your feedback on how we're doing with this summer course, because we want to take all of your ideas to make it better in the next edition. So Lizanne is going to help us do that, through the evaluations. Lizanne, I think you wanted to say a few words before I continue.
LIZANNE DESTEFANO: Yes, I just wanted to introduce myself. You will get some emails from Lizanne DeStefano or from the Georgia Institute of Technology. You will get a mid-course survey, and also one at the end of the course. Please take the time to fill out the surveys. As Gabriel said, this is kind of an experiment. The CBMM faculty have been thinking for a while about how they could make this summer school available to more people, and this is an example of one way to do that. So we'd like to get your input on what went well, what didn't go so well, and how things can be improved. So I'm just telling you, please, please, please fill out the survey.
The surveys are anonymous. I'm a third-party evaluator. I work at Georgia Tech. I'm not at MIT or Harvard, so anything that you say in the surveys will not affect your relationship with CBMM. Your name will not be disclosed. But we would really like your honest feedback on how things went. At the bottom of the survey, there will be an opportunity to volunteer for a focus group or an individual interview, and we will be selecting some people to go more in-depth into their experience. So please fill out the evaluation.
The results will be reported to the National Science Foundation-- they'll be very interested in what we're doing this summer-- and they will be read by everyone on the Summer School Leadership Team to make improvements in the program, so it's very much worth your while. So I'd like to thank you in advance for participating in the evaluation. That's it, Gabriel.
GABRIEL KREIMAN: Thank you very much, Lizanne. As she said, again, I want to emphasize that we really want your feedback, and we'd really appreciate your ideas.
All of these slides will be posted on the course website, which is the first link that you're seeing here. Beyond the slides, we'll also be posting links to resources, including videos of previous lectures, as well as PDF documents for books, publications, or papers. We have four very exciting poster sessions, where I hope there will be plenty of interaction with all of you, as well as with the poster presenters. And again, you can get all of this information through our website.
This is also an opportunity to introduce Ellen Hildreth, who is the Coordinator for Education. She has put an amazing amount of work into creating what we call the CBMM Learning Hub. This is a resource where you can get plenty of materials: links to many of the classes that many of us teach at Harvard and MIT, as well as links to lectures from past summer courses, including many of the speakers who will be presenting during this summer course, but also several people who could not make it this year but gave lectures in the past. And there are plenty of other resources. So I'd like to encourage everyone to go and visit this Learning Hub, which is a fantastic resource that Ellen has created.
We have five discussion panels on what we think are five fundamental questions. One of them is about what the Hilbert Questions in AI are-- what are the main directions in the field-- and that's going to happen today. The next one is about the relationship between biological brains and AI. Next, we're going to discuss whether there is anything special about human intelligence, in contraposition to machine intelligence, as well as to biological intelligence in non-human animals. We will have a conversation about AI and ethics. And we will close with a panel discussion about the path towards general artificial intelligence.
You should have received links to Google Docs, where you can post your questions ahead of time. There will also be opportunities for you to ask questions during the panel discussions themselves. But given that we have a lot of people, and that we may not get to all of your questions, I think it would be very good if you go to the Google Doc and use that as an opportunity to interact with all of us, as well as to ask the questions that you'd like to see addressed in these discussion panels.
So very soon, I will give the podium-- or the microphone-- to Tommy Poggio, who's going to be giving a historical account of AI and neuroscience, a personal account of how Brains, Minds, and Machines was created, and the path towards the future. I want to emphasize that one of the reasons why we have this summer course is that we really want to start a new field, right at the intersection of cognitive science, neuroscience, and artificial intelligence. And that's why, here at the end of this timeline, there are your discoveries. We see all of you as the next generation, and we really are looking forward to seeing all your major accomplishments, and to putting your seminal work and landmarks on this timeline.
So these are pictures from our residential summer course, which we usually have in Woods Hole. Many of you have participated in these courses in the past, and we hope that we will be able to resume this once the COVID-19 pandemic is over next year. So this year, we're all locked at home and doing this via Zoom.
We asked every one of you to indicate what your field is during your registration, and I want to emphasize that this is a multidisciplinary effort. We really need all of you-- we really need ideas from all kinds of fields. The mode of this distribution is people coming from cognitive science and computational neuroscience, and people who define themselves as working on computer science, machine intelligence, and computer vision. These are perhaps the main fields that are represented.
But, of course, we have somebody whose field is being a student. Somebody works on the gut-brain axis. So welcome, whoever you are, working on the gut-brain axis. We have a forensic psychologist. Welcome to you, as well. Welcome to everyone. We really need all of the voices and all of the efforts in what we think is the most exciting and biggest challenge of all time: understanding how brains work, understanding how intelligence works, and being able to build machines that can think and act the way that we do.
People are coming also from all over the world, and I don't have time to do justice to all of the diverse locations that are joining us today. This is just a very, very small, random sample. Of course, we have a lot of people from Cambridge, Massachusetts, which is where many of us are. We have people from other places in the US. We have people from Buenos Aires, Argentina, which is where I was born.
We have people from Russia, from Norway, from Israel, from Egypt. We have courageous people joining us from Japan, and China, and Australia. And I don't want to even think what time it is for you-- for all the people in India, and Japan, and China, and Australia. So welcome, and thank you very much for staying awake through all these insane hours for you. And again, we really appreciate that you're joining us and we look forward to interacting with you.
Here's a copy of the schedule. We'll have two talks today, then a couple of tutorial presentations, followed by a panel discussion. I want to very quickly go through a couple of frequently asked questions that we receive lots of emails about.
If you don't know how to use Zoom, I sent a link with a tutorial. You will be able to ask questions through the Q&A. Some of the talks will be recorded, some will not. It depends on the speaker. If they are recorded, we will be posting all the videos through the CBMM website. We are not going to be issuing any participation certificates this time. We're not going to be requiring attendance or checking attendance. We hope there will be plenty of interaction through your questions, through the Google Docs, through the panel discussions, and especially, also, through the poster sessions, which I hope will be highly interactive in allowing you to ask questions and interact with all the researchers doing the work.
We do not have an official Slack channel. You're all welcome to create your own Slack channel in order to communicate amongst yourselves. For the poster sessions, there will be multiple Zoom links-- there will be seven sessions in parallel. You should have received those links. You are welcome to join any poster you are interested in, and to go in and out of any of the poster sessions.
Admission to the virtual summer course for 2020-- which is what we are starting right now-- is completely independent of what will happen in 2021, so you cannot defer admission to 2021. We will be running a separate admissions process in 2021. You're welcome to apply again to take the residential course in 2021, but admission to our 2020 course does not help or hinder any future participation in the summer course.
There will be no coding or experimental projects this year. Regularly, when we have our meetings in Woods Hole, we have projects. We're not going to do that this year. And if you have any questions that I haven't answered, please feel free to reach out to Kris Brewer, to Andrei Barbu, and to Mengmi Zhang, who can help answer all of these questions.
So now I'd like to introduce, again, Tommy Poggio, who is one of the course directors. He's also the director of CBMM, one of the fathers of AI and neuroscience, and one of the seminal figures in computational neuroscience. Without further ado, I'd like to give him the microphone, so that he can give us his own history of CBMM, his own history of the field, and his personal journey through artificial intelligence and neuroscience. So welcome, Tommy.
TOMASO POGGIO: Thank you. Welcome to everybody. I'm going to tell you briefly about the Center, which is organizing this summer school-- as you heard, for the first time virtually, online, which is a very interesting experiment in itself: to see how much we can scale it up, in terms of the number of people, by going virtual. The Center itself started in 2013 as a Science and Technology Center, among the largest funded by NSF. It has about $50 million in funding from NSF over 10 years.
It's multi-institutional and multidisciplinary, and it consists of a number of researchers across different institutions and departments-- like computer science, neuroscience, and cognitive science. Its focus is on the science and engineering of intelligence: not only artificial intelligence in order to build intelligent machines, but especially the study of intelligence in order to understand how our brain works, with the secondary mission of being able to replicate intelligence in machines. And so, in a sense, our basic philosophy, our basic vision, is similar to DeepMind's-- after all, Demis Hassabis, who started it, was a postdoc of mine.
It's similar to DeepMind in the belief that the problem of intelligence is the greatest problem in science today. It's one of the great problems in science, like the origin of the universe and the origin of life-- I think it's the greatest of all. And so we all agree on that. The slight difference is that we want to understand how the brain works, and to make intelligent machines as an aftereffect. The first priority is understanding how the brain makes the mind.
Our organization is shown here. There are a number of people you have already heard about, and from, and you will hear more from them during this course. They also play an important role in the organization of the Center. And we have more than 100 researchers working in the Center across different institutions.
We have a wonderful external advisory committee, including Demis Hassabis, the founder of DeepMind. Christof Koch, who was, until recently, the Director of the Allen Institute for Brain Science. Lore McGovern. Joel Oppenheim. Pietro Perona, in computer vision at Caltech. Marc Raibert, in robotics. Judith and Kobi Richter, from Israel. Amnon Shashua, who was a student of mine and started Mobileye in Israel, which is now part of Intel. David Siegel, of the Two Sigma hedge fund. Susan Whitehead, a member of the MIT Corporation. And Jim Pallotta, also in quantitative finance. So these are great people who are advising us, and have been advising us continuously over the years.
As I said, we are a multi-institution center, and we include a number of universities and research centers. You can see them here; the two main ones, in terms of number of faculty, are MIT and Harvard. These faculty come from neuroscience, cognitive science, and computer science. We also have a number of international and corporate partners, from the academic side and the corporate side. On the corporate side, we have the usual suspects in artificial intelligence, like Google, and Microsoft, and Siemens, and Fujitsu. And from the very beginning, we had as partners some of the companies that then became quite important and big, like DeepMind, and Mobileye, and Boston Dynamics.
You can see on our website a lot of interesting videos to look at-- about a thousand. Kris Brewer is behind all of them. We have, as Gabriel mentioned, a great Learning Hub, thanks to Ellen and all the other people involved. And there are code, software, and datasets that you can access through our website. Now, the jewel in terms of contributions, I think, is this summer course. Gabriel and Boris-- who are effectively the directors-- have been doing a fantastic job over the last six or seven years, really creating an example that is closely followed at NSF, by all the STCs, because of the success of this summer school.
From my point of view, the most interesting outcome is that we now have a community that is self-sustaining, in the sense that students in the course have become so enthusiastic and so knowledgeable about it that they are volunteering to be TAs for the next course. So the faculty is not really necessary for this course to continue successfully into the future, which is great. You see here the key people who make this possible. And on the right-hand side-- this is a good way to try to scale the course up, as NSF would like us to do. So we have many more students than the limit we had when it was not virtual.
Lizanne has collected lots of positive feedback about the course from students. You can see here some of the comments that we got. And I want to make, again, the point of what our key vision and mission is-- not only for the Center, but also for this summer course, and not only when it's in Woods Hole, but also now, in a virtual way. The key question-- the key problem-- is how the brain works, how the brain creates the mind. If we understand that, then we can make intelligent machines. That would be proof that we really have the understanding. And, of course, it will be very important and interesting in its own right to have machines that help us be more intelligent.
The reason why we think that a good way to make intelligent machines is to understand how the brain works is the following. If you look at the recent success stories in AI-- and I'm a little bit biased, I chose DeepMind and Mobileye; their founders happen to be ex-postdocs of mine, but that's not the reason I chose them-- I chose them because DeepMind is arguably the most advanced research place, at least in the corporate world, in the area of artificial intelligence. DeepMind is almost 1,000 people based in London, part of Google.
And Mobileye has been a very successful AI company. It was created in Jerusalem around the year 2000, went public, and was then acquired by Intel a couple of years ago for around $15 billion. And both of them are really basing their success on two strains of algorithms-- reinforcement learning and deep learning. And both of them come from neuroscience. That's one reason I am saying that it's a good bet to bet on neuroscience if you want to do AI. Reinforcement learning comes from the research of [INAUDIBLE].
And he's not the first neurophysiologist to pave the ground for research on reinforcement learning. There are several other ones, but the origin, one could say, is in neuroscience and cognitive science. For deep learning, the original idea about the hierarchical architecture of neurons-- in the specific case here, in visual cortex-- came from the conjecture of Hubel and Wiesel, based on their work in the '60s, recording from visual cortex in the monkey and the cat at Harvard. And the architecture of deep networks today is essentially the same as what they postulated in their paper: a hierarchy of neurons in different layers with more and more complex features and properties-- from simple cells, to complex cells, to what at the time they called hypercomplex cells, and so on.
So the vision of this summer school is to focus on the combination of neuroscience, cognitive science, and engineering. And, as I said, the reason is that-- as this historical glimpse shows-- I think several of the next breakthroughs in artificial intelligence or machine learning are likely to come from this combination of neuroscience and engineering. So let me finish this brief introduction about the Center and its vision, the vision of the summer course, and why we designed it around neuroscience, machine learning, and cognitive science.
I want now to shift to a different gear, and give you some observations about the history of computational neuroscience and machine learning, from my personal life. This will cover part of the last 50 years. I'll tell you a few things about the years between 1970 and the beginning of the Center for Brains, Minds, and Machines.
I started my work as a researcher at the Max Planck Institute in Tübingen, Germany, not far from Stuttgart-- not far from where Daimler is, producing Mercedes and Porsches. The director of the Institute was Werner Reichardt. He was a physicist originally. He had two advisors. One was Max von Laue, who got the Nobel Prize in physics. The other one was Ernst Ruska, who is shown in the center here, and who also got a Nobel Prize in the '80s for his work on the electron microscope.
So he started this institute to study the visual system of the fly, and he called to the Institute three other directors to work with-- Karl Götz, Valentino Braitenberg, and Kuno Kirschfeld. Braitenberg is quite well known for a delightful book called Vehicles. The idea was to understand a nervous system which was simple, but not too simple: the visual system of the fly. The fly has about 1 million neurons, which puts it about halfway-- on a logarithmic scale-- between us and unicellular organisms. We have about 10^12 neurons. And unicellular organisms have, of course, one cell.
So it's a beautiful system, with beautiful eyes. The eyes of the fly are, essentially, 3,000 little eyes, each one with its own lens-- each is called an ommatidium-- and each containing seven photoreceptors. And what we did in the work there-- this was the work in the group of Werner Reichardt, one of the four groups in the Institute-- was a series of studies at three different levels, trying to understand some cognitive behavior-- let's put it this way-- of the fly.
So the first level was trying to define and model the behavior, which includes chasing other flies-- which is part of the sexual behavior of the fly-- and fixation of objects-- which is part of survival, trying to find a place to land. Then, at another level, it was looking at the algorithms that were needed to do this-- for instance, measuring motion, or measuring the position of contrasted objects, in order to be able to fixate on them.
Or computing the relative motion between parts of the image and the rest, in order to detect, for instance, a flying object against a stationary background. And the last level was the biophysics of computation: how do you implement, in neurons and synapses, these algorithms? I'll give you an example, and then come back to this idea of understanding a complex system at different levels.
So the study of the behavior-- this is kind of doing psychophysics on the fly-- was done by putting the fly in a flight simulator. The fly was suspended from a torque meter, which measured its torque. So the fly was flying as if it were free, but, in fact, it was fixed, and the torque meter was measuring how the fly would like to turn, and ideally how it would like to move up and down, and so on. Then there was a simulation of the dynamics of flight-- a kind of virtual reality at the time. And according to this simulation, the environment was moved around the fly, as if the fly were moving around the environment. This was a way to measure, very precisely, all the quantities involved, creating a virtual reality system for the fly.
And from these kinds of experiments, you could measure how the fly was going to behave with respect to tracking a flying object, or a non-flying object, depending on the various variables involved. And we came up with a model of this, which was a stochastic differential equation. It included some of the dynamics of flight-- how the wings interact with the air, and the inertia and mass of the fly-- and then what the fly does, which is on the right-hand side here: producing a search noise, which is roughly random, roughly Gaussian to a first approximation.
And then producing a torque around the vertical axis, which-- for a single object, a single target-- depended on the angle between the direction of flight of the fly and the position of the target on the eye of the fly-- this is the angle psi here-- and on another term measuring the motion of the image of the object on the eye of the fly.
And so this gave a quantitative theory, providing a probability distribution of where the fly was expected to go. And based on this theory, which was fit to measurements in this virtual reality environment, we could actually predict some of the free flight behavior of the fly. And we could verify this by filming the flies chasing each other in a three-dimensional box-- filming them from two points of view and obtaining x, y, and z of each fly as a function of time.
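To make the structure of that kind of phenomenological model concrete, here is a minimal simulation sketch in Python. The linear dependence of the torque on the error angle psi and on its rate psi dot, the gains, the noise level, and the simplified flight dynamics are all illustrative placeholders, not the functions and constants that were actually fitted to the torque-meter measurements.

```python
import numpy as np

def simulate_chase(target_xy, dt=0.01, k_pos=2.0, k_vel=0.5,
                   noise_std=0.3, speed=1.0, seed=0):
    """Toy simulation of a chasing fly's trajectory given a target trajectory.

    target_xy : array of shape (T, 2) with the target position over time.
    Returns an array of shape (T, 2) with the chasing fly's positions.
    All parameter values are made-up placeholders for illustration.
    """
    rng = np.random.default_rng(seed)
    n = len(target_xy)
    pos = np.zeros((n, 2))
    heading = 0.0        # direction of flight, in radians
    psi_prev = 0.0       # previous error angle, used to estimate psi_dot

    for t in range(1, n):
        # psi: angle between the direction of flight and the bearing of the target
        dx, dy = target_xy[t - 1] - pos[t - 1]
        bearing = np.arctan2(dy, dx)
        psi = np.arctan2(np.sin(bearing - heading), np.cos(bearing - heading))
        psi_dot = (psi - psi_prev) / dt
        psi_prev = psi

        # torque: a position term, a velocity term, and Gaussian "search" noise
        torque = k_pos * psi + k_vel * psi_dot + rng.normal(0.0, noise_std)

        heading += torque * dt    # grossly simplified rotational dynamics
        pos[t] = pos[t - 1] + speed * dt * np.array([np.cos(heading),
                                                     np.sin(heading)])
    return pos
```

Feeding a recorded target trajectory into such a model and comparing the simulated chase with a filmed one is, in spirit, the free-flight prediction described next.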
And so you have here stereo pairs of a chase between two flies-- the left stereo image on the left and the right one on the right. The top plot is a chase between two flies. In the bottom one, the trajectory of the first fly is given-- it's the same as the one above-- but the trajectory of the chasing fly is actually produced by our model. It is not exactly the same as what we measured, but it is quite similar. And this is a prediction based just on the trajectory of the first fly, the one being chased.
So it was pretty good at modeling the behavior. The same theory was also able to predict a few qualitative aspects of the flies' behavior. For instance, the fly behaves in a way somewhat similar to what we'd expect from the Müller-Lyer illusion. When confronted with the pattern above, you see a distribution of fixations, which is the first histogram below.
And when you have the other figure of the illusion, you see behavior which is quite a bit different, indicating a perception similar to ours-- of the segment being longer in the second case, despite the fact that it has the same length. There is also the ability to see what we call subjective contours, which is a kind of illusion you can see in B. And the fly also happens to fixate on these subjective contours, as predicted by our models. Of course, I did not explain all the details of the models to you; this is just to give you an idea.
So in a sense, at this level we were able to provide a theory a little bit similar to Bayesian theories of cognition in humans. The theory does not speak about neurons. You make some assumptions about the kind of computation that goes on, and you are able to model and predict the behavior quite successfully.
But then the question is-- if you go back to what I showed you here-- I told you that this term, psi dot, is measuring the velocity of the image of the first fly on the eye of the tracking fly.
And this is a velocity. Who is measuring it? The phenomenological theory assumes it's available. And this term here is essentially giving you the position of the target on the eye of the chasing fly. So who computes this? How is this done?
The phenomenological part of the theory up here assumes that somebody-- some box, some module-- does that. And the second level asks: how do these modules do this? How do you compute motion? How do you compute the relative motion you need in order to detect an object moving against a background, which is itself moving because you're looking at it while you are moving?
And so at this level, the question is about the basic rules of how motion can be perceived-- for instance, whether a pattern moves from left to right in front of two photoreceptors, one and two, or from right to left. How do you do that?
This work had been done by Werner Reichardt and Bernhard Hassenstein, in work that is famous. And they came up with a model which was a correlation model.
You have delays: you multiply the delayed signal from receptor one by the signal from receptor two, and you do the symmetric, opposite thing on the other side. Then you subtract these two outputs. This gives you a way to detect which way a pattern moves in front of these two photoreceptors, and it describes exactly what flies do.
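For readers who want the correlation model spelled out, here is a minimal discrete-time sketch in Python. The fixed sample delay standing in for the delay filter of the original model, the sinusoidal test stimulus, and all constants are assumptions chosen only for illustration.

```python
import numpy as np

def reichardt_detector(r1, r2, delay=3):
    """Correlation-type motion detector over two photoreceptor signals.

    r1, r2 : 1-D arrays with the signals at photoreceptors 1 and 2.
    delay  : delay in samples, a crude stand-in for the delay filter.
    A positive mean output indicates motion from receptor 1 towards receptor 2.
    """
    d1 = np.roll(r1, delay)      # delayed copy of receptor 1
    d2 = np.roll(r2, delay)      # delayed copy of receptor 2
    d1[:delay] = 0.0             # discard wrapped-around samples
    d2[:delay] = 0.0
    # multiply the delayed signal of one arm with the direct signal of the
    # other, then subtract the two mirror-symmetric half-detectors
    return d1 * r2 - d2 * r1

# A grating sampled at two nearby points: receptor 2 sees the same signal as
# receptor 1, only later, i.e. the pattern moves from receptor 1 towards 2.
t = np.arange(0, 10, 0.01)
r1 = np.sin(2 * np.pi * t)
r2 = np.sin(2 * np.pi * (t - 0.03))
print(np.mean(reichardt_detector(r1, r2)))   # positive: motion 1 -> 2
print(np.mean(reichardt_detector(r2, r1)))   # negative: opposite direction
```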
So this was the algorithm. This correlation model actually turned out to be completely equivalent to the so-called energy models that were devised later by Adelson and Bergen to describe how motion cells work in primate cortex. It also found applications in engineering, in particular to stabilize motion in early video cameras in the '80s.
And we also looked at an algorithm for relative motion-- how you can detect the motion of, say, a piece of this random pattern against the rest. This is a kind of figure-ground discrimination, and it also makes for very good psychophysics in humans. If a little piece of this textured pattern moves in front of the rest, you can see it very well. If it stops moving, there is no more relative motion, and it disappears into the background.
So we looked at that, and-- with Werner and others-- came up with an algorithm, shown here in terms of a neural network that can do this. It can be described as lateral inhibition between motion detectors. And it does this very well-- very similar, in terms of its properties, to how flies do it, with a lot of experiments to substantiate this claim.
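As a rough illustration of that lateral-inhibition idea-- not the actual network of the original work-- here is a toy one-dimensional sketch in Python: each local motion-detector output is suppressed by the average output of its neighbours, so only regions whose motion differs from the surround remain active. The inhibition strength and neighbourhood size are arbitrary.

```python
import numpy as np

def figure_ground(motion, inhibition=1.0, radius=2):
    """Toy figure-ground computation via lateral inhibition.

    motion : 1-D array of local motion-detector outputs across the eye.
    Each detector is suppressed by the mean response of its neighbours, so
    uniform (background) motion cancels out and only relative motion survives.
    """
    n = len(motion)
    out = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        neighbours = np.concatenate([motion[lo:i], motion[i + 1:hi]])
        surround = neighbours.mean() if len(neighbours) else 0.0
        out[i] = max(0.0, motion[i] - inhibition * surround)  # rectified
    return out

# Background moving uniformly (1.0) with a small "figure" moving faster (2.0):
# only the figure region survives; if the figure moved like the background,
# the output would be zero everywhere and the figure would disappear.
field = np.ones(20)
field[8:12] = 2.0
print(np.round(figure_ground(field), 2))
```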
And we know, in the meantime, which neurons are involved in the optic lobes of the fly. Some of the neurons have been recorded from and their properties characterized. So we have a pretty good handle on this algorithm for motion and relative motion in the fly.
But again, there is another level you can go deeper into. Because you can ask, in algorithms like this one: OK, I'm assuming here that there is a multiplication, but how is multiplication done by neurons and synapses, and what exactly are the biophysical mechanisms? Which synapses are involved? Which transmitters and ionic currents?
And so at this level, I did work with two people. One was Vincent Torre, who is now at SISSA. The other one was Christof Koch, who is in Seattle at the Allen Institute. Christof, by the way, was Gabriel's advisor. And Christof was also my first graduate student.
And so what we did is what we called the biophysics of computation. Christof pushed that much further-- there is a book by him about this, about how neurons and synapses can do elementary computations such as multiplication.
In the case of multiplication, and in particular motion detection, the model we came up with was based on a nonlinear effect that inhibitory conductance changes can have. This assumes passive membrane neurons, with reversal potentials such that the inhibitory conductances can effectively shunt the effect of an excitatory input.
And so our model is a model of multiplication, of division, of something a bit similar to an AND-NOT gate, with one of the inputs essentially vetoing the effect of the excitatory input. This effect, as I said, is similar to multiplication and division, is nonlinear, and explains how motion detection could be done.
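Here is a minimal numerical sketch of the shunting idea, assuming a single passive compartment at steady state, with conductances and reversal potentials (expressed relative to rest) chosen purely for illustration:

```python
def steady_state_vm(g_exc, g_inh, g_leak=1.0, e_exc=70.0, e_inh=0.0, e_leak=0.0):
    """Steady-state membrane potential (mV relative to rest) of one passive
    compartment with an excitatory and a shunting inhibitory conductance.

    Because the inhibitory reversal potential sits at rest (e_inh = 0), the
    inhibitory conductance adds nothing to the numerator and only enlarges the
    denominator: it divides, or shunts, the excitatory input rather than
    subtracting a fixed amount from it.
    """
    numerator = g_leak * e_leak + g_exc * e_exc + g_inh * e_inh
    denominator = g_leak + g_exc + g_inh
    return numerator / denominator

print(steady_state_vm(g_exc=1.0, g_inh=0.0))    # ~35.0 mV: excitation alone
print(steady_state_vm(g_exc=1.0, g_inh=10.0))   # ~5.8 mV: inhibition vetoes it
```

This divisive, veto-like interaction is what gives the operation its AND-NOT flavour: the excitatory input drives the cell only when the shunting input is silent.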
It's still unclear whether this is the mechanism used by the fly. It's still unclear whether this is the mechanism used in the vertebrate retina, although there is good evidence that it is at least involved in determining direction selectivity in neurons in the [INAUDIBLE].
OK, let me go on by saying that, at the same time I was doing this, I started to work with David Marr. And we developed a neural network-- what would now be called a recurrent neural network-- to solve the problem of stereopsis. This is quite well known.
But I want just to mention David, who, with his book "Vision," which was published posthumously-- he died of leukemia much too early-- had a big impact on the field of computational neuroscience. And this is David, and me, and Francis Crick in about '76-- no, '78 I think-- in the desert behind La Jolla, behind San Diego.
The part I want to stress here, before I finish, is that one of the important points in the book was the claim that we should study and try to understand a complex system like the brain at different levels-- levels like computation, algorithms, and biophysics. These are the levels I told you about in the example of the fly. You can make a similar argument for the study of the human brain, and I invite you to do this over the next two weeks, with the talks you are listening to.
The important point is that David Marr made this argument famous. But the argument came from a joint paper that we had written before, and in part from something that Werner Reichardt believed well before I met David Marr.
In our paper, we stressed the fact that these levels are separate and independent. You can understand, say, PowerPoint without knowing anything about the transistors and logic gates that are working in your computer. And you can understand, at the very fine level, your transistors and logic gates, and know nothing about the software that is running on your computer.
So in an engineered system, these levels are quite separate and independent, and understanding a computer really means understanding all of them, each one, as I said, almost separately from the others. In the brain, we suspect that there are also different levels, but they are much more connected than in our engineered systems, which is why it's important to know and study all of them at the same time.
Now, I was going to follow this with the rest of the trajectory, but that would take at least another half an hour. So I'll leave it for another opportunity, maybe another summer school. That would be the time I spent at MIT, from '81 to now. And this is a picture, by the way, of neuroscience at that time.
You can see some interesting people. Francis Crick was here. Ann Graybiel was here. Jerry [INAUDIBLE] and Albert Price, [INAUDIBLE] then working a lot in neuroscience, are here. And Don Glaser, Nobel Prize in physics, is here. Eric [INAUDIBLE], here. Torsten Wiesel, here. David Hubel, Chuck Stevens, Rodolfo Llinás. This is me. And so on-- John Dowling, Snyder, [INAUDIBLE], Werner Reichardt. This is Frank Schmitt, who started the Department of Biology at MIT.
So with this, let me finish this short introduction. It has been shorter than I would have thought, but at least I gave you some food for thought, in the sense of these levels of understanding, which I think is an interesting paradigm to keep in mind across the next two weeks of the school. Thank you.
KRIS BREWER: Great, thank you very much, Tom. We have a few questions. If you'd like to answer them, I can read some for you.
TOMASO POGGIO: Yes.
KRIS BREWER: So the first one, or the top one, is from GM Bautista Para [INAUDIBLE]. The question is: how similar are the brains of two twin flies? Are the exact configurations of neurons and synapses encoded in their DNA, or how much is left to chance/development/learning? And what about twin humans?
TOMASO POGGIO: Yes, that's a very interesting question. And I did not follow this up. But I believe that there are papers from researchers in [INAUDIBLE] that tried to address exactly this question. It's a key question for development and for understanding how much information is in the genes.
KRIS BREWER: Great, thanks. And the next question-- I'm not sure if there'll be enough time for this. But somebody was asking, please can you talk more about the search noise?
TOMASO POGGIO: Yes. That's an interesting question. The search noise-- we did not, at the time, focus much on it in Tübingen. We were able to measure this noise, which, as I said, looked like a Gaussian-- roughly a Gaussian distribution of the torque, with a certain variance, under different conditions.
For instance, it was present in both the presence and absence of visual stimuli; it seemed to be independent of them. We did not go after a neural generator for it. In the meantime, there is evidence-- for instance, in songbirds, in work by Michale Fee in our department-- of noise generators, neural networks that generate noise-like search patterns.
And it could be that the same is true for the fly. That would be my first conjecture. But we did not go after this question when I was in Tübingen, and I don't think other people working on flies did either. It is, of course, difficult to measure if you're just looking at free flight behavior, because it's then mixed up with reactions to anything in the visual environment.
But this would correspond to a fly that does not see anything, that is just moving around in a random, Brownian-like motion.
KRIS BREWER: Great. Thank you. The next one is, in order to understand how the brain works, what do you think is or are the most important questions or gaps in neuroscience that should be addressed as the next step?
TOMASO POGGIO: There are a couple of very important questions. It depends whether we want to address the questions that I really think are the key problems, in which case we have a session about Hilbert problems this afternoon. And I think that the person who asked the question should come to that session, because we'll address it there.
Or are you asking me what the next step should be? If it is the latter, then I would say it would be really important to be able to make the connection from cognitive science to neuroscience. And this means that models in cognitive science should find a connection to neurons.
What is the prediction they're making about neurons, or the requirements on neurons? I think it's very important to be able to make this connection in order to verify models and theories about cognition, and connect them in a powerful way to everything we know about neuroscience and all the kinds of experiments we can do in neuroscience.
KRIS BREWER: Great, thank you. The next one-- I think we have time for one, maybe two more at most-- in your experience and research, when making a complete model, do you usually start with the computation and work your way down to the physiological mechanisms? Or is it usually a discovery of a physiological mechanism that leads you to the computation?
TOMASO POGGIO: I would say, as often in science, if you look at history, it's both. And I would not want to give prescriptions. Of course, when we wrote about this kind of epistemology of levels of understanding with David Marr, we thought about doing the analysis first at the top level and then working our way down.
And you know, sometimes it happens that way. For instance, if you look at physics: thermodynamics is the equivalent of the computational level, and statistical mechanics is the equivalent of the algorithms and mechanisms. And thermodynamics came before we understood what heat really was in mechanistic terms.
So you can certainly have an understanding at the top level without much understanding at the lower level. But the opposite also happens, in a sense. You know, the idea of algorithms like deep networks came, as I said, from the conjectures and speculations of Hubel and Wiesel, based on experiments done, really, at the level of single neurons in the monkey's cortex; they made this conjecture about a hierarchy that then became the typical architecture of deep networks.
Science has a lot of different ways of working. And it's usually quite different from what engineers think it should be.
KRIS BREWER: Great, thank you very much, Tommy. So we did allow this to go a little bit over since we did start a little bit late. So at this time, we are going to take a quick break while we switch over our panelists.
Jim DiCarlo will be speaking next at 1:15. Thank you very much, everybody, for all the great questions. Unfortunately, we just do not have time to get to all of them. But please do keep submitting them, and keep upvoting the ones that you are interested in, so that we can get to as many as we can.
Great, thank you very much, Tommy and Gabriel.
GABRIEL KREIMAN: Thank you. Thank you, everyone.