Welcome & Introduction

Date Posted: August 12, 2023
Date Recorded: August 5, 2023
CBMM Speaker(s): Gabriel Kreiman

GABRIEL KREIMAN: We would like to keep this very informal. We have a tradition here where people ask lots and lots of questions. You'll see that many of the speakers have many slides, but this is mostly for you. So the idea here is to have this be a conversation more than lectures. This applies to what I want to say today, but also to all the lectures in the future. So please, please stop and ask questions.

So welcome, everyone, to the Brains, Minds, and Machines summer course. We're very, very excited to have you here. I'd like to introduce here Boris, our co-director. Many of you have met him already. Tommy Poggio is the other course co-director. Unfortunately, he couldn't make it today. He will be here in a couple of days.

And I'd like to start by giving a brief introduction to some of the ideas behind this course. This will be pretty high-level. And then I'll discuss some logistics. And then I'll give another talk in a few days, talking more about specific research and science.

So I'd like to start with a very famous quote by the statistician I. J. Good. And if you haven't read this before, I'll let you read it for a few seconds.

So the idea here is that at some point, we are likely to build machines that are more intelligent than humans. There are physical limits to how fast we can move. For example, we cannot move beyond the speed of light. There are physical limits to temperature. We cannot go below 0 Kelvin.

As far as we know, there is no limit to intelligence. Humans are particularly interesting creatures. They're cute, they're pretty smart, et cetera, et cetera, but there is no reason to think that our intelligence is the supremum or maximum of any distribution.

So in whatever way we define the word intelligence, and there is no consensus on exactly what intelligence is, at some point, I think this will happen. We will have machines that surpass human intelligence.

In many ways, we can argue that this has already happened in very specific domains, some of which may not be super exciting to you. We have machines that can read barcodes in the supermarket much better than humans can. Good luck trying to read those barcodes in the supermarket. We cannot quite do that. So you can call that superhuman intelligence. We have machines that can surpass humans in many tasks along those lines.

And so the claim here is that once we do that, these machines will be so amazing that then that's the last machine we ever need to make because those machines can then build other machines that are smarter and so on and so forth.

And one of the many ways to actually measure-- or one of the many proposed metrics for intelligence is the so-called Turing test that many of you are probably very familiar with. People have been debating for decades now about whether this is a good test or a bad test of intelligence. We can come back to that discussion, but I think it's a very good way of assessing and evaluating what machines can or cannot do in a variety of different tasks.

So in case you haven't heard about this, the Turing test, the basic idea is that you have two rooms. In one of those rooms, you have a machine; in the other one, you have a human. That's the person labeled B in here. And then you have a judge-- in this case, a human judge-- who can ask questions of those two agents. And in those days, those questions were messages that they would pass under the door of each closed room.

And based on those questions and answers, the judge-- Agent C in this diagram-- has to identify which room has the human and which room has the machine.

If the judge is unable to determine which room has the human and which one has the machine, then that means that Machine A is a very good imitator-- this is why the test was also called the Imitation Game. And in that case, we say that Machine A passes the Turing test.
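To make the setup concrete, here is a minimal sketch of the protocol in Python. It is only a schematic: `judge`, `human`, and `machine` are hypothetical callables standing in for the real participants, not anything from the talk.

```python
import random

def imitation_game(judge, human, machine, questions):
    """One round of the imitation game: the judge must find the machine."""
    rooms = {"A": human, "B": machine}
    if random.random() < 0.5:               # randomly hide who is in which room
        rooms = {"A": machine, "B": human}
    # The judge sees only the written questions and answers from the two rooms.
    transcript = [(q, rooms["A"](q), rooms["B"](q)) for q in questions]
    guess = judge(transcript)               # the judge answers "A" or "B"
    return rooms[guess] is machine          # True if the judge caught the machine
```

On this reading, the machine "passes" if, over many rounds, judges do no better than chance.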

If you haven't read this paper, I strongly recommend that you do. It's a fascinating read, way ahead of its time, and there are lots of fascinating ideas and discussions in there.

In fact, before he introduces the Turing test, he proposes different variations of it. In one of them, for example, you have a man in one room and a woman in the other, and you're trying to discriminate which room contains the man and which contains the woman, in a situation where both are trying to deceive you-- both pretending to be the man, or both pretending to be the woman. But in any case, this is one version of how we can try to evaluate machines and their performance.

So you can imagine many different versions of such Turing tests. So for example, we can have a Turing test for barcodes in the supermarket, we can have a Turing test for how much we know about string theory, one for vision, one Turing test for playing chess, for driving, et cetera, et cetera, et cetera. So you can have infinite variations.

So I'll give you a couple of examples, and over the course of the next couple of weeks, we'll give many examples that have to do with vision and visual processing. This is in part because AI has been particularly successful in several aspects of visual processing and because we know a lot about visual processing in brains as well.

So the Turing test for vision would look something like this. We have an image, any arbitrary image-- it's important that we're talking about any arbitrary image. And then we can pose essentially an infinite number of questions about that image. We can ask, how many people are there? What color are the signs? Are there any dogs? Et cetera, et cetera, et cetera.

So based on these kinds of arbitrary questions about arbitrary images, we can ask, given the answers to those questions, are those answers coming from a human or from a machine? There are a lot of important aspects to this kind of testing. One of them, of course, is that both agents need to be able to understand the question.

So for example, if I ask this question in Arabic or Chinese or Hebrew and the person doesn't understand the question, then that's not a valid test. Similarly, the computer needs to understand the question. If the computer doesn't understand the question, it's the same as me posing the question in Esperanto when you don't speak Esperanto. So both need to understand the question.

And then it's important that these questions are arbitrary and infinite in principle. If we're asking only one kind of question, it may be very easy to construct an imitator. The challenge, of course, is to build imitators that can actually surpass or imitate humans in an arbitrary variety of different kinds of tasks.

So Tommy Poggio-- who's one of the founding fathers of the Center for Brains, Minds, and Machines, which gave rise to this course, and also, I would argue, one of the founding fathers of AI in general-- made the claim that if we can understand the brain and understand intelligence, we can find ways to make ourselves smarter and to build smart machines that help us think.

And I like this because it emphasizes the potentially transformative role that understanding intelligence has on almost every aspect of life on Earth. If we really understand intelligence, that may have a major impact on building algorithms-- for example, the large language models that you all know very well.

It will also help us play chess better and develop algorithms that will play chess and Go and so on and build self-driving cars. So there's a lot of engineering applications that many of you are probably quite familiar with and that make the cover of the New York Times quite often.

But in addition to that, one could imagine that understanding intelligence may transform mathematics and physics, how we interact with each other, education, the curing of mental diseases, politics, security. It's hard to think of any aspect of our existence that will not be touched upon, or potentially completely transformed, if we can really understand intelligence.

So we think that this is not just a question like many others. This is a potentially transformative question that will, in many ways, change history.

OK. Many of you are quite familiar with many different astounding successes of artificial intelligence over the last decade or more.

Here are some examples that many of you are probably very familiar with: from self-driving cars, to beating world champions in Jeopardy, chess, or Go, to systems that you can basically talk to and interact with, all the way to solving problems like determining the three-dimensional structure of proteins from the primary amino acid sequence.

I have a very thick accent myself. I was born in Argentina. And I'm quite impressed: when I open up a new iPad or phone, with just one or two examples, the machine can understand my accent quite well. In fact, when I talk to people on the street, I often have to repeat what I'm saying.

Machines can understand me better than a random person on the street can. So it's quite amazing. Of course, there are many people with Spanish accents, so the machines have an enormous amount of training data.

And then fast forward to today: many of you have probably played with large language models. We can criticize them in many ways, and we will criticize them in many ways, but it's quite astounding what they can do in terms of their performance.

With the problem of protein folding, I confess, I was very, very impressed. There are very, very serious people who have been working on the question of protein folding for decades. When I was a grad student myself, back in prehistory-- my kids like to think that I was basically living at the same time as the Tyrannosaurus rex and other--

But anyway, so when I was a grad student, I was hesitant about whether I should go into protein folding, and I thought it was a very exciting problem. People have been building detailed biophysical models of interactions and so on for decades.

And in a few years, AlphaFold-- a combination of astute algorithms, brute force, a lot of data, and computational power-- could beat decades of research into the physics of protein folding. This is one of many examples of when I was wrong in my career.

When Demis Hassabis said that he was going to start working on protein folding, I said, this is not going to work, you're not going to be able to beat decades of research in the physics of protein folding. And here we are, and this is really quite amazing. So this is another example of what I think is a tremendous success of AI.

OK. So back to questions about Turing tests for vision. We'll talk a lot about object recognition. And we've become quite successful at things like-- answering questions like, where are the people in this image? And algorithms in general work quite well. Sometimes they make mistakes.

And just to emphasize what the problem is, I know that many of you are connoisseurs and experts on this, but if you haven't really thought about this problem, this is what the problem looks like. So an image is just a bunch of numbers. So it's a matrix of numbers that denote the intensity of every pixel, perhaps the color of every pixel if you want to work with color images.

From these numbers, we need to be able to infer what that object is. So imagine now, I remove that picture, I just give you those numbers, good luck trying to figure out what that is. But that's somewhat akin to the transformation that happens when light is reflected from objects and impinges on our retina, and then there are retinal ganglion cells that need to send signals to the back of our brain.

So we get some signal that looks more or less like those numbers. From those numbers, we need to be able to infer what's out there. And we'll talk a lot about the algorithms that happen both in the brain, as well as in machines to solve that kind of problem. So loosely speaking, we have a matrix, we want to extract relevant features and use those features for classification.
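As a toy illustration of that pipeline, here is a minimal sketch, assuming NumPy; the image and the class templates are made-up stand-ins, not anything from the talk.

```python
import numpy as np

# To a computer, a grayscale image is just a matrix of pixel intensities.
image = np.random.randint(0, 256, size=(28, 28))   # stand-in for a real photo

# Step 1: extract features from the matrix (here, the crudest choice: raw pixels).
features = image.flatten() / 255.0

# Step 2: classify the features (here, by distance to made-up class templates).
templates = {"dog": np.full(784, 0.3), "cat": np.full(784, 0.6)}
label = min(templates, key=lambda c: np.linalg.norm(features - templates[c]))
print(label)
```

Real systems replace both steps with something far better, but the skeleton-- matrix in, features extracted, classification out-- is the same.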

And a particularly successful way to do that over the last several decades has been to build algorithms based on neural networks. So the idea is that we have very simple computational units that are very, very loosely inspired by neurons in the brain. They are interconnected with each other, hence the term neural network. And those neural networks, when trained appropriately, can do apparently magical things.

So there are a lot of emergent computations that happen depending on exactly how those units are connected. And we'll spend a lot of time talking about those connections, those neural networks, what they can do, what they cannot do, how to train them, how to learn from them, how to improve them, and many of the projects in the class will focus on these.

So this is a list of some interesting computations and recipes that have been quite successful in the neural network world, including convolutional (CONV) layers, normalization (NORM) layers, ReLU layers, pooling (POOL) layers, weight changes, dropout, and so on.
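To give a rough sense of how these recipes fit together, here is a minimal sketch, assuming PyTorch; the toy architecture and the sizes are arbitrary choices for illustration, not anything from the course.

```python
import torch.nn as nn

# A toy network combining the recipes listed above, for 3 x 32 x 32 inputs.
toy_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # CONV: convolutional layer
    nn.BatchNorm2d(16),                          # NORM: normalization layer
    nn.ReLU(),                                   # ReLU: rectified linear unit
    nn.MaxPool2d(2),                             # POOL: pooling layer
    nn.Dropout(0.5),                             # dropout, active during training
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # weights changed by training
)
```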

And I will contend that most of these ideas, which have been so prominent and important in the machine learning world, have actually come from neuroscience-- that is, from studying the biology of the same problems that actual biological brains need to solve.

OK. So here's a semi-random list of a couple of things that neural networks are extremely good at and can do now, focusing specifically on questions related to vision and pattern recognition. For many years now, we have had algorithms that can recognize handwritten digits and classify large image data sets like ImageNet.

We have algorithms now that are better at face recognition than so-called superrecognizers or forensic experts. We have algorithms that are better than radiologists at diagnosing diseases like brain cancer, and better than ophthalmologists at diagnosing things like diabetic retinopathy.

And the reason I like this particular story is that we'll have a talk in a few days by people from Google who worked on this project. They used this kind of image-- a picture of the back of the eye called a fundus photograph-- in order to diagnose diabetic retinopathy, a particular disease of the eye.

But then they realized that from these same images, they could ask different questions-- questions that no clinician had ever thought about before. They could detect the gender of the person, they could detect their age. Moreover, they could detect the risk of cardiovascular disease, which nobody had thought of extracting from this kind of image. So in a way, they could use machine learning to discover new principles and new ideas that nobody had thought about before.

We can classify plants and galaxies. If you have a phone, you have probably played with this: you can recognize plants when you're walking around, recognize galaxies. And then, extending to other domains, we have speech recognition, sentiment analysis, decision-making, automatic translation, predictive advertising, earthquake prediction, protein structure, and so on. So there are many, many astounding successes of AI.

And yet, I will contend that there are many things that deep convolutional networks cannot do, and I want to talk a little bit about some of those things as well.

Before I do that, I want to talk about another angle of AI. So I put this very quickly at the beginning. Can anyone tell what's common to all of these images? What's in common among all of these people? If you have seen me talk about this before or anyone else, don't say it, but--

AUDIENCE: It looks like they're all based on the same face. Some of them are really similar to each other.

GABRIEL KREIMAN: They're very similar to each other. They're based on the same face. What's that?

AUDIENCE: They're not real people.

GABRIEL KREIMAN: They're not real people. OK, good. So I'm preaching to the converted here. You are too smart. So that's good. So you're both right. When I do this with the lay audience, people don't realize, they start to say, well, they're all men. No, that's not true. Oh, they're all colored. No, they're not. They're all smiling. They're not. OK. Anyway. So that's what usually happens, so this doesn't work at all. Anyway, so you're absolutely right. So these are all fake, these people don't exist.

But the reason I wanted to point this out, even though you're both completely right, is that we have amazing generative algorithms now. Not only can we classify lots of things, from faces to breast cancer to galaxies and so on, but we can actually generate things. In this particular case, we can generate images.

And I think there's tons of interesting applications and potentially interesting questions that will come out of the fact that we can actually generate semi-realistic images despite the fact that, in a few seconds, we have at least two people that detected my trick very, very quickly.

OK. So one of the things that-- let's see if this one works, and this is a challenge for everyone, especially for the two of you. Another thing that, of course, has been quite spectacular, especially in the last year or so, is the development of large language models that are quite amazing in terms of their abilities.

So this is an actual Turing test. Actually, this is one of the projects that was started here in the summer course last year, led by Mengmi here, with many other people, some of whom are in this room and participated. So this was a conversation between two agents, called A and B here. It could be that they're both human, it could be that they're both machines, or it could be that one of them is a machine and the other one is human.

I don't know if the font is large enough. That's yet another reason for people to get cozy and come closer. But I want you to spend a few seconds reading all of these, and then I will ask you whether you think that A is human or not and whether B is human or not. OK. So please read the whole thing.

OK. I'm a very slow reader myself, but hopefully most of you have read this. OK. So raise your hand if you think that A is human. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11. OK, 11. Raise your hand if you think that A is a machine. 1, 2-- OK. So that's A.

OK. B. Raise your hand if you think that B is human. 1, 2, 3, 4-- OK. Raise your hand if you think that B is a machine. OK. So that one I thought it was 50/50. OK.

So as you can see, it's not that easy. In fact, when I counted, it was exactly 59% for A being a machine, and then exactly 50/50 for the second question. It shows that this is not that easy. B here is GPT-3, and A is a human. So A is a human and B is actually a machine-- it's the GPT davinci version, for the aficionados. Maybe you're laughing.

AUDIENCE: Oh, I was saying, I will never forget that I got it wrong.

GABRIEL KREIMAN: You got it wrong? OK. So she's the first author and she got it wrong. OK. So this is actually pretty challenging. So if you want to-- just for fun, if you want more examples, you can go to that paper over there.

So we conducted a lot of these experiments. The short answer is that it's becoming pretty challenging to tell. So if you're having a conversation online with some agent, it's becoming nontrivial to determine whether you're talking to a human or not. I actually forgot to check whether our two vision experts got it right or wrong. You got both wrong?

AUDIENCE: Both wrong.

GABRIEL KREIMAN: OK, good. OK. So one thing worked. OK, good. All right. In this case, in that particular version, there were six different tasks. And the question was to try to operationalize the Turing test and to assess to what extent current algorithms can imitate humans or not in a variety of tasks. So this was one of the six tasks, which was a conversation task.

And then there were other tasks, including word association. So I give you a word. I say sky, and I ask you, what's the first word that comes to your mind? And we did that with machines as well. There was image captioning. Similar to the previous one where you have an image and you have to caption that image.

There was one where we had images and you had to detect objects, detect colors, and so on. So there were six different tasks. And this is one of infinitely many possible variations of the Turing test, where we're trying to quantify performance. So if you're interested in Turing-like tasks, come and talk to me or to Mengmi. There are plenty more things to do here.

So, despite the fact that I've been selling and advocating the enormous successes of AI, I want to highlight several of the main challenges-- several interesting challenges-- that I think we're still far from solving. One classical one, first described over a decade ago, is the notion of adversarial attacks, which many of you are probably quite familiar with.

So you can take an image like that one, for example, which is correctly classified by an algorithm with the label pig. You can introduce a certain amount of noise and convert that to another label-- in this particular case, airliner. This amount of noise is basically imperceptible, so it's very hard for humans to tell these two images apart.

In fact, you can make the noise so small that you cannot even render it on a computer screen. And yet, you can easily fool most algorithms today in terms of this kind of very basic visual recognition task.
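As a concrete sketch of how such noise is typically crafted, here is the fast gradient sign method, one standard attack; this assumes PyTorch, and it is not necessarily the attack used for the pig example on the slide.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.003):
    """Fast gradient sign method: add tiny, loss-increasing noise to an image.

    `image` is a (1, 3, H, W) tensor in [0, 1]; `label` is the true class ("pig").
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by +/- eps in the direction that increases the loss.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # still looks like a pig to a human
```

Because `eps` can be made tiny, the perturbed image is visually indistinguishable from the original, yet the model's label can flip.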

A lot of the enormous successes come from cases where the problems are just too easy or ill-defined. This is a classical example: about a decade ago, people claimed that you can actually do action recognition. The way this is done is by scraping lots of videos from the internet-- videos of people playing billiards, cliff diving, hitting cricket shots, et cetera, et cetera-- and then you write an algorithm to detect whether you can recognize the actions or not.

It turns out that this family of tasks is not that challenging. You can take a single frame, take the pixels from that frame-- nothing sophisticated, just pixels-- and do well above chance with just that. For example, most of the videos that contain cricket shots have a lot of green, and most of the ones that show writing on a board have a lot of black.

So just by detecting the overall dominant color in the image, you can already do quite well, even though that has absolutely nothing to do with action recognition.
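To see how little it can take, here is a sketch of such a trivial baseline, assuming NumPy and scikit-learn; the frames and labels below are random stand-ins for a real video data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mean_color(frame):
    """Collapse one H x W x 3 video frame to its average (R, G, B) values."""
    return frame.reshape(-1, 3).mean(axis=0)

frames = np.random.rand(100, 64, 64, 3)      # stand-ins for single video frames
labels = np.random.randint(0, 2, size=100)   # e.g. 0 = cricket shot, 1 = writing
features = np.stack([mean_color(f) for f in frames])
classifier = LogisticRegression().fit(features, labels)
```

On real action data sets, even these three numbers per video can score well above chance, which says more about the data set than about action recognition.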

And in fact, when you do control tests-- for example, here, the question was, is this person drinking or not?-- you build image pairs to make it really challenging. In one case, the person is drinking; in the other case, the person is striking the pose as if he were drinking, but he's not actually drinking. This makes the problem very hard. Most algorithms are basically at chance when you have adequate controls on the image data set.

OK. So I'm from Argentina, so I cannot give a talk without showing Lionel Messi over there. The point I want to make here-- and this is a little bit outdated; this is 2015, this is 2019-- concerns the World Cup in robot soccer. Yes, there is such a thing, if you've never thought about this.

There are people who actually build robots to play soccer. And this is basically what they do. Compare that with-- I'm not even a good soccer player, but even if I were playing, I think I could play better than that.

So I think there are many problems that we are clearly extremely far from solving. Part of this is dexterity. We can debate whether this is intelligence or not. I would contend that there is a lot of intelligence in Lionel Messi and in playing sports in general. And part of it is just the dexterity of having robots that can stand and move and so on, but I think it's pretty clear that the gap here is still quite enormous.

Here's another one that I like very much because it really boils down to basic vision, although I would argue that it actually transcends vision, which is understanding humor in an image. So imagine I give you a picture and I ask you, is that picture funny or not? Is that trying to portray something that's humorous or not?

So this is a very simple binary discrimination. We are deliberately avoiding text here, although one could, of course, ask the same question with language as well. And so I show you one image or another image and ask, is it funny or not? Most humans have some intuition about what's funny. Humans may disagree. There are cultural influences on humor. Sometimes there are things that are funny to you but not to me, et cetera, et cetera, et cetera.

But all in all, people tend to agree on what they find funny. And I would argue that we're still extremely far from having algorithms that can actually understand whether an image is funny or not. And if you're interested in this, we have actually tried this-- we have a data set on humor, and we're trying this sort of thing. There have been some claims out there that machines can do it. I hope I didn't offend you.

AUDIENCE: You know I used to work--

GABRIEL KREIMAN: OK, all right. All right. So anyway, I think that there have been some claims that you can actually show a picture to some of the very large language models integrated with images, and they can explain why an image is funny.

I think that most of that is overfitting. I don't think it's true. I don't think you can actually take any arbitrary image and really understand whether it's funny or not. And if people disagree, I'm happy to discuss this. And if people are interested in working on this, we actually have images and data sets to work with.

So where do we go from here? We have amazing successes in AI, but at the same time, there's an enormous gap with biological intelligence. And so I'd like to turn to another one of the main pillars of this course and of our thinking about this family of problems, which has to do with neuroscience.

And I always like to quote Oscar Wilde saying, "The great events of the world take place in the brain. It is in the brain, and the brain only, that the great sins of the world take place also." So despite the enormous computational power that we have, we still have a huge number of tasks that biological brains can solve and machines cannot.

And so I will argue that brains have enabled us to go to the moon, to prove Fermat's last theorem, to find antibiotics, to elucidate the structure of DNA and the basis of inheritance, and to solve many, many other problems. And I think we're not quite there. And I would argue that we will need to take inspiration from neuroscience for the next chapter in artificial intelligence.

And so in addition to arguing that neuroscience provides critical constraints and inspiration for AI, we also want to understand neuroscience and brain function because of the enormous toll that mental disease takes on our world. These are just some of the many, many available statistics on the huge toll of mental disease throughout the world.

So we don't need to go into specific numbers. Just note that mental disease is a major problem for young people and for older people-- basically, it affects everyone. And if we want to fix that, it's not going to be sufficient to build algorithms that can play Go and chess very well. We need to actually go inside the brain and figure out how the system works.

How do we study brains? Many of you are connoisseurs and experts. If you're not: people have been developing lots and lots of different techniques to study brains at many different spatial and temporal resolutions. Here is a diagram where, on the x-axis, you have the temporal resolution used to study the brain, from milliseconds all the way to months. On the y-axis, you have the spatial resolution, from studying brains at the level of synapses all the way up to whole brains.

And I would argue that there is a gold-standard resolution for examining brain function, at the scale of microns and milliseconds, because we know that a lot of the computations in the brain happen at these spatial and temporal scales, and we really need to understand computations at this particular level.

So many of the algorithms that we have constitute only a very poor and very simple approximation to the computations that transpire in biological tissue. Here on the left, you see a staining of an actual neuron, and all the complexity that you see there is often approximated by something that looks more or less like this in the world of neural networks.

So in the world of neural networks, we talk about inputs from other neurons-- let's call them presynaptic neurons. We talk about a cell body that linearly integrates the activity from all the inputs. The x's here correspond to the inputs; the w's correspond to the weights, which you can think of as the strengths of those synapses. Those are linearly integrated. Then there may be some nonlinearity, an activation function, and the neuron produces an output.
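That textbook unit can be written in a few lines; here is a minimal sketch, assuming NumPy, with arbitrary made-up numbers.

```python
import numpy as np

def unit_output(x, w, b=0.0):
    """One model neuron: linearly integrate the inputs, then apply a nonlinearity."""
    z = np.dot(w, x) + b      # weighted sum over presynaptic inputs: sum_i w_i * x_i
    return max(0.0, z)        # ReLU, one common choice of activation function

# Three presynaptic inputs x, with synaptic strengths w.
print(unit_output(x=np.array([1.0, 0.5, -2.0]), w=np.array([0.2, 0.8, 0.1])))
```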

So this is at the very heart of the vast majority, if not all, neural network models. And even here, I think it's very clear from what people have been discovering that there's enormous complexity at the most basic level-- at the level of the computations that happen in a single neuron.

My PhD mentor, Christof Koch, wrote a beautiful book called The Biophysics of Computation. This is an entire book just devoted to that problem. That is, what are the computations that happen in one neuron? What's happening at the level of the dendrites, what's happening at the level of the soma, et cetera, et cetera. So if you're interested in this particular question, I strongly recommend that book. So there's a lot of complexity even at the very bottom of the circuit.

And then perhaps even more relevantly and excitingly, computations in the brain don't happen because of just a single neuron, but it actually takes a village. It's all about the interconnection, it's all about the entire circuitry. So this is what a neural network might look like, and this looks nothing like the type of complexity that we have in the wiring diagrams of actual brains.

So at the architectural level, at the circuit level, there are also massive differences between current neural networks and actual biological brains, and we'll talk a lot about that as well. So I'd like to give a quick shout-out to two animals. OK. I lost my mouse. Anyway. Trust me, there are two more videos on the right. Trust me, I'm a scientist. There were two more videos in there.

Anyway. So animals are amazing. They do a lot of amazing things. They evolved over millions of years to survive and to be able to solve amazing problems.

And the reason I want to bring this up is because I think that there is some Homo sapiens supremacy theory out there that humans are special and that we need to understand human intelligence. I think that there's a lot to learn from animals, and that's why I like to talk about biological intelligence rather than human intelligence.

As excited as I am about humans and interacting with humans, I don't think there is anything really special about humans; we're just one branch of evolution. And in fact, I would contend that most of the progress in neuroscience-- most of the progress in terms of understanding circuits and brain function-- has come from studying different types of animal models, much more so than from studying humans and the human brain.

We know almost nothing about the human brain, mostly because it's very hard to study human brains. We don't have the tools, we don't have the resolution. There are many things that we cannot do. So I think it's imperative to study animal models if we ever want to make progress on the question of understanding brain function.

So very quickly, this is one seminal example of a profound discovery that has been extremely influential in neuroscience and vision in general, but also specifically for neural networks. These two gentlemen here are David Hubel and Torsten Wiesel, working at the time at Harvard Medical School.

And basically, what they did was insert electrodes to listen to the electrical activity of neurons in primary visual cortex, working first in cats, and subsequently in monkeys. And they discovered that there are neurons in primary visual cortex that are tuned to specific features in specific locations of the environment.

So those locations were called the receptive field. Some of those features had to do, for example, with the orientation of a bar. So a neuron would fire very strongly for a bar oriented at 45 degrees, but not for a vertical bar, and not for a horizontal bar. They were extremely sensitive and very specific.

So that incredible degree of specificity was unlike anything people had discovered before. And for that work, they were awarded the Nobel Prize a few decades ago. Not only that, but they went on to propose a circuit that purported to explain some of those features-- how that kind of selectivity could come about.

And if you squint a little bit at these diagrams-- this is an actual diagram from their papers, and what you're seeing here is actual data from the paper-- you can see a very early diagram reminiscent of the neural networks of today.

So many of the neural networks that we have today were inspired by this kind of diagram and this kind of idea of how we can actually create exciting, emergent properties by connecting neurons in the appropriate way.

So fast forward many years. Here's the list of properties that I argued are at the heart of many neural networks today. And I would contend that most of these have an analog in biology. Convolutional layers are akin, to some extent, to filtering operations and the simple cells that David Hubel and Torsten Wiesel discovered.

There's extensive work in neuroscience on the question of normalization, which is very similar to the normalization layers that we have in neural networks. As I said, there are whole books written about the biophysics of computation in individual neurons-- the input-output curves: what you put into the neuron and what you get out of it. A very, very simplified version of that is the ReLU layer.

People have been examining questions about tolerance and invariance in the visual cortex for many years. This is somewhat parallel to the idea of POOL layers. The notion of weight changes is the question of plasticity in neuroscience. People have been studying plasticity in neuroscience both at the synaptic level as well as at the behavioral level for a very long time.

I would contend that the well-known technique of dropout-- whereby some of the units in the neural network are inactive during some epochs of training in order to build tolerance-- is similar to what happens in the brain in terms of synaptic failures and activity that doesn't propagate from one neuron to the next.

And the idea of deep architectures is very similar to the notion of hierarchical architectures in the brain, which have also been described in neuroscience for quite some time.

OK. I'd like to discuss now very quickly three reasons why I'm optimistic about neuroscience, I'm very excited about neuroscience, and why I think neuroscience has the potential to transform AI as well. Before I do that, I want to get a few-- people have been very silent so far. So a couple of questions, comments. Any thoughts, disagreements? Yes?

AUDIENCE: I'm curious why you thought that, for neural networks, understanding why certain images are funny was just overfitting, not genuine understanding?

GABRIEL KREIMAN: I'd love to see evidence otherwise. I'd like to be able to input any image, particularly those from our data set, where we have a pretty well-controlled data set, and see what networks do. I've seen examples with very specific images. For example, a friend has shown me one image that I always like to use in many of my talks, and then the human user asks questions to a large language model about the image. At the end of that--

AUDIENCE: Does the large language model know what's in the image? Is it first described to it in words, and then it can answer questions about it?

GABRIEL KREIMAN: It's not. So the neural network has the image. There is no description. OK? And then there's a human that asks a question and says, are there people? What are they doing? What's happening? And the human is leading the large language model towards the answer of why the image is funny. And at the end of this dialogue, the large language model says, yes, this is a funny image because blah, blah, blah.

And I may show this on Monday, but I don't think that the computer understood. I think all the work was done by the human leading it through the questions in that particular case. I haven't--

AUDIENCE: --so it reasoned correctly about why it thought it was funny because the human--

GABRIEL KREIMAN: But the human basically walked the model all the way through. What I'd like to see is-- if somebody wants to explain-- OK, so I don't know if you found it hilarious or not, maybe you didn't, but take the image of Abraham Lincoln with a phone. Someone, why is that purportedly funny?

AUDIENCE: Because it looks like a bathroom selfie that teenagers do.

GABRIEL KREIMAN: Right. OK. And why is it funny? Why is it funny that Abraham Lincoln--

AUDIENCE: Because--

GABRIEL KREIMAN: OK, right. So OK. So I didn't tell you much. I guided you, now I asked you, why is it funny? But I don't think that-- that image is very famous. Many of you may have seen that image. So it's likely that many models may have been trained with that image. Assuming that the model has not been trained with that image, I'm very skeptical that a model can actually take that image and understand why that image is funny. Or at least I haven't seen any evidence of that. Yes?

AUDIENCE: Although you could argue that you could provide context to the model by saying, well, there's this time period in which that occurred. [INAUDIBLE] technological advances and there's-- culture drives how pictures are made. So there's context there that accumulates for a human. I think the image itself inherently is not really funny, but it's the fact that you can incorporate other aspects into it.

GABRIEL KREIMAN: I agree, I agree. And then the question is, why is it that a human-- I didn't tell him anything, but he could get all of that? Of course, I don't know how old he is. He looks like he's 20. But anyway, he has two decades of experience in this world. So you could argue that during those two decades, he got all that context.

So I think that's fair. And I'm not disagreeing with you. I think that networks right now don't have the two decades of experience with the world that he has. What do those two decades buy you? They buy you the notion of what a selfie is. They buy you the notion that there were no cell phones during the time of Abraham Lincoln.

They buy you a lot of things, and they buy you the opportunity to integrate all of that knowledge. I think that we're very far from being able to do that. So I'm not disagreeing; I think we're saying the same thing. You're saying it better, but I think it's the same idea. Yes?

AUDIENCE: I [INAUDIBLE] think that [INAUDIBLE] by giving context without the help of humans. Because like with things that change [INAUDIBLE], and there's a thing of like, let's think step-by-step. And I'm pretty sure if you-- if it starts with us, without the help of a human, it shouldn't be able to infer why it's [INAUDIBLE] understand why step-by-step [INAUDIBLE].

GABRIEL KREIMAN: I accept it. I'm very skeptical. And if anybody is interested, you're welcome to do this as a project for the summer school. We have a data set of images. I'm happy to offer a very, very, very nice prize-- a dinner, something of your choice.

I cannot offer a Lamborghini-- I don't have that kind of money-- but I'm happy to offer a dinner in town or whatever to someone who reaches a given level of performance on the test data in our controlled data set. It cannot be overfitting on one of the existing images. I'll give you some training data; we have test data. You cannot see the test data at all. On the test data, if you get more than 80% performance-- this applies to everyone--

AUDIENCE: Can we do whatever we want?

GABRIEL KREIMAN: You can do whatever you want. You can do training, whatever you want, any model you want. The only thing you cannot do is look at the test data. And that's all. And it has to be our controlled data set. For example, here's one silly way that people have published papers: you can note that all the funny images are in black and white and all the non-funny ones are color photographs.

Yes, you can write an algorithm with pixels. My daughter, who's in high school, can do that. You can take pixels and do an SVM and determine which one is funny or not, right? But that really has nothing to do with humor. That's just black and white versus color. So there are lots of confounding factors like that.

But on a controlled data set, where you try to get rid of those confounding factors as much as possible, I don't think that's doable right now. And I'd be happy to be wrong here. So if anybody can prove that I'm wrong, I'll be very happy. Any other questions? Anything that's not about humor? You want to ask about humor?

AUDIENCE: I have a different question. As you mentioned about [INAUDIBLE] take place at the [INAUDIBLE]. But [INAUDIBLE], for example, do you think AI research can benefit equally from [INAUDIBLE] different scale of neuroscience research?

GABRIEL KREIMAN: No, I think it's not equal. I think the gold standard is to study neurons and circuits of neurons, and I think that other scales are just not as informative for artificial intelligence.

I think that it's going to be very hard to-- I think it can benefit a lot from studying behavior as well, and behavior can be a very important constraint to understand and build better AI, and we'll see many, many examples of that throughout the course. But no, no, I don't think that these different scales are equal. Some are more relevant than others.

AUDIENCE: Are there success stories in the field of AI where AI can infer context?

GABRIEL KREIMAN: Where AI--

AUDIENCE: Can infer context?

GABRIEL KREIMAN: Can infer context. There's a lot of work on trying to infer context. It depends on what you mean by context. Mengmi here has also done a lot of work on using context in visual recognition. For example, the fact that you know you are in this room makes you predisposed to think that there may be chairs or computers, and it's very unlikely that there would be an elephant in here. There's no elephant in the room.

So that's-- so people have been trying to incorporate that. Context, in some sense, you could argue, it's critical to large language models. So they look at the sequence of words and so on. But again, I think context is a very loaded word that has many, many different layers. So the kind of context that we were talking about before, I think that requires a lot of knowledge that's still not quite present in any of the current algorithms.

AUDIENCE: On the point that you made before, you were making the argument that studying neurons gives the most benefit to building [INAUDIBLE] and not systems-level neuroscience. Was that the point that you were making?

GABRIEL KREIMAN: I'm not sure what you mean by systems-level neuroscience. I think studying neurons is part of systems-level neuroscience. I'm just saying that, for example, averaging the activity of every neuron in the brain is not very informative, just because you're averaging out a lot of the critical information. That's what I was referring to.

AUDIENCE: I had a question on that. Basically, if we're going to implement the algorithm on different hardware-- on digital hardware-- there might be constraints on digital hardware which are different from the analog hardware that we run our algorithms on [INAUDIBLE]. So maybe the question that I want to ask is, isn't it more informative to abstract away the specifics of the algorithm running on the hardware and just run computationally similar algorithms on the hardware?

GABRIEL KREIMAN: That's a great question. I completely agree with the question. I don't know what's the right level of abstraction that we need. I think that's a hard question, that's a central question in neuroscience. So do we need to look at the concentration of every protein in every neuron? Probably not. I hope not.

Do we-- is it OK to average the activity of every neuron in the brain? No, I think that that's too coarse, that that's not informative. So it's somewhere in between. So I think the Goldilocks resolution is looking at individual neurons and how they are connected. I think that that's the neural circuit level that I'm advocating.

I'm happy to discuss this. This is my intuition. I don't have mathematical proof. There are lots of people who are working on trying to characterize the concentration of every protein in every cell. I'm not saying that that's wrong. There's nothing wrong with that. There are lots of people who average the activity of cubic millimeters of brain activity.

So anyway, there are lots of-- I'm just saying that my hunch is that we need the level of neurons and circuits of neurons. But again, we can discuss that. And what's the right level of abstraction? I think it's a matter of fierce debate these days.

AUDIENCE: [INAUDIBLE]

GABRIEL KREIMAN: OK. I'll take one or two more questions, and yeah?

AUDIENCE: [INAUDIBLE] I think it may also depend on the [INAUDIBLE] if you're talking about the algorithm level, I think that there are some cases of [INAUDIBLE] where this comparative study [INAUDIBLE] have found similar things in the mechanism. So that's a case where there's evidence that the specific connections and the specific-- at least at a certain level of [INAUDIBLE] algorithm is. But then [INAUDIBLE] it does matter to actually [INAUDIBLE] support in this way [INAUDIBLE].

So you might be able to answer the question of what's important by looking at that.

GABRIEL KREIMAN: I completely agree. So that's-- she says it better. That's another level, it depends on the question as well. So for different questions, there may be different levels of resolution.

OK. All right. So I want to very quickly mention three reasons why I'm optimistic about studying brains, and then I'll switch gears and talk a little bit about logistics for the course.

So the first one is, one of the things that I'm particularly excited about is that we now have circuit-level diagrams of the brain. I'm not going to describe this in any detail because we'll have a whole talk about it by this person here, Jeff Lichtman, who is arguably one of the world leaders in connectomics-- in trying to understand the detailed connectivity of circuits.

So imagine that you're trying to figure out how a computer works or how a phone works, but you have no idea how things are connected, you don't have any information about the wiring diagram. It's pretty challenging.

So now, for the first time ever, we have high-throughput techniques that allow us to get very detailed circuit-level information. Decades ago, people were able to do this for the nematode C. elegans, which has 302 neurons. 302 neurons-- that's the connectivity that we were able to map.

So thanks to Jeff Lichtman and many others, we can now actually get this kind of resolution-- that is, which neuron talks to which neuron, how they are connected to each other-- not only for neurons but for many other elements present in the brain, at a scale of hundreds of cubic microns all the way up to a cubic millimeter or so. And he will give a whole talk about this. And I think this is playing a transformative role in what can be done in neuroscience these days.

The second one that I want to mention is that we have the opportunity now to record the activity of large numbers of neurons at the same time. So again, back to the analogy of the computer. Imagine that you're trying to figure out how this computer works, but you can only record the voltage of one transistor at a time. That's the equivalent of what Hubel and Wiesel did.

So they heroically spent days putting an electrode and recording the activity of one neuron at a time. And with that, they were able to make fascinating inferences about the function of the circuitry.

But now-- the only thing I want to point out about this slide, which is work by amazing people at Janelia Farm, including Carsen Stringer, is the scale. Each row here denotes the activity of one neuron, and that scale bar corresponds to 1,000 neurons.

So you can actually record the activity of thousands of neurons simultaneously. In the span of a few decades, from Hubel and Wiesel to now, we have gone from one neuron to tens of thousands. There are people talking about techniques that may allow us to investigate hundreds of thousands, if not millions, of neurons in parallel-- not the combined activity, not the average activity, but the activity of every single individual neuron.

So again, just to put out there another analogy: imagine that you're trying to understand the political sentiments of people in the US, and the only thing you can do is call one random number and ask, what do you think about Trump? What do you think about Biden?

Another option is to average everything and say, well, what do people in the whole state of Massachusetts think about Trump? And then you can get some average of everything, which is not very useful, I would contend.

But now, we can actually get tens of thousands-- perhaps one day, hundreds of thousands-- of individual answers to that question in parallel. So I think that this is also going to be transformative. And finally, the last thing that I want to mention is that we now have the possibility to causally interfere with neural activity.

And again, I won't go into the details here because we have this person, Ed Boyden, who was one of the creators of this technique called optogenetics, by which you can actually turn specific circuits on and off. Imagine you could go into the computer and turn on and off specific parts of the wiring diagram to causally probe the function of that circuit.

So I think this is also playing a transformative role. And I don't know, maybe Cole here has done some amazing experiments with this family of techniques that I hope that he will have time to talk to you about as well. OK.

So the last two or three things that I wanted to say. So I've been talking about taking neuroscience as inspiration to build better AI, and we'll talk a lot about that in the course. Even if we build a system that's extremely intelligent, that can play chess, and can play Go, and can do a lot of amazing things, that doesn't mean, at the end of the day, that that system will have any kind of emotion, any kind of feeling, any kind of consciousness.

So in the neuroscience world, there has been a lot of interest in the question of, what exactly is it about the massive brain that we have here that produces our feeling of consciousness? And that person over there is Francis Crick, who was-- at the end of World War II, he was trying to decide what to do with his life, and he said, well, should I work on consciousness or should I work on DNA?

And he was debating about this too, and in the end, he decided, well, first I'm going to solve DNA, and then I'll work on consciousness-- which I think was a lucky thing for humanity, because if he had started with consciousness, maybe he wouldn't have finished as quickly.

But anyway, so he elucidated the structure of DNA, together with Jim Watson and Rosalind Franklin and many, many others. And then he went on to spend the rest of his life thinking about, and providing influential ideas on, the study of consciousness-- more recently, joined by Christof Koch over here.

How exactly this is connected to the idea of machines being conscious or not remains entirely unclear. Whether we can build machines that have consciousness or not also is very unclear. Whether we want to is also quite unclear.

And I want to dissociate the notion of whether a machine-- let's say a large language model or your favorite convolutional neural network-- has any kind of consciousness or not, from the idea of ascribing feelings to machines. So I think that this is something that's going to happen in the field very, very quickly. Some people would argue that it has happened already.

So here are a couple of examples of different cases where people have ascribed feelings to machines. The most astounding to me is the Tamagotchi effect. Many of you are very young and probably don't even know what that contraption is. It's basically a random number generator-- a contraption that did nothing except that, at random, it would say things like, I'm hungry, I'm sad, and it would start crying.

And believe it or not, there were lots of kids who really suffered for that thing. They were very willing-- I'm not trying to laugh, I'm serious-- they were very willing to ascribe some feeling or consciousness to that random number generator.

This was an article in The New York Times about people falling in love with machines. Do you take this robot to be your wife? So this is 2019. I think it's an interesting article for people to read. This is one of the companies that builds amazing robots, Boston Dynamics. This is how they train the robots. And we had a demo of this a few years ago here in the summer course.

And the reason I want to point this out is that when we had these demos for the first time here, I was very surprised by the reaction of the audience. Everybody was really thinking that this human was being cruel. So this is a piece of metal. It's an amazing piece of metal-- probably one of the most dexterous and able robots out there in a lot of respects.

But then most people see the way that these machines are trained and they think that humans are cruel because they ascribe some sentiment to this machine.

So I think before we get to machines that are conscious-- and before we even debate whether machines can or cannot be conscious-- people will ascribe feelings to machines, in the same way that people have talked about large language models that write sentences like "I'm feeling sad" and so on, and people are very quick to empathize with those sentences.

So this brings me to the last point I want to make for now, before I get into the logistics: with great power comes great responsibility. So I hope that we'll have ample opportunities throughout the course to discuss the important ethical implications and responsibilities that we have as researchers in the field of AI.

So these are some of the many topics that I hope we'll have opportunities to discuss. I think, with many others, that there will be a redistribution of jobs, so the job market will change in a profound way. Some people have argued, and I don't necessarily disagree, that the change may be akin to, or perhaps even grander than, what happened during the Industrial Revolution.

I personally think that Terminator-like scenarios are very unlikely. People have argued for this sort of thing as well. There are questions about AI for military applications. What happens when machines make mistakes? Of course, humans make mistakes, too, but we're used to humans being sloppy and making mistakes. We're not used to machines making mistakes and there are lots of questions about what happens in those cases.

There are lots of biases in training data. Of course, humans have lots of biases, too, but again, we're used to humans and their biases; we're not as used to dealing with machines and algorithms that have biases.

Some people have argued-- and I personally don't quite agree with this statement-- that machines don't have true understanding, and we can debate about what understanding means and what people mean when they make such claims. In addition to the redistribution of jobs, I think there will be a lot of social, mental, and political consequences of those rapid changes in the labor force. Also a topic of interest for discussion.

And in many of these cases, a lot of these things are happening way, way faster than regulations, so I think that this is also an important topic that I hope we'll have time to discuss.

OK. Any questions, comments? I want to switch now to a couple of logistical questions about the course for a few minutes before we end. Any questions or comments about this so far? Yes?

AUDIENCE: [INAUDIBLE] talking about how people-- there's a tendency to take humans as special [INAUDIBLE] intelligence. And I'm wondering about these [INAUDIBLE]. There's some confusion between being [INAUDIBLE] and being humanized [INAUDIBLE]. And [INAUDIBLE] do we want to build machines that do well? Do we want to build machines that behave-- like, do we want them to understand people or do we want [INAUDIBLE] success?

GABRIEL KREIMAN: If you're asking-- that's a great question. By the way, I should say, yet another reason for people to come closer is that the acoustics here are horrible. Part of it is that I'm old, and many of the speakers are old, but part of it is that the acoustics are really horrible. So sometimes you need to shout.

If you don't believe us, one day I will do the experiment: I'll have you come here and try to listen to people in the back, OK? So either shout or actually come closer. But I completely agree with the premise of your question.

So these are different goals. If you ask me personally, I want it all. I want everything you said. So I want to be able to understand biological brains. I want to be able to build machines that can solve tasks irrespective of whether they do it in a human-like manner or not. Because in many cases, we just want the task to be solved and I don't care how. I don't care whether it's inspired, I don't care whether it's similar. I care that it works.

So when you go to the supermarket, you don't care whether that barcode reader was inspired by Hubel and Wiesel or what the heck it does. You just care that-- so I think in many cases, we just care about getting the job done. But in many other cases, we may want to have algorithms that are aligned to humans for a variety of different reasons.

So because we may have-- want them to have similar intentions, because we may want them to label images the way we do. There's nothing wrong with that algorithm that took that image of a pig and called it an airliner. It just doesn't match with what we see. According to the algorithm, according to the classification function, that image is an airliner. It just-- we see it as a pig. So in that case, that's a clear misalignment.
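As a minimal sketch of how such a misalignment can be produced deliberately, here is a gradient-based adversarial perturbation in the style of FGSM (Goodfellow et al., 2014)-- assuming PyTorch and torchvision are available; the model and step size are illustrative choices, not the specific pig/airliner demo:

    # Fast Gradient Sign Method: nudge the image in the direction that
    # increases the classifier's loss on its own current label.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_perturb(image, label, epsilon=0.01):
        """Return a perturbed copy of `image` (shape [1, 3, H, W], values in [0, 1])."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), torch.tensor([label]))
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Usage: a random tensor stands in for the photograph of the pig.
    x = torch.rand(1, 3, 224, 224)
    label = int(model(x).argmax())       # whatever the model currently says
    x_adv = fgsm_perturb(x, label)
    print(int(model(x_adv).argmax()))    # often a different class entirely

The perturbation is tiny on a per-pixel scale, which is why the image still looks the same to us while the classification function's answer changes.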

So in many cases, I think for many applications, we do want alignment with human values, with human answers and so on. And in many other cases, we may not care about that. We may just care about building algorithms that work. So I personally want it all. I think depending on the question, I think that there may be different applications. Any other questions, comments? Yes?

AUDIENCE: I think that the thing about humans ascribing feelings to the random number generator, for example. [INAUDIBLE] the little thing on the screen looks very much like an animal, or something similar-- that the robot looks like a human. So if the robot is able to look like a [INAUDIBLE], for example, I would-- I don't think it would be [INAUDIBLE].

GABRIEL KREIMAN: I think that's true. I think that it depends a lot on the colors and the shape and the aesthetics and so on. Boston Dynamics, in many cases, they build human-like machines, partly for practical-- for legitimate reasons that they may work, but partly because it's useful aesthetically to make it look that way.

There are machines that don't look like humans at all, and still, people use vocabulary that I find very strange, but interesting. If you have one of those Roomba things that's a vacuum cleaner, it goes around in a basically random way, cleaning. And many times people say, oh, it saw a corner, so it wanted to go left-- as if it really wants to do something, or it decided to do this or that.

There's a machine that cleans pools, also a robot. Again, it's a random number generator, and people say, oh, why didn't it realize that part was dirty, or why did it do this? Why did it decide to-- so people use this kind of jargon, and the tendency to anthropomorphize things, I think, is enormous, and people very quickly think about--

There was this case last year about this Google engineer and this large language model that said that it was not being treated-- again, this was a large language model. I very much doubt that the model had any kind of feeling whatsoever, and yet it was a huge scandal.

So I agree. I think that painting eyes and aesthetics, and all of that helps and it will be very, very important, but I don't think that that's the only aspect to it.

AUDIENCE: [INAUDIBLE] how [INAUDIBLE]. For example, when I talk to ChatGPT, I realize that I'm way more polite to it just because it talks kind of like a human [INAUDIBLE] and it's relatable.

GABRIEL KREIMAN: I say thank you to my Alexa thing as well. And yes. Anyway. Yes, absolutely. Any other questions, comments? Yes?

AUDIENCE: Do you think [INAUDIBLE] fully understands animals' consciousness or feelings will boost the development of creating or understanding consciousness and feelings of artificial intelligence?

GABRIEL KREIMAN: I do. I think this is very hard, I think it's very contentious. And again, I cannot mathematically prove that this is the case. I do think that neuroscience has been a major source of inspiration for AI in general, and I think consciousness would not be an exception. I think if we can make progress in understanding why animals are conscious, what the mechanisms are-- what consciousness is in the first place-- I think that will help us understand whether we can endow machines with--

Another question is whether we want to, and we can debate about whether that's a desirable trait or not. I suspect that it will help, but again, this is just my own personal opinion. I cannot really prove this in any way. Yes?

AUDIENCE: So [INAUDIBLE] talks about relationship between [INAUDIBLE].

GABRIEL KREIMAN: Yeah. So first of all, let me answer first about consciousness. I know that's not your question. My two mentors, Christof Koch and Tommy Poggio, differ quite widely on this. So Christof thinks that intelligence and consciousness are completely orthogonal. They have nothing to do with each other. You can have machines that are very intelligent and have no consciousness, and systems that have a huge amount of consciousness but no intelligence whatsoever.

Tommy Poggio, on the other hand, thinks that they are actually highly correlated, at least in practice, so they diverge on this. So in terms of emotion and intelligence, I think that I would go with Christof-- that's where I did my PhD-- and say that they're actually completely different things. So that you can have emotions and no intelligence, and you can have intelligence without emotion.

Of course, there's an intersection between the two-- this is what people mean when they talk about emotional intelligence. But I think that there can be a double dissociation between the two. That would be, again, my conjecture.

Intelligence with no consciousness? I think Christof would argue that ChatGPT is a perfect example of intelligence without any consciousness whatsoever. Or any algorithm that you can think of, basically. So he would say that these are extremely intelligent, they can do amazing things, but they have absolutely no consciousness whatsoever.

AUDIENCE: [INAUDIBLE]

GABRIEL KREIMAN: To have no consciousness? OK. OK-- right.

[LAUGHTER]

Right. So I'm happy to continue the discussion. I think that-- again, we can argue whether this chair is conscious or not. We can argue whether a fly is conscious or not. I think right now, most people would agree, and again, this is not a mathematical argument, but I think most people would agree that right now, ChatGPT has no consciousness, in whatever way you understand the word consciousness. I think it would be very hard for people to defend the notion that it does.

In any case, I'm happy to continue this. We're not going to solve that now. I'm happy to continue the discussion on this. Let's take one more question and then let's switch to some logistical issues that I want--

AUDIENCE: [INAUDIBLE]

GABRIEL KREIMAN: OK.

AUDIENCE: [INAUDIBLE] basically what [INAUDIBLE], what they said. But you mentioned that there are systems that are not intelligent, but conscious. Could you describe a system that is not intelligent but is conscious?

GABRIEL KREIMAN: OK. So again, so according to Christof and many others, consciousness is not yes or no, but rather a continuum. And so he would argue that flies are conscious and they have less intelligence than other algorithms-- and other organisms. We can debate about that.

I think most people would be very hard-pressed if you talk about worms, about C. elegans. I'm not sure whether people would still defend that they have consciousness or not. But I think that these would be examples of things that many people like Christof would put higher on the consciousness axis than on the intelligence axis. Happy to debate about this as well.

Is that-- do you have a very quick question, or-- same question. OK. All right. Happy to continue the discussion on this. I want to switch gears now and talk a little bit about logistics.

So first of all, this course was created as part of the Center for Brains, Minds, and Machines, which has a center of gravity at MIT and many faculty at Harvard and many other places. The main people that inspired this center were Tommy Poggio and Josh Tenenbaum, as well as many of the faculty that are highlighted in yellow. There are many other faculty that have been part of this center from the very beginning. I ran out of space and am being very unfair to the many faculty whose names I couldn't put in here; there are many more.

So one of the goals in creating this center was to try to educate the next generation of scholars that can fluidly converse in cognitive science, neuroscience, as well as AI. And so we thought that one of the best ways to do that was to actually create a summer course, and that's what we did.

This was partly inspired by a sister course that we have here, Methods in Computational Neuroscience, which was created by Christof Koch and has been going on for a few decades now. And the goal of this course is to bring amazing scholars like you from all over the world and train them at the intersection of thinking about neurons, circuits of neurons, algorithms, neural networks, and machine learning, as well as behavior and cognitive science.

And so I want to introduce, again, a couple of people. I already mentioned Boris. I mentioned Tommy, who will be joining us very soon. Kris Brewer here is our recording expert. Everything that you see in terms of all the amazing material that we have on our website is thanks to Kris. So I want to thank him one more time for coming all the way here for the recordings.

Behind the scenes, there's Kathleen Sullivan. Without her, none of this would be possible. She's making everything happen. Andrei Barbu and Mengmi Zhang-- Andrei is sitting right here, Mengmi is sitting right there-- are our head TAs. They are amazing. Mengmi was a student in the course many years ago, and you can ask her any question. She's quite an amazing investigator.

Andrei has been here from the very beginning. So I would say that this course is Andrei in a way. So Andrei has been the creator of almost everything in this course from the very beginning. So I think that you'll all be delighted to interact with him as well.

Lizanne DeStefano is our external evaluator. At some point, you may get an email from her. We have an amazing group of TAs, many of whom have been students in the course before. And I'm really very, very pleased that they come here to interact with you, to help you, to give you tutorials, and to help with projects.

So if the TAs are here, I'd like you to stand up so that everyone can see you for one second. I see Hector. OK. So they will introduce themselves this afternoon after lunch, but I want to take this opportunity again to thank you all for coming here, and also to remind you that I want to talk to you for one minute before lunch as well. But anyway, thank you, thank you, everybody. OK, very good.

So as was mentioned very early this morning, we always have a very large number of applicants. One of the hardest things that we do in the year is having to select 35 students to join our summer course out of the more than 300 applicants. So I'm very honored and very happy that you're all here, and if you're here, that means that you've gone through a very rigorous selection process and that you're amazing already just by the fact that you are here. So thanks a lot for joining us.

Many of our alumni now are faculty or have set up their own startup companies. The course has been extremely successful in really training people and providing opportunities for people to network and do amazing things after they finish the course. This is just a partial list. I couldn't really put everybody in here. This is just some people that were students in this course.

Some of these people are here. Mengmi, Ko. Some of these people will actually come here or give a talk. We have an Alumni Day where we will have many former students from the course that will come and give a perspective on what they're doing and their lives and their careers, and so this is an opportunity for you to talk to people and find out about their evolution, their careers, and so on.

We also have a lot of faculty that will be giving lectures. One of them will be on Zoom, most of them will be here in-person. These are people from very different fields. We have experts in theoretical neuroscience, experts in machine learning, experts in robotics, in computer science, cognitive science, et cetera, et cetera. So I think you're in for a treat.

We also have a couple of special lectures. I mentioned already Jeff Lichtman. We have two people from Google, Philip Nelson and Douglas Eck, who have also given talks in the past. They're quite amazing, and they will give us a perspective from industry and on some of the amazing projects that they're working on.

We have one joint lecture with the Methods in Computational Neuroscience course: a special lecture by Sebastian Seung. We have two full days of theory led by Tommy Poggio. This is on the mathematics of machine learning-- two full days with experts from around the world coming to tell you about the theory of machine learning-- and one Alumni Day.

We have a lot of tutorials. I think the tutorials are a great opportunity for people to catch up and brush up their knowledge of specific subjects, and also to explore things that they may not be familiar with. Many of the TAs will be giving those tutorials, and they are a great opportunity to learn some of the basics and also get to know the TAs better.

You should have received an email to get onto Slack. In the past-- I'm not sure we're doing it now-- there was also an email list, and also a WhatsApp group for the class. So again, if you haven't done this already, we want all of you to introduce yourselves this afternoon.

So if you haven't done so already, please upload one slide to the link-- the Google Drive link that Mengmi sent. It would be great if you put your photo and your name. Many of us will try, perhaps unsuccessfully, to remember your names. If I mispronounce your names, don't get offended; I mispronounce everything. But we'll try, so it would be great if you put your picture and your name, in addition to a bit of background on what you're doing research on.

So I want to end by mentioning very briefly a few things about the projects. So the projects are a highlight, I think, of the course. We think that you learn a lot by doing, not just by listening to people. So because of that, we hope that the lectures will be interactive and we encourage people to really interrupt the speakers, ask questions all the time.

But in addition to that, we will ask each of you to carry out a project for the class. Typically, most of the projects have been done by individuals, but you can also work in groups of two people. We have a suggested list of projects. The TAs will introduce the suggested projects, I think, starting tonight and maybe continuing tomorrow.

You're welcome to create your own project or discuss variations of the proposed projects. If you are going to do that, we ask you to discuss that with the TAs or the PIs. You'll get plenty of help both from the TAs and the PIs.

We have a policy of open discussions, so if you're working on something that's top secret and you're about to have-- your startup company is going to have an IPO in three days, that's probably not the best project for the summer school. So we really like to encourage everybody to discuss projects, ideas openly with everybody else.

I usually like to ask people not to procrastinate on choosing their projects. Three weeks go by very quickly. And this is not binding for the rest of your life.

So I would like to encourage people to try to choose the project within the next couple of days. If you start your project after 10 days or after two weeks, that means mathematically you just have like one week or 10 days to work on the project. So I would encourage people to try to choose the project within today, tomorrow, the next two or three days.

There is plenty of time in the schedule to work on projects. And we'd like to encourage people to work in the lab, because we like the atmosphere of people working together, debating, bouncing ideas around on the whiteboard, complaining that the code is not working. It's very therapeutic to do that with your fellow course members and so on.

But of course, you can work wherever you want. You can work at the beach, in your room, et cetera, et cetera, but we have ample space for people to work in the lab. That's mostly where people will gravitate, and that's where the TAs and PIs will go for discussions.

Many of the projects just don't work. That's just the nature of science. You work on a project for three weeks and that's it. But there are many, many projects that people got excited about that ended up being quite transformative in their careers. Many of them, people continued working on, and they became papers. This is just a very short list.

Again, this is a semi-random list. I could fill lots and lots of slides with projects from the summer school that ended up being published; this is just a random sample. Some students came here and then changed their entire PhD dissertation based on the project they did in the summer. Some people published papers in prominent conferences and prominent journals and so on.

And then, again, there are many projects where people take risky ideas and they just don't work and that's fine. You're not going to get a grade from us. This is not life or death. We want you to learn-- the main purpose of this is for you to have fun, but not too much fun, somebody said.

But to have fun, to actually learn new things, to try new ideas, to take risks. The goal of this is not really to-- most people pass this course. We only have like five, six people that fail every year. No, nobody fails this course. So there is nothing you can do that's wrong. There is no project that's wrong.

And so really, we want people to really think about exciting things and exciting questions. We have TAs that are quite amazing that have published lots of amazing papers and can help you and guide you and work with you. Use them, think with them, work with them, and I hope that this will be a lot of fun for everyone.

We also have a lot of social events and receptions. We have a couple of receptions that will happen after the seminars. We have one day where people will go to Martha's Vineyard, which is an island across the water. There's also a boat ride that can be organized-- the Gemma boat ride, organized by the MBL to go and collect specimens-- if people are interested.

In the past, people have organized more informal group runs, bike rides, kayaking, et cetera. And we have a closing reception after you present your projects. The food at the closing reception is contingent on you having presented your projects at the end. OK, so that's all I want to say.