GABRIEL KREIMAN: So I'd like to get started with this. As some of you may or may not know, next week is quite important. I'm originally from Argentina. And next week is quite important, because the World Cup is starting. I know that many people in the US don't care about this. But this is an event that happens every four years, and it's pretty amazing for mostly everyone-- mostly everyone else.
So this guy over there is Messi. He's arguably one of the best players in the world. And what he can do is quite amazing. It involves vision and motor commands and motor preparation and motor skills and whatnot. We can talk about his legs and his feet and whatnot, but I would argue that the magic really happens in his brain. So the brain is the one that has to have the vision, the decision-making, the rapid processing, and ultimately, the skills to orchestrate all these types of motor output.
And I'd like to contrast that with this type of video-- [INAUDIBLE]-- which is actually a little bit outdated. I think this is 2011. I couldn't find a more recent version. This is the World Cup in terms of robot soccer, so this is where we are nowadays, in terms of what robots can do in terms of playing soccer. So I'm going to advance a little bit. OK, anyway, so the speaker is sort of making it very exciting by shouting and doing and saying all sorts of things. But this is sort of state-of-the-art, at least in 2011.
AUDIENCE: What's the guy doing behind him?
GABRIEL KREIMAN: What's that?
AUDIENCE: Oh, the guy's there to catch the robot in case he falls over?
GABRIEL KREIMAN: Yes. I'm not trying to argue that playing soccer is the pinnacle of intelligence by any standard, but I think that this argues that the brain is doing a lot of magical things. I would like to understand what's happening inside neurons and circuits of neurons that can orchestrate the difference between someone like Messi or--
So this is, of course, another example. Again, good luck trying to implement this with robots these days. OK, so let me close this and get back to the presentation. Was that a question in the back, or comment? You're from Brazil? All right.
So we know that the brain can solve many of these problems that we ascribe to intelligence. And arguably, the brain is a product of millions of years of evolution, and that has led to interesting solutions to complicated computational problems. So by virtue of understanding how neurons [INAUDIBLE] solve these problems, we may, a, understand biology, but also, b, be able to implement some of these ideas in computational algorithms. Biologically plausible or brain-inspired algorithms are certainly not the only possible way of solving these problems.
One can engineer a lot of shortcuts to many of the questions that have to do with intelligence. But we may be able to capitalize on the one system that we know can solve a lot of problems that relate to intelligence. So some of the many, many features that the brain has and which-- I don't know why this is--
AUDIENCE: It's the connection perhaps?
GABRIEL KREIMAN: OK, I think if I hold it all the time, then it will work. Do you think this is the [INAUDIBLE] resolution? This magical resolution. Yeah, I do remember seeing that. Let me try. Does anyone remember the magic number? Let me see if I can try another adaptor and see if that's the problem. OK, well. Here's the claim.
I was going to say that one of the major aspects of computation in the brain is fault tolerance, and this is a perfect example of how computers are not there yet. And also, the notion that the brain can learn and adapt to any situation. So maybe I can hope that you will adapt to that annoying jitter over the course of the next several slides, unless anyone has a better idea how to-- or who wants to play with this.
Here are a couple of things that I really admire about brains and circuits of neurons that I think most of you would agree are not quite there yet, in terms of computers and robots. So we have hardware and software that work for many decades. I wonder whether any of these computers that we have today will be available and useful and working in five years, let alone seven decades.
We have parallel computation with serial bottlenecks. We have reprogrammable architecture. The same piece of cortex can be used for very different tasks. We can do single-shot learning. We can discover structure in data. Fault tolerance is, of course, a major aspect of computations in brains. You may hit your head or drink alcohol, or a lot of other ways of trying to damage neurons, and still, you can function quite well afterwards. Try that with your computer or your iPhone, and they're not so resilient to damage.
And one of the major things that we've talked about already, and will continue to talk about in the next couple of days, is the robustness to sensory transformations, illustrated, for example, in the context of visual recognition, where you want to be able to recognize an object in spite of the myriad of different presentations that that object can have, in terms of different scales, positions, angles, and so on. And also, you have interaction among many components and integration of different systems, which happens almost instantaneously. And it's also a major theme in engineering artificial systems.
So in the context of the Center for Brains, Minds and Machines, we are very interested in studying neural circuits to try to understand biology, but also to try to interact with many of the other efforts, and to be able to explore the type of high-level phenomena described in Thrusts one, three, and four-- that is, in the context of social intelligence, integrating intelligence, and the development of intelligence-- trying to understand how those are instantiated in terms of brains and circuits.
I'll try to argue and give you a few examples that this is a golden age to study neural circuits. We have unprecedented capability to interrogate and stimulate and interact with brains in ways that we never had before. And that gives us the opportunity to really be able to test complex computational theories, and also be able to inform and constrain those theories from the point of view of [INAUDIBLE]. So we can rigorously test some of the theories that will come out of Thrust five.
Some empirical findings from biology can be readily translated into [INAUDIBLE]. So I put in a couple of slides mostly so that you have them later. These are some books on neuroscience and cognitive neuroscience. This is a somewhat obscure one, not one of the most famous ones, but one that I like very much. It gives a very rigorous introduction to computational neuroscience, and the sort of computational models and ideas that I'm going to be talking about and other people [INAUDIBLE].
So let me now give a very brief introduction to neuroscience, and I know this will be [INAUDIBLE] for some of you, just so that everybody's on the same page in terms of some basic jargon. Again, feel free to interrupt with questions. Here's a schematic of the brain. The brain can be divided into different lobes-- the occipital lobe, the frontal lobe, the temporal lobe, and the parietal lobe. And from the early days, in which people were trying to study brains, we came to the realization that there is a certain degree of specialization, that the whole brain is not completely uniform.
And one of the most impressive examples of this is Broca's area, an area that's particularly important in the generation of language. And this was discovered a long time ago. It's localized, for most right-handed people, on the left hemisphere. Damage to this area renders people essentially [INAUDIBLE] and unable to [INAUDIBLE]. So this is not a uniform mass of neural tissue, but there are specialized subdivisions of neurons for different types of functions.
So much so that people like Brodmann spent a lot of time looking at the cytoarchitectonic structure and tried to divide the brain into a lot of different areas, depending on the specific morphological properties of neurons. To some extent, the story in neuroscience begins with this guy over here from Spain, called Santiago Ramon y Cajal, who spent his time mostly drawing-- looking at stainings of neurons and doing drawings-- and had a very powerful intuition to understand some of the basic principles, or to suggest some of the basic principles, that govern the function of neurons.
So based on those drawings, he realized that neurons are cells, as opposed to what his mentor and competitor was saying. Golgi thought that the brain was a continuum and was not divided into individual cells. And he insisted-- and largely, he was correct-- that the brain is composed of specific cells, neurons, that have a nucleus, that have dendrites that provide input, and an axon that then sends information out to other neurons.
This basic flow of information is largely correct. That's essentially the way we understand the flow of information in neurons. And a lot of the magic computation in our brains happens in exactly how information is integrated from multiple dendrites into the soma to decide whether the neuron will fire an action potential that then propagates along the axon.
So I would argue that the brain is the most complex system ever studied. It's more complex than, say, studying the universe, in the sense that every one of those neurons is interacting with a lot of other neurons in very complex ways. I haven't been able to solve this. I don't know why it keeps shaking.
So there are 10 to the 11th neurons, approximately, in the human brain. The retina alone has 10 to the sixth neurons. Each neuron connects to about 1,000 to 10,000 other neurons and, in turn, receives input from about 1,000 to 10,000 other neurons. And most of the magic happens in the part of the brain that's referred to as the cerebral cortex, which is about three millimeters thick. And it involves most of the type of computations that allow us to see, to hear, all the sensory modalities, as well as planning and all sensory motor [INAUDIBLE].
There's a wide diversity of different types of neurons, and there's a whole effort among people in neuroscience trying to characterize different types of neurons, in terms of their shapes and in terms of the particular patterns of genes that they express. And by and large, I'll tell you that we know very little about the specific function of different types of neurons and how they [INAUDIBLE] to different types of computations.
One exception to that is the notion that some neurons are excitatory and other neurons are inhibitory. And that plays a central role in many computational models. Beyond that, few computational models really get into the details of the different types of excitatory [INAUDIBLE] neurons and how they facilitate different types of computations. If we go a little bit deeper in resolution, within the neuron, there's a bilipid membrane that has channels.
The flow of ions through those channels is ultimately what dictates the flow of information. And one can essentially describe a neuron by an equivalent electrical circuit that takes this form. It has a conductance for each of the main ions that pass through the membrane, including sodium and potassium. And for each of those, one can write a very simple Ohm-like equation that, in the end, describes perfectly well how ions flow and voltage is integrated in the neuron.
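For reference, the Ohm-like relation being described can be written out explicitly (this is the standard textbook form, not a transcription of the slide): the current carried by each ionic species is its conductance times the driving force, and the membrane voltage integrates the sum of those currents.

```latex
I_{\mathrm{ion}} = g_{\mathrm{ion}} \, (V_m - E_{\mathrm{ion}}),
\qquad
C_m \frac{dV_m}{dt} = I_{\mathrm{ext}} - \sum_{\mathrm{ion}} g_{\mathrm{ion}} \, (V_m - E_{\mathrm{ion}})
```

Here $E_{\mathrm{ion}}$ is the reversal potential of that ion (e.g., sodium or potassium) and $C_m$ is the membrane capacitance.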
As information flows through the neuron and needs to be passed on to another neuron, the main way of communication between neurons is the chemical synapse, where at the nerve terminal of the presynaptic neuron there is release of neurotransmitters that then open ion channels in the postsynaptic cell. So later on, we'll talk about the basic models by which circuits of neurons implement computations. And we'll write equations of this flavor, where x represents the activity of this set of neurons, w represents the set of synaptic weights, and then the [INAUDIBLE] is integrated to create a given [INAUDIBLE].
To a first approximation, the way most of us think about this in terms of biology is that these weights, which represent the impact of a given neuron onto its postsynaptic target, have to do with what's happening here at the synapse. For a given amount of activity coming in from the presynaptic neuron, there will be a different degree of influence or impact on the postsynaptic neuron. And that's governed by these weights, w, here.
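As a minimal sketch of the kind of equation being described here (the names x, w, and the threshold are illustrative, not taken from the slides), a rate-based unit computes a weighted sum of its presynaptic activities and passes it through a nonlinearity:

```python
import numpy as np

def postsynaptic_rate(x, w, theta=0.0):
    """Rate-based unit: weighted sum of presynaptic activities x,
    with synaptic weights w, threshold theta, and a rectifying
    nonlinearity (firing rates cannot be negative)."""
    drive = np.dot(w, x) - theta      # integrated synaptic input
    return max(0.0, drive)

# Example with two presynaptic neurons
x = np.array([1.0, 2.0])              # presynaptic activities
w = np.array([0.5, 0.25])             # synaptic weights
print(postsynaptic_rate(x, w))        # 0.5*1.0 + 0.25*2.0 = 1.0
```

Stronger weights, or more presynaptic activity, mean more impact on the postsynaptic neuron, which is exactly the role the w's play in the network equations discussed later.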
So this is an electron microscopy high-resolution view of the very, very small distance in between the pre- and postsynaptic neuron. And that's essential to ensure the specificity of the communication between one neuron and the next. So I'd like to discuss now some very basic aspects of how we model [INAUDIBLE] individual neurons, at least some of the most basic types of models.
There's a wide variety of models that people have used to describe the function of individual neurons, starting with filter-like operations, where you take an input that's filtered to produce an output, to the type of integrate-and-fire circuits that I will describe in a second, to what people refer to as Hodgkin-Huxley units, to multi-compartmental models, and then to models that include every single channel in every single dendrite of the neuron.
So as we go along this line here toward more biological accuracy, we lose analytical solutions. So here, we have some hope of being able to do math to describe the neuron. Here, we largely lose that ability because we are bogged down with simulations and a lot of complex [INAUDIBLE]. The computational complexity also increases [INAUDIBLE].
I'm not going to say much about this, but this is just to give you an idea of the basic way of communicating information in a passive way within a neuron, that is, without the propagation of action potentials. People describe an axon or, more specifically, a dendrite as a basic cable. And one can use all the basic equations from physics to describe the propagation of current in a cable of this form.
So this is Ohm's law describing the amount of current between these two points. This is the voltage at this point; this is the voltage at this point; and this is the resistivity of this cable. So one can write what's called a cable equation that describes how voltage changes as a function of space and as a function of time. By and large, within dendrites, voltage is propagated in a passive way, so there's no regeneration of the signals in an active way.
And that means that voltage decays quite rapidly. So this is not the preferred way of communicating signals over very long distances. You can't really send signals from your brain to move your legs in this way. You need action potentials-- you need the active propagation of signals. And we didn't show this [INAUDIBLE] in particular dendrites. This is a semi-accurate description of how voltage changes [INAUDIBLE] in space and time.
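For reference, the cable equation being referred to takes this standard form (textbook notation, not necessarily the slide's symbols):

```latex
\lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau_m \frac{\partial V}{\partial t} + V,
\qquad
\lambda = \sqrt{\frac{r_m}{r_i}}, \quad \tau_m = r_m c_m
```

In the steady state this gives $V(x) = V_0 \, e^{-x/\lambda}$: the voltage decays exponentially with distance, with a length constant $\lambda$ typically on the order of a millimeter or less, which is why passive conduction alone cannot carry a signal from your brain to your legs.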
So, if we want to think about how to model individual neurons-- again, people use a wide variety of different models with different levels of accuracy and resolution, and I'm trying to illustrate those models with these simple diagrams here. Particularly in the context of most of what we're going to be discussing today, where the emphasis is on trying to understand what happens at the level of a complex circuit, most of the models assume that a neuron is a single compartment, meaning that all the computation happens in one place.
I'll ignore all the intrinsic complexities that happen at the level of dendrites as well as axons. If we go in the opposite direction, there are people who try to reconstruct the exact shape of a given neuron-- every single twist in every single dendrite, every single axon-- and ask, what's the impact of this type of architecture in terms of computation? In between these two extremes are what people call multi-compartmental models, which means that there are multiple compartments-- for example, here, an axon, a soma, and three different dendrites. And one can start subdividing that more and more and more.
In general, a central question in neuroscience is, what's the right level of abstraction that we need, depending on the particular problem and question that we're interested in? It's not obvious to me, or to anyone, that more biological complexity is necessarily better-- this despite the fact that there are a lot of people who are interested in characterizing, with ever-increasing precision, what happens in more and more compartments, from the point of view of understanding the ultimate input-output relationships of a given neuron.
It's not obvious that we'll always need more and more complex models. With that said, it is clear that, for example, there could be a lot of interesting computations happening at the level of dendrites. Single neurons can do amazing computations. For example, they can multiply two inputs. They can integrate inputs in [INAUDIBLE] ways. And a lot of the basic models that we have been discussing, and will continue to discuss, may be instantiated at the level of dendritic computation.
So one of the most basic ways of thinking about single neurons and how they compute is the notion that there are inputs that come through the dendrites from other neurons, and those are integrated in a very simple circuit that contains a capacitance and a resistor. And after integration of those currents, there's a threshold; and when the voltage exceeds that threshold, the neuron emits a spike. Yes?
AUDIENCE: I have a question on the last slide. With each different model, it seems like one way to test how useful they are is to look at what exactly you're trying to explain with the different models. Have people looked at that and shown how useful it is to use a more flexible, multi-compartment model versus a more non-compartmentalized one?
GABRIEL KREIMAN: Right, so to ask how useful the model is-- I think we need to discuss this in the context of a specific question. So we'll come back to this discussion later when we talk about networks. If you're interested in looking at the non-linearities that happen at the level of dendrites in integrating information, then by definition, this model doesn't have that, and we need to get into this type of--
AUDIENCE: [INAUDIBLE] firing of the output neurons. Because that seems to be--
GABRIEL KREIMAN: So in several cases, if we inject current into the soma of a neuron and you want to look at the output firing rate, this may suffice. And I think I have a slide to show that this type of very simple single-compartment model can do extremely well in describing that aspect. So again, it depends a lot on the question. I'm not trying to argue that one should never go this way. Again, depending on what exactly you want to do with the model, this may become more or less useful.
I want to postpone that discussion-- I want to come back to it later when we talk about networks. We'll talk about, arguably, one of the largest computational efforts these days in neuroscience, in Europe, the so-called Blue Brain Project, where people are trying to build really large-scale models of [INAUDIBLE] neurons at this type of resolution. So they really have multi-compartment models with a lot of biological [INAUDIBLE].
And at that point, I'll open the discussion of how useful they are or not. If the only question you care about is trying to predict the firing rate of a neuron in response to current injection, these models are reasonable. So just to make sure that we're all on the same page: when people model a circuit and the components are integrate-and-fire neurons, they typically use an equation like this, which describes this very simple RC circuit. Now, if it works.
So I is the current that's injected into the neuron, and that current goes into a [INAUDIBLE] component through this resistance, and it's also integrated through this capacitance. So this is a very simple differential equation. I'm not going to go through the code; this is a very simple implementation.
And basically, what happens here is that the current is integrated, and the voltage increases, depending on this input current. And when the voltage reaches a threshold, then the neuron emits a spike. Most people also impose a refractory period, meaning that neurons cannot fire immediately after [INAUDIBLE]. It's very simple. It's pretty fast, and it captures some of our essential intuitions about neurons and how they integrate information.
In some ways, it does not consider a lot of important biophysical phenomena, including spike-rate adaptation and multiple compartments. It doesn't really explain the shape of the action potential. And it doesn't include the neuron's geometry. So again, I'm not going to go through the code. This is not a perfect implementation of an integrate-and-fire neuron, but it's a very simple way of implementing this with the Euler integration method.
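Since the slide's code isn't reproduced in the transcript, the following is a minimal sketch of a leaky integrate-and-fire neuron with Euler integration (all parameter values here are illustrative defaults, not the ones on the slide):

```python
import numpy as np

def simulate_lif(I_inj, T=0.5, dt=1e-4, tau=0.01, R=1e8,
                 V_rest=-0.065, V_thresh=-0.050, V_reset=-0.065,
                 t_ref=0.002):
    """Leaky integrate-and-fire: tau dV/dt = -(V - V_rest) + R*I.
    Euler integration with time step dt; SI units throughout.
    Returns the list of spike times in seconds."""
    V = V_rest
    spikes = []
    last_spike = -np.inf
    for step in range(int(T / dt)):
        t = step * dt
        if t - last_spike < t_ref:   # absolute refractory period:
            V = V_reset              # the neuron cannot fire again yet
            continue
        V += dt / tau * (-(V - V_rest) + R * I_inj)  # Euler step
        if V >= V_thresh:            # threshold crossed: emit a spike
            spikes.append(t)
            V = V_reset
            last_spike = t
    return spikes

# 0.3 nA drives the neuron above threshold; 0.1 nA does not
print(len(simulate_lif(0.3e-9)) > 0)   # True
print(len(simulate_lif(0.1e-9)))       # 0
```

Note the two limitations mentioned above are visible here: the "spike" is just a recorded time (no action-potential shape), and firing at constant input is perfectly regular (no adaptation).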
Essentially, in a few lines of code, one can actually simulate what's happening here. So this is one of the main ways in which people think about single neurons and put them in the context of more complex [INAUDIBLE]. So as I was saying before in response to a question, if you look at actual data-- these are recordings from neurons in primary visual cortex-- what happens when you inject a given amount of current? And this is the firing rate again.
So these are the recordings that you get in cortex, and this is what you get from this integrate-and-fire model. The line is the integrate-and-fire model. The points here correspond to the first two spikes. So the first two spikes emitted by these neurons in response to current injection are well approximated by these very simple models. If you look at all the spikes, one of the things that happens is that there is adaptation: the neuron starts to fire more slowly over time. And that's one of the things that's not well captured by this integrate-and-fire model.
So one notch higher in complexity in terms of modeling individual neurons is the Hodgkin and Huxley model. And this goes back to the very basic equivalent electrical circuit that I described before, where we have a variety of different ionic currents that flow through the neuron. And one is trying to describe the injected current in terms of the integration, as well as the current that passes through each of those different membrane channels.
These funny numbers that you see here-- this n, m, h, and all these funny exponents-- capture the shape of the action potential, and were actually predicted by fitting experimental data by Hodgkin and Huxley, way before the advent of the molecular biology techniques that would actually describe the exact nature of all of these channels.
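The equation with those exponents is the standard Hodgkin-Huxley membrane equation; written out in the usual textbook notation (not read off the slide), it is:

```latex
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
 - \bar{g}_{\mathrm{Na}} \, m^3 h \, (V - E_{\mathrm{Na}})
 - \bar{g}_{\mathrm{K}} \, n^4 \, (V - E_{\mathrm{K}})
 - \bar{g}_L \, (V - E_L)
```

with each gating variable obeying a first-order kinetic equation, $dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\,x$ for $x \in \{m, h, n\}$, where the voltage-dependent rates $\alpha_x$ and $\beta_x$ are what Hodgkin and Huxley fit from data.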
This is still a relatively simple model. It's more complicated and more time-consuming in terms of computational cost than integrate-and-fire neurons, but it can describe very well the detailed shape of the action potential in response to the depolarization and hyperpolarization caused by the flow of sodium and potassium ions.
So again, these are simulations. Again, I'm not going to go through the code in any detail. One can do experiments and record the voltage, and clamp the voltage of the neuron at different levels, and then measure the conductances for potassium and sodium and show that those can be described reasonably well by this simple Hodgkin-Huxley type of model.
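As a rough illustration of what such a simulation looks like (this uses the standard textbook rate constants and conductances for the squid axon, with voltages in mV; it is not the code from the slides):

```python
import numpy as np

# Standard Hodgkin-Huxley rate functions (V in mV, rest near -65 mV)
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate_hh(I_ext, T=50.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley neuron, Euler integration.
    I_ext in uA/cm^2; T and dt in ms. Returns the voltage trace."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials, mV
    V = -65.0
    # gating variables start at their steady-state values at rest
    n = a_n(V) / (a_n(V) + b_n(V))
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    trace = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)     # sodium current
        IK = gK * n**4 * (V - EK)            # potassium current
        IL = gL * (V - EL)                   # leak current
        V += dt / C * (I_ext - INa - IK - IL)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        trace.append(V)
    return np.array(trace)

# A 10 uA/cm^2 current step evokes full action potentials
V = simulate_hh(10.0)
print(V.max() > 0)   # True: spikes overshoot 0 mV
```

Unlike the integrate-and-fire sketch, the spike shape here emerges from the dynamics themselves: the m^3 h sodium term drives the fast upstroke, and the slower n^4 potassium term repolarizes the membrane.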
So now, I'd like to switch to describing and giving you an overview of different ways in which people study brains, and give you a flavor for the type of possibilities and challenges that we have in terms of trying to ask questions about computations at the level of brain neurons. So people use a variety of different techniques to study brains, and those are roughly illustrated here, indicating the spatial resolution and the temporal resolution, ranging from studying synapses all the way to studying whole brains.
So some of these techniques you may have heard of or be familiar with. It's possible to patch clamp, meaning to be able to fix the voltage and study individual channels within the neuron, so that [INAUDIBLE] very, very high spatial and temporal resolution. At the other extreme, there are techniques like scalp EEG and scalp MEG. These have very high temporal resolution-- they give you information about brain function at the millisecond level-- at very coarse spatial resolution.
So this is capturing activity at the level of several centimeters of cortex, so this is averaging [INAUDIBLE] over large ensembles. Then there are non-invasive ways of interrogating brain function, like PET or functional magnetic resonance imaging, fMRI. And these are techniques that have the advantage of, a, being non-invasive, and b, being exhaustive, in the sense that one can study, at least in principle, entire brains. But they have very poor spatial resolution, as well as temporal resolution.
Then there are local field potentials, where one does things like inserting electrodes into the brain; these have relatively coarse spatial resolution with very high temporal resolution. And then, perhaps the gold standard in terms of studying neural circuits is the examination of the activity of individual neurons, which can be obtained by recording extracellularly with high [INAUDIBLE]. So this is at the scale of several tens to hundreds of microns, and you can again get at the millisecond dynamics of firing [INAUDIBLE].
And I'll give you some examples of what can be inferred from several of these different techniques. Before really getting inside neurons and circuits of neurons-- as I already alluded to at the beginning, one of the major types of insight about the function of specific brain circuits comes from studying lesions. And this is one example that many of you may have heard about.
There's a particular part of the brain, particularly encompassing the temporal lobe, and more specifically, the fusiform gyrus, where damage to this area renders people unable to detect faces. This is known by the name of prosopagnosia. And it's a pretty striking effect. Oliver Sacks, the famous writer and neurologist, once wrote a book entitled The Man Who Mistook His Wife for a Hat.
So these are people who have a really major deficit that's highly specific. They can recognize all types of objects, but they cannot really identify faces. This gives you an idea of the distribution of lesion sites in people who have prosopagnosia. So the damage is localized-- it's not the entire brain-- but it's not a single specific location either. There's a variety of locations that can lead to this type of deficit.
In part, that's because these are not designed experiments. These are accidents that happened, either due to carbon monoxide poisoning or due to a virus infection. But there's a variety of areas, more or less circumscribed to the region around the fusiform gyrus, that lead to a major deficit in face detection.
Perhaps somewhat less known is another example. This comes from studying people who have lesions in the parietal cortex. This is the case of people who have hemineglect, meaning that they can see and they can recognize shapes very well, but they have a major deficit in attending to visual stimuli in one half of the visual field.
So this is an example from one subject who had hemineglect, when he was asked to copy different types of drawings. Typically, these people will copy essentially one half of the drawing and basically completely ignore the other half. So yes, they can recognize the object, but they basically focus only on one hemifield and not the other.
And, as I also alluded to earlier, one of the remarkable aspects of computation in the brain, as opposed to computers and circuits, is the remarkable ability of the brain to adapt and recover from major lesions. So this is an artist who had hemineglect. This is his self-portrait two months after he had a stroke; this is his self-portrait 3.5 months later; and this is his self-portrait nine months later.
So, when he had hemineglect, he was essentially painting only one half of his face; and then, essentially, he recovered, in spite of the fact that this person had major damage in part of the right parietal cortex. Another major way to study brain function, of course, comes from the study of different visual illusions. And you probably have seen many of these.
Just to focus on one of them, for the sake of discussion: you probably have seen this type of visual illusion, where this green circle seems to be much smaller than this one, in spite of the fact that, if you take out all the context-- all these circles here-- they are exactly identical. So one might imagine that if we have a really accurate computational model that can actually describe brain function, it may also make these types of mistakes.
And it may also be able to explain a lot of these different visual illusions. The performance of computational models on some of these visual illusions is sometimes taken to indicate the extent to which these computational models are doing things in a way that's similar or not to humans. This is also a pretty famous one, the Margaret Thatcher illusion. It's usually very hard to recognize faces when they are upside down. When they are in the right orientation, it's actually much easier.
And to a first approximation, I would argue that a lot of the computational models that we've talked about today do not actually capture these types of basic visual illusions. This is not to say that we have to reproduce every single visual illusion in computational circuitry, but this may point us to some constraints in terms of the type of computations that are implemented in-- OK, so one of the main techniques to investigate human-- to investigate cortical function involves recording the activity of neurons.
And this is a schematic from David Hubel showing an electrode in the extracellular space of a neuron. And by using high impedance penetrating microwires that are advanced into different parts of the brain, it's possible to capture the action potentials that the neurons are firing. So if you record extracellular data from an electrode like this, you get a signal that looks more or less like this. This is the voltage as a function of time.
And then, investigators typically will high-pass filter this signal to get a signal that looks like this, which contains all of these spiking events; and there's another signal, which looks like this, that contains the so-called local field potentials. Depending on how close the electrode is to the neuron, one can get single-unit activity-- that is, recordings from a single neuron-- or, as in this case, what we would refer to as multi-unit activity.
This probably means that there are several different neurons that may contribute to the signal-- maybe one neuron here, and there may be another one there. And there are techniques that people use, usually referred to as spike sorting, to try to process this type of recording and be able to disentangle and separate the different neurons that contribute to this signal.
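As a toy illustration of the first step of this kind of processing-- detecting candidate spikes in the filtered signal by threshold crossing-- here is a minimal sketch (the threshold and refractory window are arbitrary; real spike sorting then clusters the detected waveform shapes, for example after dimensionality reduction, to assign them to different neurons):

```python
import numpy as np

def detect_spikes(v, thresh, refractory):
    """Return sample indices where the filtered signal v crosses
    thresh upward, ignoring crossings that fall within `refractory`
    samples of the previous detection."""
    spikes = []
    last = -refractory - 1
    for i in range(1, len(v)):
        if v[i] >= thresh and v[i - 1] < thresh and i - last > refractory:
            spikes.append(i)
            last = i
    return spikes

# Toy high-pass-filtered trace with two spike-like deflections
v = np.zeros(100)
v[10], v[50] = 1.0, 1.0
print(detect_spikes(v, thresh=0.5, refractory=30))   # [10, 50]
```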
This local field potential signal is less well understood in terms of the biophysics. I argued earlier that the biophysics of the generation of action potentials, we [INAUDIBLE] understand very well through the Hodgkin-Huxley equations. There are a lot of people trying to understand the biophysics that underlie the generation of local field potentials, but that's not as rigorously understood as the [INAUDIBLE] spike.
Yet this type of signal may also be quite informative in terms of understanding the representation of information, not by a single neuron, but at the level of a cluster of neurons. So just to give you a couple of examples of the type of recordings that people have done, and the type of information that can be gathered from listening to the activity of neurons: this is the work of Bob Desimone when he was working with Charlie Gross many years ago at Princeton, and previously also at MIT.
And essentially, by chance, they landed on the notion that there are neurons in higher parts of visual cortex that respond in a very selective fashion upon presenting pictures of faces. So Bob Desimone, by the way, will be here on Saturday, and he will tell us more about his work. In this particular case, what they were doing was recording from a neuron in the inferior temporal cortex. And this is a typical way in which investigators represent the neuron's response to a given picture.
So they show a picture of a hand in this case, and the x-axis here represents time. And the y-axis here represents the number of spikes per second-- there's a scale here. So this neuron fired vigorously in response to this picture, for example, and less vigorously in response to this picture. So, this is an example of the type of recordings that people do to try to interrogate how neurons and circuits respond to different shapes.
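The kind of plot just described-- firing rate against time, averaged over repeated presentations of a stimulus-- is a peri-stimulus time histogram, and computing one is straightforward. A minimal sketch, using hypothetical spike times in seconds relative to stimulus onset:

```python
def psth(trials, t_start, t_stop, bin_width):
    """Peri-stimulus time histogram: average firing rate (spikes/s) per
    time bin, across repeated presentations of the same stimulus.

    trials: list of lists of spike times, one inner list per trial.
    """
    n_bins = int(round((t_stop - t_start) / bin_width))
    counts = [0] * n_bins
    for spikes in trials:
        for t in spikes:
            if t_start <= t < t_stop:
                counts[int((t - t_start) / bin_width)] += 1
    # convert counts to a rate: spikes per second, averaged over trials
    return [c / (len(trials) * bin_width) for c in counts]
```

For example, three spikes in the first 50 ms bin across two trials corresponds to a rate of 30 spikes per second in that bin.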
Here's another example. This is a neuron that responded to macaque monkey faces, predominantly more so than to other stimuli, and more so to particular viewpoints of the face with respect to others. As I said, this was largely discovered by chance, and we still lack a systematic understanding of what exactly drives neurons in higher parts of the visual cortex.
In particular, at the time these recordings were done, Bob Desimone and Charlie Gross were studying neurons in inferior temporal cortex, and a few years earlier, they had come upon the discovery that neurons respond to complex shapes such as faces. This happened after a very frustrating day when they were using very simple stimuli such as lines, inspired by the work of people like Hubel and Wiesel, who had shown that neurons respond to simple oriented lines.
And they discovered that one of the neurons they were recording from would fire particularly vigorously in response to whatever the investigators passed in front of the monkey. And that gave rise to a whole industry, or a whole set of studies, that concern what exactly it is that drives these neurons. So we know that there are neurons that respond to these types of complex shapes, and we can interrogate some of the basic properties of the circuitry by these types of recordings.
Another technique that people use to study brain function involves electrical stimulation. So it's possible to inject current, in humans as well as monkeys, and then behaviorally evaluate what the effects of those currents are. So here's the classical work that William Penfield did. William Penfield was a neurosurgeon working with patients that have epilepsy.
And as part of the procedure to treat epilepsy, they would insert electrodes into different parts of the cortex, inject current, and behaviorally evaluate what the effects of the currents are. They did this in a wide variety of different places in the cortex. Here, I'm showing just one example of what has come to be known as the homunculus-- the idea that there's a particular strip in the somatosensory cortex that represents different parts of the body, meaning that when Penfield stimulated in one of those areas, people reported having a sensation in that particular part of the body. So there's a map of the entire body along the somatosensory cortex that was elucidated by electrical stimulation.
In connection to the previous slide, where I showed you that there are neurons in the inferior temporal cortex that respond to complex shapes such as faces, this is a recent example of work with stimulation, where investigators were stimulating in inferior temporal cortex, the same area that Bob Desimone and Charlie Gross were studying that I showed in the previous slide.
And they were asking: upon recording from a neuron that responds more vigorously to faces than to other stimuli, what would happen if you inject current in the vicinity of that neuron? Mind you, when you're stimulating, both in this case as well as in that case, you're not really stimulating the activity of a single neuron. You're stimulating the activity of a large cluster of neurons.
So here, I'm showing the psychometric behavioral curve-- how many times the monkey reports seeing a face as a function of the amount of visual signal. So they use stimuli that range from faces with no noise, to faces with minimal amounts of noise, to signals that give no information about faces, and then signals that contain information about other objects. And that's what's represented on this axis here.
So as you go to the right on this figure, you should be able to discriminate faces better. And indeed, in the absence of stimulation, the monkey here reports most of the time that he's seeing a face. As you go to the other extreme, this picture here, in this part of the diagram here, the monkey reports that he almost never sees a face. This is really easy.
In particular, in the middle, the monkey should be approximately at chance. When they show this picture, the monkey reports 50% of the time that he sees a face and 50% of the time that he doesn't. This is [INAUDIBLE]. Give me one second and I'll answer your question. So here, upon electrical stimulation-- that is, upon injecting current within the cluster of neurons that responds to faces-- the investigators showed that they could bias the monkey's performance, shift the monkey's performance a little bit in the direction of reporting faces more often than in the no-stimulation condition. Yes?
AUDIENCE: So you're saying that when there was no information at all, you were more likely to say that there was a face than when there was information about another kind of object?
GABRIEL KREIMAN: So, let's focus on this condition here. This is the [INAUDIBLE] condition. Here's where there's no information at all. So, if I ask you, is there a face or not, you would say, I don't know. But if I force you to say yes or no-- in this case, monkeys are reporting whether there's a face or not-- they're at chance. Basically, this is this point here.
If you look at the blue curve, which is when there's no stimulation, the monkeys reported 50% of the time that they see a face, 50% of the time that they don't. They have to report something. They cannot just say, I don't know, there's nothing there. So they're forced to either say yes or no. OK? When they are stimulated-- when they inject current via this electrode-- this is what happens. So there's a small, but statistically significant, increase in the number of times that the monkey is reporting a face.
So in this particular case, about 60% of the time, the monkey says, I see a face, even though there's absolutely nothing there. Presumably, as a response to the subjective sensation elicited by injecting current in a part of the brain that's responsible for recognizing faces.
AUDIENCE: I don't understand why do they say 50% of the time that they see the face if there's no information at all.
GABRIEL KREIMAN: Why 50% as opposed to--
AUDIENCE: To 11.
GABRIEL KREIMAN: As opposed to zero?
AUDIENCE: I mean, [INAUDIBLE] be in some kind of 50% noise level--
GABRIEL KREIMAN: So to say zero-- so here, they have to say, it's a face or it's not a face. So zero would mean that they're always saying, it's not a face. So here, zero would be that they're always saying it's a flower. OK?
AUDIENCE: I said it's face or flower [INAUDIBLE].
GABRIEL KREIMAN: Well, no, it's not face or object. It's face or something else. That's what the monkeys--
And they cannot say nothing. So the alternative in this particular case is flower, [INAUDIBLE] another stimulus that's not a face. OK? So here, zero would mean that they always see flower, for example. OK?
AUDIENCE: The reason [INAUDIBLE] they didn't get the reward if they get a right answer, it's kind of random, whereas [INAUDIBLE]. If I remember that study correctly.
GABRIEL KREIMAN: Say it again?
AUDIENCE: If I remember this experiment correctly, for the zero percent visual signal, it just randomly assigns whether or not they'll get a reward for that trial.
GABRIEL KREIMAN: Correct. So monkeys have to be trained to do this type of task, so the question is how they are rewarded. Exactly how you reward is critical in terms of what monkeys do. The essential point here is that there's no information and they shouldn't-- if they're away from 50, they have a bias, and in this case, they're not. OK?
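The bias being discussed here can be quantified very simply: for each visual-signal level, compare the fraction of "face" reports with and without stimulation. A sketch with made-up report data (1 = reported face, 0 = reported the alternative); the numbers are illustrative, not the study's actual data:

```python
def report_rate(reports):
    """Fraction of trials on which the subject reported 'face'."""
    return sum(reports) / len(reports)

def stimulation_bias(no_stim, stim):
    """Per-signal-level shift in P(report face) induced by stimulation.
    Each argument maps a visual-signal level to a list of 0/1 reports."""
    return {lvl: report_rate(stim[lvl]) - report_rate(no_stim[lvl])
            for lvl in no_stim}
```

At the zero-signal level, an unbiased animal sits near 0.5; a positive entry in the result means stimulation pushed reports toward "face," which is the effect described above.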
So I'd like to now give a brief preview of some of the different techniques and different insights that people in Thrust two, in the study of [AUDIO OUT] are doing as part of the [INAUDIBLE], and how this can inform several aspects of computation. So I'll start by very quickly describing the work of Winrich Freiwald. Again, Winrich will be here on Saturday to give a full presentation.
So Winrich has followed on this tradition of examining different parts of the temporal lobe, of the inferior temporal cortex in monkeys. And what's particularly exciting about his work is that he's combining two of the techniques that I mentioned. One is functional imaging, and the other one is recordings via electrodes.
And what Winrich can do is start by looking at the brain in a very coarse way with functional imaging, and thus discover particular parts of the brain that are responsible for discriminating faces from non-faces, and then target that particular area with electrodes. So this is a coarse to fine approach if you want. You first identify an area that's involved in a particular task of interest.
In this particular case, in the context of seeing and recognizing faces under various transformations. So he can identify many of these-- what he calls patches-- of cortex that are involved in discriminating faces from non-faces. And then, he can target those areas with electrodes, record the properties of those neurons, and, for example, show that there are some patches with neurons that are important in discriminating faces but do not have tolerance to viewpoint-- they respond specifically to one viewpoint of the face, similar to the neuron that I showed you recorded by Bob Desimone and Charlie Gross. Whereas in more advanced parts of the visual hierarchy, there are neurons that respond to faces in a transformation-invariant way-- the neuron can respond to all of these different versions of the face, in contrast to this one.
So, the combination of functional imaging and neural recording can be extremely powerful, because it allows us to take complex tasks-- where you initially may not have any clear idea of where to stick electrodes to investigate the neural circuit properties-- pinpoint the essential aspects of the geography of where those computations are happening, and then go in with electrodes to really try to uncover the mechanisms and ask precise questions about potential algorithms that are involved in that computation.
As I said, Winrich will give us a full presentation of that line of work on Saturday. This is work from my own group. We also investigate physiologically what happens inside the brain, but instead of working with monkeys, we work with humans. And the way we can interrogate invasively the human brain is by collaborating with neurosurgeons who work with patients, typically epileptic patients, that have seizures and where the seizures cannot be treated by pharmacological means.
And in that case, the neurosurgeons typically will implant electrodes for two reasons: one is to map where the seizures are coming from, and the other is to be able to functionally determine the properties of different parts of cortex, in order to try to resect the part of the brain that's responsible for the seizures. Because the location of the seizures is not known a priori, the neurosurgeons end up inserting a large number of electrodes in different parts of the brain, only a small fraction of which ends up in the epileptogenic area.
This gives us the opportunity to interrogate the human brain in the context of complex [INAUDIBLE] tasks at very high spatial and temporal resolution. Depending on the patient, we often record field potential signals, such as the ones that I'm showing here. This is a recording from what's called an [INAUDIBLE] electrode-- a two-millimeter, low-impedance electrode that records an intracranial field potential signal as a function of time.
And here, each color corresponds to pictures from a different object category. And this is showing a more vigorous response in this electrode-- the blue curve-- to pictures that contain faces, and gives a look at the invariance of the response to different scales as well as different viewpoints.
So this is a response that happens in the inferior temporal gyrus, where this electrode showed selectivity, meaning that it responded more vigorously to some pictures compared to others. And it did so in a way that's largely invariant to this type of stimulus manipulation. Here's another example.
This is a case where we recorded from the amygdala using microwires-- a case where we could obtain the action potential responses. And each of these pictures was shown to the patient, in random order. The patient was discriminating the presence or absence of a face. And each of the peaks here, which you cannot see very well, represents an action potential from that neuron.
There are multiple repetitions of each picture. And these post-stimulus time histograms are shown in the same format that I showed before for the macaque monkey [INAUDIBLE]-- the response of the neuron, with the y-axis showing the firing rate as a function of time. So this is a neuron that had a more vigorous response to these three different pictures.
And what's quite remarkable about this is, a, the degree of selectivity-- the neuron does not just respond to any face; it responds to this particular person-- but also, b, the large degree of invariance in its responses. These pictures are as different as you can imagine, in terms of color, size, contrast, and many other properties. And yet this neuron responded to these three different pictures that included former president Bill Clinton.
So when interrogating the activity of circuits in the human brain, we can start to get some insights into basic computations that can give rise to visual selectivity and visual invariance. Arguably, one of the major revolutions in neuroscience over the last several years is the work of people like Ed Boyden, who developed techniques that are often referred to as optogenetics, meaning that it's possible to target specific types of neurons by virtue of using the specificity of molecular biology.
And in that way, be able to either stimulate or silence specific types of neurons within the neural circuit. And this is work that Bob Desimone did in collaboration with Ed Boyden. I'm not going to go into details-- again, we'll hear a full presentation from both of them on Saturday. The notion here is that by using light and a particular photosensitive molecule that's controlled by a promoter expressed in specific types of neurons, one can shine light onto the circuit and activate a specific type of neuron, or, in another type of experiment, silence a specific subset.
So this is illustrated here in a cartoonish type of way. You have a circuit with lots of neurons that become activated. And then, by using light, one can silence a specific subset of those neurons. And the number of studies using these types of techniques is increasing on a daily basis. And the reason I'm very excited about this is that this can provide us with the opportunity to really take computational models and test them at unprecedented resolution, and also take biological data and try to infer what the main [INAUDIBLE] involved in these computations are.
So back to the questions about facial recognition-- one can imagine that we can start to dissect the type of circuitry that's involved in generating selectivity and invariance. Another thing that Ed Boyden is doing is developing a new set of high-density multi-electrode arrays that could be used across different species to interrogate the activity of large numbers of neurons. One of the limitations that we have in neuroscience is that we can only interrogate small chunks of circuitry at a time.
So many of you may wonder, how far can we really go by looking at one neuron at a time? So this may give us the opportunity to record from hundreds, if not thousands, of neurons simultaneously. Ed Boyden is a fantastic neurotechnology inventor and [INAUDIBLE]. He is far more grandiose than I am, and he likes to say that we're going to be able to record from a hundred thousand neurons at a time. And I hope that will happen, and maybe some of you will lead the revolution and will do that. But even if we were able to record from 1,000, or 100, or several hundreds of neurons simultaneously, I think that can play a transformative role in terms of how we can dissect [INAUDIBLE] of neural function.
AUDIENCE: I just have a terminological question. Does interrogate mean something specific?
GABRIEL KREIMAN: What I mean is record voltage-- record voltage as a function of time with millisecond precision, extracellularly, [INAUDIBLE] activity of neurons [INAUDIBLE]. Two more examples, and again, we're going to hear full presentations on these topics on Saturday. This comes from the beautiful work of Matt Wilson, who's recording the activity of neurons in the hippocampus, as well as many other areas, trying to understand spatial navigation in [INAUDIBLE].
So this here pertains to the question of perhaps trying to answer one of the [INAUDIBLE] challenges: where am I? How can we get information about location from the activity of a set of neurons? So Matt Wilson has pioneered the description of hippocampus function at the level of ensembles of neurons-- recording the activity of an ensemble to be able to decode the position of a [INAUDIBLE] as it's navigating [INAUDIBLE]. And more recently, he's also been doing a lot of interesting work trying to [INAUDIBLE] interactions between different areas in the context of spatial [INAUDIBLE].
This here is to illustrate work done by Bob Desimone in collaboration with [INAUDIBLE], where they were looking at the question of, what's there? That is, how do we decode information about a set of objects? One of their projects in Thrust two has to do with decoding activity from a population of neurons, or from field potential data, to describe a set of objects. And in particular here, they were interested in decoding information from an ensemble of neurons in areas V4 and the inferior temporal cortex, in the presence or absence of attention. So the monkey is or is not paying attention to the stimuli.
So here, the basic set-up is that they are recording from a set of neurons. In this particular case, they are putting together neurons recorded in different sessions. They usually refer to these as pseudo-populations. This is not a simultaneously recorded population, but rather a set of individual recordings that have been concatenated. And then, we have an ensemble of many dimensions, meaning multiple neurons, and a set of labels for which picture was presented.
And we typically use machine-learning techniques, similar to the ones that [INAUDIBLE] described a few days ago, to try to decode which stimulus was shown from the activity of these neural populations. And the basic finding here is that, a, you can decode the stimulus from the activity of a population of neurons when objects are presented in isolation; b, when the objects are not attended, the decoding performance is significantly lower-- this is the measure of the decoding performance as a function of time-- and when an object is attended to, the performance of this decoding procedure is much higher.
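As a minimal stand-in for the machine-learning decoders mentioned here-- the actual studies use more powerful classifiers, and real firing rates, not these made-up numbers-- a nearest-centroid readout over firing-rate vectors already captures the idea of decoding stimulus identity from a pseudo-population:

```python
def train_centroids(X, y):
    """Nearest-centroid decoder: compute the mean population firing-rate
    vector for each stimulus label.

    X: list of firing-rate vectors (one entry per trial, one value per neuron).
    y: list of stimulus labels, one per trial.
    """
    groups = {}
    for x, label in zip(X, y):
        groups.setdefault(label, []).append(x)
    return {label: [sum(col) / len(vs) for col in zip(*vs)]
            for label, vs in groups.items()}

def decode(centroids, x):
    """Predict the stimulus whose centroid is nearest in squared Euclidean distance."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: d2(centroids[label], x))
```

Decoding accuracy on held-out trials is then the measure plotted as a function of time; attention effects show up as this accuracy rising or falling with the attentional state.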
So, attention is a major way of filtering information that can significantly enhance the possibility of decoding information from a population of neurons. So to bring this back to the main goal of trying to understand computations that pertain to intelligence-- and to take as an example one that is particularly dear to my heart, which has to do with visual recognition-- what can we learn from studying neurons and circuits of neurons?
And I just gave you a very brief tour through a variety of different responses that one can obtain from cortex. So, why do we want to peek inside the brain? And what can we learn from recordings in the brain about these types of computations? So, one of the reasons why I'm optimistic about the notion that studying brains can inform our endeavors to develop computational algorithms that solve intelligence problems is that, more recently, we have amazing new tools to get at wiring diagrams and the connections between neurons that were not available ever before.
This is done at low resolution in humans by using non-invasive imaging techniques, such as diffusion tensor imaging, in combination with functional imaging. That gives us sort of the lay of the land in terms of the interactions between different parts of cortex. And more recently, there's a whole set of people that are very excited about the notion of what people refer to as connectomics.
They're trying to understand connections in the circuit at ever-increasing resolution. And this is by no means the answer. It's not that once we have the wiring diagrams we will be able to immediately infer computations-- how we see and recognize objects, how Messi plays soccer so well, and so on. But it will provide a set of constraints for computational models that were not available before. It will really guide our thinking about architectures and computations in ways that were not possible a few years ago.
The other aspect that is rapidly changing-- and why it's exciting to be young and in this field now; I'm referring to you, not to me-- is the strength in numbers. There's rapid progress in the ability to record the activity of large numbers of neurons, and we have a [INAUDIBLE] of mathematical tools to be able to try to make sense of all these types of responses. So I would say that for the last three or four decades we have been painfully studying one neuron at a time, or a few neurons at a time. But there are major aspects of computation that will be elucidated over the next several years by being able to study many neurons simultaneously.
And finally, I like the notion of being able to look at the source code. And by that, I mean the possibility of manipulating neural circuits-- in particular parts of the brain and particular tasks-- to be able to examine necessary and sufficient computational elements. This alludes to the work that I very briefly described that was pioneered by Ed Boyden, where we can really target specific types of neurons, and people are using this in a wide variety of different systems.
So, in the same way, for those of you who speak computer science: if I give you a binary and ask you to figure out what it's doing, you can try different inputs and outputs and try to perturb or change different bits, but it's not that simple. In contrast, if I gave you the source code, you should be able to read that in a much easier fashion. So here, we have an opportunity to debug the system, to control the system, and to manipulate the system, in ways that were completely unheard of a decade ago.