TOMASO POGGIO: I think most of you probably know Christof's name. Christof Koch's research in computational neuroscience has been broad and influential, from work on individual neurons and dendritic trees up to consciousness. At the lowest level, he has been working on the biophysics of computation; Biophysics of Computation: Information Processing in Single Neurons is, in fact, the title of one of his books. And he has been very productive: a lot of publications, unique contributions such as News and Views pieces in Nature and Science, radio and TV interviews, and appearances in National Geographic, the LA Weekly, and Playboy magazine.
As you probably know, I believe that we need to understand the brain on many different levels, from the level of the hardware, the neurons and the synapses, up to the computational level. And Christof is one of the few speakers who could speak about any of these levels, and all of them, and probably would have spoken today about the work he is doing at the Allen Institute. Christof was for a long time a professor at Caltech, and he is now the president of the Allen Institute, which is funded by Paul Allen, probably in the order of half a billion dollars or so. Several hundred people there work on understanding, among other things, the brain of a mouse.
But instead, I challenged him by inventing a title for his talk, something I believe and he does not quite believe: that the Turing test for intelligence is the same as a Turing test for consciousness. So he will tell you why I'm wrong. And you can learn more for sure in his popular book, The Quest for Consciousness, reporting work done with the late Francis Crick on the problem of consciousness and its neural substrate.
On a personal note, I'm more than proud to introduce Christof. He was my first graduate student back in Germany at the Max Planck Institute. He is now one of my most trusted friends. He worked with me on many different papers and projects, and it was a lot of fun to work together. I'm quoting Paul Samuelson, one of the real intellectual giants I had the fortune to meet. He wrote, "One of the great pleasures in academic life is to see a younger student develop, evolve into a co-author and colleague, and then, best of all, have the rare sight of a friend in science who forges well ahead of you." Christof Koch.
CHRISTOF KOCH: Thank you, Tommy. So Tommy gave me this question, which usually I wouldn't answer; it's just a question. But it's your lucky day. It's Friday the 13th, and you'll actually get an answer to this question. Without reflecting more about it, how many people here believe that the answer to this question is yes? And how many believe the answer is no?
OK, so it's the first time I'm giving this talk, because I took up Tommy's challenge to design a talk around his title. So is there a way we can turn down the lights a little bit? So what do you see here? Just tell me. What do you see? What's going on here?
It's Halloween. But there's more here. Come on. We're all intelligent. What's the story here? This was taken last Halloween, in 2017.
AUDIENCE: [INAUDIBLE]
CHRISTOF KOCH: Yes. They are all in trouble. They're wounded, right? You can glance at it quickly or less quickly. Just by looking at this picture, you can reason about it. You can also tell me something about the age of the people, what they are doing, what their relationship to each other is, where they are, and things like that: a whole set of questions you can ask using your own natural intelligence.
Now, what do you see here? In both cases, you have a conscious visual experience. Here it's a very simple visual experience, but it shares with the previous image exactly the same subjective, experiential, phenomenological qualities. These are just different names for the same ineffable, irreducible thing we call consciousness, or experience, or feelings. They're very different words, but they all denote the same thing.
Experimentally, how do you measure consciousness, for example if you're in a scanner or if you're dealing with animals? Typically, for example here (it's a cartoon, of course), you take an image like this, and people see it. This person is looking at it. And you can look at the brain signatures. You can track the footsteps of consciousness throughout the brains of people.
And you can isolate them by, for example, comparing what happens when the person doesn't see the image, even though he's still looking at it, because the stimulus is being masked or he's being distracted. So although it's physically present on the retina, the subject doesn't perceive it. This is one of the more popular techniques for finding what Francis Crick and I called, and what's today called, the neural correlates of consciousness: the minimal neural mechanisms that are jointly sufficient to generate any one conscious experience.
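The contrastive logic behind this technique can be sketched in a few lines of code. This is a hypothetical illustration with made-up firing rates, not data from any actual experiment: the stimulus on the retina is identical on every trial, and we ask whether a recorded neuron fires differently on trials the subject reports seeing versus trials where the mask abolished the percept.

```python
import random
from statistics import mean

# Toy contrastive analysis: identical stimulus, but the subject reports
# "seen" on some trials and "unseen" (masked) on others.
# Firing rates below are invented (spikes/s) for one hypothetical neuron.
random.seed(0)
seen = [random.gauss(25, 4) for _ in range(50)]    # percept present
unseen = [random.gauss(15, 4) for _ in range(50)]  # percept absent

observed = mean(seen) - mean(unseen)

# Permutation test: shuffle the seen/unseen labels to ask how often a
# rate difference this large would arise by chance.
pooled = seen + unseen
count = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(pooled)
    if mean(pooled[:50]) - mean(pooled[50:]) >= observed:
        count += 1
p_value = count / n_perm
print(f"rate difference = {observed:.1f} spikes/s, p = {p_value:.4f}")
```

A neuron whose rate tracks the report rather than the stimulus is a candidate piece of the neural correlate; one that fires identically in both conditions is tracking the retinal input instead.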
And you can do it in different forms. You can also do it with imagery, right? You ask the person, close your eyes, and now that I've shown you this image of yellow butterflies, try to remember it. Or you can do it with higher-order thought: here I am, sitting, looking at an image that's yellow. You can think about thinking, right? And you can think about thinking about thinking; maybe some people can do it to second or third order.
You can think about other abstract concepts that don't have a sensory component, like beauty, or truth, or 42, or E = mc squared. And then you can also do this: you can have what many traditions, both in the west and in the east, particularly Buddhists, call a pure experience or a no-content experience, when you are clearly not asleep by appropriate EEG criteria, you are obviously not dead, you're conscious, but there is no content to your consciousness.
I had this recent experience. It was quite transformational for me. I was in Singapore, and I went into an immersion tank. How many people have been in an immersion tank? OK, they're really cool, right? So what it is: you strip completely naked. You lie inside warm water that's at body temperature. 600 pounds of Epsom salt are dumped into the water, so you float.
You shut the lid. You're completely in the dark. There's no sound. So you very quickly lose the sensation of your body. You become bodiless. Since there is no light, you become sightless. And there is no sound, except that early on you can actually hear your heartbeat. It's the only time I ever heard my heartbeat.
But then quickly, you adapt to it, and that also goes away. Your sense of the passage of time goes away. And if early on you're somewhat anxious (what's going to happen, am I going to get scared, am I going to get bored), you should try not to think any thoughts. You also become mindless.
So after some timeless interval, you go into a state where you're bodiless, soundless, sightless, mindless, timeless. And it's a fantastic state. It's an exceptional state. People have written rapturous things about it. I don't want to do that. I'm trying to be, to the extent that I can, an objective scientist, and we're trying to think how to do experiments on this.
But it's a very special state. And then you exit out of it. Two hours later, my daughter frantically knocked on the lid, saying, dad, dad, are you all right? Because I had literally lost track of time.
So this is a remarkable state where you have no experience. A lot of people call this a mystical moment. Now, there's another state, one you encounter if, like I do, you hang out with neurologists, or if you occasionally go to the emergency room and clinics and meet patients (and I'll show you some data later on) who are grievously injured.
For example, some of you who are older may remember the patient Terri Schiavo. She was in what's called a persistent vegetative state, where your eyes open and your eyes close. These patients occasionally have some movement left. They have what neurologists think of as brainstem reflexes. But there's no lawful relationship between what the person by the bedside trying to communicate with the patient says and any response of the patient.
So as far as we can tell, the patients are completely unresponsive. But we know, from various techniques, that probably 20% of these people actually have experience. They're actually conscious inside their body. There is actually some mind inside their grievously injured body.
And in both cases, in the case of the clinic and in the case of the immersion tank, from a computational point of view, from any point of view of intelligence, there's no computation going on. There's no information processing going on. There's no mapping going on. There's no function being performed. Yet here you are. You have an experience.
So what is consciousness? This is a famous definition by Thomas Nagel in his essay, "What is it like to be a bat?": an organism is conscious if there is something that it is like to be that organism. That's just one of many definitions. Ultimately, the only way to access it is from the inside. This is the one thing that makes consciousness more difficult to study than black holes, or intelligence, or viruses, or brains: it has this internal aspect. It feels like something. And you can't really describe it if you don't have it.
Now, what is intelligence? There are lots of different definitions. Last night, in preparation for this, I grabbed the one from the Wikipedia article on intelligence. It's as good as any. It's roughly the general ability to process information, to learn quickly from new situations, and to figure out how to behave across various timescales.
So when you inferred from that picture that the couple is protesting the election of Donald Trump, you were using your native intelligence. Now, ultimately, intelligence is really about function, functional capability. So ultimately, it's about becoming.
Conscious experience, on the other hand, and this is also the point of the theory I'm going to present to you briefly, is about being. It's something radically, fundamentally different, although the two interrelate. Intelligence is always performing some function: taking some input, processing it, storing it for later recall, planning, and then ultimately executing some behavior, immediately or in the future.
Consciousness is partially associated with that. But at its root, experience is really about being. They're two different things, and we shouldn't confound them. So conceptually, you can certainly dissociate intelligence from consciousness. The challenge is that in natural systems, the only ones where we're sure we have consciousness, i.e. in us and maybe closely related species such as other mammals, the two are entangled.
In other words, a lot of the neuronal structures that we know from the clinic or from experimental subjects to be involved in generating conscious experience, structures like cortex, where during neurosurgery you can put in an electrode and evoke a specific visual, haptic, or somatosensory percept, or a percept of wanting or willing an action, are also some of the same structures that seem to be involved in the computations underlying what we consider various forms of intelligence.
So it's not easy to untangle them. And then I'll talk a bit about computers, which are engineered systems. What's the situation there? What's the relationship between consciousness and intelligence in engineered systems?
So I'm going to skip this. This is typically what I talk about: the neural correlates of consciousness. What are the minimal neuronal mechanisms (neurons, subneuronal components, larger ensembles, whatever it is) that are necessary for you to see a German shepherd? It's a causal concept. In other words, when you artificially induce this neural correlate of consciousness, by the surgeon's electrode, or by a TMS coil, or by optogenetics in an animal, the animal or the subject should have that experience.
And conversely, if you remove it, if you inactivate it, again by doing [INAUDIBLE] [? adoption, ?] or a lesion, or some other inactivation study, then even though the stimulus may be there, the German shepherd may be there and you're looking at it, you shouldn't have the experience. Right? So it's not just an observational notion. The idea is causal: these are the set of physical mechanisms that give rise to any one specific conscious content.
And then, of course, you can ask what's common between the correlate that codes for color versus the ones for motion, sound, or abstract concepts. Where are they in the brain? Are they a particular cell type? Yada, yada, yada. A lot of people work on that.
So I'll give you two brief glimpses of the current state of affairs in two particular neuronal studies before I go to machines. This is the last paper that I wrote with Francis Crick. In fact, literally two hours before he passed away, he dictated corrections on this paper to his secretary. And he was incredibly excited about it. I mean, he was a scientist until the very end.
So there's this mysterious structure called the claustrum. How many of you have heard about the claustrum? How many of you know that you have a claustrum? OK, so you all have a claustrum; all mammals do. You share this with mice. The human claustrum is maybe that wide and runs from anterior to posterior. You have two of them. They're tucked underneath the cortex, under the insula, above the basal ganglia, in the white matter between the external and the extreme capsules. My true belief is really that the cortex is there to protect the claustrum.
What's remarkable about it, and in this it's similar to the thalamus, is that it has massive bilateral connections, ipsi- and contralateral, to every cortical region. This here is DTI in a human. So the claustrum projects to cortical regions, and every cortical region, or most every cortical region, projects back.
We now know this from diffusion tensor imaging, at a coarse scale, in humans. And several years ago at the Allen Institute we published a long paper on the mouse, where we make viral tracer injections both in the claustrum and in the cortex, and we can map, with detailed numbers, that all the different cortical regions are connected bilaterally to the claustrum.
Now, you can ask, well, isn't that like the thalamus? It is. Although the thalamus, of course, is not a layered structure; this is. And you have roughly 50 different nuclei in the thalamus, and they're relatively isolated, while the cortex is a continuous, much more sheet-like structure.
So this is it in a mouse. A mouse, like any other mammal, has a claustrum. It's roughly here. We did a very detailed reconstruction of it. This is 150 micrometers across, and it's about three millimeters long in the mouse. The tiny mouse brain fits comfortably inside a sugar cube, a one by one by one centimeter cube. So it's roughly 1,000 times smaller than the human brain: 71 million neurons versus 86 billion neurons.
So underneath the cortex, you have the claustrum, one left and one right. And this is one of the things we do routinely: once we have a structure, we can look for the genes that are expressed in there, all 20,000 of them, and identify genes that are very specifically expressed there. Six or nine months later, we have a transgenic animal. And then we can couple that transgenic animal to a reporter like GFP, or we can put in a driver like channelrhodopsin. In other words, we can manipulate this structure.
So the obvious question is, what happens if you lose this structure? The metaphor Francis and I used is that the claustrum is the conductor of the cortical symphony. That's not a theory; that's just a metaphor, right? The idea is that you have all these different regions in cortex. Some are responsible for theory of mind. Some do social computations. Some compute color, or motion, or sound, or voices, or whatever.
But of course, what's common to any conscious experience is the fact that it's one: it's holistic, it's integrated. And so you need something, a structure, that can rapidly gather all that information and integrate it. And of course, in a symphony, that's exactly the function of the conductor.
You have all the different players. And what the conductor does is read information from the players and then, by his or her gestures, feed information back to all the individual players in the orchestra. We believe the function may be similar in humans and non-human animals.
So what happens if people lose it? It's very difficult to tell. There are very, very few cases of people who have lost their claustrums, because it's a very thin and very elongated structure, and it's supplied by two arteries, so there isn't a single stroke that will take it out. It's also very difficult to do fMRI imaging on it, since it's under one millimeter thick. BOLD activity in the claustrum is routinely confused with activity in the underlying striatum or in the overlying insular cortex.
There's one patient, one patient (I know, it's one patient), an epileptic patient. They put electrodes in her to do the usual functional localization. And every time she was stimulated, she turned into a zombie. She wasn't paralyzed: if she was making a movement like this and then they stimulated her, she still went into this zombie state. And she had no memory of those episodes, for as long as she was stimulated through this one electrode that was in the claustrum. There was no evidence of any secondary discharge, so it's not like an [INAUDIBLE] seizure or something like that.
And then there are a few cases where people have an encephalitis that, because of a gene that's specifically expressed in the claustrum, is restricted to the claustrum. And typically, they're in a very confusional state. It's a very difficult to assess, high-fever, confusional state.
So in animals, of course, with the power of optogenetics, once we have genes that localize to a structure like the claustrum, we can turn it on and off. But before I get to that, I wanted to show you this awesome thing.
OK, what this is: these are six neurons, specifically located in the claustrum. They are so-called GNB4-positive neurons; this gene is expressed in the claustrum. The neurons themselves are tiny. You can see here: those are the dendritic trees, 1, 2, 3, 4, 5, and somewhere there's a sixth. So this is a quarter of a millimeter.
These are the axons. They're the longest axons I've ever seen anywhere. This is a mouse. What this is: it's a so-called fMOST whole-brain technique that we're now doing routinely together with collaborators in China. What you do is take a mouse.
You use transgenics to fluorescently label a subset of neurons, like these GNB4 neurons, or layer one visual cortex neurons, whatever. And then you cut the mouse brain into 14,000 slices, doing block-face imaging on each one: you image, cut away a micrometer, and image again, 14,000 times. And then you reconstruct it in 3D.
You can't do that right now in humans. And the point I'm trying to make: we call these crown-of-thorns neurons, because, just like the religious imagery, they project only to cortex. They're excitatory, spiny, glutamatergic neurons. They project only to the cortex, but massively. Some project both ipsi- and contralaterally.
So it's not specific, at least in the mouse. It's not that some set of claustrum neurons only go to visual cortex and others go to auditory. Individual claustrum neurons project very, very widely within the cortex. Here we've reconstructed 26 of them in top-down projection. They're really beautiful.
All of this is cortex. They leave out the primary sensory areas, or there's very little there: the visual cortex is here, some other sensory cortex is here, the primary auditory there. But they're very heavy on the cingulate, the retrosplenial, prefrontal, and higher-order motor areas. Massive.
And what we can do now: we use miniature endoscope technology, these little mice that carry microscopes. And then we can do recordings, either using Neuropixels, these very high-density electrodes that we've built together with people at Janelia, or using two-photon calcium imaging.
So again, here you see, in real time, activity in claustrum neurons as the animal is moving around. And we can analyze all of that. What we're doing now is different perturbation experiments: either chronically with DREADDs, a long-term technique that turns the neurons off for a day or two, or with short-term halorhodopsin activation, or with either excitatory or inhibitory opsins, to turn claustrum neurons on or off.
And what you can see here is a massive phenotype. This is a normal mouse in an open field test, just a normal mouse with a saline injection; it runs around normally. The control mouse given CNO but carrying a different receptor behaves the same. All right, so you just have to believe me: in this case, when all the neurons that express this particular gene, GNB4, are turned off using this CNO agent, the mouse essentially just crouches down, goes to one corner, and stays there, compared to the control mouse. So it's a very dramatic phenotype.
We are not 100% sure this is a pure claustrum phenotype; that's what we're working on now, because occasionally we see some expression in layer six of the auditory cortex above the claustrum. So we have to get more specific. But it shows you the power of what you can do in a mouse. Of course, it would be nicer and more direct if we could do this in humans, but we can't.
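To make the open-field readout concrete, here is a toy analysis one might run on an animal's position trace. The arena size, corner margin, and both traces below are invented for illustration; this is not Allen Institute data or their actual pipeline.

```python
import math

def corner_fraction(trace, arena=40.0, margin=10.0):
    """Fraction of samples spent in any of the four corner zones
    of a square arena (side `arena` cm, corner zones `margin` cm)."""
    in_corner = sum(
        1 for x, y in trace
        if (x < margin or x > arena - margin)
        and (y < margin or y > arena - margin)
    )
    return in_corner / len(trace)

def path_length(trace):
    """Total distance traveled along the (x, y) trace, in cm."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trace, trace[1:]))

# Invented traces: a control mouse circling the arena versus a
# "silenced" mouse crouching in one corner for the whole session.
control = [(20 + 15 * math.cos(t / 10), 20 + 15 * math.sin(t / 10))
           for t in range(600)]
silenced = [(2.0, 2.0)] * 600

print(f"control:  corner {corner_fraction(control):.0%}, "
      f"path {path_length(control):.0f} cm")
print(f"silenced: corner {corner_fraction(silenced):.0%}, "
      f"path {path_length(silenced):.0f} cm")
```

A high corner fraction combined with near-zero path length is the kind of quantitative signature behind the "crouches down and stays in one corner" phenotype described above.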
And so we're doing these experiments to study the neural basis of behavior, in this case a behavior that in humans would be closely associated with consciousness. And that's the only NCC story I wanted to tell, because I want to make a few larger points.
So ultimately, in the fullness of time, we'll have the NCC. In other words, sooner or later, we will know that these neurons, activated in this way, in these layers, in that part of the brain, give rise to any one conscious experience, or to a particular experience, like seeing color or hearing my mom's voice. OK?
All the evidence points to the neocortex as the most critical structure for that. So we have to ask, what is special about the neocortex that gives rise to conscious experience? Why not, for example, the cerebellum? There's lots and lots of evidence here that you probably all know. 80% of the brain's neurons are in the cerebellum. It contains some of the most gorgeous neurons, the so-called Purkinje cells.
The most numerous neurons are the tiny little granule cells; there are 60 billion of them, more than any other neuron type. And you can lose them, like this patient. She never had them. She's one of the rare cases of cerebellar agenesis: she was born without a cerebellum. Or you can talk, as I've done, to patients who just three months earlier had a three-by-three-by-four-centimeter piece of their cerebellum taken out by a neurosurgeon because of a glioblastoma.
You can talk with them. They feel no change in their conscious experience of the world. They hear fine. They see fine. They have consciousness of self. They have memory. Whatever the cerebellum does, it isn't involved in consciousness. And so you have to ask, well, what's missing? It has loops. It has input maps. It has output maps. It has glorious neurons. It has dendritic spiking activity. It has calcium spikes, everything you expect of neurons. Yet somehow, it's not sufficient to give rise to consciousness.
Or take cortex: during an epileptic seizure, when people typically become unconscious, there's lots of neural activity in the brain, and it's hypersynchronized, for those people who believe synchronization is important for consciousness. But you lose consciousness there.
Or sleep. With Gabriel Kreiman, I've recorded many, many neurons in patients during sleep. You can see neurons fire during sleep; there's activity going on. Yet in certain phases of sleep, like slow-wave sleep, when the neurons still fire, when it's not that the brain goes dead, there is no conscious experience.
Where's the difference? It's not that during sleep my brain dissolves into some sort of soup. And what about a case like this? This is of great practical importance; I'll come back to that. This is a patient of Nico Schiff's. This patient doesn't move, and he can only say one thing. He says it over and over and over again, 100 times a day. It's the only thing he ever says. Does it feel like anything to be this patient? Is this patient still there? There are still some islands of isolated activity; this is a PET scan. Is the patient still having any experience?
Then there are harder questions, for which we need a theory, a foundational theory ultimately grounded in physics, that tells us, for instance, when consciousness happens. When was the first time you ever had a conscious experience? Was it when you came out of your mother's womb, when you had to start breathing yourself because you weren't on your mom's lifeline anymore, when you got strong visual input, when you first heard the outside world? Was that the first time? What about a pre-term baby like this one here, 24 weeks old, clearly alive? Does it feel like something? And how do we know? How can we test it?
There are very strange cases of consciousness. These are two little girls who live north of Seattle. They're identical twins, and they're grown together: their two skulls are fused, and they have a so-called thalamic bridge. It's amazing. You watch these videos, and you see these girls run around like any other girls of that age, but always stuck together with their sister. And their neurons are contiguous at the level of the thalamus.
We don't know much about them. The parents don't want them to be studied scientifically, and of course you can't put them in a normal fMRI scanner. There's some evidence, from the New York Times journalist who visited them, that one girl can look over here while the other looks over there, and that this girl has access to visual information that that girl sees. So under strange conditions like this, where the brains are partially intergrown, do we have two consciousnesses? Or do we have one?
There are cases of sleepwalking and other parasomnias where it's not at all clear whether the people are conscious or not, yet they perform highly routine, stereotyped things like driving, walking, going to the bathroom, opening cupboards, et cetera. And here is something that's now becoming a problem: NIH just convened a workshop that I attended where we talked about the brain in a dish, right?
So you take stem cells, say from under the arm. You put them in a dish. You add four transcription factors. You wait six months. And you get brain tissue, tissue that expresses a lot of the same genes that are expressed by cortical neurons in, let's say, second-trimester fetal tissue, including some claustrum genes, all right?
Right now, they're still very small, a millimeter or two, in labs like Paola Arlotta's at Harvard, or at Stanford, or elsewhere. People are now building three-dimensional scaffolding and doing cellular engineering. So in principle, in the not too distant future, we'll be able to grow large, large networks of this, just like cortex, right?
Cortex is like a pizza: a sheet two to three millimeters thick. But now imagine you can grow this in a dish. You perfuse it with various factors so it stays alive. It shows neural activity. Well, at what point does it start feeling like something? This is the question the field is now confronted with. We don't have good answers.
Then what about animals that are very different from us? Take the bee. The bee is an amazingly complex creature. It can do single-face recognition. It has the waggle dance. It has a very elaborate social routine for when the hive has to pick a new home, when they swarm in the spring. Their mushroom bodies are 10 times denser than anything we have in our brain. They only have a million neurons, but their circuit complexity is higher. Does it feel like something to be a bee? Does a bee have any experience?
What about a group mind? If you have a group of people tightly working together, like special forces soldiers or a team working toward a common goal, people often assert that there is a group mind, in other words, some entity above and beyond the individuals that feels like something. Is that true? Can you measure it?
And then, of course, there are computers like Alexa invading our homes; I'll come back to this. Within our lifetime, over the next 20 years, there's no doubt Alexa will be as clever as you and I. In fact, Alexa will remember everything. She'll remember all the jokes and play them back at you. And she'll have perfect poise and intonation, much better than your spouse, and much more patience. And of course, you can turn her off at will. You can't do that with your spouse.
So how can we possibly deny her consciousness? Of course she'll say she feels like something: don't hurt my feelings; of course I'm conscious. So how do we know, really? So ultimately, what do we need? We need a physics-based theory, a fundamental theory, that tells us for any piece of physics, this cell phone, this piece of matter or that one, under what conditions a piece of highly organized matter gives rise to conscious sensation.
Ultimately, the brain is a piece of furniture like anything else in the universe, subject to the same laws of quantum mechanics and relativity. We don't believe in special soul stuff anymore. But still, there seems to be some dramatic difference, I believe, between this and this, at least in their current states.
And so there's a psychiatrist and neuroscientist, Giulio Tononi, in Madison, who has developed the integrated information theory of consciousness, which I'm not going to fully explain, because typically we give week-long lectures on this; in fact, we'll do so two months from now in Venice. It's a complicated, mathematically formal theory.
The theory starts from some fundamental postulates. It describes what is common to every possible conscious experience, rather like Kantian transcendental properties. It asks, what are the properties of any experience? It exists for itself; it doesn't require anybody else. It is structured: even an experience of pure space in total blackness has left and right, up and down, nearby and far away, different sectors, all of that.
It is distinct: it is one experience out of trillions of other experiences I could be having. It is one, integrated, holistic. And it's definite: it has definite borders. In other words, there are certain things that are in this experience and certain things that are out. When I have an experience of pure space, I don't at the same time also have an experience of my blood pressure, or of the color red, or of being upset; those are different experiences. Each experience is the way it is, very particular, very peculiar.
And then the theory is also fundamental: it makes a fundamental claim about what exists, about ontology. It goes back to Aristotle and to a statement in Plato's Sophist, where there's a conversation between a young mathematician and the Eleatic stranger from the Italian city of Elea, a stand-in for Parmenides and Zeno.
And Plato expresses through this Eleatic Stranger the sentiment that anything exists to the degree that it makes a difference. The more you make a difference to the world, the more you exist. And if you think about what we take to exist in physics, for instance-- or things we used to think exist but don't anymore, like the ether-- ultimately we have this implicit principle: what exists is what has causal power upon others. If you claim there's a little angel on the back side of the moon, but it doesn't affect me and I can't affect it in any way, then it may as well not exist, because it doesn't make any difference.
This theory postulates that ultimately consciousness is causal power upon itself: any system that has causal power upon itself has conscious experience-- that is what conscious experience is. So you can think of it as really part of physics. In fact, you can take a Russellian monist position. You can say, well, physics studies the relationships among things, whether they're quarks or electrons or genes or whatever. It studies the relationships among things; it doesn't really study the inside of things.
And conscious experience is simply how physics feels from the inside. So my conscious experience is how the physics of the brain feels-- a system that has a huge amount of cause-effect power upon itself. In other words, its current state can determine its future state, and its previous state determines its current state. The more intrinsic cause-effect power you have upon yourself, the more you exist, and the more consciousness you have.
So the theory develops this into a mathematical calculus that I won't attempt to describe. It's a well-defined calculus. Essentially, you look at any system-- like neurons here, or four transistors here-- you look at its connectivity, you look at its transition probability matrix. For any particular system in a particular state, you can write down how much cause-effect power it has.
The theory then makes this assertion: it's an identity theory. It's a very strong claim. It doesn't say that this is proportional to consciousness, or relates to consciousness, or is necessary for consciousness. The theory says that ultimately this intrinsic cause-effect power is what consciousness is. It's an identity theory.
In other words, any property of any conscious experience has its direct counterpart in this cause-effect space. The structure in this cause-effect space describes the quality of any conscious experience-- why color feels different from motion, and pain different from pleasure. There's also a quantity associated with it that some of you will know, called phi.
So phi essentially measures the irreducibility of a system. You have a system-- how much does it exist for itself? Well, it only exists for itself if it exists as a whole. So if phi is zero, the system doesn't exist for itself; it may as well be decomposed, because there isn't anything above and beyond the individual parts.
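As a cartoon of what "above and beyond the parts" means here-- this is only a toy integration measure, not Tononi's actual phi calculus, which minimizes over all partitions and works with both cause and effect repertoires-- one can ask whether a small binary network's joint dynamics differ from the product of its parts' dynamics. The function name and the two example networks are my own illustration:

```python
import itertools
import math

def phi_like(update, n):
    """Crude integration measure for an n-node binary network.
    'update' maps a current state tuple to the next state tuple.
    We compare the joint next-state distribution (under a uniform
    prior over current states) with the product of each node's
    marginal next-state distribution; the KL divergence is zero
    exactly when the nodes evolve independently of one another."""
    states = list(itertools.product([0, 1], repeat=n))
    joint = {s: 0.0 for s in states}
    marg = [[0.0, 0.0] for _ in range(n)]
    for s in states:
        nxt = update(s)
        joint[nxt] += 1 / len(states)
        for i, b in enumerate(nxt):
            marg[i][b] += 1 / len(states)
    kl = 0.0
    for s in states:
        p = joint[s]
        q = math.prod(marg[i][b] for i, b in enumerate(s))
        if p > 0:
            kl += p * math.log2(p / q)
    return kl

# Two nodes that each just copy themselves: no interaction at all.
independent = lambda s: (s[0], s[1])
# Two nodes that each compute XOR of both: tightly coupled.
coupled = lambda s: (s[0] ^ s[1], s[0] ^ s[1])

print(phi_like(independent, 2))  # 0.0: nothing above and beyond the parts
print(phi_like(coupled, 2))      # positive: the whole is irreducible
```

The independent system "may as well be decomposed" in exactly the sense of the talk: describing each node alone loses nothing, so the measure is zero.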
Any system that has a positive phi-- phi is a non-negative number-- has some conscious experience, and how big phi is tells you how conscious that system is. Now, you can say, well, that's all pretty crazy and far out there. And it seems like it, I grant you that.
Although that's partly because we take these things so for granted. We take experience for granted. We rarely ever think about it, because it's the only world we know-- the world we see and hear and touch. And in science, we rarely think about what fundamentally exists and how we know anything exists at all.
All right, so the theory has a number of experimental predictions. One is the following. It says that any system, like the brain or like a computer-- but here we are talking about the brain-- that supports consciousness has to be both integrated and differentiated.
So already 12 years ago, a whole group of people decided to test this experimentally. They built what's called the "zap and zip" consciousness meter. Essentially, you apply a TMS pulse to the cortex, at various locations. You pulse 50, 60, 70 times and average. You record high-density EEG-- 64 channels, although now people also use 32. You look at the resulting spatiotemporal movie as it unfolds, and you compress it, using the Lempel-Ziv compression algorithm-- that's where the "zip" comes from.
So you essentially ask, how complex is the response? How integrated and differentiated is it? If each EEG electrode records something totally independent, the response is maximally complex but not very integrated. If instead it's like an epileptic seizure, you get EEG waves that are all in lockstep-- highly integrated, but that spatiotemporal movie has little real complexity.
So they defined a number between 0 and 1 called PCI, the perturbational complexity index. And now what they do is take normal people and measure them when they go into deep sleep, when they don't have conscious experience. You ascertain that by waking them up and asking, in the last 20 seconds, when I zapped you, did anything go through your mind?
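The compression step can be sketched as follows. This is only a toy illustration, not the published PCI pipeline (which works on the statistically significant, source-localized TMS-evoked response); the data here are synthetic and the binarization threshold is arbitrary:

```python
import numpy as np

def lz76_complexity(s):
    """Count distinct phrases in a binary string using
    Lempel-Ziv 1976 parsing (the 'zip' in zap-and-zip)."""
    i, n_phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it still appears
        # somewhere in the sequence seen so far
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        n_phrases += 1
        i += length
    return n_phrases

def pci_like_index(response, threshold=0.0):
    """Crude PCI-style index: binarize a (channels x time) evoked
    response, flatten it, and normalize the LZ phrase count by
    its asymptotic value for a random string of the same length."""
    binary = (response > threshold).astype(int)
    s = ''.join(map(str, binary.flatten()))
    n = len(s)
    return lz76_complexity(s) / (n / np.log2(n))

rng = np.random.default_rng(0)
# a differentiated, weakly correlated response (conscious-like)
rich = rng.standard_normal((8, 128))
# a stereotyped response: every channel in lockstep (seizure- or sleep-like)
lockstep = np.tile(rng.standard_normal(128), (8, 1))

# the rich, differentiated response scores higher than the lockstep one
print(pci_like_index(rich), pci_like_index(lockstep))
```

The lockstep movie compresses well (each channel repeats the same pattern), so its index is low; the differentiated response does not compress, so its index is high-- exactly the integrated-yet-differentiated signature the talk describes.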
You take volunteers and anesthetize them using different types of anesthetic-- xenon, ketamine, and two others. You take brain patients with lesions that don't impair consciousness-- for example, an aphasic lesion: the patient can't talk anymore, but the patient is fully conscious.
And in each case, you measure the brain's complexity using this measure. So these are volunteers who were anesthetized with [INAUDIBLE] and propofol. Here you have people going to sleep. Here you have people under light doses of ketamine, taking it as a drug rather than as an anesthetic-- here they're conscious. And here you have normal, healthy subjects.
In each case, you do this procedure five times and pick the highest number. So each one of these is a subject or patient. And here you have controls where you're very sure of the state-- like a locked-in patient or, as I said, lesions that don't impair consciousness. And so empirically you find a threshold, in this case 0.31; the exact value doesn't really matter.
So with a threshold of 0.31, with 100% specificity and 100% sensitivity, I can find all the true positives and detect all the true negatives. In other words, I can always tell from the outside, by looking at your cortical response, whether for the last two minutes you were conscious or not. This study is now being repeated in a number of clinical centers, including here at MGH, with a view to building a general consciousness meter.
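The thresholding itself is simple. As a toy illustration-- the PCI values below are made up for this sketch, not the study's data-- any cutoff that separates the two benchmark groups achieves 100% sensitivity and specificity:

```python
# Hypothetical PCI values for benchmark cases (illustrative only)
unconscious = [0.12, 0.18, 0.22, 0.25, 0.28]   # deep sleep, anesthesia
conscious   = [0.35, 0.44, 0.51, 0.58, 0.67]   # awake, locked-in, aphasic

# Place the cutoff between the largest unconscious and the
# smallest conscious value; the study's empirical cutoff was 0.31.
threshold = (max(unconscious) + min(conscious)) / 2

# fraction of conscious cases above threshold (true positives) ...
sensitivity = sum(v > threshold for v in conscious) / len(conscious)
# ... and of unconscious cases at or below it (true negatives)
specificity = sum(v <= threshold for v in unconscious) / len(unconscious)
print(threshold, sensitivity, specificity)
```

With non-overlapping benchmark groups, as in the study, both numbers come out at 1.0; the interesting cases are then the patients whose state is uncertain, discussed next.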
Here now it's more tricky, because now you have patients-- these are called MCS plus, for minimally conscious state-- where you're fairly confident they have some minimal consciousness. Here people are pretty sure they're conscious; here they are less sure. And these are so-called vegetative-state patients.
And here you find, as I said before, people with whom you cannot communicate by any other means-- at least in the context of a clinic, which is not a quiet fMRI lab and imposes all sorts of constraints. These patients are considered to be in a vegetative state. Yet by their brain's response to this magnetic pulse, they seem to have the same complexity as a normal, conscious person.
So if you take this seriously, at face value, it suggests that roughly 20% of these patients get misdiagnosed, which is compatible with other evidence from the clinic. And as I said, a lot of people are trying to generalize this to other patients-- catatonic patients, pediatric patients and children-- and also to do it in animals with a cortex, like a monkey or even a mouse, where you can experimentally turn consciousness on or off.
All right, so that's practical progress. No matter what your ideological position on consciousness, this helps: there are thousands of these patients where we are not at all sure whether they're conscious, and this is a device that can get at that. Now, I want to come to computers. So first: per integrated information theory, what is the function of consciousness?
Per se, IIT is a physical theory, just like Coulomb's law or the standard model of physics, right? And you don't, of course, go around asking what's the function of quarks, or of up and down or color or charm. They're just what we find in the universe.
However, what you can do is simulated evolution-- I used to do this when I was still at Caltech. You have these simple creatures that move through mazes. They have eyes and motors, like Braitenberg vehicles. You give them a genome and randomize it. You send thousands of them into a maze, pick the 10 best performers, perturb their genomes randomly, and generate the next generation. You put them into the maze again, always picking the winners, and you do this for 60,000 generations.
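The select-mutate loop he describes can be sketched as follows. This is a toy version: it scores genomes against a fixed bit pattern rather than simulating animats in mazes, and the population sizes and rates are arbitrary choices of mine:

```python
import random

random.seed(1)
GENOME_LEN, POP, ELITE, GENERATIONS = 32, 100, 10, 200

def fitness(genome, target):
    """Stand-in for maze performance: fraction of bits matching a
    fixed target (the real animats were scored on how quickly they
    traversed randomly generated mazes)."""
    return sum(g == t for g, t in zip(genome, target)) / GENOME_LEN

def mutate(genome, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

target = [random.randint(0, 1) for _ in range(GENOME_LEN)]
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # rank by fitness, keep the best performers, refill with mutants
    population.sort(key=lambda g: fitness(g, target), reverse=True)
    elite = population[:ELITE]
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP - ELITE)]

best = max(fitness(g, target) for g in population)
print(best)  # adaptation climbs toward 1.0 over generations
```

Because the elite are carried over unchanged, best fitness never decreases, and the population ratchets up toward perfect adaptation-- the horizontal axis of the slide he describes.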
All right, and then you get creatures that are more or less adapted to this particular environment. And what you can see very nicely-- this axis is the degree of adaptation, in other words, how quickly they traverse the maze. This is the so-called Einstein: a creature we designed to take the fastest route through any maze. The mazes are randomly generated; they're not the ones the creatures evolved on. So this one is perfectly adapted.
And you can see there's this nice relationship between the minimal amount of phi and the degree of adaptation. It makes sense: integrated information is about integrating all the different components-- the different modules, the different sensors-- into one whole. And of course that's going to be much more efficient than having a lot of separate specialized modules. And you can see that.
So the idea is that, in a world where natural selection acts, you select for brains that are most efficient. We can call this adaptation, or some level of intelligence, on an evolutionary time scale. But it also happens to maximize the amount of consciousness these simulated creatures have.
So this is the one slide that's really disconcerting for a functionalist. Here you have two simple networks. These are simple model neurons-- in this case threshold units, connected with excitation and inhibition. They have well-defined dynamics: you give some input state, you march through, and you get some output state.
Here you take the network and unfold it, turning it into a completely feed-forward network that functionally has exactly the same input-output mapping. In other words, you have two boxes, each with a simple neural network inside. You feed both boxes the same input, and you get the same output. So from a functional point of view-- and functionalism is the dominant zeitgeist today, of course--
from a functional point of view, they are fully equivalent, because they perform the same input-output function. IIT says, however-- and you can compute this very explicitly; that's the nice thing about having a quantitative theory, you just compute it-- that this system has cause-effect power upon itself.
What do I mean by that? Because it's feedback-connected in non-trivial ways, its current state will influence its next state, and its previous state influences its current state. This other system is the feed-forward network, so its state will not influence its own next state. And you can compute that its phi is zero.
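The unfolding argument can be made concrete with a toy example. Here the unfolded, feed-forward system is represented in the extreme case as a lookup table; the particular recurrent dynamics below are my own invented example, not the networks from the slide:

```python
import itertools

def recurrent(x, T=3):
    """A tiny recurrent network: two binary units with mutual
    feedback. Given a 2-bit input, run T update steps and return
    the final state. Each unit's next state depends on BOTH units."""
    a, b = x
    for _ in range(T):
        a, b = b, a ^ b  # feedback: state at t shapes state at t+1
    return (a, b)

# "Unfold" it: precompute the identical mapping as a pure lookup
# table -- a feed-forward system whose internal state never
# influences its own future.
feedforward = {x: recurrent(x) for x in itertools.product([0, 1], repeat=2)}

# Functionally indistinguishable on every possible input ...
for x in feedforward:
    assert feedforward[x] == recurrent(x)
# ... yet only the recurrent net has intrinsic cause-effect
# structure; per IIT, the table's phi is zero.
print("same input-output mapping, different causal structure")
```

From the outside-- from any input-output test-- the two are identical; the difference lives entirely in the internal causal wiring, which is exactly why IIT says a behavioral test cannot settle the question.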
So this system has no cause-effect power upon itself, and this one does. This is a tool of the theory, and the theory is amenable to empirical tests-- it could be false, it could be true. And it says that, in principle, there cannot be a Turing test for consciousness. Because if you believe intelligence is ultimately a function, an input-output function, then at least in principle you can build two automata that have the same input-output function, where one has high causal power and the other has none. One is conscious, the other one not.
Now, you can also try this-- this is work in progress. You can take a simple model of a [INAUDIBLE] machine-- a minimal [INAUDIBLE] machine with a clock, with a data register, with a simple 7-bit transition matrix, with very simple logic that does a simple computation. It's a very simple computer, but of course it's universal.
You can generalize it from seven states to 11 states to 13 states. And again, per IIT, you can compute the integrated information of this creature. It is very, very low because of the wiring. If you look at different networks and ask how much integrated information there is, you typically get very large numbers for networks with high fan-in and fan-out that are highly heterogeneous, where each gate is doing something different. You typically get very low phi in networks that are sparsely connected and whose elements are all the same. That's just a property of this particular metric.
So what IIT says is that, A, you cannot do a simple input-output test-- a simple Turing test-- at least conceptually. Because of course, once you know what the Turing test is, you can always teach a network to pass that particular test with machine learning. And also because of this property that two systems can have the same input-output mapping while one has high phi and the other does not.
Furthermore, the theory says that simple digital computers-- and the proof should generalize-- have only some minimal cause-effect power. It's tiny, and it doesn't depend on the program being run. You can take that little network and compute phi for different mappings: for one type of mapping and for a different type of mapping, the phi and the conceptual structure-- the cause-effect power-- don't change.
So in other words, it makes this very powerful prediction: even if we built a computer that can perfectly simulate the human brain-- whether via Alexa, because it talks like us, or because Henry Markram's wildest dreams have come true and he's built this perfect replica of 100 billion neurons, with every synapse and every neuron in place, just like in the human brain-- you turn on this gigantic computer simulation, and of course it says, "I'm conscious." But don't believe it. It's all fake, OK?
Because ultimately you cannot simulate consciousness; it is not a particular type of computation. You have to build it into the causal structure of the system. To see whether a creature is conscious, you have to look at the hardware-- at its brain or, if you want, its CPU. You cannot get it by simulating it on a digital computer.
And the best way to explain this to physicists and engineers is the following. I have a friend, a theoretical astrophysicist. She writes code on her laptop that simulates Einstein's field equations, linking curvature to mass distribution.
So when she runs her program on the central mass at the center of our galaxy-- around 10 to the 6 solar masses-- she predicts that it's a black hole. In other words, the computer program running on her laptop predicts that gravity there is so strong that not even light can escape. But funny enough, she does not get sucked into her laptop.
Now, why is that? It's a perfect simulation, right? General relativity describes the link between curvature and mass. So why? Well, it's the difference between the simulated and the real. The real actually has causal power-- in this case, the causal power to bend space-time, to curve space-time.
The computer simulation captures the high-level mathematical mapping but doesn't have the same causal power. And so it is with consciousness. Yes-- and Alexa shows us this-- you can perfectly well simulate a lot of the behavior that we associate with consciousness in humans. But that's not the same as actually generating consciousness.
And so I want to end with this slide. Here we plot some measure of intelligence, like the general factor g, and here we plot phi, this measure of consciousness. We live in a universe populated by evolved creatures, where, as the brain gets more complex-- from a medusa to a bee, to a mouse, to a glorious Bernese mountain dog, to a human-- we believe the capacity for having highly complex conscious states increases, right?
But those are evolved systems with particular causal structures. Now we are adding stuff to the universe-- we build things. This is the chess computer-- 1997? Yes, IBM's Deep Blue, which beat Kasparov in chess. This is, of course, [INAUDIBLE]. The Google cars, the Uber cars, Alexa-- all of them lie along this axis. In fact, we don't know whether there's a natural limit, whether this axis extends indefinitely.
And conversely, there's this double dissociation between intelligence and consciousness, because I claim these creatures don't feel like anything. Even if we reach artificial general intelligence, AGI, we'll be out here. They'll be as smart as us, or maybe smarter, but there's no consciousness.
Conversely, IIT predicts that there will be structures-- for example, if you build these neural nets out of brain organoids and make them large enough, with a sufficiently high degree of connectivity, they may well be conscious while having very limited input-output capability, unless you provide them with sensors and arms and actuators. If you don't, they will be conscious but without having any function. And that's what I wanted to leave you with. Thank you very much for your attention.