Language in the Brain
Date Posted:
August 19, 2020
Date Recorded:
August 18, 2020
Speaker(s):
Christos Papadimitriou, Columbia University
Brains, Minds and Machines Summer Course 2020
PRESENTER 1: It's a great pleasure to introduce Professor Christos Papadimitriou. He's a professor of computer science at Columbia University. He is particularly well known for his contributions to the theory of computer science. He's a true renaissance man who has contributed to a wide range of areas, applying his knowledge from computer science to game theory, to biology, to evolution, to the internet.
And now, he's going to tell us about, perhaps a more recent passion of his, the intersections between computer science and artificial intelligence and [AUDIO OUT] is arguably one of the most difficult challenges for us, in terms of understanding brain function. So thank you very much for joining us. And please go ahead.
CHRISTOS PAPADIMITRIOU: Thank you very much. Thanks for the kind introduction. I'm delighted to be here with you. I wish we were all together. But this will have to do for now. And we will resume next year.
So I'm going to tell you about language, which, as you know, is what is happening right now. And if you think about it, what is happening right now is nothing short of miraculous. What I'm doing is creating air waves in my space. And these air waves get into your space through a miracle that we're not going to talk about. And once they get into your space, your tympanum picks them up and your auditory cortex decodes them into words, OK?
And then the really amazing part starts, OK, that your left brain takes these words and puts them together and makes them into sentences and then looks at them as a whole and understands them and remembers them. And if I'm lucky at all, some of you will remember them for the rest of your life, OK? So that's-- This is incredible, OK? We are all studying the brain. And as students of the brain, we know that the brain does amazing things, OK?
I'm proposing to you that nothing is as amazing as this, OK? So how is this done? OK. That's-- And we know it happens here in the left brain, the left hemisphere. And in fact, we know these are the four areas that are most certainly implicated in language.
The purple one on the lower right is the medial temporal lobe, where the lexicon is stored. Then the green one is the superior temporal gyrus, where words are used in the context of syntax, of creating a sentence. And the other two areas are parts of Broca's area, where syntax happens, where apparently these words are structured into trees, OK? So at least, that's what linguists have been telling us, OK?
But let's see. In fact, this is a book that opened my eyes as to what is known about language. It's incredible, you know, how the knowledge about how language happens in the brain has multiplied in the last few years. So I talk about the lexicon, Wernicke's area in the superior temporal gyrus, and Broca's area. These we have known for 150 years, OK, since Wernicke and Broca.
But now, some amazing experiments are happening, OK? Let me tell you about one of them. Frankland and Greene discovered through fMRI that different sub-areas of Wernicke's area, of the superior temporal gyrus, responded to "truck" in these two sentences. The ball hit the truck. Or the truck hit the ball. This means that within Wernicke's area, there are different subregions where the role of the word as a subject or an object is decided.
And I mean, if you want something which is really cool, the first area also responded to "The truck was hit by the ball." In other words, it's not the superficial subject and object, but it's the deep subject and object. In other words, the brain says, truck is the object of this sentence, never mind the stupid passive voice maneuver, OK? So this is amazing, OK, you know, that something like that is happening.
But watch this. Here's another recent experiment. At 4 hertz, which is the frequency with which I'm speaking to you now, they gave hundreds of subjects single-syllable words in six different languages. And then they took the data and did the Fourier transform, and guess what? They found a peak at 4 hertz, which is completely understandable. Because four times every second, the subject must fetch a word from the lexicon.
Then they did something really clever. They repeated the same experiment, except that every four words made sense, OK? They formed a sentence. And now, here's what happened. There were three peaks.
There was the same one at 4 hertz, because, of course, they have to do the same thing. But there was also a peak at 1 hertz. What is this? This means that once every second, the subject had to do something special. What is this special thing?
I think it's creating a sentence. And twice every second, the subject had to create a phrase. That's the 2 hertz. OK? So to my mind, this is what has been happening, OK, so you know, that we are building trees in our brain, OK?
In other words, we create data structures that convince us that what we heard made sense. And basically, we extract the meaning. And of course, we memorize, reuse, and so on. OK? Now, you know, a few years ago, I had lunch with Chris Manning, OK?
And I don't know what came over me, but in the middle of the lunch, I put my fork down. And I looked him in the eye and I told him something like, for the love of God, Chris, do we have trees in our brain? And Chris Manning, who, among the linguists I know, is among the least likely to answer yes, told me, resoundingly, yes, we have trees in our brain, OK?
So, you know, this completely changed my attitude, OK? So we really have to understand how these trees are made, OK? And then I found out about this experiment. So the question is, how are these trees created? And they must be created by neurons in about a dozen spikes per step.
Why a dozen? Because a dozen is an interesting number. It is the ratio of the two natural rhythms, OK? Gamma is the rhythm of spiking neurons. And theta is the rhythm of language, as well as of all the interactions of animals in the world. OK?
So another experiment: the completion of phrases, and especially of sentences, activates parts of Broca's area. So the creation of the inner nodes of this tree, you know, has a neural basis. Something is happening in Broca's area, in one part of Broca's area for sentences and a different part for phrases, OK?
So these are all recent experiments. I mean, they completely opened my eyes, OK? So basically, now we know much, much more. We know that things are happening and we know where they're happening. Because we find this out by fMRI.
But in some sense, this is a static architecture, OK? The dynamic architecture of how this is happening is still a mystery, OK? What we seem to be understanding now is that there are populations of neurons that are created on the fly to encode syntactic elements, words, phrases, sentences, and to communicate with each other. Because the sentence has to communicate with its words and so on, OK? And the question is, how is this done, OK? In her recent book, Friederici conjectures that there are language mirror cells, like the mirror neurons we have in our frontal lobe for mirroring motion by other animals.
OK, let's take a much broader view now, OK? Language, as I told you, is, I believe, one of the most challenging things the brain does for us to decode, OK? But the truth is, the situation in the whole field is extremely interesting. In some sense, every year, we double our knowledge. Every five years, a new edition of the Kandel and Schwartz book comes out. And it's 400 pages thicker, OK?
And yet, we don't seem to be progressing in understanding. The more we know, the less we understand, OK? And here is how Richard Axel put this. "We do not have a logic for the transformation of neural activity into thought."
OK? So when I read this a couple of years ago, a year and a half ago, I almost turned over my chair. Because, I mean, I was feeling like the pope was blessing me, OK? You know, because this is exactly what I thought was worth working on, OK? And he continued to say, "I view discerning this logic as the most important future direction of neuroscience."
And of course, neural activity into thought is exactly what we need in order to understand language, OK? So the question is, what kind of formal theory would qualify? OK? And I have a modest proposal, OK? You know, we are computational neuroscientists. We believe that computation is taking place in the brain.
Sometimes a psychiatrist will ask me, why, professor, do you think that there is computation taking place in the brain? And I mean, the only reasonable answer I can give is that if I assume that, then it's easier for me to think about the brain, OK? So let's make this assumption, that there is computation. But the question is, what level?
Of course, molecules compute. Of course, spiking neurons and synapses compute. And many people who are reasonable will tell you, and they are right, that the real computation happens in dendrites. And of course, I mean, as you saw in Josh Tenenbaum's lecture earlier, the whole brain does compute. I mean, because cognitive scientists, they say, you see? The experiment shows that these people act as if their brain was executing the following program, OK?
So these are all places where computation happens. But there seems to be something in the middle that is missing, OK? And this level, this level of computation in the brain is what is needed in order to fathom language, to understand language better than our understanding right now, OK? So you know, I have an idea.
I have a possible answer. Let's call it the assembly hypothesis. It's something that we came up with with a bunch of collaborators: that there is an intermediate level of brain computation between these levels, OK, where the question mark was sitting. It is implicated in carrying out higher-order cognitive functions, such as reasoning, planning, language, storytelling, arts, math, music, the good stuff. And assemblies of neurons are its basic representation, its main data structure.
So what are assemblies of neurons? They are large populations of neurons that are densely interconnected. They are very stable. And they all reside in the same brain area. So all these neurons are firing in the same area.
And the firing of this assembly in some pattern is tantamount to the subject's thinking of this particular memory, concept, object, word, episode, et cetera. OK, so this is not news, OK? Hebb had already conjectured this 71 years ago. Assemblies were then sought by a lot of people, heroically, for 60 years, for more than half a century, until they were discovered, OK?
So with more technology, we reached the point where they caught animals creating and manipulating assemblies. And nowadays, it's beyond doubt that they play an important role in the way animal brains work. I believe that they play an especially huge role in humans. Here is why. Because I believe that the place where assemblies do their work is what is known as the association cortex.
What is the association cortex? It's everything except for the motor cortex and the sensory cortex. The association cortex in rats is tiny, OK, maybe 20%, 30% of their brain. For us, it's 85% or something, of our brain, OK? So you know, I believe that the assemblies are a good part of the story of how the association cortex works and especially language, which is the topic of this talk.
Buzsaki, for example, you know, a neuroscientist whose group actually established assemblies beyond doubt, now calls them the alphabet of the brain. OK? So you know, the older ones among us remember this pseudo-dilemma, OK?
So you know, is intelligence symbolic or sub-symbolic, OK? So Buzsaki says assemblies are where the boundary is. OK? So we know that that's where the sub-symbolic becomes symbolic. OK? All right, how are assemblies created?
It turns out that it had been known for at least 25 years that there is this very simple circuit, OK: there is synaptic input to a neural mass of excitatory neurons. And these excitatory neurons are in an excitatory/inhibitory loop with another neural mass. And if this happens, a stable assembly will form there, OK? So this is really the basics.
By the way, in my group, in the paper that I posted before this talk, we have a mathematical way to say it, which I believe is a useful way of looking at the association cortex. In other words, you assume a finite number of brain areas. Each one of them contains the same number, say n, of excitatory neurons.
And we only model inhibition implicitly. The effect of inhibition is that in every area, only k of the n excitatory neurons fire. And some pairs of areas have sparse random connectivity between them. These are the red arrows. All of the areas have recurrent random connectivity with some connection probability.
The neurons fire in discrete steps. So you know, that's an assumption that is as convenient as it is indefensible. And at each step, as I said, only k fire. You know, I think of k as the square root of n, to fix ideas. And areas can be explicitly inhibited and disinhibited.
And then you add on top of that Hebbian plasticity, which means that if there is a synaptic connection from neuron i to neuron j, and i fires and in the next step j fires, the weight of i-j is multiplied by some constant larger than 1. Say, 1.1.
OK, so this is our whole model. And with this model, we can prove, through both theorems and simulations, that assemblies do happen and work their miracles, OK? And these are the typical values of the parameters that we use in our work and in the paper.
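To make the model concrete, here is a minimal sketch in Python of one discrete step of such an area: a k-cap standing in for inhibition, plus the multiplicative Hebbian update. The specific numbers (n, k, the connection probability p, and the plasticity increment beta) are illustrative choices scaled down for a laptop, not the values from the slide.

```python
import numpy as np

# Illustrative, scaled-down parameters (assumptions, not the slide's values).
n, p, beta = 1_000, 0.05, 0.1        # neurons per area, connection prob., plasticity
k = int(np.sqrt(n))                  # k-cap: only k ~ sqrt(n) neurons fire per step
rng = np.random.default_rng(0)

# Sparse random recurrent connectivity within one area (directed, no self-loops).
W = (rng.random((n, n)) < p).astype(float)   # W[j, i] = weight from i to j
np.fill_diagonal(W, 0.0)

def step(active, W):
    """One discrete step: the winners are the k neurons receiving the largest
    synaptic input from the currently active set (the cap, i.e. implicit
    inhibition); then Hebbian plasticity multiplies the weights from the
    previously active neurons to the new winners by (1 + beta)."""
    inputs = W[:, active].sum(axis=1)            # total input to every neuron
    winners = np.argpartition(inputs, -k)[-k:]   # k-cap selection
    W[np.ix_(winners, active)] *= (1.0 + beta)   # i fired, then j fired
    return winners

# Start from k random neurons and iterate a few steps.
active = rng.choice(n, size=k, replace=False)
for t in range(10):
    active = step(active, W)
```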
OK. So one thing that is sort of the basis of this is something that had been known for at least 60 years, a powerful computational primitive: you have a random projection from one neural population to another, larger area, OK? And out of this, you cut off, you select the k neurons that have the largest synaptic input, OK?
And this turns out to be a very interesting primitive. It turns out that it can improve several aspects of deep nets if you use it instead of ReLU, and so on. OK? But I believe that it is an interesting primitive for the study of the way the brain works.
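As an aside, here is a minimal sketch of what random projection followed by a cap looks like as a layer nonlinearity, a top-k selection in place of ReLU. The layer sizes and the plain Gaussian weight matrix are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def cap_k(x, k):
    """Keep the k largest entries of x and zero out the rest (the 'cap')."""
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k)[-k:]
    out[idx] = x[idx]
    return out

# Random projection from a 512-unit population to a larger 2048-unit one,
# followed by the cap instead of ReLU.
W = rng.normal(size=(2048, 512)) / np.sqrt(512)
x = rng.normal(size=512)
y = cap_k(W @ x, k=64)        # only 64 of the 2048 units stay active
```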
OK. So assembly projection is the following. You have a set of spiking neurons that project to a different area. Once these neurons spike, through the random projection and cap, a new population will fire in the adjacent area, in the downstream area. Now, once both of these fire again, the situation becomes more complicated. Because now the neurons in the area on the right receive inputs both from the original input and from the blue population.
And as a result, a new population will emerge, and so on. And of course, you also have Hebbian plasticity, as I mentioned. And the theorem says that this process converges exponentially fast, OK? And in our experiments, it converges in about a dozen steps and creates a new stable assembly.
Stable in the following sense, that you have pattern completion: future presentation of the same or a similar synaptic input activates the same assembly, and so does firing a small subset of the new assembly. In other words, if it so happens that a small subset of the neurons of this assembly fire, then the whole assembly, with high probability, completes. This is the result of the dense interconnectivity and the ensuing stability. Good.
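Here is a minimal, self-contained sketch of that projection process: a fixed stimulus assembly upstream drives a downstream area, the cap and Hebbian plasticity are applied at every step, and the winner set stabilizes within a handful of steps; at the end, firing a small subset of the formed assembly recruits most of it back. All parameter values and the overlap-based convergence measure are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, p, beta = 1_000, 31, 0.05, 0.1       # illustrative, scaled-down parameters

stim = np.arange(k)                              # fixed assembly in the upstream area
W_in = (rng.random((n, n)) < p).astype(float)    # upstream -> downstream weights
W_rec = (rng.random((n, n)) < p).astype(float)   # recurrent weights downstream
np.fill_diagonal(W_rec, 0.0)

prev = np.array([], dtype=int)
for t in range(20):
    inputs = W_in[:, stim].sum(axis=1)           # input from the stimulus
    if prev.size:
        inputs += W_rec[:, prev].sum(axis=1)     # plus input from last step's winners
    winners = np.argpartition(inputs, -k)[-k:]   # k-cap
    W_in[np.ix_(winners, stim)] *= (1.0 + beta)  # Hebbian updates
    if prev.size:
        W_rec[np.ix_(winners, prev)] *= (1.0 + beta)
    overlap = len(np.intersect1d(winners, prev)) / k
    prev = winners
    print(f"step {t:2d}  overlap with previous winners: {overlap:.2f}")

# Pattern completion: fire only a third of the formed assembly and let the
# strengthened recurrent weights recruit the rest.
subset = rng.choice(prev, size=k // 3, replace=False)
recalled = np.argpartition(W_rec[:, subset].sum(axis=1), -k)[-k:]
print("recalled overlap with the assembly:", len(np.intersect1d(recalled, prev)) / k)
```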
There is another fun operation. Imagine that there were two other assemblies, the ones on the left, which have created these two assemblies in the same area on the right. And now, if these assemblies fire together, in other words, they co-occur, if there is evidence that these four assemblies are related, it turns out that the assemblies on the right, the copies, the projected assemblies, change their ways, OK? And they change their support. And they become closer.
Some of the neurons are shared and some of the neurons migrate from one to the other, OK? And so as a result, you have a large intersection. And these are all things that have been found both experimentally in subjects and also by simulations and theorems. Good. So, I started by telling you about computation in the brain, that it's a good thing to think of it that way. And I believe that one way this could be happening is through assemblies.
And you know, this is the data structure, the assembly. And it has operations. The operations are projection, pattern completion, association. What else? OK? And now language comes in, OK? Because linguists, starting with Chomsky, and actually never ending, have been telling us that one very important, very central sine qua non of language is something they call Merge.
And what is Merge? Merge is essentially the creation of trees. It's the creation, from two leaves, of an internal node of the tree. In other words, a new node is created which now is very closely connected with the other two. And it turns out that this is another operation that we can prove happens, OK? We can prove it by theorems and, again, by simulations.
Indeed, if the two assemblies on the far left have created the two copies in the middle by projection, and there is strong forward and backward connectivity with the fifth area, then, by firing again, a new assembly will be created in the fifth area which has strong synaptic connectivity to and from the two assemblies in the middle. OK? And that's tantamount to what the linguists need in order for language to happen. OK? Excellent.
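A minimal sketch of Merge in the same spirit: two already-formed assemblies in two parent areas drive a third area simultaneously, with forward and backward random connectivity, and a new assembly forms there with strengthened connections to and from both parents. The sizes, probabilities, and the omission of recurrence in the merge area are illustrative simplifications of my own.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p, beta = 1_000, 31, 0.05, 0.1     # illustrative, scaled-down parameters

def rand_w(n, p):
    return (rng.random((n, n)) < p).astype(float)

# Parent assemblies x (in area B) and y (in area C), assumed already formed.
x = rng.choice(n, size=k, replace=False)
y = rng.choice(n, size=k, replace=False)

# Forward weights into the merge area D, and backward weights from D to B and C.
W_bd, W_cd = rand_w(n, p), rand_w(n, p)
W_db, W_dc = rand_w(n, p), rand_w(n, p)

merged = np.array([], dtype=int)
for t in range(10):
    # D's winners are driven by BOTH parents firing together (recurrence within
    # D is omitted here to keep the sketch short).
    inputs = W_bd[:, x].sum(axis=1) + W_cd[:, y].sum(axis=1)
    merged = np.argpartition(inputs, -k)[-k:]
    # Hebbian plasticity strengthens the forward and backward connections,
    # tying the new internal node to both of its children.
    W_bd[np.ix_(merged, x)] *= (1.0 + beta)
    W_cd[np.ix_(merged, y)] *= (1.0 + beta)
    W_db[np.ix_(x, merged)] *= (1.0 + beta)
    W_dc[np.ix_(y, merged)] *= (1.0 + beta)

print("mean weight from the merged assembly back to parent x:",
      W_db[np.ix_(x, merged)].mean())
```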
So it turns out that establishing that Merge works, both the theorem and the simulation, is by far the hardest, in the sense that it requires more plasticity, more dense synaptic connectivity, than the other operations, OK? So the question comes to mind: if it exists in the human brain, could it be that it needs enhanced hardware?
And it turns out that there is a mysterious fiber, the arcuate fasciculus, which in humans is much larger in the left hemisphere than in the right. And this fiber connects Wernicke's area with Broca's area. In other words, it connects the place where words and language reside with the place where the inner nodes of the tree are created, OK? And I mean, the $60,000 question is, does this facilitate Merge? OK? Is this the hardware that helps Merge? OK.
So I presented to you, you know, this alleged data structure and these alleged operations. The question is, in what sense are they real? OK. It turns out that they correspond to behaviors of assemblies that have been observed in experiments or are strongly suggested by other experiments, as in Merge.
In some sense, OK, they constitute a high-level language, OK? Which, to our credit, we did not call assembly language, OK? But you know, they are like a computational language in the sense that each one of these operations can be compiled down to the activity of neurons and synapses, both mathematically and in simulations of realistic spiking neurons, OK?
So there is some evidence that these operations are indeed real. So here they are: project, associate, pattern-complete, merge. Plus, we have some control operations to make it into a real programming language, OK? So you know, the first four I can defend. The rest are just the technical necessities for what will follow.
Activate means that for every assembly, I have a way to fire it at any moment. Read means that I have a readout mechanism that, from each area, reads the name of the assembly that has fired in this area. And disinhibit means that I have a mechanism for disinhibition. You know, the VIP neurons, for example, are such a mechanism in several well-known circuits.
So if I have all of these, it turns out that this gives you a complete computational system, OK. It can perform arbitrary square-root-of-n-space computations. And those of you who remember your complexity from undergraduate computer science remember that square-root-of-n-space computation is extremely powerful. It basically says that if n is a million, OK, any computation that can be performed in, let's say, 100 parallel steps with 100 registers can be done this way, OK? And this is a lot, OK?
So you know, I believe it's sufficient to explain human cognition, OK? But this is a mathematical result. OK. OK, let's go back to language, OK? Because the reason I introduced it and the reason I worked on it is because I believed that something like that was needed in order to explain how language works.
So let's take a very, you know-- of course, I would like to explain to you how assemblies can elucidate the miracle that is happening right now, OK? You putting the words that you hear that I say in order and creating trees out of them that help you comprehend them and hopefully remember something afterward, OK. And the point is that, you know, I would love to do that. I would love to tell you how the brain parses, OK? And I'm working on it, OK? So I'll tell you a little more about it later.
I'll tell you something much simpler, OK, which I think I understand a bit better. And that's the following: how I can generate a sentence, OK, how I can think of a fact, OK. And this is an interesting question. In other words, what is the most elementary operation of language, OK? Here's what it is, OK? So you know, Wittgenstein is famous for having said that the element of philosophy is not the being, as the Greeks said, but the fact. OK?
So how can I put together a fact? What is a fact? A fact is that somebody or something does or is something, OK? So here is a fact, OK. A fact, in the most primitive sense, is an image, perhaps a mental image, because I may be lying to you, or something that I see. And what I want to do is put together, out of this fact, a sentence, OK, a well-formed sentence. How do I do that? OK.
And here is how I believe it could happen. That's, I think, really, a reasonable explanation. OK? There are tons of things to be filled in, tons of steps that we don't understand. But here is how it could be, OK: the lexicon is a searchable data structure. So I see this and I say, what I'm looking at now is the act of hitting. No, of striking. No, of kicking, of kick, OK?
I find the verb kick in my lexicon and I project it to the superior temporal gyrus, Wernicke's area, so that it's there, available as an elementary part of the sentence. Then-- not then, all of these things are happening in parallel-- who is doing that? That's the second most important question. And the answer is, it's a kid, it's a boy. OK. So I find boy, and I also project it to Wernicke's area.
And what is he kicking? He's kicking a ball. So now I have these things living in the verb, subject, and object sub-areas of Wernicke's area. Then, a Merge happens, OK? I create a verb phrase out of kick and ball, OK? And finally, I create a sentence out of all three parts, OK? So the last two operations are Merges.
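To make the sequence concrete, here is a highly schematic sketch of the operations for generating "the boy kicks the ball". The area names, the project and merge helpers, and the op-log representation are purely illustrative stand-ins, not the actual simulator.

```python
# Hypothetical, high-level op log for generating "the boy kicks the ball".
# LEX = lexicon (medial temporal lobe); SUBJ/VERB/OBJ = sub-areas of Wernicke's
# area; the inner tree nodes are built in (sub-areas of) Broca's area.

ops = []

def project(word, src, dst):
    ops.append(f"project {word}: {src} -> {dst}")
    return (word, dst)

def merge(left, right, dst, label):
    ops.append(f"merge {left[0]} + {right[0]} -> {label} in {dst}")
    return (label, dst)

# Step 1 (in parallel): fetch the constituents from the lexicon.
kick = project("kick", "LEX", "VERB")
boy  = project("boy",  "LEX", "SUBJ")
ball = project("ball", "LEX", "OBJ")

# Step 2: merge verb and object into a verb phrase.
vp = merge(kick, ball, "BROCA_phrase", "VP")

# Step 3: merge subject and verb phrase into the sentence node.
s = merge(boy, vp, "BROCA_sentence", "S")

print("\n".join(ops))
```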
So in three parallel operations, which means in not much more than a second, I have already produced this structure, OK, so, you know, if you buy exactly what I told you earlier. OK, all right, now comes the other part, the reverse part. Now, I have to articulate, OK? Frankly, many linguists-- and I sort of believe them-- will tell you that articulation came later, OK?
We have been able to construct such facts for a million years, but to articulate them, probably only for 80,000 years. But let's say that we are modern humans and I want to articulate this. How would I do it? So the sentence node, the root of the tree, would fire; this would mobilize the rest of the tree. And eventually, the three leaves will fire.
In which order? OK, that's the important question. Well, up to now, order was not important at all. But now that I have to articulate, I have to sequentialize. OK. So different languages have chosen different ways of doing this. And I'm going to talk about that a lot in the remainder of my talk.
But in English, it will be, "The boy kicked the ball." OK. Or "The boy is kicking the ball," or something like that, OK? And then, through the firing, the original entries of the lexicon are going to be excited. And these are going to mobilize motor programs for articulating these words, OK?
So in a nutshell, this is sort of, you know, a cartoonish way of describing how a very simple part of language, which is basically building the simple syntactic scaffold of a sentence that I want to articulate, could be carried out, OK? And the good thing is that this is consistent with what we know from various recent experiments and with our understanding of how this process might be happening. OK.
So the production, the generation of the sentence, as I told you, without articulation, may be much more ancient than communication. And we also have the Merge pathways. In other words, through Merge, structures are created. These don't have to be just sentences; they can be whole paragraphs, dialogues, they could be stories, could be plans, OK?
And so it makes sense that this ability predated actual language as communication. And with Mike Collins, my colleague at Columbia, who has written some of the most famous parsers in NLP, we are trying to put together a good parser for English which could be implemented through assemblies, through what we know and what we conjecture about assemblies, in the brain areas that I mentioned before. OK.
So let me take you through a quick dive into linguistics. As I told you, the generation of a sentence, of a fact, OK, documenting in my left brain the fact that the boy kicks the ball, is order independent. OK? So you know, I don't have to articulate it. I just create this fact. But then if I were to say it, if I were Japanese, I would say it completely differently than I'm saying it here, or in German.
So it turns out that different languages have different subject-verb-object orders, SVO orders, OK? And it turns out that these are the statistics. SOV is 45%. SVO is 42%. These are soft statistics, OK? First of all, many languages, like German and Russian, are very flexible, OK? They have no fixed SVO order.
But all of them have a standard order. And this is what is counted here. And then there is the question of what counts as a language, so that it gets an entry, and so on. It's soft data, but it's data. I got this from Wikipedia. And there are data elsewhere that are very compatible with this. OK? So these are the statistics. And the question is, why? OK?
And the linguists have come up with a lot of very clever explanations, ingenious explanations. What a linguist says is, you know, I believe, from linguistic principles, that the order of articulation has to satisfy these axioms. And the more axioms an order satisfies, or the fewer it violates, the better it is, you know, the more likely it is going to be. And it turns out that this works, OK?
So it gives you a reasonable accounting of this table. But doesn't it make sense, now that we have a very, very preliminary conjecture, an idea of how the syntactic scaffold of a sentence is created, to also use this? Can't we use brain facts, brain considerations, as arguments for this, OK?
OK, so for example, suppose that we are SOV, OK? Many languages are SOV, OK? So how would we implement it? OK? Basically, one way to implement it is to fire the root. In the next step, S is going to be output, and the other internal node is going to fire.
Imagine that the firing of this internal node, that's the blue arrow, can inhibit the V. And then in the next step, O will fire. And this is going to disinhibit V; that's the green arrow, OK? So basically, imagine that you have these inhibitory and disinhibitory mechanisms, and that this is how it happens.
Notice that we're not born with these. These are things that are created, installed by the toddler, at 24, 25 months of life, OK? So this mechanism implements SOV. And you can already say it explains why SOV and SVO are much, much more likely than the others. And the reason is that they only need one inhibition and one disinhibition, whereas all the other four orders require more than one inhibition and more than one disinhibition.
And you know, you can go one step further. Imagine that the frequency of a particular order, that's the p_i, is exponentially dependent on its complexity, OK? So you know, this is some kind of [INAUDIBLE] distribution, OK? And the complexity is the sum of the difficulties of the control steps, OK? And through gradient descent, we can now solve the corresponding equations, as closely as we can.
And you find the difficulty of each primitive. The primitives are disinhibition and inhibition primitives, OK, between nodes of these five-node trees. And by doing this, you make a particular prediction: that the two areas for object and verb are symmetrically connected, whereas there is much better connectivity from subject to verb than from verb to subject, OK? So you know, in the infant brain, before we learn language, this is the prediction that this calculation makes.
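Here is a minimal sketch of that kind of fit, under my own assumptions about the details: each of the six orders gets a complexity that is a sum of per-primitive difficulties, the predicted frequency of an order is a softmax of minus its complexity, and the difficulties are fit by gradient descent to observed frequencies. The mapping from orders to primitives and the frequencies for the four rarer orders are illustrative placeholders, not the talk's actual numbers.

```python
import numpy as np

orders = ["SOV", "SVO", "VSO", "VOS", "OVS", "OSV"]
# Observed frequencies: SOV 45% and SVO 42% are from the talk; the remaining
# four are rough placeholders so the fit has something to work with.
observed = np.array([0.45, 0.42, 0.09, 0.02, 0.01, 0.01])

# Assumed number of control primitives of each (hypothetical) type used by
# each order; SOV and SVO need one inhibition and one disinhibition each.
usage = np.array([
    [1, 1, 0],   # SOV
    [1, 1, 0],   # SVO
    [2, 2, 0],   # VSO  (placeholder counts)
    [2, 2, 1],   # VOS
    [2, 2, 1],   # OVS
    [2, 2, 2],   # OSV
], dtype=float)

cost = np.ones(usage.shape[1])        # difficulty of each primitive, to be learned

def predict(cost):
    """Frequency of an order: softmax(-complexity), complexity = usage @ cost."""
    z = np.exp(-usage @ cost)
    return z / z.sum()

lr, eps = 0.5, 1e-5
for _ in range(2_000):
    # Crude numerical gradient of the squared error; fine for three parameters.
    base = np.sum((predict(cost) - observed) ** 2)
    grad = np.zeros_like(cost)
    for j in range(len(cost)):
        bumped = cost.copy()
        bumped[j] += eps
        grad[j] = (np.sum((predict(bumped) - observed) ** 2) - base) / eps
    cost -= lr * grad

print("fitted primitive difficulties:", np.round(cost, 3))
print("predicted order frequencies:", dict(zip(orders, np.round(predict(cost), 3))))
```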
So I'm done. Let me go through the conclusions. The first one, of course, you know. Sometimes I compare the study of the brain to the sunset from the Berkeley Hills, in the following sense: whenever I saw that sight, the sunset from the Berkeley Hills, I wondered, why isn't everybody stopping what they're doing and coming here to look at this, OK? When I work on this problem, OK, this is what I'm thinking: how interesting; my colleagues are working on other stuff; I don't get it.
So the new thing I brought you, OK, on top of something that we all share, a sort of gut feeling we all share, is that the problem of language in the brain may be coming of age for some bold hypotheses. And I have proposed one. How is parsing done in the brain?
OK, notice that parsing is not only done, but learned, OK? Because every language has a different parsing algorithm, but these parsing algorithms might come from the same circuit, OK? So that's a very, very challenging problem. My colleagues and I have some ideas that we are pushing.
So, are assemblies the seat of Axel's logic? Is the concept of assemblies, and the Assembly Calculus that I presented to you, the answer to what Axel is seeking? And of course, how can one test it, verify it, falsify it, the assembly hypothesis? OK? So these are my collaborators: Dan Mitropolsky, a graduate student at Columbia; Santosh Vempala, a colleague theoretician from Georgia Tech; Mike Collins of NLP, from Columbia; and Wolfgang Maass, who is, in many ways, my teacher in all this, a professor at TU Graz. So thank you very much. And I hope I left some time for questions.
PRESENTER 2: Great. Thank you very much for that wonderful talk. We definitely do have time. We've got a question here from [INAUDIBLE]. In your recent paper on Assembly Calculus, you approached the problem of computational modeling of the brain by studying ensembles of neurons. According to you, at what level should we, as computational neuroscientists, approach the computational problem?
Should it be at the level of single neurons or studying at the level of neural assemblies forming a hierarchy within the brain, i.e. studying ensemble densities of these networks which characterize the properties of ensemble densities higher up in the hierarchy? Or do we need to build up from the single neurons to neural assemblies, then to specific parts of the brain, and then to the whole brain, purely from a computational perspective?
CHRISTOS PAPADIMITRIOU: OK, so as a computer scientist, OK, you know, the way I'm thinking is this: the answer is all of the above, OK? But in the following structured way, that an assembly is what a computer scientist would call an abstraction. You know, I don't think that they're abstract. I think they exist and live and torture me every night. But they're an abstraction in the sense that you have to show how they're implemented from the immediately lower level, which is neurons and synapses, OK?
And this is something that is ongoing. And it has gone some way, OK. So you know, we have some evidence that, indeed, the assembly operations can be compiled down to the level of neurons and synapses. And of course, assemblies build hierarchies. And language is, of course, only one of the ways in which they are used. You know, I believe that math, computation, deduction, planning, creativity, all these things, they must be implemented somehow.
OK, I mean, the question you read to me sort of strikes at the core of what I'm trying to do. So you know, there is another sort of question lurking: do you really believe that we will at some point discover that the brain really is working in this clean assembly way, sort of, you know, that you have assemblies, you can always name them, you can always point to them, and so on?
So the answer is, alas, no. So you know, I think all of us are resigned to the fact that no matter how clever our theory is, the brain is going to do something that is much more ad hoc, messy, wet, full of special cases, OK. But the real question, as I see it, is: are assemblies a helpful way, you know, can what is happening in the brain reasonably, and in a helpful, productive way, be abstracted by assemblies? OK? So you know, long answer, but I hope I hit the button on what you asked. Thank you very much.
PRESENTER 2: Great. Thank you. The next question we have here is from Anthony Chen. You mentioned that the arcuate fasciculus is the hardware required to support the Merge operation, which is the important operation to build trees. Does this imply that before information is processed by the arcuate fasci-- I'm probably saying that wrong-- fasci--
CHRISTOS PAPADIMITRIOU: Fasciculus.
PRESENTER 2: Thank you. [LAUGHS] We do not have the ability to construct syntactic trees, i.e. Wernicke's area alone cannot construct sentences. Do patients with lesions to that brain region have an inability to understand sentences?
CHRISTOS PAPADIMITRIOU: So the patients with lesions in Broca's area, they do not have syntax, OK? So you know, they say things like, "Me, coffee, cup," instead of, "I want a cup of coffee." OK? And the patients with lesions in Wernicke's area, for the life of them, they don't understand what the roles of the words in the sentence are, even though they create syntactically perfect but nonsensical sentences, OK?
Now, what happens with the arcuate fasciculus, you know, that's an incredible mystery to me, OK? You know, I think it's one of the most fascinating things. You know, what happens to patients who have serious lesions in their arcuate fasciculus, OK? They can still do sentences. The sentences make sense. They understand. They're good sentences.
So apparently, there are other ways to do Merge. OK? The arcuate fasciculus probably implements Merge, but there are other mechanisms in the brain that implement Merge. I mean, overall, there are definitely other parallel links, parallel fibers that go ventrally but also dorsally, smaller fibers that can do Merge. You know, very surprisingly, this is something that I think requires explanation. Maybe the key to language is understanding this.
Patients with lesions to the arcuate fasciculus suffer what is known as conduction aphasia, which is the following: you cannot repeat sentences well at all, OK? So everybody, even though I cannot hear you, repeat after me: I do not have conduction aphasia. If you were successful, you are right, OK? You do not have conduction aphasia. Because that's what conduction aphasia does not let you do.
So one has to think, then, I mean, what is the real purpose of the arcuate fasciculus? And if you think-- you know, I can only speculate, it's pure speculation-- but you know, these trees must go somewhere and be stored, OK? So maybe the arcuate fasciculus is a bus that takes trees away and puts them where they're going to be used in the future. But what do I know? Nothing, OK?
PRESENTER 2: Great. Thank you. The next one is referring to earlier in your talk. They were asking if you wouldn't mind repeating or clarifying the part about why 12 in particular?
CHRISTOS PAPADIMITRIOU: OK, OK. Sure, sure, sure, yeah. So there's a beautiful book by Gyorgy Buzsaki which I recommend, Rhythms of the Brain. OK? Neuroscientists have thought about rhythms in the brain forever. OK. So you know, the alpha rhythm was a huge thing. So there are frequencies that the brain uses for various purposes, OK? And we sort of understand them, OK?
And it turns out that two of these are particularly useful in my way of thinking. The gamma rhythm is basically the rhythm of spiking neurons. It's maybe from 30 hertz to something like 100 hertz; you know, let's call it 50 hertz. And the theta rhythm is something like 4 to 6 hertz.
And that's the rhythm of language, OK? And if you divide the two, 50 divided by 4, you get about a dozen, OK? And this means that, you know, this should be the time constant of your compiler, OK, that if you want to simulate one system through the other, somehow you must do the elementary operations of the simulated system in about a dozen spikes, OK?
And it turns out a dozen comes up a lot in our simulations, OK, that this is about the right number of iterations after which our operations converge, OK? I mean, I know that it could be just a numerical coincidence, and probably is, you know? But it's also worth mentioning.
PRESENTER 2: And the next question is from Dario. Since you are convinced that trees are in our brains, do you agree with Noam Chomsky's universal grammar?
CHRISTOS PAPADIMITRIOU: Um. OK. Um. OK. So trees in our brains goes way beyond Chomsky. OK, you know, that syntax, the understanding of the structure of a sentence, is a huge part of language, that's uncontroversial, OK? The question is, is it an abstract concept that, sort of, you know, you deduce from? Or is it an empirical observation that you measure?
So you know, I believe that you don't have to be a Chomskyan to believe that something like Merge is needed, not in order to study language, to publish a book about language, but in order to implement language in an animal, OK? So universal grammar, OK, so that's-- I don't believe that Chomsky believes in that sort of, you know, very strong hypothesis anymore, OK? If you look at the late Chomsky, so you know, since 1995, first of all, grammar is not very prevalent in the work.
And there is the concept of Merge, so you know, the motto is all you need is Merge, OK? So Merge is important. But you know, universal grammar, in some weaker sense, is uncontroversial, OK? Here is the modest version, OK? We all have a knowledge of language, OK? We all understand language. And you know, every human without very specific pathologies knows language, has a knowledge of language.
OK, for simplicity, let's call this knowledge of language grammar, OK? Since languages are so different-- OK, so you know, there are thousands of languages across the globe-- this knowledge of language cannot be language-specific, OK? But maybe there is a way of understanding language that abstracts all of them. And you know, because a Japanese girl who is raised from infancy in England will speak Oxford English with an Oxford accent and will not understand Japanese, OK?
So there is evidence that there is something universal about it, and that we must all have been born with some common abstraction of this knowledge, ready to receive some modifiers that make us speakers of Greek, English, and so on. Did that answer it? OK. So you know, I don't believe in the Chomsky of the '60s, but neither does Chomsky believe in the Chomsky of the '60s. So--
PRESENTER 2: Great, thanks. I think we have time for one more question. This is from [INAUDIBLE]. Do you have any advice on how cognitive neuroscientists using fMRI, EEG, and MEG techniques can design experiments to investigate these assembly operations?
CHRISTOS PAPADIMITRIOU: Good god. OK, suffice it to say that at the beginning of the current pandemic, I received word from NSF that they gave me a grant with cognitive neuroscientists at CUNY to do science experiments. So yes. So we are full of ideas on how to do this, OK? And many of them involve language. OK, unfortunately, we cannot run our experiments right now. OK? But yes, of course, you know, that's something that I'm very interested in.
So you know, I said, how do you verify or falsify the assembly hypothesis? Let me be perfectly frank. I think that verifying or falsifying a hypothesis is sort of like-- I wouldn't call it a myth, but it's some kind of abstraction, all right? And we know that this is what Popper taught us. And it's basically correct, but not quite, OK?
So I believe that hypotheses are neither falsified nor verified. So you know, basically, they're either pursued or abandoned, OK? And what I want is for the assembly hypothesis to be pursued, OK? And if it's pursued, then what will happen? So you know, OK, one extreme case is that nothing like that is happening in the brain. You were dreaming. Fine, OK.
But the most common outcome will be that, OK, you have to modify it, you know, because experiments show that something different is happening. You know, so pursuing is good. OK? What is the death of a hypothesis is not falsification; it's abandonment. OK? So you know, it's when people say, eh, OK, I'll do something else.
PRESENTER 2: We've got one from [INAUDIBLE]. What is the basis for choosing those values, e.g., the plasticity coefficient beta equals 0.1? How do those selected values affect the assembly simulation results?
CHRISTOS PAPADIMITRIOU: OK, OK. So our simulation system is online and you can play with it. And the paper contains a pointer to it. So roughly speaking, the bigger the plasticity coefficient, the happier we are, so you know, the more robustly our results hold. OK? And 1.1 is sort of the threshold where things start working, OK?
You know, if we put it at 2, people who know about the brain, about plasticity, scream at us. OK? You know, already this plasticity rule is incredibly simple and unrealistic. So you know, we know that plasticity is much more complicated than that. But you know, it's just an implementation of Hebbian plasticity. And to make it much stronger than 1.1 would be extra grounds for doubting the results.
PRESENTER 2: Thank you again for joining us, Christos. Wonderful talk.
CHRISTOS PAPADIMITRIOU: Thank you, thank you. It was a pleasure.