Topological Treatment of Neural Activity and the Quantum Question Order Effect
Date Posted:
April 26, 2016
Date Recorded:
April 26, 2016
Speaker(s):
Seth Lloyd
Description:
Seth Lloyd - Mechanical Engineering, MIT
Abstract: The order in which one asks people questions affects the probability of their answers. Similarly, in quantum mechanics, the order in which measurements are performed affects the probability of their outcomes. The quantum order effect has a specific mathematical pattern, which -- unexpectedly -- is also obeyed by the human question order effect. This conjunction of the two order effects does not mean that the brain is processing information in an intrinsically quantum way, but rather that certain aspects of neural activity can apparently be captured by a linear projective structure, as in quantum mechanics. I introduce a topological treatment of neural activity and identify topological linear projective structures that might be responsible for the question order effect.
PRESENTER: I have the pleasure of introducing Seth Lloyd. This is a somewhat strange talk for the Center of Brains, Minds, and Machines, because this has to do with quantum theory. But you know, it's great to have Seth, who is one of the nicest and most creative colleagues I have at MIT. And after all, the mind is quite the mystery, and quantum theory is quite a mystery. This is basically the argument.
SETH LLOYD: But that does not mean they're mysterious in the same way.
PRESENTER: Exactly. And you'll explain in which sense maybe we can use one to understand some of the others. Seth.
SETH LLOYD: Thank you very much. It is a real pleasure to be here, though I must say I face this with some trepidation. Because I am going to be talking about things that happen in the brain, and I'm quite sure that pretty much everybody in this room knows more about this topic than I do. But since my lab motto is "powered by ignorance," I thought I would go on. We were thinking of making bumper stickers, actually, but I figured maybe the Boston drivers wouldn't get the joke-- take it too literally.
When I looked at the abstract and title I'd given for this talk, I said, wow, that sounds pretty wacky. And my purpose in giving this talk is to tell you a little bit about something that I've been learning about the last few years. Let's see if I can't find it.
As Tomaso said, I work primarily on quantum mechanics and on quantum computers, and I've been doing a lot of work on quantum algorithms for machine learning for the last few years to figure out new ways of trying to analyze data quantum mechanically. And one of these algorithms is to analyze topological features of data. So topological features are things about-- topological space is a space where you have a notion of adjacency, that things are next to each other. And topological features are things like the number of connected components in a space, or the number of holes in a space or the number of gaps or voids.
So topology is a very basic notion, and over the last 20 years there has been a series of topological machine learning algorithms to try to learn topology from data. And there's a very good reason for doing this, which was explained to me by Michael Freedman, the Fields Medalist who is a famous topologist. And he coined the phrase-- the mysterious phrase-- "Topology is the discrete residue of geometry," whatever that means. I have no idea. No, I'll tell you what that means in the course of this talk.
But the reason is that-- what is data? Data is information we gather from the world. And when we analyze the data, we're trying to establish relationships that are out there in the real world. But merely by getting the information and representing it, the data gives a distorted picture of what's out there in the world.
And topological features, something like the hole in a donut, are famously features that are invariant under continuous distortions. I remember when I was a kid reading this article in Scientific American. Here is a donut, here is a coffee cup. They have the same topology. I was like, what?
And as this kind of complicated-looking article shows, people have been applying this to analyzing data from the brain. I'll actually explain what's in this paper. I'm putting this up for two reasons. The first is to show you that these methods look really fiendishly kind of complicated.
This, by the way, is a paper-- I had a terrible time learning these methods. Somebody told me I should learn about these topological methods, and I tried to read the papers and I failed. And then I tried again and I failed. It's like a novel. You read a novel and you just don't make it past the first 100 pages or first 50 pages. But I know from reading novels, eventually, if you try enough, you can eventually sometimes succeed.
And I finally enlisted the help of a bunch of people who are much more mathematically sophisticated than I am. One is Francesco Vaccarino, who is one of the authors of this paper-- to help explain this to me. And then once I had it explained to me, I realized that I could actually make quantum mechanical versions of these algorithms.
By the way, the quantum mechanical versions of these topological algorithms have a feature that they're exponentially faster than the classical versions for reasons that I'll explain in a bit. But I'm just putting this up to show this is a kind of paper where you start looking at the things like this, this. You go down this list of features that are going to be discussed, and you get down to the Kth homology group.
And you say, OK, forget it. At least that would be my take on this. I sometimes read papers. There's an old Peanuts cartoon where Linus is reading Brothers Karamazov, and Charlie Brown says to Linus, but what do you do with all those long, complicated Russian names?
Because everybody has, like, five different names. You can't remember who they are. Linus says, oh, it's very easy; when I get to one, I just bleep right over it. So this is often a good way to read a paper or some equation you don't understand. Just bleep right over it and hope it will become clear.
Anyway, this is a paper where it's very specialized to this topological analysis. It's all about homology groups and things like that. Here are pictures of these [AUDIO OUT] holes, closures of holes. I'm really showing this to show you this paper, despite the fact that it probably can only be understood by about 50 people in the world-- and I don't think I'm necessarily one of them-- it has an awesome graphic in it, which was downloaded hundreds of thousands of times.
And it was Wired magazine's best scientific graphic of 2015. So what is this graphic? This shows you the kind of information you get out of looking at topological analyses of processes.
This data is functional MRI data from the brain divided up into 256 segments. You get a notion of adjacency of different neural sectors by looking at the correlations between the data. So if two of these sectors are highly correlated with each other, even if they're far apart in the actual physical brain, then you assign them an adjacency. You say they're adjacent.
And then they did this analysis of the brain. The picture on the right is the topological analysis of the resting brain. And each of these little clumps represents clusters of neurons or topological features of the connected components. And then you see that the clumps kind of naturally divide up into five or six or seven or eight different thought processes, which are these different colors, which are characterized by some kind of connectedness structure.
And this makes sense. If you're just sitting with a resting brain, you've got a bunch of things going on in your mind at once, simultaneously. And the lines represent overlaps that connect these different thought processes to each other. It makes some kind of sense that you have different processes that are going on. Who knows? It's mysterious how they actually occur.
And by the way, that's like saying quantum mechanics is mysterious, the brain is mysterious-- to say, oh, they must be mysterious in the same way. I call this the Penrosian fallacy. Gravity is mysterious, quantum mechanics is mysterious, the brain is mysterious, consciousness is mysterious-- they must all be the same thing.
This is simply not true, though I will tread perilously closely to talking about quantum mechanics and the brain in this talk. I should say right at the start that I don't believe that there's any fancy quantum mechanical coherence or anything like that going on in ordinary neural function, except possibly at the level of individual synaptic receptors, where there is evidence that quantum mechanics plays a role in receptor dynamics and molecular dynamics. But don't worry, I'm not going to claim that our thought processes are quantum mechanical.
So this picture on the left is the topology of the ordinary resting brain, and the picture on the right is taken from the same group of subjects after they've taken psilocybin, magic mushrooms. And what can I say? Woah, everything is connected. Wow.
I think this explains a lot. Anyway, I thought you would enjoy this, because what is not to like about this? My God. Anyway, so during this talk I'm going to do the following.
Now I'm going to raise a question that has been noted. I came to these ideas kind of circuitously. I've been working on these topological algorithms for the purpose of making quantum algorithms that did exponentially better than the classical ones. And then a few years ago, a reporter called me up to ask my opinion about what's called the quantum question order effect.
Who here has heard of the quantum question order effect? I see people who have [AUDIO OUT]. So this is a very interesting effect, and not without controversy. Some papers that have written about it say it's absolutely crystal clear and it's absolutely the case. Others say, well, maybe it's not so clear.
But it comes from a very simple observation. So when you ask people questions, the order of the questions matters. So if you ask them the questions in one order, you'll get different probabilities for the answer than if you ask them questions in the opposite order.
Well, that makes perfect sense, because once you've answered the first question, your brain is in some state corresponding to just having answered that question. And then when you answer the next question, whatever neural processes are going on then will come into play when you answer the next question-- and vice versa. So it certainly makes sense that when you ask people questions that the probabilities that they answer with are not just-- they depend on the order.
And now, in quantum mechanics, it's also the case that asking questions depends on the order. If I measure first momentum and then position, I'll get different probabilities for the results than if I measure first position and then momentum. And this happens because, in quantum mechanics, there's a very specific mathematical structure, which I will now tell you.
So in quantum mechanics, I have the states of a system. So for instance, I have a state which could correspond-- see, it's nice and quantum mechanical. Who here has a background in quantum mechanics? I know some people do. Does not? Does and doesn't at the same time?
Those are the true quantum mechanics. So if you don't have a background in quantum mechanics, then when you put these things called Dirac brackets around something, it means it's a quantum mechanical thing. There are actually two ways to indicate things are quantum mechanical things.
So one is to take a letter and change it to a Q. A while ago I worked with people at Google on a project called Quoogle, which is a quantum version of Google. And now Google has an actual Quantum Artificial Intelligence Lab. I've been working with people there.
And here's their logo for their quantum AI Lab at Google. In fact, they've got an agreement from Sergey Brin that nobody else in Google is allowed to use the word "quantum," because the word quantum gets used all the time for just saying something is kind of cool. We made a quantum version of the Netflix algorithm.
So it's a matrix completion algorithm, like, you watch these movies, you might enjoy these other movies. And we said, oh, this is great, we can call it the quantum Netflix algorithm. But then I googled "quantum Netflix algorithm." It turns out that the algorithm that Netflix created themselves for this they call their quantum algorithm, even though it has nothing to do with quantum mechanics.
So here is a quantum mechanical state, and this could be the state that says, OK, I asked my first question and I got the answer A. And it's a vector. It's just some vector in some [AUDIO OUT] dimensional complex vector space. And I can just write it like that if you don't want it to look [AUDIO OUT], but it looks quantum mechanical so it's cool. And then suppose that B corresponds to answering the second question, to giving the answer B for the second question.
Now, in quantum mechanics, the probability that I get B for the second one, given that I got A for the first question, is equal to the following thing. I'll write it in fancy form here. It's equal to this thing, which is really just B conjugate transpose dot A, quantity squared. So these states correspond to vectors, and conditional probabilities correspond to the inner product of these vectors squared.
But we note that this is also equal to the inner product taken in the other way squared, and that's equal to the probability of asking B first and then getting the answer A second. So if I asked the first question, got the result B, then the conditional probability for getting A, answer A for the second question, is the same as if given that I got the answer A for the first question, I got the answer B for the second. So this is the order effect in quantum mechanics. I could make it fancier-- you can talk about measurements, et cetera-- but this is a basic feature of quantum mechanics. You always get this particular form for the conditional probabilities.
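In symbols, the rule being described is the standard Born rule, and the order symmetry follows from the symmetry of the magnitude of the inner product. This is a reconstruction in ordinary Dirac notation, not a verbatim copy of the board:

P(B \mid A) \;=\; |\langle B | A \rangle|^2 \;=\; |\langle A | B \rangle|^2 \;=\; P(A \mid B),

since \langle A | B \rangle is the complex conjugate of \langle B | A \rangle, and taking the modulus removes the difference.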
Now, the quantum question order effect in human beings is that, quite bizarrely, there is strong empirical evidence based on many studies-- I think one of the papers I have says it looked at 70 studies with an average of 1,000 people in each study. So there were many people, there were many studies on this. It appears that human beings obey the same rule.
The degree of accuracy to which this is true differs according to who's making the claims. And you look at the statistical analysis, and there are lies, damn lies, and statistics. But it does seem that something like this is going on. Yeah?
AUDIENCE: [INAUDIBLE] because it's traditionally [INAUDIBLE].
SETH LLOYD: It's traditionally called a quantum order of [AUDIO OUT]. I agree that that would be a better word for it.
AUDIENCE: [INAUDIBLE]
SETH LLOYD: Oh, the order does make a difference-- absolutely. You can easily [AUDIO OUT] because of-- no, no, no. OK, I better answer this question. So this does not mean that probability of B [INAUDIBLE] A then B has to be the probability of first getting B and then getting A. This is a conditional probability given this.
That would be true by Bayes' rule. [AUDIO OUT] divided by P of B by Bayes' theorem. [AUDIO OUT] Bayes' rule. These will be the same only if these probabilities are equal, which in general [AUDIO OUT] order effect. Somebody else had a question. Yeah?
AUDIENCE: [INAUDIBLE]
SETH LLOYD: It was the same question. OK. So the probability of first getting B, then A, doesn't have to be the same as the probability of first A and then B. It's the conditional probability that if you got A, then you get B, is equal to the conditional probability that if you got B, then you get A.
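To spell out the distinction in equations-- a reconstruction of the point, not text from the slides-- the joint probabilities of the two orderings are

P(A \text{ then } B) = P(B \mid A)\, P(A), \qquad P(B \text{ then } A) = P(A \mid B)\, P(B),

and these coincide only when P(A) = P(B). The quantum-style order effect is a statement about the conditionals alone, P(B \mid A) = P(A \mid B), which leaves plenty of room for the two orderings to give different overall statistics.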
So you can get quite bizarre effects out of this. And when your probabilities have this kind of feature here, there are a host of anomalous decision effects which don't obey the ordinary rules of probabilities. Many ordinary probability rules assume that P of B, A is the same as [AUDIO OUT], and you get all kinds of bizarre things, like the probability of A and B-- people judge this to be higher than the probability of B, which can't happen with ordinary probability. But it can happen when you actually have this kind of projective structure.
And at least in the papers that I've read on the quantum question order effect, there are lots of ways that people actually try to explain these other issues as well, but they basically come down to this funny feature. And it's completely mysterious why human beings apparently have this feature as well. There's no good explanation for this. There's no real reason why human beings-- sure, there is an order effect, but why should it have this funny quantum mechanical-like structure?
And the authors of these papers, thank God, are not Stuart Hameroff and Roger Penrose. And so they are not claiming that this is because the brain is quantum mechanical and our thoughts are in a quantum mechanical superposition, et cetera. But I would say that what it indicates-- it is suggestive that there might be some description of the thought processes and of neural processes that has some kind of vectorial description, and that the probability that you go to B, given that you answered A, is given by some function of the inner product between these vectors.
That's what it suggests. And I'm going to make a proposal, which I have no idea if it's true or not, that maybe what's going on is that these vectors are vectors that describe topological states of the way that the brain functions. That's what I'm going to propose. And I don't know if it's true, and anybody's welcome to come up with their own vectorial picture of brain functions and to argue that, hey, these conditional probabilities should depend only on the inner product between these vectors.
Because, you see, that's what really is required for this quantum order effect. If indeed there are some dynamics in which brain processes can be accurately described by vectors in a high-dimensional space, and it turns out that if you are in neural state A your probability of going to neural state B is proportional to the overlap-- doesn't even have to be the overlap squared; it could be any function of the overlap-- then that would explain this quantum question order effect. And actually, when I phrase it in those terms, that actually sounds kind of plausible, and at the end of the talk, I'll give a specific model in which I'll argue that it's plausible.
And it's a model based on work by my colleague Jean-Jacques Slotine in the mechanical engineering department. They have a nice paper about neural nets, artificial neural nets, and I'm trying to get them to test to see if this is true.
SETH LLOYD: [INAUDIBLE]
AUDIENCE: You said, [INAUDIBLE]
SETH LLOYD: Yeah. Yeah, it's got to be-- yeah, not any function. Sorry, yes. So yeah, any positive function of this will do it. Right? Because this is symmetric-- the magnitude of the inner product is symmetric in a and b. That's-- [AUDIO OUT] That doesn't have to be. For instance, the square is what's called the Born rule. For some reason, people in the literature always assume this Born rule. It does not have to be that way. So if you happen to read some of these papers-- you don't actually have to have this Born rule square here. That's pretty hard to get. It just has to be some function of this.
Then, for instance, if these vectors are normalized to 1, then the inner product is simply a measure of the distance between these vectors. So I can imagine cooking up all kinds of models where, if I'm in state a-- the state corresponding to vector a-- then my probability of going to vector b is some function of the distance between these two vectors. That makes perfect sense. Anyway, I'm trying to-- I'll stop trying to talk you into it. I actually have difficulty talking myself into it. Maybe I'm more trying to talk myself into it. OK.
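As a toy illustration of this point-- a hypothetical sketch added here, not a model from the talk or from the question order papers-- if the two answer states are unit vectors and the transition probability is any fixed positive function of the magnitude of their inner product, the symmetry P(b|a) = P(a|b) holds automatically:

import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    # Normalize a vector to unit length.
    return v / np.linalg.norm(v)

# Hypothetical "brain state" vectors in a high-dimensional space.
a = unit(rng.normal(size=128))
b = unit(rng.normal(size=128))

def transition_prob(x, y, f=lambda s: s**2):
    # Probability of landing in state y given state x: any positive function f
    # of |<x, y>| will do; f(s) = s**2 is the Born rule case.
    return f(abs(np.dot(x, y)))

# The magnitude of the overlap is symmetric, so the two conditionals agree.
print(transition_prob(a, b), transition_prob(b, a))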
AUDIENCE: [INAUDIBLE] --question that you ask, and it knocks a person into a brain state. And from within that brain state, there's a fixed probability that they would return a or b. But you're asking a different question-- so a and b here you're asking different questions.
SETH LLOYD: I'm asking different questions.
AUDIENCE: Are you more likely to buy Koch brothers' products versus this? And the other question is, do you care about the Koch brothers' impact on politics, or something?
SETH LLOYD: That's the kind of questions that they ask. Yeah.
AUDIENCE: [INAUDIBLE] --which you might have fixed responses for those two answers, because there's a lot of different brain states people operate in.
SETH LLOYD: Right. That's right, but that in and of itself does not necessarily imply that-- this particular form. So I agree that a question order effect is clear-- just as you said. Right? And I think I said a less articulate version of that a little earlier-- that, that makes sense. But in addition, you need something more, which is that the probabilities in some sense obey this funny--
AUDIENCE: --in a certain [INAUDIBLE]
SETH LLOYD: Well, the probability, if you're in brain state a, that you go to brain state b-- when you ask the question that can have answer b-- is the same as the probability, if you're in brain state b, that you go to a, if you ask the question that corresponds to a. Right?
AUDIENCE: [INAUDIBLE] And when you ask for a second question, it's now overlaid on that brain state they're already knocked into.
SETH LLOYD: Yeah. So in the kind of models, for instance, in Slotine's paper, basically there's a kind of a-- there's some brain state which is your neurons firing in a particular set of patterns. And then you ask a question, and now your brain is put in a mode where it has to give this answer or this other answer. These are attractors of some complicated nonlinear dynamics. OK, you're actually forcing me to give my argument for why this should be the case, but I'll do that anyways-- why not-- in case I don't make it to the end. So there's some complicated nonlinear dynamics that's going on. You end up either in this attractor, b, or you end up in not b. OK.
Now what's weird about this-- we know that this is complicated non-linear dynamics. Yet here, there's somehow something very linear about this. What the heck is going on with that? The point here is that if your probability is some function of the distance from your original brain state, expressed in this vector, to here-- to not b or to b-- the distance from a to b and to not b is the same as the distance from b to a, and from not b to a. That's what this is actually saying. So then I think that that makes some-- that actually makes some kind of a sense. And if you're just rattling around in some weird nonlinear dynamics-- which is what my brain does-- and you have to end up in one attractor or another, then it makes sense that the further away this is the less likely you are to go in it, and the closer you are to the other one the more likely you are to go in it.
Still, that's something that really has to be verified. I would say, this is not at all obvious even though I was trying to make it plausible. Right. Any other questions about the quantum question order effect? Yeah.
AUDIENCE: [INAUDIBLE] So is the motivation for the entire framework you propose just the observation that human responses to questions are not independent?
SETH LLOYD: Actually, no. In fact, by now you're forcing me to, because you asked the questions-- the questions were asked in a different order from what I expected. You're forcing me to give my--
[LAUGHTER]
--to reveal my motivation for this talk. Which is that actually there's a huge amount of data on-- about how brains work. And the amount has exploded vastly, and it's going to continue to explode even more vastly. Now, how do we make sense of this, right? How do we make sense of what's going on with these very complicated systems with very large amounts of data? So how do we find patterns of how thought patterns work? How do we make sense of what's going on? I'd like to argue-- and I already argued for the case of data in general, but that for brain data in particular-- these kinds of topological approaches are potentially useful.
Topology is the general connectedness structure of anything, and the general connectedness structure of what's going on in the brain is probably a good way to go. And may I say that there are-- this is a well-established field of ordinary machine learning and there are off the shelf packages you can use to try to analyze it. You don't have to actually go in and read papers like homological scaffolds of neural functioning, and you don't have to know what the k-th homology group is either. In fact, it was a-- that was the big hump for me for getting over-- getting over this.
In some sense, with this quantum question order effect-- I realized, having spent all this time learning about these topological analysis methods-- topological analysis methods, as we'll see, require you to describe topology in terms of vectors in high dimensional vector spaces. That's how algebraic topology works. That's how these analysis methods work. So independently of anything quantum mechanical in the brain, you can describe the topology. In fact, you're kind of forced to describe the topology of what's going on by this vectorial representation.
And this vectorial representation-- and then I realized, while I was working on this last summer-- I remembered this reporter calling me and asking me to give some comment about the quantum question order effect, where I was simply saying it's very important to note this is not a quantum mechanical effect. Right? I realized, hey, this topological description already has this vectorial picture built into it. Maybe what's going on is the kind of thing we were just discussing, but it's governed by this funny linear algebra. That's why I came to this quantum question order effect. And I think it is kind of fun, and apparently it's an effect.
So what I intend to do now is just to describe to you quite briefly how these classical algorithms work for analyzing topology. And this will be a little bit mathy. And as you saw the math looks hideously complicated, but the actual ideas are in fact rather straightforward. And when I actually finally got over the hump to learn what was going on, I realized that it wasn't as complicated as I thought. So I thought that I would actually tell you a brief-- my brief understanding.
I have a very crude understanding of this which is good for pedagogical purposes. My crude understanding of what's going on with these topological analysis-- if you're not interested then that's fine, then we'll go back in about 10 minutes or something like that. We'll go back to arguing about what might be actually going on that's responsible for this quantum question order effect, which is just the kind of thing that you were saying. OK? So any more questions about the quantum question order effect, or any of you else want to stop me? You can delay me talking about algebraic topology, but you can't stop me. [LAUGHS] By the way, who here has studied algebraic topology? OK, great. So we actually have people-- and once again probably guaranteeing that you know more than I do though.
I think this is actually a pretty cool-- a pretty cool technique. Once I finally understood it, it's like, oh, OK, that actually is pretty neat. So here's what these things are supposed-- these algorithms are supposed to do. So here's some data. OK? The data has a hole in it. All right? There is a hole. But how do we tell that? Because something's sort of strange, like if I look really close it's all holes. [LAUGHS] Not a hole, right? But if I move back and I fuzz it out it's like, oh, yeah, look, that's a great hole. There is a hole. And then because I'm quite near-sighted, when I get back here it's like I just see some kind of blob over there. All right?
So the way that these methods work, the idea is that you want to analyze the topology of the data at different levels of fuzziness. And at different levels of fuzziness, then by using this mathematical structure of algebraic topology-- which basically is just-- all it is, is to get topological features by looking at big vectors in gigantic dimensional vector spaces. Which is why, by the way, we have super fancy quantum machine learning algorithms for them. By looking at it at different scales of fuzziness, if you have a feature-- the hole, right? And a hole is something that belongs to this thing called the homology. A homology just means it's some topological thing that you can use to scare people with words like homology and cohomology, and stuff like that. Right?
So the hole belongs to the homology, and the idea is that if at some level of kind of fuzziness you see a hole-- you find a hole-- and it persists over many levels of fuzziness until finally it goes away, that this hole is a real feature of whatever the thing was that you took the data about. OK? So in fact, this theory goes-- is called persistent homology. Persistent homology. So persistent homology, the idea is that you analyze the data at different scales of fuzziness. At each scale, you find the topological features like holes, connected components, voids, gaps, et cetera. And then if you find a feature that persists over many scales of fuzziness, you say, that's a real feature of what's going on. So these topological methods work in that way. That's what they're doing. Does this make sense? I mean, I think this is-- it seems like a wise thing-- [AUDIO OUT]
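For the simplest kind of feature, the number of connected components, the persistence idea can be sketched in a few lines of code. This is illustrative code added here under the assumption of a plain Vietoris-Rips style rule, not the pipeline used in the homological scaffolds paper: merge points whose distance is at most epsilon, count the resulting clusters, and watch which counts survive over a wide range of scales.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def betti0(points, eps):
    # Number of connected components of the epsilon-neighborhood graph,
    # computed with a simple union-find.
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    dist = squareform(pdist(points))
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= eps:
                parent[find(i)] = find(j)  # merge the two clusters
    return len({find(i) for i in range(n)})

# Two fuzzy clusters: a component count of 2 should persist over many scales
# before everything finally merges into one blob.
rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                    rng.normal([3.0, 0.0], 0.1, (20, 2))])
for eps in [0.1, 0.5, 1.0, 2.0, 4.0]:
    print(eps, betti0(points, eps))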
Yeah, and people have been doing-- the names associated with this, I think actually Michael Freedman actually-- the topologist-- actually developed some of this for a secret DARPA project in the 1980s. And when I gave a talk about the quantum algorithm, he got incredibly excited and brought me-- gave me all these top secret documents, which was really fun. But he just wanted to show that he'd done it first. And then his colleague, Carson Nielsen at UC San Diego, then picked up on it, and then developed this theory. There's a lot of people who have been working on it over the last 25 years or so. OK.
So now how do we identify this measure of fuzziness? So the idea is we have a notion of distance between points. So these are between two data points-- distance between two data points i and j. I always draw my distances as little deltas like this, because they obey the triangle inequality. [INAUDIBLE] --thing. So we won't mind? So we have distances, and we have a fuzziness scale. I'll call it a grouping scale. Epsilon-- I call it epsilon because it starts small, and then it gets bigger. And so then the feature is we connect two points if within epsilon-- if delta(i, j) is less than or equal to epsilon.
So I'm going to change this set of dots into a graph, OK? And so when epsilon is very small, I don't connect anything. But then when epsilon gets bigger, I start to connect these points here, these points here, these points here, and these points here. So as epsilon gets bigger, the first thing that happens is you start to create a bunch of edges. You also start creating little triangles. So when you have three points that are all within epsilon of each other, you create a triangle. And when you have four points that are within epsilon of each other, you create a tetrahedron. Five points, you create a pentahedron, or whatever it's called. Yes, it has 10 sides. [LAUGHS]
OK. So start creating edges, triangles, tetrahedra, et cetera. And these things are called simplices-- k-simplices. So a triangle is a two simplex, it's a two dimensional simplex. An edge is a one simplex. A tetrahedron is a three simplex. So as epsilon gets bigger a simplicial complex-- is called a simplicial complex, I'm afraid to say-- S epsilon emerges. So I start connecting these points, and then what happens is at a certain point when I start connecting things-- I'm not going to connect everything here because I drew too many points-- but at a certain point in this epsilon-- I'm going to show it-- when epsilon gets large enough, what happens is the hole will emerge. You'll see the hole, OK? Then if I start connecting points that are further and further away, what happens is actually the hole starts to go away. And finally if I connect all points within the radius of this whole thing, the hole is gone.
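To make the construction concrete, here is a short illustrative sketch-- my own, assuming the plain Vietoris-Rips rule that a set of vertices spans a simplex whenever all of them are pairwise within epsilon-- that enumerates the simplices of S epsilon directly from the distances:

from itertools import combinations
import numpy as np

def rips_simplices(points, eps, max_dim=2):
    # All simplices up to dimension max_dim whose vertices are pairwise
    # within eps of each other (the Vietoris-Rips rule), as index tuples.
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = []
    for k in range(1, max_dim + 1):          # a k-simplex has k + 1 vertices
        for verts in combinations(range(n), k + 1):
            if all(d[i, j] <= eps for i, j in combinations(verts, 2)):
                simplices.append(verts)
    return simplices

# Four corners of a unit square: at eps = 1.1 the four sides appear as edges,
# but no triangles, since every triangle would need a diagonal of length sqrt(2).
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(rips_simplices(corners, eps=1.1))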
So I have what is a nested set of simplicial complexes, or "comp-leh-sees" depending on how you want to say it. I'll just call them complexes. And this is what's called a filtration. So note if a simplex is in the complex at some scale epsilon, when I make epsilon bigger it's still in the complex. Now, there are-- so let's [AUDIO OUT] counting, all right? The counting is going to get scary. But let's do this counting.
So I claim there are 2 to the n possible simplices in the complex. So at any point, there will be some number less than 2 to the n. When I get everything that's in it, then I have 2 to the n. And there's a simple way to actually see this. First of all, this is the power set. It's the set of all subsets of these-- sorry, if there are n points, any subset of these n points is a possible simplex. This is the set of all subsets. It's the power set, and there are 2 to the n members in the power set. Blah, blah, blah, but it's much easier to just see it in the following way. I can label a simplex just by putting a 1 if a vertex is in the simplex, and putting a 0 if the vertex is not. So this is a 1 where point 2 is-- it's a tetrahedron. We'll say that it contains point 2, point 3, point 5, and point 7. All right, does this make sense? It's a way of labeling the simplices in the complex.
I could also give a list of vertices. So vertex 1-- sorry, vertex 0-- vertex 2-- sorry, i 0. [AUDIO OUT] Or, Vi0, Vi1, [AUDIO OUT] Vik for a k-simplex. Just note that a triangle is a 2 simplex, so a k-simplex has k plus 1 vertices. All right, so this by the way, you can see that even for rather modest data sets, listing all the simplices in this filtration at each point epsilon can get very difficult. And if you look at the classical methods, people tend to focus only on things up to 2 or at most 3 simplices. Where, because there are n choose k + 1 possible k-simplices. When I'm looking just at things like edges and triangles and things like that, it's not as bad.
Our quantum algorithms actually nicely would handle everything very beautifully, because the way that this works-- and I think I'm actually tempted to describe this, because I already introduced you to this quantum mechanical notation. I'll actually describe this in quantum mechanical notation to show you why we can make a quantum algorithm that kicks classical topological butt. So the way that it works is-- so algebraic topology-- I think I'll change colors. Anybody here colorblind? Is red OK for people? You don't have to admit it. I don't see it very well myself.
How does algebraic topology work? So once I actually have a simplicial complex at some scale, I can now go ahead to identify the number of connected components, the number of holes, the number of three dimensional voids in the system, the number of very high dimensional holes and voids. And these are the elements of this homology. These are the topological features of this data. So for instance, if we think about neural activity, and these distances are given by correlations between neurons over time, then a connected component is a cluster of neurons that are talking with each other very strongly. Right? And a two dimensional hole means that we have something that's going around in a topo-- that we have neural activity and signals going around in a topologically non-trivial fashion. And if there is a void, it means that nothing's going on in there. OK?
So these topological algorithms-- just for data in general-- the topological features that you get tell you something about what's going on in the system. Moreover, they're kind of a fingerprint for the data. That is, if I take the data and I mess around with it, it should still have-- I deform it in the way that I described before-- it should have the same topological features. They might just show up at different scales-- at slightly different scales epsilon-- but topological feature that shows up and persists for many scales will also be there if I deform what this data is. So it's the original [INAUDIBLE].
So algebraic topology works in the following fashion. Map each simplex to a vector in a high dimensional vector space. There are 2 to the n of them. So I'm going to take the complex numbers raised to the 2 to the n-th power. That's a big space. So you can see how this gets combinatorially tough with the classical algorithms. But quantum mechanically, I'm going to do this in the following fashion. I just quantized it. Quantized this-- I'm, guiltily-- because we're proud of this algorithm-- telling you how the algorithm works. So this is n quantum bits or n qubits.
Now, don't get confused about that, this is not what's going on in the brain. This is just showing how you do algebraic topology in a quantum mechanical fashion. That's all I'm doing right here. This is, by the way-- changing the letter to q, as in q-tuple, I call that first quantization. Putting it in a Dirac bracket, I call that second quantization. Quantum mechanics joke, sorry. [LAUGHS] Not a very funny one either, so. [LAUGHS] Now what you do-- the way algebraic topology works, as its name suggests, is that you can find topological features by asking questions about linear maps on this high dimensional vector space.
And particularly, there's a linear map called the boundary map. I'll call it del k. The boundary map takes a simplex-- it's a vector now, right-- and maps it to the sum of the simplices on its boundary. So for instance, if I have something that looks-- let's do it like this. Here's a 1, 2, 3. This is a simplex. This is something like this. Here's 3, 4, 5, 6. Here's a simplex 1, 2, 3. And here's a hole, right? Which is the kind of thing we're going to want to try to find using this procedure. And I hope I can explain this.
But anyway, the simplex 1, 2, 3 gets mapped to the sum of 1, 2-- remember, these are vectors now-- plus 2, 3 plus 3, 1. But there is a sneaky feature about this. These are vectors, they sit in this vector space. And you actually alternate minus signs. So you actually subtract this one. You'll see why in just a second. So the boundary map acts on a vector like this. You take the vector Vi0, Vi1, up to Vik. It goes to the one which is-- I remove the first one, and then I remove the second one, et cetera, up to-- I'll do it like this. I'll write it like this, minus 1 to the l, Vi0 dot dot dot, with the l-th one removed. So this means I took it out.
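Written out, the map being described is the usual simplicial boundary operator of algebraic topology; this is a reconstruction in standard notation rather than a copy of the board:

\partial_k\, [v_{i_0}, v_{i_1}, \ldots, v_{i_k}] \;=\; \sum_{l=0}^{k} (-1)^l \, [v_{i_0}, \ldots, \widehat{v_{i_l}}, \ldots, v_{i_k}],

where the hat means the l-th vertex is omitted. The alternating signs are exactly what make the identity \partial_{k-1} \partial_k = 0-- the boundary of a boundary is zero-- come out.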
So it's just a map on a very high dimensional-- a linear map on a very high dimensional vector space, a 2 to the n dimensional vector space. A 2 to the n by 2 to the n operator, and I can define it for each of these cases. Let me show you why this works so nicely. Why define such an object? It's the following-- so suppose I have something like this. Here's 1, 2, 3, 4. OK? And I apply this map to this collection of-- remember, this is just a collection of simplices. There are two simplices here. 1, 2, 4, and there's 2, 3, 4.
And I take the boundary map and apply it to this. So I take the d2 and apply it to this. And I get 1. I get 1, 2 minus 2, 4 plus 4, 1. Then I get plus 2, 3.
AUDIENCE: [INAUDIBLE]
SETH LLOYD: This one? No, this one is the--
AUDIENCE: [INAUDIBLE]
SETH LLOYD: Plus 1, 2 minus 2, 4 plus 4, 1.
AUDIENCE: [INAUDIBLE]
SETH LLOYD: No, for this one, because I start with the first label, which is 2. So it's 2, 3 minus 3, 4 plus 2, 4. And you see something-- do you see what's going on? What's going on is rather nice, which is that the two 4's cancel out. So all internal edges-- all internal simplices-- actually go away to 0. So if I have a collection of connected simplices, then the boundary map will construct the simplices which were on the boundary of this whole collection-- it's called a chain of simplices. All the internal simplices get counted once in one direction and once in another direction, and they get knocked out. This is kind of a central feature that makes this whole thing work.
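With the sign convention above, the computation for the square split into the two triangles [1,2,4] and [2,3,4] comes out as follows-- a cleaned-up version of what gets sorted out on the board a bit later in the questions:

\partial_2 [1,2,4] = [2,4] - [1,4] + [1,2], \qquad \partial_2 [2,3,4] = [3,4] - [2,4] + [2,3],

so in the sum the shared internal edge [2,4] cancels, and only the four outer edges [1,2], [2,3], [3,4], [1,4] survive, with signs recording their orientation.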
And so using this actually-- so with this boundary map you can actually identify all these topological features. So topological features-- so a hole, if you look at what a topological feature is like a hole-- like this hole, the 2, 3, 4, 5, 6 hole-- Oh, let me just say one [INAUDIBLE] boundary map. Note dk minus 1 dk is equal to 0. The boundary of a boundary is 0. This is a famous feature. [INAUDIBLE] Boundary of boundary is 0. If I have something that doesn't-- that I take the boundary of a thing, the boundary doesn't have a boundary.
So now what is a hole? A hole is something-- it doesn't have a boundary. So if I go around this right here-- OK-- there's no boundary. The boundaries are each of these points. So when I go from 3 to 4 I get 3 minus 4-- 3 minus 4 plus 4 minus 5 plus 5 minus 6 plus 6 minus 2-- all of those points cancel out. A hole, or more generally a void or a connected component, doesn't have a boundary. It doesn't have a boundary. So it corresponds to a vector such that dk of this vector is 0. It doesn't have a boundary. There's no boundary.
Now, of course there are plenty of things that don't have boundaries. Like, for instance, the boundary of this simplex-- if I look at this right here-- if I took its boundary, that would be 0, too. So a hole has another feature-- the two features are, it doesn't have a boundary and it is not itself a boundary. Why? Because it's the boundary of the hole, right? This set of simplices-- this chain of simplices around here-- is the [AUDIO OUT] which is not in the complex. It's the boundary of the hole. The surface of the hole is something-- each hole has a surface. The surface is boundaryless, and it is not a boundary. And everything that is boundaryless and not a boundary is a topological feature of this [AUDIO OUT]. All right?
And then that's pretty much it. Because in fact-- so I should actually say that what this means is that dk minus 1 dagger acting on V does not belong to the simplicial complex. I'm just writing down the math part of it, but that's just the mathy way of saying we're looking for things that are boundaryless and are not boundaries. And these are just the holes, gaps, voids, cycles, et cetera in the data. These are the topological features of the data. It's what's called the homology of this data. OK. And so now what one does-- now it's just-- you just use linear algebra to find all vectors V such that dk V is 0 and dk minus 1 dagger V does not belong to the complex.
In fact, if you really want to know, you can [INAUDIBLE] what's called the-- which maybe you don't-- you can [INAUDIBLE] what's called the combinatorial Laplacian which is this object. And you look for things that are-- these are things that fall in the kernel of this combinatorial Laplacian. Yeah?
AUDIENCE: This operator just makes sure it's orthogonal? Is that why you're using a dagger instead of a [INAUDIBLE]?
SETH LLOYD: It's not even square on its own, right? It maps k-simplices to k minus 1 simplices. This one's square. This is a very high dimensional sparse-- this is the thing that we actually operate on, a very high dimensional sparse square matrix. So dk-- dk operates from the space of k-simplices to the space of k minus 1 simplices; it actually goes to a different vector space. [INAUDIBLE] OK. So anyway, that's how these algorithms work. I mean, this is-- I just told you.
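For reference, the square object being alluded to is, in standard notation-- a reconstruction, not the slide:

\Delta_k \;=\; \partial_k^\dagger \partial_k + \partial_{k+1} \partial_{k+1}^\dagger,

and the dimension of its kernel is the k-th Betti number-- the number of independent k-dimensional holes of the complex at that scale.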
Let me just summarize how the algorithms work. It looks hard to do. It's got all kinds of fancy words, but all that's [AUDIO OUT] is you find topology by constructing these simplicial complexes. Now you say, hey, I want to find the topology of this simplicial complex. And you do that by this trick of algebraic topology. And the trick is, you map each simplex into this high dimensional vector space, which I did in this case by quantizing it. OK? And then you just diagonalize some suitable linear operator on this high dimensional vector space. And the suitable linear operator is made out of this boundary map. The boundary map was constructed to have this neat feature that it constructs boundaries of things. And this gives you the topology. If you do it for all these scales epsilon, you find [AUDIO OUT] like holes, and voids, and gaps that persist for long periods of time, and then that's it.
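As a concrete miniature of that recipe-- an illustrative classical sketch added here, not the quantum algorithm-- here are the Betti numbers of the hollow square (four vertices, four edges, no filled triangles) computed from the boundary matrix by rank-nullity bookkeeping:

import numpy as np

# Hollow square: vertices 0..3, edges (0,1), (1,2), (2,3), (0,3), no triangles.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

# Boundary matrix d1: one column per edge; the boundary of edge (i, j) is j - i.
d1 = np.zeros((len(vertices), len(edges)))
for col, (i, j) in enumerate(edges):
    d1[i, col] = -1.0
    d1[j, col] = 1.0

rank_d1 = np.linalg.matrix_rank(d1)
betti0 = len(vertices) - rank_d1       # components: dim ker d0 - rank d1, with d0 = 0
betti1 = (len(edges) - rank_d1) - 0    # loops: dim ker d1 - rank d2, and here d2 = 0
print(betti0, betti1)                  # expect 1 connected component and 1 hole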
So I apologize-- I took a few more minutes than I thought. Took more than 10 minutes, but I didn't take much more than 15. Are there any questions about this? I feel a little guilty about saying, hey, look at this-- bringing in algebraic topology, but what the hell. And also, I want to point out that it's actually-- it took me-- I'm doing this partly because it took me five years to get over the hump of actually being able to read the papers and figure out what's going on. And actually it's not so scary. It's not scary after all.
For the quantum mechanical people here, you can easily see how you can actually-- this gives you an exponentially better quantum algorithm because you're trying to diagonalize a very large dimensional sparse operator. And that you can do exponentially faster on a quantum computer than a classical computer. So our quantum algorithm kicks butt with this kind of thing. But the point is that you can identify the topology, and that's what the classical algorithms do as well. Yeah?
AUDIENCE: Can you say one more time why the pattern of pluses and minuses in your bottom line on the top board make sense?
SETH LLOYD: Yeah. So remember the rule is-- oh, yeah, I think maybe you're right that I kind of screwed it up. But, hey, let's do it right. Let's do it right. So it's plus 2, 4. I omit 1, right? Thank you very much. I'm very embarrassed. Not only did I put math up, I got it wrong, which is typical of me. Right? And then I have minus 1, 4. [INAUDIBLE] when I omit the 2. And then for this one, then I omit the 4. So I have plus 1, 2. And then for the next one I start with 2. I have minus 3, 4. And then I add-- sorry, minus 3, 4. And then I add-- so this is a tricky thing here. I have the plus 4, 2, but they got-- I have to keep track of the orientation. So this really was 2, 4 in that direction. This is 2, 4 in the other direction. And then when I switch the orientation, I get a minus sign. So it's minus this, because these are actually oriented simplices. So not only is your question an excellent one, but you forced me to admit more of the math that's actually going on.
AUDIENCE: Why is it rotated by [INAUDIBLE]?
SETH LLOYD: Oh, sorry. It's not. It's just a [AUDIO OUT] And then the last one is plus 2, 4. So the simplex at this point gets evaluated once in one direction and once in the other direction. So it always cancels out after the boundary map. Thank you very much. You were so, so right. All right, so let me actually-- I should get done. So actually I think it might be time to say why I think this is an argument for this quantum order effect, which is that we have these thought processes going on in our brain. They have a topology, right? Moreover, people write papers where they analyze the topology of the brain and the brain on drugs. By the way, my colleagues are analyzing the LSD data right now-- as it turns out-- to see if the topology of your brain on psilocybin is radically different from the topology of your brain on LSD. My guess is it's [AUDIO OUT]
So independently of what's actually physically going on, if we make a topological description of the brain we have this sequence of vectors. We go through these different topological sequences. We have different connected components. We can describe at each point the topology of a brain by the evolution of a vector in a high dimensional vector space. This is just in the mathematical description of what's going on.
And then, so we have just some vector that's [INAUDIBLE], and I'll even make it quantum like this. So here is my vector, it's a function of time. It describes the topology of the space. This mathematical description is there whether one wants it or not. All right? And then the only real question is, if the probability of falling into another vector-- if the probability of a given V of t is equal to some function of this overlap between them. And I argued in hand-wavy terms before, this does make some sense. Right? If the answers to question b or not b are some attractor, there's a topology for the thought processes involved-- that thought process described by some big vector.
And here's my initial process-- thought process-- after I gave the answer a for the first one. And if this complicated non-linear process of evolution of these vectors happens to have the-- [AUDIO OUT] --that the probability of going to b is proportional to the distance between these vectors-- just the ordinary vectorial distance between these vectors or some function of the distance between these vectors, all it has to be is a function of the distance between vectors-- and you get this quantum question order effect.
So just to summarize, because I'm out of time, I brought up the quantum question order effect. It's a kind of a weird thing, because it says quantum mechanics makes very specific predictions for how probabilities of questions asked in different orders come out. And there is empirical evidence from many studies on people that people obey something very similar. I actually recommend these papers-- some of the papers that give you some of the more funky things. Like the probability of a and b being greater than the probability-- sorry, the probability of a being less than the probability of a and b. This is hard to imagine how it works, but it works out quite nicely. And this [INAUDIBLE] when a and b correspond to non-commuting measurements.
Then I said, OK, and I argued that maybe we can understand this quantum question order effect by looking at algebraic topology, looking at topological analysis of the brain. Why? The quantum question order effect-- weirdly, it seems like there's some weird vectorial, linear algebraic-like process going on inside the brain. That's what it suggests. I have no idea if it's correct or not, I mean, but the data does suggest that.
Where could this funny linear algebra exist? Well, because I was already doing this, I said, hey-- [AUDIO OUT] --this for the purpose of making a quantum algorithm. The topological descriptions have an implicit, in fact actually rather explicit, linear algebraic structure to them, in which the topological state of the brain is described by some vector in some huge honking dimensional vector space. And as that thought process evolves in time, this vector evolves in time.
So if it is the case, as I said before, that somehow these probabilities for falling into this attractor or that attractor are proportional to the overlap between these vectors, or some function of the overlap between these vectors, [AUDIO OUT] --to go. And of course I already had to confess my other motivation, which was that we're going to have a lot-- if we have a lot of data about brain function, we're going to have oodles more data about brain function. We're going to have to apply big data analysis techniques there. I mentioned big data. [LAUGHS] The way to get grant proposals these days is to have either big data or graphene in the title. So with our quantum algorithm we could have a grant proposal that's graphene-based quantum random access memory for the analysis of big data. That's a sure winner.
Anyway, I think that to make sense of what's going on in the brain-- which is very mysterious, certainly very mysterious to me, and not only because I'm more ignorant about it than most of the people in this room, but just because it is an experience-- I think it's important to actually look at different ways of looking at what's happening in the brain. It seems to me that this kind of topological analysis might be helpful. So I thank you all for sitting through this and correcting my mistakes. If this were a class, you would get an extra five points on the final grade for that one. [LAUGHS] And that's it.
[APPLAUSE]