A conversation with Michael R. Douglas
Date Posted:
March 9, 2020
Date Recorded:
February 25, 2020
CBMM Speaker(s):
Andrzej Banburski
Speaker(s):
Michael R. Douglas, Simons Center for Geometry and Physics at Stony Brook University
Description:
On February 25, 2020, CBMM Postdoctoral Fellow Andrzej Banburski took the opportunity to sit down and chat briefly with Michael R. Douglas of the Simons Center for Geometry and Physics at Stony Brook University.
[MUSIC PLAYING] ANDRZEJ BANBURSKI: Welcome to the Center for Brains, Minds and Machines. I'm Andy Banburski, and I'm a postdoc at the center. And I'm joined here today by Michael Douglas. Michael is a professor and a founding member of the Simons Center for Geometry and Physics at Stony Brook, also a researcher at Renaissance Technologies and a previous director of the New High Energy Theory Center at Rutgers. Welcome, Michael.
MICHAEL R. DOUGLAS: Thanks, Andy. It's a pleasure to be here.
ANDRZEJ BANBURSKI: So Michael, I understand that your main work is in string theory. And let's start from the deep end. I understand that you have actually previously applied computational complexity theory to understanding string landscapes. Could you say something about that?
MICHAEL R. DOUGLAS: Sure. Yeah. So there's, again, a very long story here. In string theory, one of the central goals is that it is supposed to be the theory that describes all of the fundamental laws of physics-- that we could derive the standard model, the force of gravity, the quarks and leptons, all from string theory.
Now, as many of you might have heard, it's not as easy as that. And the situation is perhaps best described by an analogy. So the problem we have is that we understand string theory moderately well, but string theory predicts that there are extra dimensions-- six, and in some descriptions, seven extra dimensions. And we don't know their structure-- their topology, their geometry-- very much at all, and we have to work backwards from what we see: what extra dimensions could give rise to that?
And the analogy would be, suppose you were a chemist and somebody told you, here is the law of all of chemistry, Schrödinger's equation. And they would be right. That's the law of all of chemistry.
You can derive all of chemistry from it. But it's a vast project, because there are millions or even billions of different molecules, and sorting all that out-- trying to decide which substances that we see (of course, we don't even see molecules; there are several layers of indirection there) correspond to which solutions of that equation-- is quite challenging. And that's a case where we can do experiments and make all of these molecules. So it's all the more challenging given that, of the many possibilities string theory describes, we only see our universe. We have no direct evidence of other universes. And so, it being such a huge, vast problem, one tries to simplify it.
And I initiated an approach which would sound very obvious and simple to perhaps a computer scientist or a neuroscientist. Well, let's just study the statistics of the solutions. How many have this property? How many have that property?
Maybe if more solutions have property A, or predict that we'll see a certain particle, than the number of solutions that predict that we won't see that particle, we can base a prediction on that. My collaborators and I have gotten a lot of mileage out of that idea. A lot of other physicists really hate the idea and think that the laws of physics should uniquely determine the laws we see. We don't know.
So now, within that context, we can ask: OK, suppose we knew something about these statistics. Well, we have some evidence that, yes, the universe that we see is likely-- it's just this kind of generic universe. Now, how hard would it be to actually find a solution, a structure for the extra dimensions, that realizes that possibility?
And the most basic example of that is that we know that the vacuum has an energy, the so-called dark energy. It's very, very tiny but non-zero. Can we find a vacuum, a solution for the extra dimensions, which at least reproduces that one number, this tiny vacuum energy?
And what I was able to show in 2006, in work with Frederik Denef from Columbia, is that in models very similar to string theory-- and in fact the string theory case is more complicated-- it is an NP-hard problem to find such a vacuum. So we could be in this paradoxical situation that we have good evidence that, yes, string theory describes our universe, but we cannot actually find the solution of string theory that describes our universe.
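[For readers who want the flavor of that result: a toy sketch, my own illustration and not the actual Denef-Douglas construction. In a Bousso-Polchinski-style model the vacuum energy is a large negative bare term plus squared contributions from integer flux quanta, so asking for a vacuum energy in a tiny window becomes a subset-sum-like search, exponential in the number of fluxes. The charges and numbers below are hypothetical.]

```python
import itertools

# Toy Bousso-Polchinski-style model (illustrative assumption): the vacuum
# energy is a bare negative term plus contributions from integer flux
# quanta n_i with charges q_i:
#     Lambda(n) = -L0 + sum_i (n_i * q_i)**2
# Finding fluxes with 0 <= Lambda <= eps is a subset-sum-like search,
# which is the intuition behind the NP-hardness of the general problem.

def find_small_vacuum(charges, L0, eps, n_max=5):
    """Brute-force search over flux quanta; the cost grows exponentially
    with the number of fluxes, illustrating why this gets hard."""
    ranges = [range(-n_max, n_max + 1)] * len(charges)
    best = None
    for fluxes in itertools.product(*ranges):
        lam = -L0 + sum((n * q) ** 2 for n, q in zip(fluxes, charges))
        if 0 <= lam <= eps and (best is None or lam < best[1]):
            best = (fluxes, lam)
    return best

# Example with hypothetical charges: look for a tiny positive vacuum energy.
charges = [0.9, 1.1, 1.3, 1.7]
result = find_small_vacuum(charges, L0=10.0, eps=0.5)
```

With four fluxes this search is instant; the point of the hardness result is that no known algorithm avoids this kind of exponential blow-up as the number of fluxes grows.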
ANDRZEJ BANBURSKI: Wow, that was very interesting. So now that you've gone through this explanation, maybe you can step back historically-- I would like to understand where your interest in computer science came in. I understand that you worked on this thing called the Digital Orrery when you were at Caltech. Can you say something about that?
MICHAEL R. DOUGLAS: Yeah. Yeah, so I had been really equally interested in fundamental physics and in computer science and AI since I was in high school. I avidly read-- I mean, of course, something like a [INAUDIBLE], but, you know, Minsky's books. I had heard of Jerry Sussman, even back then.
So in some ways it's not quite an accident that I wound up doing string theory. But when I chose where to go to grad school-- I went to Caltech-- I had never heard of string theory. And I went there because they had both. They had very strong particle physics, fundamental physics, but they also had very strong neuroscience and this very strong tradition of interdisciplinary work. As you know, Caltech is a little place and so everybody can meet everybody.
And in particular, I visited before deciding where to go, and I went to a course. And this was a very seminal course in the history of-- certainly the relations between physics and computation, and even computer science. It was co-taught by Richard Feynman, the famous physicist; John Hopfield, a famous chemical biophysicist; and Carver Mead, a computer scientist and one of the pioneers of VLSI design.
And they were exploring this question-- what are the relations between physics and computer science?-- and I was fascinated. And I decided to go to Caltech. Each of them gave their own course. They had so much to say that the one course split into three.
And Feynman gave the first lectures about quantum computing. Carver Mead gave lectures about what we now call neuromorphic computing. They were inventing those ideas. And John Hopfield had-- what many of you will have heard of, especially here-- the Hopfield model, something which was directly inspired by physics and statistical mechanics, via spin glasses, but was applied to produce a model of memory-- something that one could analyze using physics techniques.
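[For readers unfamiliar with it, the Hopfield model is easy to sketch in code. A minimal, illustrative version, assuming binary ±1 units, Hebbian outer-product weights, and synchronous sign updates; the stored pattern is made up for the example.]

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products of the stored patterns,
    with the self-connections (diagonal) zeroed out."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=10):
    """Iterate s -> sign(W @ s); stored patterns are fixed points,
    so a corrupted input relaxes back to the nearest memory."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store one pattern, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit
recovered = recall(W, noisy)
```

The physics connection Douglas mentions is that this update rule monotonically decreases a spin-glass-style energy function, which is what makes the model analyzable with statistical mechanics.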
And that same year, Jerry Sussman was coming on a sabbatical from MIT. He spent the year at Caltech to work with people like Peter Goldreich on planetary dynamics and to build a computer, the Digital Orrery, with which we showed that the motion of Pluto in the solar system is chaotic. So it was quite an incredible year.
But at the end of it, for whatever reason-- I had spent a lot of time learning about neural networks and the Hopfield model, and I told myself, well, this is very cool stuff, but somehow it's going to take a while to actually get anywhere. I'm not quite sure what reasoning I applied to decide that. But in any case, that was my attitude.
And then I came back in the fall of 1984, and those of you who know something of the history of string theory-- this was the year of the Green-Schwarz anomaly cancellation, the revolutionary discovery that started the modern era of string theory. And most of the students in particle physics and theoretical physics at Caltech switched to work on string theory, as did I. So indeed, that led to a lot of exciting things, including the work I just described.
But I kept up my interest in AI and computer science, branching out in many directions. There were a lot of very interesting interactions between physics and machine learning, statistics, and statistical physics, in part leading to the topic that I'll discuss in my seminar later today-- the applications of AI and new computational technologies to help us do research in mathematics and physics.
ANDRZEJ BANBURSKI: Before we get into that, I would like to ask you about something else, because you have been very involved with the IHES in France, and I was just curious how exactly that came about?
MICHAEL R. DOUGLAS: OK, very good. So this is a famous institute, especially in the worlds of mathematics and theoretical physics. They have a little bit of biology-- so far, not so much neuroscience or cognitive psychology. But in any case, it was started in the '50s and modeled after the Institute for Advanced Study in Princeton.
And it has had more Fields Medalists associated with it than any other institution or university. So in math terms, this is the center of the universe. And my own relation to it essentially starts in 1997, because I had met a very active professor named Alain Connes, who was both at the Collège de France and the IHES, and he developed a subject called non-commutative geometry-- which, again, is going far afield. But he felt it had applications to physics, and he even had a way to derive the standard model from it.
And in fact, most physicists, I'm sad to say, did not think very much of his work. But for a variety of reasons, I was more sympathetic, and there were various discoveries-- something called a Dirichlet brane, again a very long story-- where you could see the connection. In fact, Edward Witten had pointed this out; I was, in a way, following his lead. According to the equations of the Dirichlet brane, a coordinate on space that describes where you are actually becomes a matrix. And matrix multiplication is non-commutative. So what could be more non-commutative geometry than that?
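[The non-commutativity in question is just the familiar fact that matrix products depend on order. A two-line check, with arbitrary matrices chosen purely for illustration:]

```python
import numpy as np

# Two simple 2x2 matrices (arbitrary choices for illustration).
A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

# A @ B and B @ A differ, so "coordinates as matrices" cannot commute
# in general -- the sense in which D-brane coordinates give rise to a
# non-commutative geometry.
AB = A @ B
BA = B @ A
```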
So if one stood back, that was already enough reason to be interested in talking to Alain Connes. And so they actually, on the strength of that, offered me a job there. And so I went. There were-- again, long story-- personal and family reasons that led me there, and in the end, I didn't take the permanent job. But I kept up a long association with the IHES.
There was a period of 10 years during which I would spend every summer there. I, in fact, talked quite a bit with Maxim Kontsevich, a mathematician who's the only person to win two of the Breakthrough Prizes, in both mathematics and physics. And his insights were very central in some of the work I did after that.
In more recent years, I've been involved in quantitative finance. And I've been, in fact, heading their fundraising organization in the United States, a group called Friends of IHES. And so we raise money to support scientific research, and we provide a way for US citizens or US persons to give a tax-deductible donation. So if you are looking for a worthy scientific institute to give to, this is one to keep in mind. And we have public events to convey some of the great work that goes on at the IHES to American audiences.
ANDRZEJ BANBURSKI: So maybe to cap off, do you think the current AI can be used to do math, or do we need to go beyond it?
MICHAEL R. DOUGLAS: OK, so I think already the current AI can be very useful for mathematicians and physicists. And that will be the body of my talk. But it will not enable computers to do math-- to discover theorems, to create on their own. And in fact, I make a prediction of this sort at the end of my talk. My talk is entitled, "How Will We Do Mathematics in 2030?"
And I do not think computers will achieve a human or even a creative level of mathematics. My prediction is that, right at present, it's hard to even defend the claim that computers are doing creative thought and understanding complex questions in any domain. But math will not be the first.
So, but once we see that computers can, if not do creative thought, at least access facts from a large database and do logical reasoning on those facts in a robust way, then 10 years from that point, computers will start to do what we could call human-level math.
ANDRZEJ BANBURSKI: Thank you, Michael, for this illuminating conversation, and I'm very much looking forward to your talk later this afternoon. And thank you for joining us. If you're interested in more content like this, you can find it on the CBMM website.