MEG/EEG Source Estimation Approaches: A Spectrum of Purpose-Built Optimal Tools
May 8, 2019
May 6, 2019
All Captioned Videos MEG Workshop
Matti Hämäläinen, Massachusetts General Hospital
DIMITRIOS PANTAZIS: Hello, everybody. Thank you for coming for this MEG workshop. My name is Dimitrios Pantazis. I am a principal research scientist at MIT and I also direct the MEG facility.
And together with [INAUDIBLE], we decided to organize this workshop for two reasons, mostly. One is that there are several new users interested in using the MEG facility, which is very exciting. And of course, it would be useful to provide some background on methods and data analysis.
And another reason is that MEG has a decades-long history of method development. Over this time, the methods have become, to some extent, more complex: there are several methods for the same purpose. That can often be confusing and challenging, but there are also great opportunities, given that the algorithms have been tuned to different cases and different experimental paradigms.
So today we'll have three talks focused on different aspects of methodology. The first one will be about localization methods, the great variety of methods that exist, and how they can provide solutions in different circumstances. The follow-up talk will be about decoding and how it can be used to extract information from brain signals, from MEG signals.
And the last one will present connectivity measures, how they can reconstruct brain networks, and how we can infer how different brain areas communicate with one another. And the workshop will conclude with a demonstration of the Brainstorm software that we use to analyze data. That's not the only software available; I will actually mention that there are many valuable packages for analyzing data.
The presentation will be focused on users who want to collect data in our facility and analyze the data. But it will also be useful for other interested researchers who want to get a general idea of what MEG data analysis involves. So without further delay, let me introduce the first speaker, Matti Hamalainen, who is Professor of Radiology at Harvard Medical School and Director of the Magnetoencephalography Core in the Department of Radiology at MGH.
He is one of the pioneers in the application of MEG, in conjunction with other non-invasive functional and anatomical imaging methods, to study human brain function. He has had a crucial role in developing whole-head MEG instrumentation, analytical methods and tools, as well as experimental protocols, which have together paved the way for MEG becoming an important basic research and clinical tool worldwide. And let me also emphasize that, in my opinion, it's hard to find another researcher who has had as much or more impact on MEG than Matti, because of his valuable contributions over all of these years.
And if I were to select three contributions that, to me, really made a difference, he was the key person to introduce minimum norm estimates in the MEG field. And we use minimum norm estimates all of the time to reconstruct brain activity in the cortex. So it was a very valuable contribution.
In the '80s and '90s, he contributed software that became the foundation of Neuromag, the main MEG company, or one of the main companies, over all these years. And his software was at the heart of our analysis. Also, Matti was the developer and the principal investigator of the MNE software, which I will not be describing today. Apologies for this. But it's among the most robust packages and the only one with a Python implementation. So that's really unique.
And most important of all, Matti was also instrumental in helping me end up in my position here at MIT. And forever, I will be grateful. So thank you, Matti. And please come over.
MATTI HAMALAINEN: Thank you, Dimitrios, for this kind introduction. So the title of my talk says, maybe a little bit in an advertising way, that the source estimation approaches constitute a spectrum of purpose-built optimal tools. And you will see, during this talk, what that means. In the corner, you will see me using an optimal tool for [INAUDIBLE] on a spring Sunday, because I am wearing a collared shirt.
So the whole point of source estimation in MEG is that, instead of dealing with the measurements on the MEG helmet, we want to know what the brain activity inside is. That is, we want to make an estimate not only of the signals, but of the sources in the brain. And this talk is all about what kind of information we use to do this and how exactly it is done.
And especially, I have several examples of discoveries made with different source estimation methods, which I hope will illustrate that different cases need different tools. There are nails and screws. And, like in computers now, there are many kinds of screws, with different kinds of bits needed to loosen and tighten them.
So my overall plan is first to talk at some length about the relationship between sources in the brain and the MEG fields that we measure [INAUDIBLE]. Then I will discuss, in particular, the use of dipole models and what kind of discoveries have been made with them. And then I will talk about distributed source estimation and come to some conclusions.
But first, about the sources and fields-- so the sources are inside the brain and the fields are measured, either on the MEG helmet or with EEG electrodes on the scalp. And in this sense, MEG and EEG are very much related. The only difference is that EEG measures the electric potential on the scalp and MEG measures the magnetic field outside the head.
But in both cases, the original sources of these signals are the same: they are the neural currents. Sometimes you'll hear that EEG is produced by volume currents. But physically speaking, that is wrong.
The volume currents and EEG are like two sides of a coin, which cannot be separated from each other. So both are generated by the same sources.
They both have a reasonable spatial resolution. If you think of spatial resolution in terms of being able to distinguish two sources, the rule of thumb is that the sources need to be a couple of centimeters apart to be distinguishable from each other. And, especially, MEG and EEG are real-time measures of the actual brain activity. That is, they record millisecond by millisecond what the current distribution in the brain is.
And from this, it follows that you can, for example, derive frequency-specific measures of association, that is, connectivity, so that we can calculate connectivity in different frequency bands. And in fact, we recently published a paper where we studied the evolution of certain network characteristics in the brain, not over short times, but over years, as a function of age. And we found that the beta-band activity, around 20 Hz, behaves differently from the higher-frequency activity.
And if you think about fMRI, these frequency bands are lumped together, so MEG and EEG potentially give a more colorful picture of the brain activity. And in particular, the maturation in the beta band was linear, whereas in the gamma band it basically saturates over time.
Another consequence of being able to measure this millisecond-by-millisecond brain activity is that we can relate, with the help of biophysical models, the MEG and EEG measurements to models of the neural circuits. So we can use a model of a neural circuit and make a proposition about what might be the microscopic origin of the time courses of the currents in the brain that we observe.
Another consequence of this kind of a model is that the data can be compared across species. So that the same model can be, in certain cases, used to predict the animal invasive electrophysiology. And this is the work I have been involved with, working with Stephanie Jones and Chris Moore at Brown University. And we have actually recently published a paper in which we looked at the same phenomenon in not only humans, but also in monkeys and mice and were able to show that there is this same basic neural level mechanism that can explain the signals in all three species.
Now what are the actual causes of MEG? So this audience, of course, knows very well about the action potentials and the postsynaptic potentials that follow in the cells. And the time courses of these are such that the action potential time course is very fast, whereas the postsynaptic potential time course is slower.
Another important difference is that the currents related to the action potential correspond to an opposing current pair, with current flowing forward and backward. And that's why it's difficult to see anything from the action potentials far away: the signals from these two currents cancel each other.
So we believe, on the basis of this current configuration and the time courses, that the MEG and EEG signals on or outside the head are mostly due to postsynaptic currents. The amplitudes of these signals [INAUDIBLE]: the EEG amplitude is between 110 and 200 microvolts, and the MEG amplitudes are between 1 femtotesla (10 to the minus 15 tesla) and 3 picoteslas. It's difficult, of course, to convey how small these are.
But one figure of merit is that in [INAUDIBLE] the white-noise level is about 2 femtoteslas per unit bandwidth, that is, per square root of hertz. Meaning that if we measure with a 10 hertz bandwidth, the noise is about three times higher. So these signals are, in fact, quite reasonable to measure. Regarding how many cells in the cortex might be responsible for the signals [INAUDIBLE] we observe, we made an estimate early on that about half a million synapses might be required to produce a current dipole of 10 nanoampere meters (nAm), which is the order we typically measure in MEG.
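The bandwidth arithmetic here can be sketched numerically; the numbers below are just the approximate figures quoted in the talk, not instrument specifications:

```python
import math

# Approximate white-noise level of a SQUID magnetometer, per unit bandwidth.
noise_density_fT = 2.0            # fT per sqrt(Hz), figure quoted in the talk
bandwidth_Hz = 10.0               # measurement bandwidth

# Noise amplitude grows with the square root of the measurement bandwidth.
noise_fT = noise_density_fT * math.sqrt(bandwidth_Hz)
print(noise_fT)                   # about 6.3 fT, roughly 3x the per-sqrt-Hz figure
```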
Later on, Murakami and Okada used a much more sophisticated model. And they came to the conclusion that the number of cells is probably about 10 times smaller, so that tens of thousands of cells are probably synchronously active when we observe these kinds of signals.
Another important piece of basic-level information is that it has been shown, with measurements and also with models, that the current dipole moment density per unit surface area is pretty much constant across species and also across brain structures; the range is maybe from 0.1 to 1 nAm per square millimeter. Meaning that you can make a rough estimate of how big an area of the cortex is active once you have a current amplitude: let's say 20 nAm would mean about 20 square millimeters or larger.
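As a numeric sketch of this rule of thumb (the amplitude and the density range are the approximate figures quoted in the talk):

```python
# Back-of-the-envelope estimate of active cortical area from a fitted dipole
# amplitude, assuming a roughly constant dipole-moment density.
q_nAm = 20.0                        # fitted current-dipole amplitude, nAm
density_lo, density_hi = 0.1, 1.0   # nAm per mm^2, approximate empirical range

area_min_mm2 = q_nAm / density_hi   # smallest patch consistent with the amplitude
area_max_mm2 = q_nAm / density_lo
print(area_min_mm2, area_max_mm2)   # 20 mm^2 up to 200 mm^2
```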
So it is very important to realize that, in this sense, MEG and also EEG give quantitative estimates of the actual currents in the brain. We are not looking at some kind of statistical test variable; we are looking at a physical quantity. And you can actually probe that physical quantity with direct measurements from the brain.
So in that sense, there is something really to grasp when you look at MEG signals and especially at their sources. And therefore, you can also get the sense whether something is signal or noise from these amplitudes, because there are some typical amplitudes which you should observe.
Now to understand how these signals then are converted to the measured fields, we need what is called the forward model. That is, a solution of Maxwell's equations in the case of the head. And if you look around, it's very clear that the sphere model is a good approximation.
Here is a typical subject. And then you will fit the sphere to his head and it's perfect-- perfect fit. You'll notice that there are these different layers on the sphere. And they correspond to layers of different conductivity in the head. That is the scalp, the skull, also the CSF and then, the brain.
And now it happens to be so that, if you replace this multilayered structure with a single-layer, homogeneous sphere, it turns out that MEG remains unchanged between these two situations, whereas EEG is clearly different. And this means that you can accurately compute MEG with less information about the actual conductivities in the head. So, in the sphere model, the conductivity profile really doesn't matter and you don't need to take care of these layers.
Even when you go to the next level of approximation, the boundary element model, the situation is such that MEG has a practical benefit. And that is caused by the fact that the skull, which lies between the brain and the scalp, has a very poor conductivity. It's almost an insulator.
And we came up in the '80s with the proposition that maybe you can get quite an accurate result actually by replacing the skull [INAUDIBLE] perfect insulator. And in fact, it turns out to be so. And as I mentioned, this was proposed in 1989.
And then [INAUDIBLE] and co-workers in the 1990s made an experiment where they measured MEG and EEG signals from a swine. They actually made an opening in the skull, removed a flap of skull, and saw whether the MEG signals changed. And they didn't change much. And then they put the flap back and were able to repeat the experiment.
So with this in mind, let's see what follows from the fact that the head is almost spherical. And what follows from it is that radial currents don't contribute to the magnetic field B outside the head. That is, for any current that is flowing in the direction normal to the skull, we don't get any magnetic field outside.
On the other hand, if you look at the cortex, there are these large pyramidal neurons, densely packed. And they have dendrites with a very preferred, specific direction, normal to the cortex. And this geometry guides the currents so that they are normal to the cortex.
From that it follows that if you look at a fissure, the currents normal to the cortex are [INAUDIBLE] to the scalp, and therefore both MEG and EEG are non-zero. Whereas at the crest of the [INAUDIBLE] and the [INAUDIBLE] of the [INAUDIBLE], the currents are radial and you don't see any MEG signal from these locations.
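The silence of radial sources in a spherical conductor can be checked directly with the Sarvas (1987) formula for the field outside a spherically symmetric conductor. This is only a minimal numerical check; the source depth, sensor position, and dipole amplitude below are made up for illustration:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def sarvas_field(q, r0, r):
    """Field at sensor r from dipole q at r0 inside a conducting sphere
    centered at the origin (Sarvas 1987)."""
    a_vec = r - r0
    a = np.linalg.norm(a_vec)
    rn = np.linalg.norm(r)
    F = a * (rn * a + rn**2 - r0 @ r)
    gradF = (a**2 / rn + (a_vec @ r) / a + 2 * a + 2 * rn) * r \
            - (a + 2 * rn + (a_vec @ r) / a) * r0
    return MU0 / (4 * np.pi * F**2) * (F * np.cross(q, r0)
                                       - (np.cross(q, r0) @ r) * gradF)

r0 = np.array([0.0, 0.0, 0.07])    # source 7 cm from the sphere center (m)
r = np.array([0.02, 0.03, 0.11])   # sensor just outside the "head"

b_radial = sarvas_field(np.array([0.0, 0.0, 1e-8]), r0, r)      # radial 10 nAm dipole
b_tangential = sarvas_field(np.array([1e-8, 0.0, 0.0]), r0, r)  # tangential 10 nAm dipole
print(np.linalg.norm(b_radial), np.linalg.norm(b_tangential))   # zero vs. ~100 fT scale
```

The radial dipole gives exactly zero field outside, whatever the conductivity profile; the tangential one gives a field on the order of the femtotesla amplitudes quoted earlier.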
And in fact, in the MNE (minimum-norm estimate) paper that Dimitrios mentioned, we made the prediction that it would be desirable to improve the minimum-norm estimates by finding ways to inject some a priori knowledge or assumptions of the experimenter. For example, one could confine the integration area to the cortex. And this was made practically possible by the work of [INAUDIBLE] and then later, [INAUDIBLE], because now we are able to automatically construct a geometrical description of the cortex and use that as the basis of the source estimation.
And as you know, the cortex can be inflated so that we can actually look at the fissures. This is important for MEG, because the signals come from the fissures.
So to repeat, for MEG: if the source is tangential, the current is flowing in this direction, and we have, around the [INAUDIBLE] direction of the current, an outgoing and an ingoing magnetic field. We get the MEG for the tangential part. For the radial part, which is pointing towards you, we don't get any MEG. And if the source is tilted somewhere halfway, we get a fraction of the first pattern.
Whereas for EEG, we get an orthogonal pattern for the tangential source, which is pointed in this direction. We get the peak for the radial source. And when the whole source is tilted, we have a combination of the two.
So MEG is composed of only one prototypical field pattern, whereas EEG has two. And this complicates the analysis of EEG on the basis of just looking at the signals. On the other hand, if we look at the cortex and plot [INAUDIBLE] the signal amplitudes at different locations, the situation is the following.
I will come here so that I can point better to the correct location. So here is the [INAUDIBLE] fissure. And this is the motor cortex, and this is the somatosensory cortex. If you go down from the crest of the [INAUDIBLE], where the source is radial so that we don't see any MEG, the MEG signal grows very quickly as the source turns a little bit tangential. And then, finally, it is zero again at the trough of the [INAUDIBLE].
EEG, on the other hand, is strongest here at the crest of the [INAUDIBLE], and it goes down continuously as the source moves down. This means that EEG is dominated by these radial currents. And that explains why, in many cases, MEG has been very useful in understanding what is going on: there are situations where we are interested in what is happening in the [INAUDIBLE] and not so much in the background activity in the [INAUDIBLE], which has a strong component in EEG.
For example, it was long believed that the auditory late evoked responses are actually generated by something happening in the frontal lobe, which would be consistent with a radial source pointing into the brain at the front, because you see a big negative component in the frontal part of the head. But MEG clearly pointed in the direction that the sources are in the two auditory cortices, because you saw these two separate field patterns. And you could see clearly the activity from the two auditory cortices without further ado from the MEG measurements.
And it's curious that there was a paper in the '70s suggesting that this might be the case also for the EEG. But since the frontal lobe explanation was so much fancier, nobody believed this paper, which was very technically competent. So don't always believe what other people say.
So then it's important also to understand what we measure. In the MEG system you have here, the [INAUDIBLE] VectorView, we have these triple sensor units. And there is a reason for having these triple sensor units, which contain a magnetometer and two planar gradiometers. The magnetometer measures the magnetic field profile with a plus and a minus on either side of the [INAUDIBLE] source. But if you take a derivative of this by measuring the planar gradient, then you get the peak above the source. And therefore, as a first approximation, it's easy to look at the planar gradiometer patterns and see approximately where the sources are.
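A one-dimensional sketch of this point, with a made-up source depth and an idealized antisymmetric field pattern: the magnetometer extrema flank the source, while the spatial derivative that a planar gradiometer measures peaks directly above it.

```python
import numpy as np

d = 0.03                           # source depth below the sensor line (m), illustrative
x = np.linspace(-0.1, 0.1, 401)    # sensor positions along a line over the source

# Idealized magnetometer pattern of a tangential dipole: antisymmetric, zero
# directly above the source, with a positive and a negative lobe on each side.
b = x / (x**2 + d**2) ** 1.5

# A planar gradiometer measures the spatial derivative of this pattern.
grad = np.gradient(b, x)

print(x[np.argmax(np.abs(b))])     # magnetometer extremum is offset from the source
print(x[np.argmax(np.abs(grad))])  # gradiometer peak sits at x = 0, right above it
```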
And for example, you could very directly see that the background activity, the spontaneous rhythms, are modulated at specific regions by different conditions. There are eyes-closed, eyes-open, move-left, and move-right conditions. And you can see that in the eyes-closed condition, the 10 hertz component [INAUDIBLE] much stronger in the occipital sensors than in the eyes-open condition.
And similarly, this two-peak structure over the motor cortex is clearly dampened by movement, that is, by finger movements in this case. So these background activities are differently modulated by external events. And you can sometimes see it directly [INAUDIBLE].
Now, that was about halfway through the talk, and useful background, I would say. Now let's go to MEG and EEG source estimation. Almost everybody knows that there is an inverse problem in MEG and EEG. But what does this mean?
It means that in the brain we have currents, and then [INAUDIBLE] MEG and EEG, and the relationship between the currents x and the measurements y is governed by the forward solution, by Maxwell's equations. There's nothing particularly exciting about this relationship; it just requires skill and care to calculate the gain matrix correctly. In addition, there's, of course, noise.
But the problem is that once we have the measurements y, the noise, and the gain matrix, we want to calculate the inverse estimate. We want to calculate what might be the current underlying this measurement. And unfortunately, this problem is ill-posed.
This means that there are potentially many current distributions which explain the same data. And even more importantly, the solutions may be sensitive to noise. So we need to do something about this.
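This non-uniqueness is easy to demonstrate with a toy gain matrix that has more candidate sources than sensors; all the sizes and values here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 10, 50                   # far more sources than measurements
G = rng.standard_normal((n_sensors, n_sources)) # toy gain matrix

x_true = np.zeros(n_sources)
x_true[5] = 1.0                                 # one "active" source
y = G @ x_true                                  # noiseless measurements

# Any current in the null space of G is invisible to the sensors.
null_basis = np.linalg.svd(G)[2][n_sensors:]    # rows spanning the null space
x_alt = x_true + 3.0 * null_basis[0]            # a very different current pattern

print(np.allclose(G @ x_alt, y))                # True: identical data, different sources
```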
But to illustrate this further, the situation is like in this, my favorite cartoon, where this dinosaur has died a long time ago. And then the paleontologists come up to pick up the bones. And then they make a source reconstruction of the dinosaur, which is almost successful.
And now, unfortunately, many people take this as being not at all successful. But in fact, it is in many ways successful. You can see that the dinosaur stands on two feet and has a tail. But some parts are just misplaced. And once you know which parts are likely to be misplaced by the MEG/EEG source reconstruction, you can put the source reconstructions to good use.
The reason why there are many possible solutions is that there are silent sources. We already saw one: the radial current in the sphere that doesn't produce any MEG. On the other hand, if there are current loops in the brain, which is unlikely, then there will be no EEG.
And the third possibility may actually occur: if there is a closed surface and a uniform current distribution throughout the surface, then there will be neither MEG nor EEG. So this is a little bit depressing. But let me put it positively by saying that we have many ways to make the problem unique.
And these many ways are characterized by how they behave in different situations and what kinds of assumptions people make. Parametric models and current distribution models are what I am going to talk about. In parametric models, we typically assume that there is a limited number of dipoles. Therefore the problem of finding the locations, orientations, and amplitudes is overdetermined: there are more measurements than source parameters. And we obtain a solution by a least-squares fit.
In current distribution models, on the other hand, we have many sources on the volume or on a surface. And therefore, we need an additional constraint. We select what kind of an estimate we prefer.
And in the minimum-norm estimate, we impose the condition that the overall power of the current is minimized. In sparse estimates, on the other hand, we typically use the L1 norm, the sum of the absolute values of the currents.
It's important to realize that neither the L2 MNE nor the L1, so-called minimum-current, estimate is based on physiology; maybe the L1-norm estimate comes a bit closer. The L2-norm estimate is characterized by its ease of computation, and maybe the L1-norm estimate by the fact that the dipole models have been so successful. So there isn't really a deep justification for using either of these models. But they are still useful in practice.
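A small numerical contrast between the two estimates, with a random toy gain matrix and two focal sources. The L1 solution is computed here with ISTA, a basic proximal-gradient scheme; the matrix sizes and the regularization weight are arbitrary, and none of this comes from any particular MEG package:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((20, 100))     # toy gain: 20 sensors, 100 sources
x_true = np.zeros(100)
x_true[[10, 60]] = 1.0                 # two focal sources
y = G @ x_true

# L2 minimum-norm estimate: the smallest-power current that explains the data.
x_l2 = G.T @ np.linalg.solve(G @ G.T, y)

# L1 ("minimum-current") estimate via ISTA, iterating a gradient step and a
# soft-threshold step.
lam = 0.1                              # regularization weight, arbitrary
t = 1.0 / np.linalg.norm(G, 2) ** 2    # step size from the Lipschitz constant
x_l1 = np.zeros(100)
for _ in range(5000):
    z = x_l1 - t * G.T @ (G @ x_l1 - y)
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

print(np.sum(np.abs(x_l2) > 0.05))     # L2 smears energy over many sources
print(np.sum(np.abs(x_l1) > 0.05))     # L1 concentrates it on a few
```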
Before moving to the actual estimates, let me repeat some terms. We have seen that it is reasonable to take the elementary source in the brain to be a current dipole, and to think of the current as oriented normal to the cortex because of the organization of the neurons. The inverse model, on the other hand, can be defined as the definition of what kind of an optimal solution we want.
And then, we often hear about focal sources, which means activated areas of small extent, or extended sources, activated areas of larger extent. And it should be said already now that it's difficult to differentiate between these two. "Distributed sources" I would reserve for the concept of having multiple extended focal sources in different areas of the brain. And finally, the forward model is the recipe for solving Maxwell's equations in the case of the head.
So, about the parametric model: I have entitled this section "discoveries with dipole models." The idea in the dipole models is really that we are so far from the actual sources that the neural currents on a few-square-centimeter patch of cortex look like a dipole from a distance. This "look like" is important to understand.
This means that we replace this patch of activation with an equivalent dipole. We don't think that there are dipoles in the brain. But we think that this is a useful description of the activity of a patch.
And typically, we fix the dipole locations over time and let the amplitudes vary over time. This type of model was promoted early on by Mikhail [INAUDIBLE], but by [INAUDIBLE] also.
And in fact, the idea of a dipole was pointed out very early on in the MEG field, as mentioned in this early somatosensory [INAUDIBLE] field paper from [INAUDIBLE] group: it looks like, on the scalp, we really have a field pattern that could be accounted for by a dipole. And talking about focal versus extended, here is a [INAUDIBLE] example. In a simulation, we activated an area like this on the auditory cortex and then found the best-fitting dipole. And in fact, this best-fitting dipole explains 99.9% of the variance in the absence of noise.
So you cannot differentiate this kind of activity of a patch, which you see in red in the MRIs, from the dipole [INAUDIBLE] you see in [INAUDIBLE]. The fact that the dipole is not quite [INAUDIBLE] is, of course, that it's on the parietal lobe side in this inflated view of the brain. And therefore, it's actually very close to the red patch in the cortex.
Now, the challenge in this type of dipole estimate is the fitting, finding the best solution in the least-squares sense. That is, we compare the measured data to that of the model, calculate the [INAUDIBLE] norm, and then move the dipole moments, q_p, and their locations, r_p, so that we obtain the best fit. In principle, this is done so that you select the number of dipoles and select initial guesses.
Then you can use a linear fit for the amplitudes: for fixed locations and orientations of the dipoles, this determines the smallest least-squares error. If that error is the same as in the previous step, you stop. Otherwise, you find better candidates for the dipole locations and repeat.
Unfortunately, three of these steps, selecting the number of dipoles, selecting the initial guesses, and finding better locations, are not trivial things to do with multiple dipoles in the model, because this turns out to be a non-linear inverse problem.
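A minimal sketch of this fitting loop for a single source in a toy one-dimensional "head": the amplitude is solved linearly at each candidate location, and the location search (here just a grid scan) is the non-linear part. The lead field, the sensor positions, and the noise level are all made up for illustration:

```python
import numpy as np

sensors = np.linspace(-0.1, 0.1, 32)   # sensor positions along a line (m)

def lead_field(p, depth=0.03):
    """Toy lead field: sensor pattern of a unit source at lateral position p."""
    x = sensors - p
    return x / (x**2 + depth**2) ** 1.5

rng = np.random.default_rng(2)
p_true, q_true = 0.024, 15e-9          # "dipole" location (m) and amplitude
y = q_true * lead_field(p_true) + rng.normal(0.0, 1e-7, sensors.size)

# For each candidate location, the best amplitude is a closed-form linear
# least-squares solve; only the location search itself is non-linear.
best = (np.inf, None, None)
for p in np.linspace(-0.1, 0.1, 2001):
    g = lead_field(p)
    q = (g @ y) / (g @ g)              # least-squares amplitude at this location
    err = np.sum((y - q * g) ** 2)
    if err < best[0]:
        best = (err, p, q)

err, p_hat, q_hat = best
gof = 1.0 - err / np.sum(y**2)         # goodness of fit, as used with dipole models
print(p_hat, q_hat, gof)
```

With a reasonable signal-to-noise ratio, the recovered location and amplitude land close to the true values, and the goodness of fit stays near one.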
So therefore, people have developed some partly heuristic strategies for using the multidipole model. This is not as bad as it sounds, because with this kind of approach you model and understand your data at the same time. The modeling process is not separate from the understanding.
Whereas when you do distributed source estimates, or a typical analysis of fMRI data, you first do the analysis and then you try to understand what the analysis means. Heuristic just means that, in the dipole analysis, you typically model and try to understand at the same time.
And typical strategies that have been used are to select time points where only one dipole is active, which often works very well with sensory responses, because early on only the primary area is active, and then you can add more sources. Or to use only part of the data to get initial guesses for the dipoles, because the dipole field patterns are confined in space.
That is, you construct the model dipole by dipole. And what that means is the following. Let's look at this case of somatosensory evoked responses.
So early on at about 20 milliseconds time point, there is only activity in the primary somatosensory cortex. And it's almost a perfect focal source in the somatosensory cortex. And you can fit the dipole at that time point.
But if you look at the data carefully at 35 milliseconds, 15 milliseconds later, you see that the data at the sensors are different. And in fact, if you add at 35 milliseconds another source, which is this darker yellow curve here, then you retain the 20 millisecond source and get a new source at 35 milliseconds. But if you look at the goodness of fit at later latencies, you don't explain the data at all. That is because these two other areas become active. And when they are added to the model, you get a nice, about 90% explanation of the data all the way through, except for times where the signal goes down.
Now, it's important to notice that the N20 and P35 components point in different directions; you see these two orange arrows in the primary somatosensory cortex here. This means, actually, that the N20 component likely reflects input from the thalamus to the deeper layers of the cortex, which causes the current to flow forward. And then there are corticocortical connections at the later time that provide input to the superficial layers, which causes the current to go [INAUDIBLE]. So in this sense, the current directions are important.
Now, when we had the first whole-head instrument, we found, almost by accident, that in addition to this primary somatosensory cortex [INAUDIBLE] there was later activity very close, but more posterior, to the primary somatosensory cortex. And we found that this activity came systematically from the so-called posterior parietal cortex; it was systematically posterior to the primary somatosensory cortex. And that can be shown here also in, from today's point of view, primitive MRI surface reconstructions.
And we realized very early that if you measure all over the head, you really get qualitatively new information from the human brain. And this is, I think, a challenge for any of these new sensor technologies that are coming online.
Do they provide qualitatively new information? Is it so that whatever you measure, you'll get something new? And that remains to be seen.
But anyway, we measured, for example, the activity in a picture-naming task, so that we could see the activity in different areas of the brain. It also happened that when we tried to see what eye blinks look like in the whole-head MEG instrument, we saw activity in the back of the head that was systematically related to the eye blink and was systematically located at the posterior parietal-occipital junction. And it turned out that a viable explanation for this is that this area somehow maintains our visual reality, so that the missing observation during the blink is corrected for afterwards.
It's interesting that this was half an accident, but it then led to a Nature paper; we were able to connect the dots. So it's important to be open-minded when you look at your data. If we had looked only at the frontal sensors, we would never have observed this.
Another case was that we similarly wanted to see what saccades look like. And it turned out that, related to the saccades, we have activity very low in the sensor array, which corresponds to activity in the cerebellum. And the cerebellum activity, which is related to [INAUDIBLE] movements, remained the same, independent of whether the lights were on or off in the room. Whereas the posterior parietal cortex activity, which again had this maintenance role, disappeared when it was dark in the room. And we saw in several subjects that the time courses were similar, and the source locations were reasonably similar also.
And we did all kinds of experiments. For example, we let subjects imitate orofacial gestures. And we were able to follow the activity from the occipital lobe to the STS [INAUDIBLE], the parietal lobe, the frontal lobe, and finally, the motor cortex. Interestingly, in subjects with Asperger syndrome, there was a delay in this frontal cortex activity, probably related to the difficulty of relating to other people's actions.
And we also looked at the locations of spontaneous activity with dipole models and found that, related to movements, the 10 and 20 hertz activities behave differently in time. And interestingly, only the 20 hertz component of the so-called mu rhythm in the somatomotor cortex follows the homunculus; the 10 hertz component does not. And this is probably related to the importance of the hands in our life.
So I hope that, with this big potpourri of things, I have shown you that, by correctly using the dipole models, many interesting things can be found in the brain. Dipole models can be used in a wide variety of situations. And in fact, the multidipole model can be considered an interactive hypothesis-testing tool.
So you would build the model; check the model's significance, that is, ask the question, should we reject this model; and check the parameter significance, that is, compute confidence intervals as well. But for all of this to succeed, we need a reasonable estimate of the noise.
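As a hedged sketch of the model-rejection step: assuming Gaussian sensor noise with a known per-sensor standard deviation (the real workflow uses a full noise covariance estimated from baseline or empty-room data), the residual of a dipole fit can be tested with a chi-square statistic. All values here are simulated for illustration.

```python
import numpy as np
from scipy import stats

# Toy model check: with Gaussian noise of known std, the residual sum
# of squares of a fitted dipole model follows a chi-square distribution,
# which yields a p-value for the question "should we reject this model?"
rng = np.random.default_rng(7)
n_sensors = 100
sigma = 1.0                            # assumed per-sensor noise std

predicted = rng.standard_normal(n_sensors)      # model-predicted field
measured = predicted + sigma * rng.standard_normal(n_sensors)

dof = n_sensors - 5                    # sensors minus fitted dipole parameters
chi2 = np.sum(((measured - predicted) / sigma) ** 2)
p_value = stats.chi2.sf(chi2, dof)
print(p_value)
```

A very small p-value would indicate that the residual is too large to be explained by noise alone, so the dipole model should be rejected or extended.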
And it's important to know this, that in the dipole models, cortical constraints are usually not applied. And this is because if you don't apply the constraint, the dipole positions and orientations can compensate for an inaccurate forward model. And it turns out that then the time courses of the sources are more accurate and don't mix with each other.
And then you just need to interpret the dipole locations correctly. And it often turns out that they are a little bit on the white matter side. But that just means either that the forward model is not exactly accurate, or that the source activity is actually not very focal.
And again, the dipoles are equivalent sources. The information about the extent of the source is only indirect, in the amplitude of the source through the [INAUDIBLE] constant. One characteristic of these dipole examples is captured by a quotation attributed, supposedly, to my countryman Ragnar Granit: "Statistics is not really necessary. I only conduct experiments in which the result is clear." So in all of these experiments, we were able to essentially look at individual data.
Now, as a result of a long sequence of development, these anatomically and functionally constrained source estimates came into use. And we already saw something related: the ability to reconstruct the cortical surface automatically from the MRIs. In particular, in the minimum norm solutions, we have a grid of dipoles, usually on the surface, and we have many more sources than measurements.
And therefore, we need another optimality criterion besides the one that compares the measurements, y, to the predicted measurements, G times the dipole amplitudes. This criterion is added to the cost function, and the simplest choice gives the minimum norm estimate.
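As a minimal sketch of that closed-form solution, with a toy random gain matrix standing in for a real forward model (all sizes and the regularization value are illustrative assumptions):

```python
import numpy as np

# Minimum norm estimate (MNE) sketch: minimize ||y - G x||^2 + lam ||x||^2,
# whose closed form is x_hat = G.T @ inv(G G.T + lam I) @ y.
rng = np.random.default_rng(0)
n_sensors, n_sources = 10, 50          # more sources than measurements
G = rng.standard_normal((n_sensors, n_sources))

x_true = np.zeros(n_sources)
x_true[7] = 1.0                        # one focal source
y = G @ x_true

lam = 1e-2                             # regularization strength (assumed)
x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

# The estimate explains the data well but is spatially spread out,
# even though the true source was a single point.
print(np.count_nonzero(np.abs(x_hat) > 1e-6))
```

This illustrates the key trade-off: the data are explained almost exactly, but the energy is distributed over many source locations.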
But there are other possibilities also. Of course, we talked about the discrete dipole model before. But in addition to the minimum norm estimate, we can obtain a sparse estimate by minimizing, instead of the sum of the powers of the currents, the sum of the amplitudes of the currents. And we can do more complicated things with the so-called mixed norm estimates, which can be sparse in space but smooth in time.
So this illustrates the situation. The minimum norm solution is smooth both in time and space, so every point in space and time has a non-zero value. When we use the minimum current estimate, the L1 estimate, different discrete locations in space are active at different times.
But we don't really like this very much, because the activity tends to move around in the vicinity in a jerky way. And then, unfortunately, these minimum current estimates have been used with an extra step that smooths the sparse estimate across the neighboring locations.
Then with [INAUDIBLE] we introduced these so-called L21 mixed norm estimates, in which the solution is sparse in space and continuous in time, in the sense that if a source is active at a certain location, it remains more or less active over time. And finally, there is a variation of this, the time-frequency mixed norm estimate, in which we allow the sources to come on and off, so that they can be periodically active in time.
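The difference between the L2 and L1 penalties can be sketched on a toy problem; the L1 solution is computed here with plain ISTA iterations, one standard solver for such problems (sizes, regularization values, and iteration count are all illustrative assumptions):

```python
import numpy as np

# Contrast the L2 (minimum norm) and L1 (minimum current) penalties
# on the same underdetermined toy problem.
rng = np.random.default_rng(1)
n_sensors, n_sources = 10, 50
G = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[[5, 30]] = 1.0                  # two focal sources
y = G @ x_true

# L2: closed-form minimum norm estimate
lam2 = 1e-2
x_l2 = G.T @ np.linalg.solve(G @ G.T + lam2 * np.eye(n_sensors), y)

# L1: ISTA, i.e. gradient step followed by soft thresholding
lam1 = 1.0
step = 1.0 / np.linalg.norm(G, 2) ** 2
x_l1 = np.zeros(n_sources)
for _ in range(500):
    z = x_l1 + step * G.T @ (y - G @ x_l1)
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)

# The L1 estimate is sparse; the L2 estimate is non-zero everywhere.
print(np.count_nonzero(np.abs(x_l1) > 1e-3),
      np.count_nonzero(np.abs(x_l2) > 1e-3))
```

The L1 solution picks out a few discrete locations, which is exactly the sparse-in-space behavior described above; the mixed norms then add the smooth-in-time constraint on top of this.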
So early on, when we used the minimum norm estimate, nobody believed it. Even though in this visual experiment, where different octants of the visual field are stimulated, you can see symmetry between left and right and also between the lower and upper visual fields, this was not yet very convincing.
I think people became convinced when the MNE was, so to speak, modernized. So first of all, the source locations and orientations are constrained to the cortical mantle. The forward solution is calculated with the boundary element model.
We use better noise estimates. And we display the data on an inflated cortex to reveal the sulci. And finally, we compute statistics, convert the data to a statistical test variable, and are able to compute combined MEG and EEG solutions and also fMRI-guided solutions.
And we already saw this benefit of constraining the orientation of the sources. If we compare the minimum norm solution and the so-called dSPM, which is a noise-normalized statistic: without the orientation constraint, MNE tends to be very superficial, because MNE favors small currents, and the smallest currents that explain the data are obtained when the sources are close to the sensors. Whereas when we constrain the currents to be normal to the cortex, we get a more reasonable solution in this auditory measurement, with the peak close to the auditory cortex.
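The noise normalization behind dSPM can be sketched as follows; this is only the core idea, with toy matrices and an identity noise covariance as assumptions, not the full implementation found in packages such as MNE-Python:

```python
import numpy as np

# dSPM sketch: divide the minimum norm estimate at each source by its
# noise standard deviation, obtained by propagating the sensor noise
# covariance C through the linear inverse operator W.
rng = np.random.default_rng(2)
n_sensors, n_sources = 10, 50
G = rng.standard_normal((n_sensors, n_sources))
C = np.eye(n_sensors)                  # assumed sensor noise covariance

lam = 1e-2
W = G.T @ np.linalg.inv(G @ G.T + lam * C)   # inverse operator

y = G[:, 7] + 0.1 * rng.standard_normal(n_sensors)
x_mne = W @ y

# Per-source noise variance: diag(W C W.T)
noise_std = np.sqrt(np.einsum('ij,jk,ik->i', W, C, W))
x_dspm = x_mne / noise_std             # unitless statistic per source
print(x_dspm.shape)
```

Because the result is a statistic rather than a current amplitude, the depth bias of the raw MNE is reduced, which is part of why the "modernized" estimates became convincing.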
And we can also combine MEG and EEG with these estimates. Plotted here are the point spread functions of these estimates, for MEG, for EEG, and for MEG and EEG together, on the cortical surface. And you can see that in the combined case, you see more of the cortex.
That is, the point spreads are smaller. So there is really benefit of combining these two measures. And it turns out that the combination is super additive, so that you cannot obtain the same results by adding more MEG channels.
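The point spread functions just mentioned can be computed from the resolution matrix R = W G, whose columns show how a unit source at one location leaks into the estimates at all the others. A toy sketch, with an ad hoc spread measure as the assumption:

```python
import numpy as np

# Resolution matrix sketch: column j of R = W @ G is the estimate that
# a unit source at location j produces. Narrow columns mean good
# spatial resolution; wide ones mean leakage into neighboring sources.
rng = np.random.default_rng(3)
n_sensors, n_sources = 10, 50
G = rng.standard_normal((n_sensors, n_sources))
lam = 1e-2
W = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors),
                          np.eye(n_sensors))
R = W @ G                              # resolution matrix

psf = np.abs(R[:, 7])                  # point spread of source 7
spread = psf.sum() / psf.max()         # crude width: 1 would be perfect
print(spread)
```

In this framing, "combining MEG and EEG shrinks the point spreads" means the columns of R become more concentrated when the gain matrix contains both sensor types.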
And a few examples of how these methods can be used in a more interesting way. In this scene, the different parts of this ambiguous figure flicker at different frequencies. If you look at them, you see signals in the back of the head: 15 and 12 Hz peaks in the spectrum, corresponding to the flicker frequencies. And when the subject reports a change in the percept from the vase to the faces, you can see a change in the dominant frequency in the data, so that we can kind of eavesdrop on what the brain is doing.
And we can get a more quantitative measure of this eavesdropping by looking at the signals in the brain, in the source space, and making a linear regression with respect to the tag frequencies and other nuisance frequencies, so that we get the amplitudes at these tag frequencies. And it turns out that the significant signals at either of these tag frequencies are in the occipital cortex. And in this circled area, there is a significant difference between the two percepts. So by cleverly using these source [INAUDIBLE] we can get more information in the brain space.
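That regression step can be sketched with simulated data: regress a time course on sine/cosine pairs at each flicker frequency and read off the amplitudes. The sampling rate, frequencies, and amplitudes below are invented for illustration.

```python
import numpy as np

# Estimate amplitudes at the flicker ("tag") frequencies by linear
# regression on sine/cosine pairs at those frequencies.
fs = 600.0                             # sampling rate (assumed), Hz
t = np.arange(0, 2.0, 1.0 / fs)
tags = [12.0, 15.0]                    # tag frequencies, Hz

rng = np.random.default_rng(4)
signal = (2.0 * np.sin(2 * np.pi * 12.0 * t)
          + 0.5 * np.sin(2 * np.pi * 15.0 * t)
          + 0.3 * rng.standard_normal(t.size))

# Design matrix: sine and cosine at each tag frequency
X = np.column_stack([f(2 * np.pi * freq * t)
                     for freq in tags for f in (np.sin, np.cos)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)

# Amplitude at each tag frequency from its sine/cosine coefficients
amps = np.hypot(beta[0::2], beta[1::2])
print(amps)   # ~ [2.0, 0.5]
```

Running this regression at every source location, and comparing the fitted amplitudes between the two percepts, is the quantitative version of "eavesdropping" described above.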
Another example is a measurement of connectivity after showing typically developing and autism spectrum disorder subjects pictures of faces and houses. And it turns out that the difference between faces and houses is that, if you calculate connectivity from the fusiform face area [INAUDIBLE] of the brain, there is a difference specifically to these three different regions at different times. And it turns out that this difference is actually even slightly reversed in the ASD group. So the theory is that there is something abnormal going on in the connectivity.
And this is long-distance connectivity. But since MEG has [INAUDIBLE] temporal resolution, we can also look at local connectivity in the fusiform face area, which we defined on the basis of evoked responses, in the sense that we can look at phase-amplitude coupling. And there is really very clear phase-amplitude coupling between low frequencies and higher frequencies in the FFA, and specifically in the FFA.
And there is a difference between houses and faces. But this difference is missing in the ASD group. So one can really get meaningful information in the brain space by looking at the source signals instead of the sensor signals.
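One common way to quantify phase-amplitude coupling is the mean vector length of the high-frequency envelope at the low-frequency phase. The sketch below applies it to a simulated signal in which high-frequency bursts ride on a particular low-frequency phase; all frequencies and band edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

# Simulate a signal where 60 Hz amplitude is modulated by 6 Hz phase,
# then measure the coupling with the mean vector length.
fs = 500.0
t = np.arange(0, 10.0, 1.0 / fs)
theta_phase = 2 * np.pi * 6.0 * t
gamma = (1.0 + np.cos(theta_phase)) * np.sin(2 * np.pi * 60.0 * t)
x = (np.cos(theta_phase) + gamma
     + 0.1 * np.random.default_rng(5).standard_normal(t.size))

def bandpass(sig, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

phase = np.angle(hilbert(bandpass(x, 4.0, 8.0)))    # low-frequency phase
amp = np.abs(hilbert(bandpass(x, 50.0, 70.0)))      # high-frequency envelope

mvl = np.abs(np.mean(amp * np.exp(1j * phase)))     # coupling strength
print(mvl)
```

A large value relative to surrogate data (e.g. with the phase time series shuffled) indicates genuine phase-amplitude coupling of the kind described for the FFA.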
Then finally, about fMRI-guided estimates, this method was introduced at the turn of the century by [INAUDIBLE] and co-workers. And the idea is that you make the sources more likely to occur in locations where there is significant fMRI activity.
And in the [INAUDIBLE] paper, they showed that by doing this, the focality of the MEG source estimates improves. And this has been used in experiments to some extent, even though it must be said that doing this combined experiment is extremely, extremely tedious.
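The weighting idea can be sketched as a diagonal source covariance in a weighted minimum norm estimate; the 1.0 versus 0.1 prior variances below are illustrative assumptions, not the values used in the original work.

```python
import numpy as np

# fMRI-weighted minimum norm sketch: sources in fMRI-active locations
# get a larger prior variance in R, biasing the estimate toward them:
# x_hat = R G.T (G R G.T + lam I)^{-1} y.
rng = np.random.default_rng(6)
n_sensors, n_sources = 10, 50
G = rng.standard_normal((n_sensors, n_sources))
y = G[:, 7]                            # true source at location 7

weights = np.full(n_sources, 0.1)      # low prior variance by default
weights[[6, 7, 8]] = 1.0               # assumed fMRI-active patch
R = np.diag(weights)                   # diagonal source covariance

lam = 1e-2
x_hat = R @ G.T @ np.linalg.solve(G @ R @ G.T + lam * np.eye(n_sensors), y)

# Most of the estimated energy should fall in the weighted patch.
patch_energy = np.sum(x_hat[[6, 7, 8]] ** 2) / np.sum(x_hat ** 2)
print(patch_energy)
```

Because the fMRI information enters only as a soft prior, sources outside the active patch can still be recovered if the MEG data demand it; they are just penalized.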
Let me, in the interest of time, jump a little bit forward. And this is an interesting note: sometimes it's actually more interesting to look at fMRI and MEG in relationship to each other. This is the work of Dara Manoach, in which we tried to find where the so-called error-related negativity arises on the basis of MEG and EEG. The ERN is a kind of "oops" signal related to committing an error in a task, in this case a saccade task, where you are asked to move your eyes either in the direction of an indicator light or in the opposite direction.
And from fMRI, the conventional wisdom was that the anterior cingulate is active in the case of errors. But surprisingly, our MEG and EEG pointed to the posterior cingulate, as you can see here. So this is fMRI, and this is MEG and EEG.
So the MEG and EEG data are, in this sense, different from the fMRI data. And in fact, it turned out that other EEG measurements, at least, had found the same; they had just reported it as still the anterior cingulate, even though it was clearly posterior.
Fortunately for us, it turned out that there was a subsequent monkey study which confirmed [INAUDIBLE] finding that the electrophysiological data clearly comes from a different location than the fMRI data. And this reflects the fact that these two methods see different aspects of brain activity.
So I have briefly covered what I call MNE and friends, these distributed solutions which use cortical constraints. And if one uses MNE, the source extents are not accurate: they are typically overestimated, instead of being underestimated as in the dipole models.
The sparse estimates, on the other hand, resemble the dipole solutions. Normally, non-parametric statistics are used in group analysis, and I must say that pooling the sparse estimates is still a challenge, because they may not overlap across subjects. And I would say that comparison with fMRI is often more useful than fusion with it.
And I would like to remind you of the ambiguity, in the sense that if you have a focal source in the brain and measure this field pattern in MEG, you can fit an equivalent current dipole, and it is exactly the source. But you can also calculate the minimum norm estimate, which is more widespread.
On the other hand, since the minimum norm estimate produces the same field pattern as the dipole, you can imagine that this distributed solution was what was in the brain, and then you get exactly the same field pattern as with the current dipole. And when you fit the current dipole, you get the current dipole back, as expected. But you also get the minimum norm estimate back.
So the conclusion is that both MNE and dipoles point to the approximate location of the activity. But the extent of the activity is difficult to determine. So there is this kind of ambiguity.
And as Dimitrios already pointed out, there are many open-source, or academic, software packages which have a very important role in practical MEG analysis, including MNE and Brainstorm and also other packages. And in particular, for MNE-Python, Alex Gramfort has had a big role in producing the package. So "academic" here does not carry its dictionary meaning of "of no practical importance."
So finally, a general comment: I will say that different source estimation methods give converging evidence when interpreted correctly. So understanding what is happening is really very important; for example, dipoles are equivalent sources, and they don't mean that the sources really [INAUDIBLE] in the brain.
And the dipole examples especially, I think, illustrated that exploratory and hypothesis-driven approaches should be used in conjunction. And it's more evident, I think, in MEG and EEG than in fMRI, for example, that the scientific questions, experimental design, data analysis approaches, and interpretation are interconnected and interact. So you should take into account what can be done later, and not be stubbornly fixated on your particular original question; you should be willing to adjust your question so that it can be answered.
And really, an initial lack of formal hypotheses, as in the eye movement and blink cases, doesn't imply that your data analysis is not principled or that you cannot get principled results. And in particular, conventional wisdom may be wrong. And it turns out that it's not really useful to use a hammer to drive in screws; use the best tools for the particular purpose, so that you get the most useful answers. So I would just like to mention all my colleagues and friends from this community. And thank you for your attention.