28 - Surface Analysis: SUMA: Part 5 of 5
Date Posted:
February 15, 2019
Date Recorded:
May 31, 2018
Speaker(s):
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Rick Reynolds, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
RICK REYNOLDS: Before we continue, I'm going to do a little thing on the side. I want to give you a couple of ways to think about the mapping of surface data and how we can look at it. What are we looking at here?
Here we registered our EPI data to some T1, or even to an EPI volume base, that's in alignment with our surfaces. Then we mapped our data to the surface right there, after the registration, and did all our smoothing and scaling in the surface domain, and did the stats in the surface domain -- ran 3dDeconvolve in the surface domain.
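That whole stream is what our afni_proc.py command set up. A minimal sketch of such a surface-based command, close to the class script (the paths and names here are assumptions):

    # surface-based proc: a 'surf' block replaces 'tlrc'; blur is on the mesh
    afni_proc.py -subj_id ft.surf                              \
        -blocks tshift align volreg surf blur scale regress    \
        -copy_anat FT/FT_anat+orig                             \
        -dsets FT/FT_epi_r?+orig.HEAD                          \
        -surf_anat FT/SUMA/FT_SurfVol+orig.HEAD                \
        -surf_spec FT/SUMA/std.60.FT_?h.spec                   \
        -blur_size 6

Once the surf block maps the EPI to the mesh, every later block -- blur, scale, regress -- operates on node data rather than voxel data.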
You can also have results in the volume domain and map them to the surface domain, but then they won't have gone through those pre-processing steps in the surface domain. So they might not be as nice, but you can do that if you want to. I want to be able to show you that on the side.
For example, how do these results compare with what we would have gotten in the volume? Even looking in the volume, and/or mapping the volume to the surface after the fact -- I want to take a peek at that. So don't do this yourselves; I'll just try it on my own, and maybe, if you're so inclined, you can do it yourself later.
I'm going to cd into AFNI_data6/FT_analysis again, and copy the s05 script to -- how do you spell it -- copy that to ss. Fine. Wonderful.
And in order to compare these -- remember, we aligned our surface data with the current T1, right? So what if we ran our EPI analysis without a tlrc block? It's the same analysis, but aligned to the current T1. That would put these volumetric results in alignment with our other results.
So let's do that. Do we have any tlrc stuff in the script? Let's get rid of that. Fine.
I'm going to change my subject ID -- we'll call this subject something marking it as the orig-space analysis. OK? Then: tcsh ss. You are off and running.
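In other words, something like this (the script and subject names are assumptions):

    cd AFNI_data6/FT_analysis   # class data directory
    cp s05* ss                  # copy the proc script for editing
    # edit ss: drop 'tlrc' from the -blocks list, change -subj_id
    tcsh ss                     # run the orig-space analysis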
So that's going to do the same analysis as Tuesday, but without going to standard space. Otherwise, it's the same. And you -- let me just dump you down here.
So now, what are we doing here? First of all, before we do anything else, let's just look at the results that we have. This image is actually one of the coolest things I have seen in the little time I've spent messing around with surface data.
So we're thresholding based on the t-stat. Oh yeah, I didn't tell you the threshold. Most of you probably have a bright green color overlay. You can drag this slider to some similar level, or you can even double-click up here and type, let's say, 3.0 or whatever.
So now we're thresholding our data where the auditory-reliable t-stat is at least three. But the colorization still ranges from negative 1.5 to 1.5 with this color map. You can choose a different color map here too -- that's the Cmp line.
I'll just mention that briefly. Cmp is the color map chooser, but don't click on it, because you'll lose your color map and then have to find it again. This is a case where you might actually want to right-click on the Cmp text to open up a window like this, and it'd be much easier to -- oh, it doesn't help to stretch it.
But you can at least page through these, and it's an easier way to see all the color maps. Anyway, are we all looking at basically the same image? Any trouble with that?
What is cool here is this upside-down T, or Y, shape in the image: we have this strong-intensity area, surviving the threshold, that is anatomically sharp. It's not a big blob.
You know, I'm not studying this in depth. I'm not looking at the time series; I haven't really spent much time with this. But at a glance, that looks really cool. It looks so well-defined anatomically.
If we turn off the overlay -- yeah, you can't turn it off quickly, too bad. I was going to toggle it to see exactly what that Y corresponds with, but it's right down here.
Anyway, it just looks nice to see this. It's not a blob, you know; it looks like it's following the anatomy. So it looks much more anatomically relevant.
AUDIENCE: [INAUDIBLE]
RICK REYNOLDS: The 'f' key? Oh, fantastic. There you go -- foreground colors on and off. Fantastic, there we go. So now we can play with that.
So against the background, it's not exactly following -- it's partly in the sulcus, partly on the gyrus. Anyway, that's why I went through this. That's great.
Anyway, something so sharp and strong looks nice, and it's probably not going to be that nice in a volume analysis that mixes all your tissue types together, where your activation patterns get mixed across Euclidean space. OK, so with that in mind, I want to leave this here briefly.
Oh yeah, you don't even have this data. Do I have this data? In my AFNI data directory -- OK, I do have it.
So I downloaded -- there's a SUMA MNI152 2009 package. Actually, it's not called exactly that on the website; it's called the SUMA MNI152 2009 template, or something. I don't know. I can send you a link if you care to do this.
But this is the MNI152 2009c template run through FreeSurfer. So those surfaces, of course, are in that template's space -- MNI space.
If you look at the coordinates here -- if I click on the surface, it shows the coordinates: 41, 9, negative 22.7. What does that mean in terms of my group of subjects? Nothing; these are orig-space coordinates.
Node index 25,266 -- what does that mean across my subjects? That means this anatomical location across subjects; that does correspond across subjects. That node index is supposed to be the same place across subjects, according to the FreeSurfer spherical registration process.
But the xyz coordinates don't tell you anything. So what if you wanted to say in your publications: where am I in MNI space? Fine, I can do a group analysis with this data, but the coordinates don't mean anything, and I want the coordinates to mean something.
Well, remember, for this analysis, what is important? What is the part that gives us the group correspondence? The std.60 mesh -- MapIcosahedron with a linear division factor of 60.
So if we've used FreeSurfer's spherical registration and run MapIcosahedron to resample onto those standard meshes, then as long as that is working well enough, we have node correspondence across any such surface, including our 2009c template.
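In command form, building such a standard mesh might look like this (the spec file name is an assumption; the bootcamp data already includes the results):

    # resample FreeSurfer surfaces onto a standard icosahedral mesh
    MapIcosahedron -spec FT_lh.spec -ld 60 -prefix std.60.

After that, node i on one subject's std.60 mesh refers to the same registered anatomical location as node i on any other subject's std.60 mesh, which is exactly the correspondence being used here.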
So, Paul Taylor just ran this through FreeSurfer a few days ago, and he put that out. I downloaded the tgz file and dumped it in my AFNI data directory, because we use that for some things, but that's kind of irrelevant right now. It's got all these std.60 surfaces.
It's also got the std.141 surfaces. If you do a real surface analysis, you'll probably use those -- that's a higher, more natural resolution. But anyway, it's got all this stuff. So--
AUDIENCE: The macaque also has something similar, the macaque atlas.
RICK REYNOLDS: The macaque?
AUDIENCE: Yeah, that means--
RICK REYNOLDS: Well, you'd have to rely on having that spherical registration step done. So in these cases, if you gave FreeSurfer macaques, and the macaque cortical patterns matched the human patterns well enough, then subject to that, you'd get--
AUDIENCE: But at least in [INAUDIBLE] also. And I don't know, for the macaque-- for the macaque atlas, in the [INAUDIBLE] you have some correspondence also.
RICK REYNOLDS: If we don't have one, we could possibly make some correspondence. If we sent it through the same software package, then subject to their spherical registration step, we should be able to do this. So you see, this is really powerful: any subject that you send through this same processing stream, subject to their registration, gets node correspondence.
So I'm going to -- wasn't there a history of this, a previous suma command? -- I'm going to set this sdir variable to be that MNI 2009 directory that has all the surfaces, and then run suma with the spec file that's in that directory and the surface volume that's in that directory.
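Something like the following, where the unpacked directory and file names are assumptions:

    # point at the unpacked MNI152 2009 SUMA directory and launch suma
    set sdir = ~/suma_MNI152_2009
    suma -spec $sdir/std.60.MNI152_2009_lh.spec \
         -sv $sdir/MNI152_2009_SurfVol.nii &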
What space is this surface volume in? MNI. The surface coordinates are all in MNI space.
Wait, what directory am I sitting in? Oh, I'm still in the results directory, fantastic. Let's look at the inflated surface.
I'm still in that same subject results directory that was just created by the afni_proc.py script, but now I'm looking at the MNI 2009c surface template from there. So: Ctrl+s for the controller, and Load Dset -- the stats data set. Open--
What do we look at? The auditory sub-bricks, indices four and five. We threshold to what? Three. Come on, behave. Three--
We set the color range to 1.5. How do you spell 1.5? There you go. Same results.
Now, on the MNI template, if we click in the Y there: coordinates 43, 31, 5, versus 41, 9, negative 22 before. The coordinates are drastically different -- these are in MNI space. And there you go, piece of cake. Almost no effort at all.
So that's the power of having those icosahedral surfaces: you can leap across subjects, as long as they've been run through that same mapping. One more thing -- our volumetric analysis is done. Let's see if I can handle what I'm doing here.
So if we cd into this results directory, these are the volumetric results in orig space, aligned with this T1. It's the same T1 that we're aligned with in SUMA up here. So what happens if I run afni -niml here?
afni -niml should talk to the first SUMA that was started. That should be this one up here -- the one we ran the surface analysis with. So let's just test ourselves here.
So I'm going to look at the stats in AFNI first. I'll set my overlay to be the stats data set, and let's look at the same things: the auditory-reliable beta weight and t-stat. Set the threshold to about three, and scale the beta weight to 1.5. OK, the same colorization.
Now, do we talk? Come on. Oh, we need this volume here.
The ft volume -- so I'm going to copy the surface volume from the ft.surf results, the SurfVol aligned to the experiment, into this directory. AFNI needs the surface volume in order to accept the coordinates.
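As a shell command, that might look like this (the directory and data set names are assumptions):

    # from the volume results directory, grab the aligned surface volume
    cp ../FT.surf.results/FT_SurfVol_Alnd_Exp+orig.* .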
So now I'll just lower this window and set that to be my underlay. Now AFNI has that data set; it should be at the bottom of the list. OK, one more try.
Close and open -- force it -- and there we go. So now we have surfaces in here, and now they are talking. So now--
I messed up my threshold, but whatever. Let me just change my sub-brick, just to see. So now, what are we seeing in SUMA? The data in AFNI is being mapped to SUMA, according to that same volume-to-surface mapping.
Actually, in this case, I haven't been picky with it, so this is merely using the midpoint between the white matter and the pial surfaces. It's just taking the volumetric value at the midpoint of each line segment between the white matter and pial surfaces -- not the weighted average yet.
Maybe just to very briefly show you that we can, I can click on Define Datamode, then Plugins, Vol2Surf. I can say Use vol2surf: Yes, set the map function to the average, number of steps 10 -- this is basically what we did before. Surfaces 0 and 1, I think that's probably right. Set and keep, hide that, and re-display.
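The command-line equivalent of that plugin mapping is 3dVol2Surf; a rough sketch, with the spec and data set names as assumptions:

    # average 10 samples along each white-matter-to-pial segment
    3dVol2Surf -spec std.60.FT_lh.spec       \
               -surf_A smoothwm -surf_B pial \
               -sv FT_SurfVol_Alnd_Exp+orig  \
               -grid_parent stats.ft+orig    \
               -map_func ave -f_steps 10     \
               -out_niml stats.vol.lh.niml.dset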
So now we're using the same mapping, and if I go back to the beta weight, we're looking at basically the same result. How about our Y now? Not as pretty.
I never looked at this before, so I didn't know how it would turn out. But you can see it's not as nice. And if we go back to SUMA, we can actually toggle between these -- Switch Dset: stats from SUMA, from AFNI, SUMA, AFNI.
So they're both in SUMA's hands now; it can switch between them. What is SUMA really looking at? Here, SUMA is colorizing the numbers in the stats data set. When I switch to the AFNI one, SUMA is merely showing the colors that AFNI sends it for the nodes.
So AFNI is doing the colorization in volume space, giving each node a color, and sending SUMA colors, not values -- just the colors. So the colorization should exactly match what we're seeing in AFNI. The tints and hues might be a little different, but clearly, you can see the statistics are a little different as well.
AUDIENCE: So for thresholding, you have to do everything in AFNI, and then send the result back to SUMA?
RICK REYNOLDS: Yeah, in this case, in AFNI, we're looking at the volumetric data. We set the same threshold -- there, 3.000 -- and the color range, 1.5, just like in SUMA. The color bar runs from negative 1 to 1, times 1.5, so negative 1.5 to positive 1.5. And so that's the colorization in AFNI.
And if I right-click somewhere in SUMA, we jump to that location in AFNI. So here's the same area in volume space that we have in SUMA, in the surface domain.
AUDIENCE: And if you change the threshold in AFNI, does the projection on the surface update automatically, or do we have to rerun?
AUDIENCE: Did you do something in the volume?
RICK REYNOLDS: The volume analysis is the same one we did before, except we didn't go to standard space. So we used the 4 millimeter full width at half maximum blur. On the surface, we blurred to 6 millimeters; in the volume, we applied a 4 millimeter Gaussian. So they'll probably be comparable, but that's a good point -- the smoothing is different, though it should be pretty comparable in this case.
AUDIENCE: That's fine. If you do that on the surface, can you do the opposite -- send the results from the surface to the [INAUDIBLE]?
RICK REYNOLDS: Yeah. The volume-to-surface mapping is done with 3dVol2Surf, and there's a sister program, 3dSurf2Vol, that will invert that mapping. That's pretty cool.
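A rough sketch of that inverse mapping, again with data set names as assumptions:

    # fill voxels between the surfaces from the surface data
    3dSurf2Vol -spec std.60.FT_lh.spec       \
               -surf_A smoothwm -surf_B pial \
               -sv FT_SurfVol_Alnd_Exp+orig  \
               -grid_parent stats.ft+orig    \
               -sdata stats.ft.lh.niml.dset  \
               -map_func ave -f_steps 10     \
               -prefix stats.surf2vol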
So you're seeing, in one case, the analysis done on the surface -- again, that was this case. And you're also seeing the analysis done in the volume, mapped to the surface, for that same subject. If we had done the analysis in the volume in standard space, and then mapped it to a standard-space surface, how do you think that would look -- better or worse?
Worse, much worse. At least with this subject, the surface is in the right place in the volume, and so the mapping from volume to surface is correct -- except it was done after volume processing instead of after surface processing. So really, it's just the blurring that's different between--
The blurring was on the surface rather than in the volume; that's really the main difference. But if you mapped standard-space volumetric results to a standard-space surface, the contours of the surface are not going to match the contours of the anatomy in that volume.
So then it's however the contours happen to hit. If you do a good job with nonlinear registration, perhaps it's OK; but otherwise, the mapping is going to be much worse. So you would only do that to make a quick, pretty picture.
But if you want to map to a standard-space surface, it's better to do it the way we did here -- wherever the heck that went. Here: this is the MNI surface, and in this case, we have node correspondence. So we do the analysis on the surface, but we're using the node correspondence of MapIcosahedron to show the result in MNI space.
So there are a lot of different ways to look at this. And how would you run a 3dttest now that you've done your analysis? We've got this stats result -- a left hemisphere NIML dset and a right hemisphere NIML dset. How do you do a group analysis on this now?
The same way as before. You just run 3dMEMA, 3dLME, 3dttest, 3dMVM -- these data sets are fine for that. Remember, all of our work here was things like 3dTstat on surface data and 3dcalc to scale the data.
It doesn't have to be volumetric, because there's no spatial dependence in 3dcalc: at every location, it just does your calculation. It's not blurring over space, so you don't care about your neighbors. For these problems, you can use the programs with surface data, and that applies to your group analysis.
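So, using 3dttest++ say, a group test might look something like this sketch (the file names and the 'aud#0_Coef' label are assumptions):

    # one-sample group t-test run directly on surface dsets, per hemisphere
    3dttest++ -prefix ttest.aud.lh.niml.dset \
              -setA subj*/stats.*.lh.niml.dset'[aud#0_Coef]'

Because the output prefix ends in .niml.dset, the result is written as a surface data set that SUMA can load directly.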
What about clustering? Clustering is different. But performing the group analysis to get the t-stat and the beta weight at each location -- that's fine. So I think that's good for this part right here.
AUDIENCE: Can you get both hemispheres on a single--
RICK REYNOLDS: Say that again?
AUDIENCE: Can we visualize both hemispheres on a single--
RICK REYNOLDS: So, this spec file was only for the left hemisphere, so that's all we're looking at, but there is a spec file with both hemispheres. You could look at them both, or you could switch to the right hemisphere spec file. But you see the analysis is done per hemisphere -- the computations are separate, but the display can be together.
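Viewing both at once might look something like this, where the spec and surface volume names are assumptions following the usual SUMA naming:

    # launch suma with the combined left+right spec file
    suma -spec std.60.FT_both.spec -sv FT_SurfVol_Alnd_Exp+orig &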
So the regression matrices are identical between these statistical results. Everything is computed exactly the same, except that instead of a tlrc block we have a surf block that maps to the surface, and the blurring differs between the surface and the volume. Those are the only differences between the two methods.
AUDIENCE: OK, I have one more general question. In the sagittal volume view of the brain there, you can see that there's activation outside the brain. And using the surface [INAUDIBLE], you can't see that, so are you more at risk of--
RICK REYNOLDS: Definitely. On the surface, you are blind to such artifacts. Say you have motion artifacts -- in the volume, what does a motion artifact look like in your results? Many of you may not have seen one.
It would be a statistically significant-looking result that just follows the contour of the cortex. It doesn't really care where it is in the brain; it's just following the cortex. The motion-based result comes from motion that is correlated with your regressors of interest.
And so you get some result that just likes the tissue contrast there, and the location doesn't matter. Like this: to some degree, this follows the whole cortex, but it has strong blobs in the visual and auditory areas, so this actually looks OK.
But anyway, if you had a motion result, it's easier to see in the volume that, oh, that looks bad. In the surface domain, everything looks beautiful, so it's easier to be fooled there. That's one reason I suggest doing a sister analysis in the volume, and just comparing them to see if there's something you need to be concerned about, OK?
AUDIENCE: So [INAUDIBLE] on the surface. There's a blur [INAUDIBLE] Is this a distance about the node, or [INAUDIBLE]?
RICK REYNOLDS: The 6.0 millimeters is a Gaussian blur distance, as usual, but it's applied on the node mesh. So distances are not Euclidean, and they're not even straight, because you have to travel along triangle edges. So 6 millimeters is an estimate along triangle edges, and it's actually a more complicated computation to keep track of.
So blurring on the surface is a little more difficult, and so is estimating the blur on the surface. This process does both: it blurs and estimates, blurs and estimates, and keeps going until the data appears to be locally 6 millimeters smooth all over the place.
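That iterative blur-and-estimate loop is what SurfSmooth does when given a target smoothness; a minimal sketch, with the spec and data set names as assumptions:

    # blur surface time series until total smoothness reaches 6 mm FWHM
    SurfSmooth -spec std.60.FT_lh.spec -surf_A smoothwm \
               -met HEAT_07 -target_fwhm 6              \
               -input ts.lh.niml.dset                   \
               -blurmaster ts.lh.niml.dset              \
               -output ts.blur6.lh.niml.dset

Specifying a target FWHM, rather than a fixed amount of added blur, is what lets the result come out at about the same smoothness regardless of how smooth the input data already was.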