10 - Alignment and Atlases: Part 2 of 2
Date Posted:
August 7, 2018
Date Recorded:
May 29, 2018
Speaker(s):
Daniel Glen, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Daniel Glen, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
PRESENTER: All right. So now I'm going to continue on with the rest of the alignment talk. We'll probably get into the atlases soon, too. OK. So now that we've talked about affine transformations, which have those 12 parameters, we're going to take it to nonlinear warping. That's taking it up to another level. And we're going to end up with thousands and thousands of parameters, maybe 50,000 parameters over a whole data set.
And so rather than finding one transformation for a whole data set, we will find transformations for parts of the data set. We'll look at twisting our source image onto our base image using cubic polynomials.
And we'll start off with very large polynomials that stretch over the whole data set, large cubic polynomial splines, and then gradually go through a procedure where we get to smaller and smaller transformations, down to a small neighborhood of maybe 11 voxels wide around every voxel.
And the result of a nonlinear transformation, of all this twisting, is a set of deformations. So in the end, that's what we get: a delta x, a delta y, and a delta z for every voxel in our data sets.
How do we move our source image to our destination? We move it by delta x, delta y, and delta z. A very complicated procedure to give us a fairly simple result. And we use a program called 3dQwarp to do that. This will calculate the alignment.
Then we apply the alignment transformation. So 3dQwarp will do this for a particular data set. If we want to apply the same transformation to other data sets, we'll use 3dNwarpApply. And then we have other tools for concatenating and doing calculations on those transformations, and alignment tools that will use 3dQwarp.
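For reference, here is a minimal sketch of those two steps in shell form. The dataset names are placeholders; 3dQwarp generally expects a source that is already affinely aligned to the base, and it writes the warp itself to a dataset named with a _WARP suffix.

    # compute a nonlinear warp from an anatomical to a template
    # (anat_aff+tlrc and stats+tlrc are placeholder names)
    3dQwarp -base TT_N27+tlrc -source anat_aff+tlrc -prefix anat_warped

    # apply that same warp to another data set
    3dNwarpApply -nwarp anat_warped_WARP+tlrc \
                 -source stats+tlrc           \
                 -prefix stats_warped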
auto_warp.py will call 3dQwarp. afni_proc.py can also call 3dQwarp. We'll also talk here in a moment about blip-up/blip-down correction; this also uses 3dQwarp. We'll see that here. OK. So blip-up/blip-down correction: this is a special acquisition procedure where you acquire with the phase encoding in one direction within the slice, maybe anterior to posterior, and then do another set of acquisitions in the opposite direction, posterior to anterior.
And so, depending on your scanner and the subject, you'll have some distortions. So here we're losing parts in the anterior. And then if we flip the direction, we get some of those parts back.
It goes a little bit more that way and a little bit less on the posterior side. And so we have a blip-up and we have a blip-down, and we say that the correct image is somewhere in between. We'll pick the halfway-in-between distortion nonlinearly. So we'll align the blip-up to the blip-down and take the halfway transformation.
AUDIENCE: This is different from the field map correction, right?
PRESENTER: This is different from a field map correction. That's correct. Yes. So--
AUDIENCE: How do you do the field map correction?
PRESENTER: Field map correction is done differently. Generally, field map corrections are done over a much smoother field. It has the same goal as this, but it's done completely differently. So we did a comparison of field map correction versus this versus topup, which is FSL's version of this. And for our data at NIH, this worked better.
So here's the result, the corrected data set, the one that goes to the in-between of the blip-up and blip-down. This was on our 7T scanner. So it depends on your data whether you need it and how well it works.
That blip-up/blip-down correction is now included in afni_proc.py, too, so you don't actually have to call a separate script, which would make it a little harder to integrate into the rest of your processing. afni_proc.py will do this for you.
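As a rough sketch, the reverse-phase-encode volumes can be handed straight to afni_proc.py; the blip option name below comes from its help, while the subject ID and dataset names are just placeholders.

    # hand the reverse-blip EPI to afni_proc.py so the correction is built in
    afni_proc.py -subj_id sub01                            \
                 -dsets epi_run1+orig.HEAD                 \
                 -blip_reverse_dset epi_blip_rev+orig.HEAD \
                 ...   # plus the usual processing blocks and options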
And yesterday I went on a little rant about left/right. This is what I was talking about. This is how you can check it. These are edge displays. So one of our users, Brad Buchsbaum, found this problem in the FCON-1000 data set. And we've automated this now into the alignment script [INAUDIBLE], so you can check for flipping with just the check-flip option.
If the data is flipped, it will give you a warning: the flipped data aligns better than the original data. That's not good if that's the case. As I said yesterday, we don't know what's left and we don't know what's right. We just know that the two don't match each other.
So you would have to figure that out either with a vitamin E capsule, or there's a lesion on a certain side, or the subject has tilted their head a certain way and you know that you've given them instructions for that. That kind of thing.
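The check-flip option he mentions is part of AFNI's EPI-to-anatomical alignment script, as far as I know; a minimal sketch (dataset names are placeholders):

    # compare how well the EPI aligns to the anatomical
    # versus a left-right flipped copy of it
    align_epi_anat.py -anat anat+orig -epi epi+orig \
                      -epi_base 0 -check_flip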
AUDIENCE: It might be worth noting that it's fairly common, as you open shared data sets, that the anatomical data set is correct, that it's more often correct than the EPI.
PRESENTER: OK. Yeah. So, yeah, most often you'll find the anatomical to be correct. But you have to know something about the processing of it. Generally, people will bring it through different pipelines.
So there was a format called Analyze that was popular for many, many years. It's the precursor of the NIfTI format. It has no orientation information in the header. So if they used the Analyze format, you have to trust that they've done a whole series of things correctly, because they will have lost that header information.
And so even if they produce a NIfTI data set at the end, it may not be right. And NIfTI data sets could be missing header information or have incorrect header information; DICOM can be wrong. There are all kinds of things that can go wrong that can cause left/right flipping. And there are tools like this to check for it.
We also have a script for doing DWI motion correction. I won't talk too much about it. But using align_epi_anat.py or 3dAllineate and these kinds of tools that we've talked about, you can create different kinds of scripts that will do correction for you.
This is an iterative procedure that synthesizes a new DWI data set from the tensor fit of the motion-corrected data, and it does this multiple times. And then it aligns the data to itself, basically.
Anyway, I'm going to switch to a somewhat different but related topic: the idea of atlases. So with atlases, I like to start out with some definitions, because everybody uses the word atlas in a different way. And so I'm going to use it in a specific way, because that causes less confusion for me and hopefully for you.
OK. So first, let's start off with the idea of a template. This will be some sort of reference data set that we're going to align everything to; it's going to define our space, which is the next definition. So an example of a template is the TT_N27 data set. We have other kinds I'll show you in a moment.
So the N27 data set, this is the Colin brain: someone who had his brain scanned 27 times in a row. The scans were all aligned to each other and averaged. And so it's a very high quality brain image, and it's probably the most studied brain ever. And this is what it looks like in Talairach space. We also provide it in MNI space and in MNI-Anat space. We'll talk about what that means, too.
OK. So the template space. Now, the space of a data set is important. It means that every xyz location in that data set corresponds to the same place in the template and in every other subject that is in that same space.
So MNI space is an example. Talairach space is another. The space a data set is acquired in is its orig space; we'll call that the orig space, the native space of the acquisition. So every data set has a template space. If you do 3dinfo -space, you'll see it.
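For example (the dataset name is a placeholder):

    # report the template space recorded in a data set's header
    3dinfo -space anat+tlrc     # prints e.g. TLRC, MNI, MNI_ANAT, or ORIG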
And then finally, we have the word atlas. An atlas is where the segmentations or the parcellations are stored. So they will generally look something like this, these multicolored maps where every voxel is assigned to a region, and each region has an index and a name.
This is the Eickhoff-Zilles macro label atlas. It's in the TT_N27 space. OK. So we include a lot of different templates with AFNI. We include the N27 data sets. We include these averages of 152 data sets; this actually came from the MNI.
So the original MNI template was an average. Well, there have been several versions of it: 305 subjects, 152 subjects, and an ICBM one with 452 subjects. They were affinely aligned and averaged, and you end up with this kind of blurry brain.
More recently, the 2009 MNI version is much better, and it's the one that we recommend now. So this is the MNI152 T1 2009. There are a few variations on this: a, b, and c. We provide these with AFNI, and you can get them elsewhere, too.
So you can get a data set from anyone; any data set could be a template. If you want to see it in AFNI, you can set your AFNI_GLOBAL_SESSIONS environment variable to the directory where your templates are stored. And every time you choose a data set, it will be there.
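A minimal sketch of that setting, assuming the templates live in ~/abin:

    # make the data sets in this directory visible in every AFNI session
    export AFNI_GLOBAL_SESSIONS=$HOME/abin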
This brings up the topic, why should we use a standard space? There are lots of reasons to use them and a lot of reasons not to use them. That said, I will say that almost everyone uses a standard space when they do FMRI on humans. In the animal world, not so much.
And there are cases where you do need to use it, and cases where you don't want to use it. It makes comparing subjects a lot easier, because it's done voxel-wise and it's all kind of automated. You've got coordinates that you can standardize with other people. So a paper says it's at this coordinate; you can look at that coordinate in your data and see if you see the same kind of thing.
Why you wouldn't want to use it: if you have inconsistencies, lesions, a lot of variability across your subjects; or you don't have many subjects, you're not doing a group analysis, and you're looking at a particular region and you know where to find that region.
You can draw your own anatomical regions and say this is the area that I'm interested in, for rats, macaques, humans. This is what I want. And that makes statistical analysis a lot easier, too, because you don't have to account for multiple comparisons over 100,000 voxels. You only have a handful of animals and tests that you need to account for.
So you have to choose the template if you're going to use a standard space. And you should choose one that's like the subjects you're looking at. If you're looking at humans, a human template would probably be a better idea. For macaques, we've got macaque templates.
Pediatrics, you know, not that kids are a different group from humans, but we have a pediatric template available, and we're also working on an elderly template. So try to choose one that's similar to the group that you're studying. And you can make your own; I'll talk a little bit about that.
We have these scripts here. They do a procedure where we do an affine warp first to some initial base template, and then continue on with a nonlinear warp to make a template iteratively.
So we'll take a rough calculation using the large neighborhoods in the nonlinear warping and then go to smaller and smaller ones. And every time we do these calculations, we'll take the average across all the data sets and use that as our new starting base.
And so we get a new final template. That's how we did a pediatric template with the Haskins Institute. And we're working on a lot of other things like that. I'm in the process of making a new one that will do this using a parallelization scheme called Dask.
I don't know if any of you are familiar with that. It's parallelization using a cluster. So it doesn't matter whether you're doing 10 subjects or 1,000 subjects; it will do this all in parallel and make a template for you. So hopefully that will be done within a couple of weeks.
So the initial idea of atlases in MRI, well, this came out before MRI. Talairach and Tournoux came out with this book in 1988 where they did half of a woman's brain. They analyzed that brain postmortem; they took slices 5 millimeters apart, put them into a coordinate system, the stereotaxic procedure that they're famous for, and drew various regions.
And they said when they did it: whatever you do, this is made to describe this one woman's brain; don't apply it to anyone else. Of course, that's what happened. Everyone has been using it for many, many years, the Talairach-Tournoux procedure.
And we have this procedure of dividing up the brain into different coordinates built into AFNI. You define your AC coordinates, the superior edge and the posterior edge, the PC coordinate, the midsagittal points, the most inferior and the most superior points, and the whole box that the brain is in.
And we used to spend a lot of time in a class like this for everyone to do it. It's kind of a laborious procedure; it's about 20-25 minutes for you to learn how to do it, and then, after you get used to it, maybe five minutes per subject. But we don't have to do that anymore, because we have a newer one called @auto_tlrc that does an affine transformation instead.
It is there if you want it. The manual procedure is good if you want to align around the AC and PC; it's really good for that. And pretty much the midline is good, too. But for the rest of the brain, not so good.
So the manual procedure mostly isn't used anymore. You generally use an affine procedure, the automatic one, and even beyond that we'll use a nonlinear procedure now.
But let's go through some examples of @auto_tlrc so you know how that works. This is done similarly to 3dvolreg. You give it a base; in this case, the base is a template. You give it a suffix. If you say NONE, or you don't put anything at all, it will just turn anat+orig into the output anat+tlrc.
And to apply that transformation you've calculated, which goes into the header of anat+tlrc, you can use @auto_tlrc again or adwarp, which is another program. So if we apply it to that func_slim data set we saw earlier, our statistical results, we can put those into standard space, too.
We can do that by giving it the anatomical parent, the anat+tlrc, and saying that we don't need it at the resolution of the anat+tlrc data set, which is at one millimeter. We want it at, say, two millimeter resolution. And that's how you do that here.
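Put together, the commands on the slide look roughly like this (dataset and prefix names are placeholders):

    # affine-align the anatomy to the template; the transform is stored
    # in the header of the resulting anat+tlrc
    @auto_tlrc -base TT_N27+tlrc -input anat+orig -suffix NONE

    # apply that stored transform to the functional results at 2 mm,
    # either with @auto_tlrc in follower mode or with adwarp
    @auto_tlrc -apar anat+tlrc -input func_slim+orig -dxyz 2
    adwarp     -apar anat+tlrc -dpar  func_slim+orig -dxyz 2 -prefix func_slim_at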
So here is a comparison: the func_slim results analyzed in the original space, then put into Talairach space with the @auto_tlrc procedure and with the manual procedure. All pretty similar, but there are little differences.
Some years ago there was some controversy about which space you should use. You could choose Talairach space, you could choose MNI space, and there's a third space called MNI-Anat. So Talairach space, you saw what that is; that's roughly fitting that Talairach coordinate system. We have the TT_N27, which makes a fine base in Talairach space.
One thing to note about the Talairach space and atlas is that there is no corresponding MRI data set that goes with the original Talairach atlas, which was done postmortem from slices. We don't have a subject to align to from the original.
We have the TT_N27 that we put into Talairach space using AFNI and the manual procedure, but we don't have something that exactly corresponds to that atlas. But if you're using the N27 data set in its various spaces, Talairach, MNI, MNI-Anat, they all look fairly similar.
So the MNI space is one that was aligned to the original MNI template, the MNI305 I think, and this is the MNI 152 space. One nice thing about Talairach space is that the anterior commissure is at 0,0,0. In MNI space, 0,0,0 doesn't correspond to any particular structure. But Eickhoff and Zilles, when they distributed their SPM Anatomy Toolbox, liked that feature of Talairach space, having 0,0,0 at the anterior commissure.
So they shifted the MNI space a little bit, just five millimeters in one direction and six millimeters in another. And so we have an MNI data set in that space. That added some confusion, because people weren't sure: is it MNI space, is it MNI-Anat? It's not always very clear. But in AFNI, we try to say that this is the one space and this is the other, MNI and MNI-Anat.
Now, I will say that even if something is in MNI space, there are at least a dozen variants of MNI space. And there are a lot of variants of Talairach space; every subject that's aligned to Talairach space and used as a template is a different variant. So none of these are set in stone as the one gold MNI space. So when you report that something is in MNI space, you should also report specifically which template it's been aligned to. That would be a better definition of the template space.
OK. So MNI space is slightly larger than Talairach space. Not too different, but slightly larger. MNI-Anat is slightly shifted from the MNI space, too. By default, AFNI will show you the coordinates in all three versions, in the Where Am I GUI and on the command line, showing you where you are in any of those three spaces.
If you're not interested in all of those spaces, you can define your own list, or say you're only interested in one of them, in an environment variable that holds the atlas template space list. All of this is controllable.
Now, AFNI was originally built around Talairach space, so there was an initial preference for Talairach. But now there's almost nothing in AFNI that is done specifically for any particular space. And that's why we can work with different animal spaces equally; it doesn't really matter for AFNI. You have a question?
AUDIENCE: Yeah, is there the ability to warp from an atlas space into a subject-specific space?
PRESENTER: Yeah. And I will show, I've got two slides on that, how to go from standard space to original space. All right. I'm going to show you. So here, this is an example where we have aligned the AFNI_data6 anat data set to the TT_N27 data set with just an affine transformation.
So the affine transformation is just squeezing, stretching, and shearing. It's not a nonlinear alignment. This goes through a first step of unifizing it, where we remove any bias, any bright spots, across the data set.
And then we start our nonlinear warping procedure, first on a large neighborhood, then a smaller one, and smaller and smaller. And this goes up to a level nine here. And this is the result at the end of the transformation, and this is what the N27 data set looks like as a reference. And this is a rendering of it, using the AFNI render plugin, of these steps, so you can see what happens with each iteration.
So you can morph one data set into another using this nonlinear warping, and it works pretty well. OK. Here are some comparisons of nonlinear warping using 3dQwarp versus affine registration over 188 data sets, looking at resting state FMRI. And here we're just looking at the differences for a nonlinear warp, using a seed at the left precuneus. And you can see correlations are higher with nonlinear warping.
And then nonlinear warping to a detailed template like the TT_N27 or the newer MNI 152, I forget which one was used here, versus the old, blurry MNI 152. And so the template is a kind of limiting factor on how good your registration can be.
So if your template is blurry, your group results will be blurry, too. The finer your template, the better your results will be. So you want a detailed template. And so now we recommend using a detailed template and nonlinear warping. And we've got a few tools for doing that.
Now, the disadvantages are that sometimes the skull stripping has to be done better. Particularly with humans, the shapes of the brains can differ more than in macaques and rats. And the skull stripping has to be done pretty carefully, or sometimes not at all.
So if you strip off a piece of brain, the nonlinear warping will treat it differently. And if you add in a little piece here and there, it will be included and warped differently. So you've got to be more careful with skull stripping with nonlinear warping.
With an affine, you don't really have to care too much. As long as it's generally done correctly, it's OK, because we're only picking one transformation for the whole data set. If a piece is gone, we can still apply the transformation to the whole data set before the warp.
OK. So we've got a couple of scripts here. auto_warp.py is our script that calls @auto_tlrc for us to do the affine transformation, and then it calls 3dQwarp to do the nonlinear transformation. And this is an example. Same kind of syntax as before, but it's doing a lot more stuff.
Now, nonlinear warping takes a lot longer than what we were doing before. So rather than a few minutes, it'll be hours for this. So we don't do this in the class, because we'd spend the whole day waiting for it.
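A minimal sketch of calling it (the input name is a placeholder):

    # affine (@auto_tlrc) followed by nonlinear (3dQwarp) warping to the template
    auto_warp.py -base TT_N27+tlrc -input anat+orig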
And we have an even newer script called @SSwarper. The syntax here is super simple. You give it an input and you give it what to call the output. And it gives you an edge display to show you how well your image is aligned.
Let's see if I can show you. So this is one of the JPEG images that it produces. This is the T1 data set we saw in the class, from the AFNI_data6 anat directory, aligned to the TT_N27, with the edges of the TT_N27 shown. So it's not perfect, but it does a pretty good job.
And this one combines skull stripping with it. So we've got the skull stripping and the affine transformation. It's even slower; it's doing a lot of fancy calculations, and it does this in an iterative way: a rough calculation of the skull stripping, a rough calculation of the warping, going back and forth, until it finally gets to a skull-stripped data set and then goes to the nonlinear warping again.
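The call itself really is short; a sketch, assuming the skull-strip-ready MNI 2009 base that ships with AFNI and a made-up subject ID:

    # skull strip and nonlinearly warp the anatomical in one iterative script
    @SSwarper -input anat.nii                        \
              -base  MNI152_2009_template_SSW.nii.gz \
              -subid sub01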
OK. So we have a lot of atlases that are distributed with AFNI. Some are distributed directly, some you have to fetch with a special script, and some are just stored on our website. Rather than show it to you here, we can look at it inside AFNI.
So let's do that. Rather than call AFNI right away, we can switch to the abin directory, for instance. I'm going to go to the one off my home directory; this is where I've got AFNI installed. Wherever you have AFNI installed, it's probably ~/abin.
If you type which afni, you'll know where your AFNI is installed. And it's most likely that your atlases are also stored there. They don't have to be installed where your AFNI binaries are, but probably most of you have them there.
And if I do ls *.HEAD, I'll see all the data sets that are in the AFNI directory. And if I just type afni, I can look at them. So let's look at some things in AFNI. I'll change my underlay to one of my templates. Let's say I pick the TT_N27.
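Those terminal steps, roughly:

    which afni        # where the AFNI binaries live
    cd ~/abin         # or wherever the previous command pointed
    ls *.HEAD         # the templates and atlases distributed with AFNI
    afni              # browse them in the GUI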
All right. So if I change the overlay, I can pick one of the atlases here. Now, these are all in different spaces, and some aren't appropriate for this particular data set; it will let you choose them anyway. But let's stick to ones that are registered to Talairach space. So all of these that start with TT-something are in Talairach space.
That's an older version. Let's pick the MPM 1.8 down here. This is an Eickhoff-Zilles atlas. The one before was, I think, 1.4; this is 1.8 of the Eickhoff-Zilles Anatomy Toolbox. So this is the maximum probability map.
And you'll see that there are some regions identified there. So this is one of the atlases. This particular one was made from 10 postmortem data sets that were then aligned to MNI space.
AUDIENCE: Sorry, one more time. So the underlay is the MPM 1.8 and the overlay is--
PRESENTER: This one: in the underlay I'm using TT_N27. You can look at the top caption here to see what's what. So TT_N27 is my underlay, and I've made my overlay the TT_caez MPM 1.8 atlas, in Talairach space.
And if you look in the overlay panel, you'll see that the region is identified by label there. I can also show the label; let's see if I can get to it. Show the label here by right-clicking on the grayscale bar and then picking something like Upper Left for the label. I'll make it larger for you to see. OK. So I can see the label here. OK.
So atlases can be shown like that. And let's see, we can do other things with atlases. This is in the standard space now; this is the Talairach view up here. If you have an anat+orig and an anat+tlrc, you can switch between the two by selecting the orig view or the Talairach view.
OK. So here, let's right-click on an image viewer; it doesn't matter which one. And you can see that there are some things here. You can go to an atlas location. You can choose Where Am I. You can do atlas colors.
So let's start out with Go To Atlas Location. Here, I'll say I want to go to the left hippocampus and then select Set, and it takes me to the left hippocampus. And here this atlas calls it HIP CA. I can also go to atlas colors. Now, this doesn't require the atlas to be your overlay: if you have an anat+tlrc and you don't have an atlas shown on your overlay, you can still do these same things. You can go to any atlas location.
As long as there's a data set available to say how to get there, it will show you these. So you can select Atlas Colors, and it shows you a list for whatever you've set as your primary atlas. By default, it uses the Talairach Daemon as your primary atlas.
So let's say I want to show this in red. So I've got the left hippocampus shown in red here. You can see the Talairach Daemon's version of the hippocampus showing up in red over my Eickhoff-Zilles maximum probability map. Or it could just be over your functional results; you can show different regions like that. That's one way you can see these different regions.
Let's go on to another thing you can do with atlases: you can right-click and select Where Am I? So the Where Am I GUI pops up, and we've got a lot of different things showing up here.
First it shows you what the original space of your data set is. So this data set is the TT_N27 data set, so of course it's in TT_N27 space, which is a variant of Talairach space. And then it shows the coordinate, which is related to this coordinate here; this one is in RAI order, and over here, this focal point is in LPI order. So the two have the first two values sign-reversed: this is 32, 24, and this is negative 32, negative 24, and then negative 9, in Talairach space. And then it has the transformation of that coordinate into MNI and MNI-Anat space.
AUDIENCE: Why is that one first? Because the radiologists just do it that way?
PRESENTER: The publication standard has been in what we call LPI order. And as we mentioned, that naming is just the opposite of what FSL and SPM use; they might call that same order RAS.
Anyway, when you publish your results, you should say this is on the left and this is on the right, and this is anterior and this is posterior, that kind of thing.
AUDIENCE: So I'm getting-- the atlas-- so AFNI is-- left is right in AFNI but the atlas is left is left? Or it's just that they--
PRESENTER: It has nothing to do with that. This just has to do with what you report as left and right. So just be clear on what you're looking at and how you describe it. It's not a property of the atlas, and it's not even a property of AFNI, really. It's a coordinate order. And there are 48 possible orders in which we can describe every coordinate.
So when Where Am I pops up, it shows you that coordinate in different atlases. It shows you the Haskins Pediatric Atlas; because there's a transformation from MNI space to the Haskins Pediatric space, it can show you that there. And then we can scroll down. It shows you what it is in the Eickhoff-Zilles MPM atlas, which here is in the hippocampus. And it shows you not just the region at the coordinate where your cursor is, but what's nearby, because alignment isn't perfect.
No matter what we do, we won't have perfect alignment, and we can't say for sure that this is the hippocampus. And it's a different subject; the template is a different subject, or a composite of a group of subjects. So we give you a neighborhood of regions that the location might be related to.
And we go out, I think, to nine millimeters. So if we find another region within that nine millimeters, we'll let you know that there are other regions nearby that could be the right label for that location in that subject. And it's not just because of alignment; it's because of variability across subjects, and it's also because of how the atlases were built. Every atlas is made with a whole set of procedures. So we show results for every atlas that we can reach from the space of the data set that you're in.
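The same kind of report is available from the command line with the whereami program; a sketch using the coordinate from the example above, given in the default RAI order:

    # list atlas regions at and near a coordinate, across the known atlases
    whereami 32 24 -9 -space TLRC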
And at the top, let's see if this will work, we have a link to the Neurosynth database. So this coordinate goes to that database. Let's see. Oh, here it is. It takes a little bit of time. And it will show you maps that are associated with that particular coordinate. It's sometimes a little slow.
But it will also give you studies that have published something about that coordinate. These have been converted to MNI space. And the way we go from Talairach space to MNI space, for the TT_N27 space in particular, is that we have an exact transformation, because we transformed it from MNI to Talairach in the first place.
So we know exactly how we did it. We just take that out of the header and transform it back, and that's what we send to Neurosynth, and ask what's there. There are other things that we do that are kind of similar to that. But these databases come and go. So--