32 - AFNI DTI & FATCAT - Part 1 of 2
Date Posted:
February 15, 2019
Date Recorded:
June 1, 2018
Speaker(s):
Daniel Glen, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Daniel Glen, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and type is displayed that may be hard to read otherwise.
DANIEL GLEN: We're going to start off with an introduction to diffusion weighted imaging and DTI, and then we'll show you our implementation and how to connect it to your resting state or task-based functional connectivity.
All right, we have logos, as usual. This logo here is the newer one. This one was actually designed by [? Delia ?] Chen. So she did a really nice job there. This is Gang Chen's daughter. She's our artist here. So all right. These are some of the topics we'll cover.
OK, so obviously DTI is a kind of MRI. We're looking at the diffusion of water. In general, water will just diffuse randomly, but we're going to describe this diffusion in the form of a tensor. The tensor is a symmetric matrix with x, y, and z components: diffusion along x, y, and z on the diagonal, and the combined terms xy, xz, and yz on the off-diagonals. And we use it in MRI mostly for looking at white matter.
So generally, if you have a cup of tea, the tea diffuses in the water roughly equally in all directions; it doesn't have any structure. But if we put straws or something in it, something that has a structure, diffusion will happen preferentially in a particular direction. And we're going to take advantage of that with our MRI signal.
Usually we will calculate one primary direction per voxel with diffusion tensor imaging, but we can look at more directions if we're doing something like HARDI. That requires a lot more acquisitions, and it can model multiple directions at once. Here's our diffusion tensor matrix again.
We can decompose this diffusion tensor to get the three primary eigenvectors and the three eigenvalues. Those give us the directions and the sizes of those vectors. Geometrically, this is an ellipsoid surface, and that's one way to visualize diffusion tensor images. The slide also shows that only the upper triangle of the symmetric matrix is unique.
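As a rough illustration of that decomposition, here is a minimal NumPy sketch; the tensor values are made up for illustration, and this is not the AFNI implementation.

```python
# Minimal sketch: eigendecomposition of a hypothetical diffusion tensor.
import numpy as np

# Symmetric 3x3 tensor: Dxx, Dyy, Dzz on the diagonal, Dxy, Dxz, Dyz off it
# (values are invented, in units of roughly 10^-3 mm^2/s).
D = np.array([[1.70, 0.10, 0.00],
              [0.10, 0.30, 0.05],
              [0.00, 0.05, 0.25]]) * 1e-3

# eigh handles symmetric matrices; eigenvalues come back in ascending order.
evals, evecs = np.linalg.eigh(D)
evals, evecs = evals[::-1], evecs[:, ::-1]   # reorder so lambda1 >= lambda2 >= lambda3

print("eigenvalues:", evals)                 # axis lengths of the diffusion ellipsoid
print("first eigenvector:", evecs[:, 0])     # principal diffusion direction
```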
So in DTI, what are the things that we look at? We have a few different measurements that are useful. We have the first eigenvalue, which is the amount of diffusion along the primary eigenvector. The water at that voxel diffuses preferentially in that direction, and a larger eigenvalue means more diffusion along it than a smaller one.
The eigenvector is the direction of that diffusion. We can also look at the fractional anisotropy: how different are the directions from each other? We take the differences among all the eigenvalues and see how much they differ. If they're not different at all, then we have a sphere and an FA of 0; FA approaches 1 when the first eigenvalue is much larger than the others.
We can also look at the general diffusivity within a voxel, and that's called the mean diffusivity. That's just the average of all of our eigenvalues. And we can average just the second and third eigenvalues, which gives us the radial diffusivity, a measure of those two remaining eigenvalues. All of these are used in the literature, and we'll talk a little about that.
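In code, these scalar measures fall directly out of the eigenvalues. A minimal sketch, assuming hypothetical, roughly white-matter-like eigenvalues:

```python
# Minimal sketch: standard DTI scalars from three (hypothetical) eigenvalues.
import numpy as np

l1, l2, l3 = 1.7e-3, 0.3e-3, 0.25e-3            # lambda1 >= lambda2 >= lambda3
ev = np.array([l1, l2, l3])

md = ev.mean()                                  # mean diffusivity (average of all three)
ad = l1                                         # axial diffusivity (first eigenvalue)
rd = (l2 + l3) / 2.0                            # radial diffusivity (second and third)
fa = np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))   # fractional anisotropy, 0..1

print(f"MD={md:.2e}  AD={ad:.2e}  RD={rd:.2e}  FA={fa:.2f}")
```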
Gray matter is not as organized as white matter in terms of myelination. In white matter, the fibers generally run together in many places in the brain, which leads to higher fractional anisotropy. Bundles of fibers that are more coherently oriented within a voxel contribute to a higher observed fractional anisotropy for that voxel.
If we have more white matter bundles and fibers within a voxel, we can also see an increase in fractional anisotropy. And this will also vary with the myelination of the subject: younger subjects have different myelination, and if you're looking at disorders of myelination, you can see differences in fractional anisotropy there.
And this is what some of these images look like. Generally we use a fractional anisotropy of at least 0.2 as a proxy for white matter. This is a map of our fractional anisotropy, and if we threshold it at 0.2, we can see that it follows the white matter contours. Mean diffusivity is defined throughout the brain, and it's generally higher in the gray matter and in the ventricles.
What we do in the MRI scanner is impose a gradient field and see how the tissue responds in the MRI signal, and that follows what's called the Stejskal-Tanner equation. The signal is proportional to the signal without the gradient, attenuated by the diffusion tensor. What we're trying to find is this D here: we're going to solve this for the diffusion tensor.
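Written out, the tensor form of the Stejskal-Tanner signal equation being described is, for gradient direction g_k and diffusion weighting b:

```latex
% S_k: signal with diffusion gradient g_k;  S_0: signal with no gradient (b = 0);
% b: diffusion weighting;  D: the 3x3 symmetric diffusion tensor we solve for.
S_k \;=\; S_0 \, \exp\!\left( -b \, \mathbf{g}_k^{\mathsf T} \mathbf{D} \, \mathbf{g}_k \right)
```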
OK. And this is what the images look like. The b=0 image, without any additional diffusion gradient, looks like this, and we'll acquire some of these. Then we impose the gradient in lots of different directions. We're trying to solve for the six unique elements of the diffusion tensor matrix, so we need at least six directions, and generally we'll have many more: 30, 40, or if you're doing HARDI, 200 or even more. These all take longer to acquire, so you have to account for that. And each gradient changes the image in these kinds of ways.
But by using those different values, we can solve this equation, generally with an additive noise model. You can solve it linearly, or non-linearly while taking that noise into account. If you don't account for the noise, you'll get errors in the rotations and scaling; the noise can contribute to mistakes in the ellipsoid values that we get out.
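A minimal sketch of the linear (log-signal) version of that fit, with placeholder inputs; real tools such as 3dDWItoDT and TORTOISE also offer nonlinear fits that model the noise explicitly:

```python
# Minimal sketch: linear least-squares fit of the diffusion tensor from DWI signals.
import numpy as np

def fit_tensor_linear(signals, s0, bvals, gdirs):
    """Solve ln(S_k/S_0) = -b_k * g_k^T D g_k for the 6 unique tensor elements."""
    g = np.asarray(gdirs, float)                        # N x 3 unit gradient directions
    # Design matrix rows: [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz]
    A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
    y = -np.log(np.asarray(signals, float) / s0) / np.asarray(bvals, float)
    d, *_ = np.linalg.lstsq(A, y, rcond=None)           # d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```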
DWI has a lot of distortions in it. Even the image by itself looks somewhat distorted. But we also have the usual things that we see in fMRI: subject motion, eddy current distortions from the switching of the gradients, and EPI distortion, the same kind of thing we saw with fMRI data and solved with blip-up/blip-down. We can do the same kinds of things here. All of these combine, so we need a processing method that takes all of these different problems into account.
Here is what motion looks like in diffusion weighted images: interleaved brightness. It may look fine within a plane, but across planes you'll see this kind of striping pattern. And as with blip-up/blip-down in fMRI, we have that same kind of signal pile-up and attenuation, so the image can be stretched in one direction and compressed in another. We have some examples of that here, and we can handle it in a similar way.
Generally, with AFNI you can solve for the tensor with 3dDWItoDT. You can also use a package called TORTOISE, which is also developed at NIH by Carlo Pierpaoli's group with Okan Irfanoglu, and we will show you some examples with that. So that's the basic introduction to diffusion tensor imaging. Any questions on that? OK. There'll be three sections to this part. Yes?
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: Can the larger distortions be accounted for by non-linear registration? Yeah, that's in fact what we do, in pretty much the same way that we do it with fMRI, matching the blip-up and blip-down acquisitions. But it requires that to be part of your acquisition to do that correction. This is done in TORTOISE, not in our package but in our sister package, in the part called DR-BUDDI. DR-BUDDI does that blip-up/blip-down distortion correction. Every gradient is done in both directions.
FATCAT part 2: DTI tracking intro. So we're going to talk about tractography now. We'll talk about the motivations, why we want to do it, how it works, how to use it in combination with gray matter and white matter, and how to think about its interpretation.
This is a neuron. We have the axon, and we're basically trying to model the myelin sheath: water diffuses along the myelinated axon in a preferred direction, and that's what this technique is based on. And we're not doing it on one neuron; we're doing it on lots and lots of neurons organized together into bundles, with many axons all together within a voxel. OK, we've seen the fractional anisotropy and the mean diffusivity.
Before, to see white matter, you'd have to extract the whole brain and go through a lot of histology, which is fairly invasive, and nobody will agree to do this. So hopefully we're getting to the point where we can see really clear tractography like this, non-invasively.
So we start off with all these ellipsoids that we talked about before, and we're going to link these ellipsoids together like sausages. This presentation was written by Paul Taylor, who's from Wisconsin, where sausages are a thing.
[LAUGHS]
OK, so we connect our ellipsoids together and we have linked structures. I like to say that you don't want to know how sausages are made, how laws are made, or really how tractography is done. So this is the behind-the-scenes part: how do we connect these together and get the tracts to happen?
In the end, we can come up-- this is one form of tractography. This is a view from head on over here. Here's looking at you and here's looking downward over the top of the head. This method produces lots of pretty pictures. And you can [INAUDIBLE]. Maybe even more so, you could spend all day playing around with the interfaces and looking for these beautiful fibers.
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: Usually we're doing this-- OK, so group comparisons are done at the region level, generally. You're going to take your results for regions from tables of region connectivities and then compare those, but not so much at the voxel level, if that's your question. And mostly we're going to do this in the native subject space. We don't really go to standard space; we move everything into the native space, including atlases, FreeSurfer segmentations, and that kind of thing.
So this is a diagram of how tractography works. You've got your ellipsoid at every voxel, and then you make a map of all the ellipsoids. You use some sort of algorithm to connect the voxels together, and you come up with a map that looks like this. And there have been lots and lots of methods; they work to varying degrees.
Paul Taylor came up with one here. So there are lots of different methods, because none of them solve everything. And even histology can't give us the answers, because histology is destructive and you still can't see all the fibers fantastically well and then connect them to each other. We do have some test models. I will show you a little bit of that.
So one algorithm that's been in use for a long time is called FACT, Fiber Assignment by Continuous Tracking, developed by Susumu Mori at Hopkins in 1999. It's been used many times; this is an old slide, so it's been used many more times than that. And this is generally how it works. At every voxel you look at the ellipsoid's direction and follow it from one voxel to the next; in the next voxel, you take that voxel's direction. It's simple and it roughly works.
But you'll notice there can be some problems. If the direction is pointing toward the corner but the path happens to land on this side, it will go like that; if it happens to land on that side, it goes up. So you could completely miss the voxel that the direction seems to be pointing at, because the path can switch its direction dramatically at every voxel it enters.
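A minimal, idealized sketch of FACT-style propagation, assuming a hypothetical first-eigenvector volume `evec1` (shape X x Y x Z x 3) and a matching FA volume `fa`; this is not AFNI's or Mori's actual code:

```python
# Minimal sketch: follow the local first eigenvector voxel to voxel until FA drops
# below threshold or the path bends too sharply.
import numpy as np

def fact_track(seed, evec1, fa, fa_thresh=0.2, max_turn_deg=60.0, max_steps=500):
    pos = np.array(seed, float)
    prev_dir = None
    track = [pos.copy()]
    for _ in range(max_steps):
        vox = tuple(np.round(pos).astype(int))
        if np.any(np.array(vox) < 0) or np.any(np.array(vox) >= fa.shape) or fa[vox] < fa_thresh:
            break                               # left the volume or FA too low: stop
        d = evec1[vox] / np.linalg.norm(evec1[vox])
        if prev_dir is not None:
            if np.dot(d, prev_dir) < 0:
                d = -d                          # eigenvectors have no sign; keep heading forward
            angle = np.degrees(np.arccos(np.clip(np.dot(d, prev_dir), -1.0, 1.0)))
            if angle > max_turn_deg:
                break                           # bend too sharp: stop
        pos = pos + 0.5 * d                     # step half a voxel along the local direction
        prev_dir = d
        track.append(pos.copy())
    return np.array(track)
```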
So how can we improve it? Paul came up with FACTID. Basically, what he does is cut off the corners on all the voxels: if the path reaches a corner in all dimensions, and this is what it looks like in 3D, then it continues on to the diagonal voxel. This leads to a big improvement in the tractography.
So you can look at something like what we looked at before. If you take these example vectors, one like this and one like this, the green one will continue through, and the blue one also continues through but to the next voxel. You just compute the path that way, so it has more bending capability in there. FACTID ends up having 26 voxel neighbors, while FACT has six, which gives the path more possibilities for turning.
So with FACTID, if you rotate the axes, and these are rotated in the scanner, not in the data set, they're new acquisitions that are rotated, you can see that FACTID will be more or less consistent, whereas FACT, with its six voxel neighbors, will give you different results every time. Also because of this, FACTID is more robust to noise. If we add noise, the original result stays similar even with a lot of added noise, but FACT is a lot more sensitive to that.
This is a phantom that Fillard developed in 2011, and he tested various algorithms on it. We tried it on this one, and it does a fairly decent job. You can see that in some places where fibers cross it has trouble. This is just a phantom with gel in tubes, so they know what the right answer is. This is the right answer; how close to that right answer can you get?
So it's not like a real brain, of course, but it's a quick test. We still can't go through places where fibers are crossing or kissing, because we can't tell the difference whether they turn around or go through, and that's the problem. Or they can just stop there. We do a better job than FACT, but it's still not perfect.
So, the Charles Babbage quote: if you give a machine the wrong figures, will the right answers come out? This happens in imaging too. If you give it bad data, can you get good results out of it? Well, mostly you can't. So we have to make sure our data is good and take care of it as much as possible, looking for problems. Data acquisition and preparation matter a lot.
So generally we recommend using the TORTOISE package. What happens is, if you don't process your data and you just do your tensor [INAUDIBLE], you will end up with something like this as your fibers. This is a side view. If you process with TORTOISE instead, you get something like this. To me, instead of a sick cricket, you end up with a pretty hummingbird or something like that.
All right. So we can do tractography through the whole brain using that algorithm. We seed at every white matter voxel, starting where fractional anisotropy is at least 0.2, and a tract ends when it reaches voxels that are no longer at least 0.2. We also limit how much the tract can bend, and the tracts have to reach a minimum length. And we'll also over-seed: we'll start more than one tract per voxel.
We can add onto that. Say we want tracts that go from one region to another region; that would be AND logic. Or we can keep tracts that go through either region; that would be OR logic. And we remove all the other tracts.
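Conceptually, the AND/OR logic is just a set test per tract. A minimal sketch, assuming each tract has already been reduced to the set of ROI labels it passes through (hypothetical inputs, not the 3dTrackID interface):

```python
# Minimal sketch: keep or discard a tract based on which target ROIs it hits.
def keep_tract(rois_hit, targets, logic="AND"):
    """rois_hit: set of ROI labels the tract intersects; targets: set of target ROIs."""
    if logic == "AND":
        return targets <= rois_hit              # must pass through *all* targets
    return len(targets & rois_hit) > 0          # OR: passing through any target is enough

# A tract hitting ROIs {3, 7} survives AND logic for targets {3, 7};
# a tract hitting only {3} survives OR logic but not AND.
print(keep_tract({3, 7}, {3, 7}, "AND"), keep_tract({3}, {3, 7}, "OR"))
```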
We can take groups of ROIs and look to see how they're connected together through the white matter. Here the ROIs are generated through a functional [INAUDIBLE] like a resting state correlation or something like that, or we can generate them from clusters of activation in a task-based study.
So that's the same thing. Where do you get your targets? You can get them from fMRI, from FreeSurfer parcellation or any kind of parcellation, or from spheres that you've placed across a data set. Here, around the corpus callosum, we're looking at target regions, and we have networks of targets that we can combine with different kinds of logic.
Each one of those fibers we call a tract. Groups of tracts together we call bundles. And if there's a connection between two regions, that's our white matter connection, and we can get average quantities across that whole group of connections. And we have the network of all the white matter regions that we've put together.
So here is what we call a structural connectivity matrix. Like with the resting state data, we can calculate a matrix of connections between regions. So there's region one, region two, and all the other regions. Let's say region six and region eight are connected very strongly here, with an average FA of 0.6. You'll notice that, as opposed to a correlation matrix with a value of one along the diagonal, the diagonal entries here are the average FA within each region.
And you see there are also a lot of empty spots. Two regions don't have to have a connection at all, and we just leave those as zeros. This is the corresponding functional connectivity matrix, so you can compare how similar or different the fractional anisotropy matrix is from the functional connectivity matrix.
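A minimal sketch of how such a matrix could be assembled, assuming a hypothetical `bundle_fa` dictionary holding FA values sampled along each surviving connection (this is not the actual 3dTrackID output format):

```python
# Minimal sketch: region-by-region matrix of mean FA over each connection's tracts.
import numpy as np

def structural_matrix(bundle_fa, n_regions):
    M = np.zeros((n_regions, n_regions))            # unconnected pairs stay zero
    for (i, j), fa_values in bundle_fa.items():
        M[i, j] = M[j, i] = np.mean(fa_values)      # symmetric: i-j same as j-i
    return M

# Example: regions 6 and 8 strongly connected, with a mean FA of about 0.6.
demo = structural_matrix({(6, 8): [0.55, 0.62, 0.63]}, n_regions=10)
print(demo[6, 8])
```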
Here are some examples. We have a gray matter network. These gray matter ROIs come from the literature, from Neurosynth, or from your own experiment, and then we use those as our seed regions to see how connected they are with diffusion tensor imaging.
So, the Beauty and the Beast, La Belle et la Bête, of tractography. We see lots of beautiful images, but look at the data: the axons are only a few microns across, and our voxel sizes are generally on the order of millimeters. We're trying to resolve a problem at a very different scale.
So here, this is something from EyeWire. Trying to work out the directions of axons all together is hard. They have that project where people figure this out as a community. I forget what that's called. What's the word for that? The group?
AUDIENCE: Crowdsourcing?
DANIEL GLEN: Crowdsourcing. That's the word. I just lost it there. So we don't do crowdsourcing on this data; we'll just try to figure it out as best as we can. And we have the other problem of fibers that cross or kiss. This is the crossing fiber; this is the kissing fiber. It's very difficult to tell with our data which one it is. Fibers come together and apart in groups, and do they go across or do they turn around?
All right. We do have post-mortem data that we can use to verify some of these things, and we see that some fibers do exist. We're able to reproduce many of the known paths. But there are problems: we find false positives and false negatives, so sometimes we don't see the connections. But we do know that gray matter is connected by white matter, so let's see what we can find that connects them and see if it makes sense in combination with our connectivity data.
All right, so that was the second part, the tractography. Any questions about that?
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: So yeah, we'll get to more tractography. We have three kinds, and we'll talk about those. We're trying to look at our networks.
And we've got some fMRI data and some parcellation; you've seen these. So we're not going to a standard space; we're doing everything in the subject's native space. And we're going to try to combine our functional connectivity with the structural connectivity. This is a diagram of how these go together. It's actually an old diagram; I'll show you the new diagram in a little bit. But we'll keep going here and see how the tractography works.
So we've got networks of gray matter. This is from an ICA analysis. We can quantify various things about the gray matter and put those together, like the correlations I've shown you, and we'll use these as seed regions.
And this is what it looks like in SUMA. You can take those gray matter regions and use them as seed regions in SUMA. [INAUDIBLE] in another session how to do that for any kind of region. And we'll look to see what connects to what.
OK, so we have tractography. This is the first one, the simplest case: deterministic tractography. We're just using the simple FACTID algorithm on the data as it is. We look to see whether one voxel or one seed point connects to another. We over-seed it so we can get this. Deterministic tractography gives you results like this.
You can make surfaces out of it with SUMA and show it there, and it looks like that. It's OK for quick testing, and it's also fun to look at; you can do your exploring with this. But it's going to have a lot of dependence on the noise in your data. We're going to try to handle the noise through two other kinds of tractography methods.
One is called mini-probabilistic tractography. Here we're going to repeat our tractography about seven times. We repeat the deterministic one seven times, but each time we add some noise onto it, noise at the level we see in our data. We perturb it a bit, randomly, do the tractography again, and see if we still end up with those tracts.
And here's an example. You get somewhat bigger tracts because you've done it many times. But if you see a [INAUDIBLE] fiber by itself like that, then you don't trust it as much, because it only showed up in that one case; if you've done it seven times, you should see seven of them. This is still relatively fast, but it's not as interactive as the deterministic tractography.
And then you have full probabilistic tracking. In the full probabilistic tracking, we'll repeat this thousands and thousands [INAUDIBLE], just in the same way that we did in the mini-prob, but in this case we don't show those thousands of fibers. We're just going to ask, based on some threshold, whether this voxel is included in the final connection. So is it still there? [INAUDIBLE] we'll keep it as part of that connection.
And that's generally what we recommend for your final analysis: use the probabilistic tracking. You will have these kinds of surfaces. You get similar kinds of things out of the deterministic and the mini-probabilistic, but this is probably the way to go. And here, these are the regions that we used as our seed regions.
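A conceptual sketch of the counting behind the mini-probabilistic and fully probabilistic modes; `track_once` and `perturb` are hypothetical stand-ins for a FACTID-style tracker and for resampling the tensor within its estimated uncertainty, not the 3dTrackID implementation:

```python
# Minimal sketch: repeat perturbed deterministic tracking, count voxel visits,
# and keep voxels that are hit often enough to trust the connection.
import numpy as np

def probabilistic_mask(seed, track_once, perturb, vol_shape, n_iters=1000, keep_frac=0.1):
    counts = np.zeros(vol_shape, dtype=int)
    for _ in range(n_iters):                        # ~7 iterations would be the "mini-prob" idea
        for voxel in track_once(seed, perturb()):   # one perturbed deterministic pass
            counts[voxel] += 1
    return counts >= keep_frac * n_iters            # threshold on how often a voxel was hit
```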
Besides the pretty [INAUDIBLE] and surfaces, we generate these matrices. The matrices come from these tables; the tables are text files, and you can use them as text files, as tables, or as graph images. So we produce these things for you, with a couple of different choices depending on whether you're doing the deterministic, the mini-probabilistic, or the probabilistic tracking.
And this is all done by a program. Most of it's done by a program called 3dTrackID, the program that Paul wrote to track between regions. It actually calculates connections among all the regions in the whole brain at once, and it takes only a few seconds for a whole-brain tractography. The target regions are applied afterwards to see how they intersect with those tracts. So we can do this very, very fast. This is deterministic, but we can repeat it many times.
We also use the deterministic tractography as part of quality control for gradient flipping. I think the first time I ran into this was with Christina: we had the wrong sequence and the wrong directions, and things didn't connect. So we end up with things like this, or this, or this.
And so we have this program @GradFlipTest that will flip the [INAUDIBLE] in different ways and see which one is right. In that case with Christina, we had the directions completely wrong. It's surprising how good the results can still look even when the directions are completely wrong, but they look a lot better when they're right.
So we provide tools like this to automatically detect the gradient convention and say, OK, we're going to flip in y; usually that's the case. There is no standard, and that's why we have to flip. The gradient directions have x, y, and z components, and we have to say which sign convention they use.
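What a flip means in practice is just negating one component of every gradient direction before fitting; a minimal sketch, with a hypothetical file name, showing the common y-flip:

```python
# Minimal sketch: flip the y component of an N x 3 gradient table, then refit and
# retrack to see which sign convention gives sensible tracts.
import numpy as np

grads = np.loadtxt("grads.txt")                    # hypothetical gradient table, N rows of x y z
grads_yflip = grads * np.array([1.0, -1.0, 1.0])   # negate y for every direction
np.savetxt("grads_yflip.txt", grads_yflip, fmt="%.6f")
```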
All right. One of the things we have to do is use 3dROIMaker. The gray matter regions are very often a little bit too small; we need them slightly bigger, but without going into the white matter. So we use 3dROIMaker to inflate them a little bit without going into FA values of 0.2 or more. It's subtle, but that's the difference; it's generally just a voxel of inflation to reach the 0.2 boundary. So we're inflating our gray matter up to the 0.2 value and not [INAUDIBLE].
Here are some clusters from fMRI, and we're also going to inflate those. There are other things that 3dROIMaker can do: you can remove different parts, or keep just the highest values of some data set, so you can use it for thresholding. And you can provide a reference set to get consistent numbering across your subjects, so if you always want regions one, two, three, four to match across subjects, it can automatically find the matching set.
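A conceptual sketch of the inflation idea: grow the ROI by a voxel layer at a time, but never into voxels that are already at FA of 0.2 or more. This stands in for what 3dROIMaker does; it is not its code.

```python
# Minimal sketch: dilate a gray matter ROI, blocked by the FA >= 0.2 white matter proxy.
import numpy as np
from scipy.ndimage import binary_dilation

def inflate_roi(roi_mask, fa, n_layers=1, fa_stop=0.2):
    roi = roi_mask.astype(bool)
    allowed = fa < fa_stop                                 # voxels the ROI may grow into
    grown = roi.copy()
    for _ in range(n_layers):
        grown = binary_dilation(grown) & (allowed | roi)   # keep original voxels regardless
    return grown
```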
So this is a description of how we calculate uncertainty. What we're going to do is take some of our gradients and recalculate the tensor. Here we've got just 12 gradients; we'll take nine of them and calculate our tensor, and we'll do this thousands and thousands of times. Then we get a distribution, and from that distribution we have an estimate of how sensitive our [INAUDIBLE] is.
So our first eigenvector is going to determine the direction of our tracts, and we can apply that uncertainty to our tractography. Both the FA and the eigenvector are affected by this. This is a map of the bias and the standard deviation of the eigenvectors and of the FA, and that's what gets fed into the uncertainty.
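A minimal sketch of that resampling idea, reusing the `fit_tensor_linear` sketch from earlier; the inputs are placeholders and this is not AFNI's implementation:

```python
# Minimal sketch: refit the tensor on many random subsets of the gradients and look
# at the spread of FA and of the first eigenvector.
import numpy as np

def tensor_uncertainty(signals, s0, bvals, gdirs, n_keep, n_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    signals, bvals, gdirs = map(np.asarray, (signals, bvals, gdirs))
    fas, e1s = [], []
    for _ in range(n_iters):
        idx = rng.choice(len(signals), size=n_keep, replace=False)   # e.g. 9 of 12 gradients
        D = fit_tensor_linear(signals[idx], s0, bvals[idx], gdirs[idx])
        evals, evecs = np.linalg.eigh(D)
        md = evals.mean()
        fas.append(np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2)))
        e1s.append(evecs[:, -1])                    # eigenvector of the largest eigenvalue
    return np.std(fas), np.array(e1s)               # FA spread; eigenvector samples
```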
We can do this for HARDI images too. These data sets are modeled with multiple ellipsoid-like shapes that give us the direction in different ways, [INAUDIBLE] one, two, three, four directions per voxel. So you can get higher-order direction estimates.
And FATCAT can take care of that too. Now, we don't do the HARDI estimation ourselves; that comes from DSI Studio or Diffusion Toolkit, and I think MRtrix does this too. And this is what it looks like. It's [INAUDIBLE], probably better, but it does take a lot more time.
So FATCAT can also do the connectome type of tracking. This is where you give it a parcellation, in this case from FreeSurfer. Here we use 3dROIMaker to inflate the regions up to that FA of 0.2, keep the label tables, and then do the tractography. You call 3dTrackID with these as your seed regions, and voila, you get connectome tracking: all the fibers are colored by the regions they connect.
So that ends that part of this presentation.