13 - FMRI Analysis Start to End: Part 3 of 5
Date Posted:
August 8, 2018
Date Recorded:
May 29, 2018
Speaker(s):
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Rick Reynolds, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
RICK REYNOLDS: When we finished off, we were just about to start comparing the original data, which is just missing the first two time points, with the time-shifted version. So presumably some of you have actually hit the Set button or not. But just to be sure: on controller B on the bottom, the bottom set of controllers, I'll choose run 1 of the pb01.tshift data. So now, just to compare these a little bit, I'll raise this one up a little.
You won't really see any visual difference in the data except-- for example, looking at our data here, we remember this was time index 2. This is index 0. Let me back this up to time index 0 here. So time index 0 for the original data has value 1215 at time t = 1.3 seconds. And now the index 0 value is 1230 at time t = 0. And you might notice, as I bounce across the slices here, all of the slices show time t = 0.
So the whole volume was interpolated back to being as if we acquired the data at the beginning of the TR. Hard to see, but you notice this spike now has a little child spike on the left that doesn't exist up here. Any temporal interpolation is going to produce something like that. And that's one reason, if I remember, that we use polynomial interpolants here instead of Fourier interpolation-- because Fourier interpolation might actually make these spikes ring throughout the data a bit.
So while Fourier interpolation is maybe the coolest sounding and most mathematically elegant, it comes with-- well, in some cases it does better, but in some cases there are problems, like with everything. So anyway, now we have data as if we acquired each volume at the beginning of the TR, in one snapshot. Leaping back to the script: so that was the tshift processing block. Notice at the end of it, we extract some volume. And we call it vr_base_min_outlier.
It so happens that this is the last processing done to the EPI data before any alignment steps begin. Actually, you can see all three of them: align, tlrc, volreg. So these are the three alignment processing blocks. So this is the last time the EPI data is messed with before it's going to be registered to something. Therefore, we should extract the registration base now, and that's what the proc script does. So 3dbucket-- buckets are data sets that have multiple volumes. They could be just time series, or they could be beta weights, or they could be combinations of beta weights and t-stats and F-stats and contrasts.
For data sets like these that don't really have any clear single purpose, Bob started calling them just buckets. And that's why we have a 3dbucket. That's the history of this command name. So: just dump this volume, or this set of volumes, into a new data set. That's what 3dbucket does. The input is pb01.$subj, run $minoutrun. Remember, $minoutrun is a variable here-- dollar sign minoutrun-- and that's 03. Remember, the minimum outlier was run 3, volume 24.
So this is pb01.FT.r03.tshift+orig. And then the index is 24. And a subtle thing, just to entertain you with Unix stuff: we use double quotes here. We used single quotes above. But within double quotes, variables are expanded. If this were single quotes, the literal dollar-sign minouttr would go to 3dbucket, and it wouldn't know what to do with it. But with double quotes, the shell will expand this to 24 before the program gets it. Anyway, that's just one particular volume. And we name it something hopefully informative.
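For reference, that extraction boils down to something like this (a sketch; the variable names follow the class script, and the tshift step that feeds it is shown for context):

    # tshift block: align each run's slices to the start of the TR
    # (-quintic selects polynomial, not Fourier, interpolation)
    3dTshift -tzero 0 -quintic -prefix pb01.FT.r01.tshift \
             pb00.FT.r01.tcat+orig

    # extract the min-outlier volume (run 3, index 24) as the volreg base;
    # double quotes let the shell expand $minoutrun and $minouttr
    3dbucket -prefix vr_base_min_outlier \
             "pb01.$subj.r$minoutrun.tshift+orig[$minouttr]"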
Now we can go on to the next processing block. That's the align block. In the align block, we register the EPI with the anatomical-- the T1 data set. But the one EPI volume we want to register is really the EPI volume registration base. That's this one. So we want to align this EPI volume with our anatomical. And which way do we go, actually? This computes an alignment of the T1 to the EPI, but we're actually going to invert that transformation and align the EPI to the T1 later on.
So this computes the whole transformation, which we will apply later. So this is running align_epi_anat.py. We don't really need to go through all those details. It's just a simple affine transformation that we'll have ready for us. So because of that, we now know how to go from the EPI location to the anatomical location. And then the next processing block is tlrc. In this case, we're going to standard space.
And to save class time, we're just using @auto_tlrc to do an affine registration. Usually, we would prefer to do a nonlinear registration. But instead of taking 10 minutes for the analysis, it would take six hours. And it's not worth doing that here, of course. So we're running @auto_tlrc. The base data set is TT_N27. So effectively: what space do you go to-- tlrc, or MNI, or whatnot?
When you want to think of the space that the data is being sent to, really, it's the template. If you just name the actual template, that's the best description of the space-- unless you have no idea what the template is, in which case that doesn't help. But this template is TT_N27, if you would like to say you're registering to this template. And that's one version of tlrc space. If you used the MNI 2009 template there, then you'd be in MNI space.
So the input is FT_anat, with ns-- ns means no skull. The skull was actually removed in the prior step, so this is going to take advantage of that; align_epi_anat.py strips off the skull. And then this is the whole command. It transforms the EPI data sets-- see if I can actually double-click on the command again-- it transforms the anatomical data set. And that transformation is stored in the resulting data set. But we also get a text file that holds it, just a normal .1D file with the transformation.
That's where we extract it, with cat_matvec. You don't have to worry about the syntax here much. But again, the transformation from this step is actually stored in this header, and we extract it to a text file so that we can pass it down to the volreg block later, because we will want the EPI data to have that transformation applied too. And since we're not manipulating the EPI data yet, we won't leap back to AFNI right now.
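For reference, the align and tlrc blocks just described come down to roughly these commands (a sketch; the exact options depend on what was given to afni_proc.py):

    # align block: compute the anat-to-EPI affine transformation
    # (it gets inverted and applied in the volreg block later)
    align_epi_anat.py -anat2epi -anat FT_anat+orig \
        -epi vr_base_min_outlier+orig -epi_base 0 \
        -epi_strip 3dAutomask -volreg off -tshift off

    # tlrc block: affine registration of the skull-stripped anat
    @auto_tlrc -base TT_N27+tlrc -input FT_anat_ns+orig -no_ss

    # pull the tlrc transformation out of the header into a text file
    cat_matvec FT_anat_ns+tlrc::WARP_DATA -I > warp.anat.Xat.1D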
So the last alignment block in this example is volreg. In volreg, we're going to run 3dvolreg, which you see right here. You might note the prefix for the output starts with rm. That's the naming I use if I plan on deleting the file. So rm for "remove."
So this output is garbage, basically. So is the anatomical registration to the EPI. That was garbage too. But we want the transformation.
So up above, we did the align block-- we know how to go between EPI and anat. Then the tlrc block-- we know how to go to standard space. Now the volreg block-- we know how to align each of our runs' volumes to that vr_base_min_outlier volume. So that gives us EPI to EPI base, EPI base to anat, anat to standard space.
Now we can put it all together. We do that with this cat_matvec command. cat_matvec only works on affine transformations. For nonlinear, we use 3dNwarpApply instead. Anyway, this is just grabbing these transformations. You don't have to worry about the syntax. If you want to learn about that later on, go ahead.
But it concatenates the transformations and puts it in here. This is a time series of transformations now. Why a time series? Because motion correction varies over time. So every one of those motion correction transformations has to then have the anatomy and standard space transformations concatenated with it.
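A sketch of that pair of steps for run 1 (file names follow the class example):

    # concatenate: standard space <- anat <- EPI base <- per-TR motion
    cat_matvec -ONELINE \
        FT_anat_ns+tlrc::WARP_DATA -I \
        FT_anat_al_junk_mat.aff12.1D -I \
        mat.r01.vr.aff12.1D > mat.r01.warp.aff12.1D

    # apply the combined transformations in a single interpolation
    3dAllineate -base FT_anat_ns+tlrc \
                -input pb01.FT.r01.tshift+orig \
                -1Dmatrix_apply mat.r01.warp.aff12.1D \
                -mast_dxyz 2.5 -prefix rm.epi.nomask.r01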
So this gives us the time series of transformations, which we apply with 3dAllineate. We specify that the grid is 2.5 millimeters. Our input was 2.75 by 2.75 by 3 millimeters.
The default in afni_proc, unless you tell it how big the voxels should be, is to take the smallest dimension and truncate it to three significant bits. 2.75 in binary is 10.11. That last bit gets dropped, and 10.1 is 2.5. So this is just a little truncation to make the numbers round in some sense, while basically preserving the original voxel size.
Why is the long dimension ignored? Well, if you have 2 by 2 by 2 millimeter voxels-- nice cubes-- but then long in the z direction, what happens when you rotate that? If you rotate it, the z direction mixes in with the others-- the low resolution is getting mixed with the high resolutions. So, just so you don't lose any data, you re-sample using the smallest dimension size.
Of course, it's better not to get long voxels. So that's what it does. Any questions so far? Mhm?
AUDIENCE: I'm guessing you haven't gone over it yet, but if you were to warp back-- say, an atlas back to the [INAUDIBLE] space-- is there a part where you do the reverse or something?
PRESENTER: You wouldn't do that in here. You would just run separate commands by hand.
AUDIENCE: OK.
PRESENTER: And it depends on how you get there, of course-- if it's nonlinear. But you can use the commands in here as a guide for how to invert it. Like the cat_matvec command-- you can actually invert transformations with it to make a new file, and then apply that file.
AUDIENCE: OK.
PRESENTER: But Daniel's got a few examples of how to do this. And you can always be sure you can do it-- the surefire way to do it is to ask him.
AUDIENCE: OK. [INAUDIBLE].
PRESENTER: But he's got a few examples. I think even in the registration talk, you have--
DANIEL: It's in the ROI talk.
PRESENTER: Oh, on the ROI talk. Yeah. Oh, so you'll do that later?
DANIEL: Yeah.
PRESENTER: I'll mention this, just so you know. I said there's no masking applied, but that's not quite correct. A volume that's simply a volume of 1s is created early on, and that volume of 1s goes through the transformations here-- the exact same transformations that the other data goes through. And the point of it is: you've got this EPI box that, because of motion, is moving over time.
And now you're warping this into a bigger box, an anatomically-sized box, and it sits inside. So you've got more space than just your original data. And what happens at the edges is over time, sometimes you have data here. Sometimes you don't.
When the subject moves down, the part from outside the field of view is coming in there-- you don't have data. So that motion actually means that at the edges, you don't always have actual data.
I mean, forget whether or not you're in the brain-- whether you have data at all. The EPI box is solid, but the subject's moving around. And then we rotate the box according to the subject. And therefore, at the edges, you may be rotating non-existent locations into your data, which will be 0.
And having 0's isn't necessarily the problem. But you have 0's, and then suddenly data, and then suddenly 0's. And so you can get weird artifacts at the edges due to that.
So after all that babbling, the point is, we'll create these masks over time. And then we'll intersect them over all time points across runs, and then apply that mask to your data. And that means after that, every voxel in your brain will have data over all time points. So anything where you lose data is just thrown away.
So that's a slight mask that's applied. Whether or not that intersects your brain-- it almost certainly will down here. But anywhere else, hopefully not. It depends on your acquisition, though. So that's the extents mask.
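As a sketch, that extents mask is built something like this (the warping step in the middle is the same 3dAllineate call used for the EPI, applied per run; intermediate names approximate):

    # an all-1s data set on the EPI grid, sent through the same warps
    3dcalc -a pb01.FT.r01.tshift+orig -expr 1 -prefix rm.epi.all1

    # after warping: the minimum over time is 1 only where data
    # existed at every time point; intersect across runs the same way
    3dTstat -min -prefix rm.epi.min.r01 rm.epi.1.r01+tlrc
    3dcalc -a rm.epi.min.r01+tlrc -expr 'step(a-0.999)' \
           -prefix mask_epi_extents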
So, now that we've run 3dvolreg: these dfiles are actually created by 3dvolreg. This -1Dfile option-- that's where it saves the registration parameters. These are the registration parameters we've been plotting and that we include as regressors. So those are the files we actually want in here. They're concatenated across runs.
So this is actually the motion parameter file that we end up using. And if I leap to the terminal window briefly, I can plot that: 1dplot, say with -volreg and -sepscl for separate scaling, and then the dfile_rall file.
So you've done this before. I don't know if you care to do it. But anyway, just as a reminder, there you go. There are the motion parameters. And again, you see the spike here. You see the jumps at the two run breaks. Again, this is from 3dvolreg.
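For reference, the plotting command is roughly this (the file name follows the class example):

    # plot the six motion parameters, each column on its own scale
    1dplot -volreg -sepscl dfile_rall.1D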
AUDIENCE: So what's the current state for motion evaluation? How much motion [INAUDIBLE]?
PRESENTER: For censoring?
AUDIENCE: Yes, should we check this box?
PRESENTER: I wouldn't check this. I would check just a little farther. Let's go back to that in the regression block. And that's where we'll actually do the censoring.
So now that we have the motion parameter file, this is where the actual final extents mask is created. We make a couple extra data sets just for convenience. One, we warp the EPI volume registration base to your stand-- to your final space. So you get that one volume both in the original space and your final space, whatever that is. For us, the TT_N27 space.
We also warp the anatomical data set with the skull to the final space, because throughout the processing, the anat that reaches the final space is skull-stripped. You might also want a version with the skull. So we create-- here we go-- prefix anat_w_skull_warped. So that will be the name of the anatomical data set in your final space, but it will have the skull still attached.
So let's look at this. So leaping to the-- well, let's leap back to AFNI. I'll minimize my terminals.
So now: controller A was the original tcat data. Controller B was the tshift. And then we went into registration. So let's change controller A, now, to be in standard space.
So on top, controller A, I'm going to switch the underlay to be the volreg output, pb02.FT.r01.volreg. And what do we notice here?
A few things. This shape is a little different. This one's a little-- say, a little fatter than the bottom one. Why is that? The TT_N27 brain, the [INAUDIBLE] brain is wider. So this was stretched a little to fit that. So this looks more like TT_N27 brain.
What else? Anything? This is a little blurry. Yeah, of course. It's blurry because you've moved it around and re-sampled it. So wherever the voxels started out at, you're just rotating and shifting, but you're still re-sampling onto this fixed grid.
So each point in the new data set is going to be created by interpolating the eight neighbors somehow. We used cubic interpolation here. So it's going to look a little blurry because of that. And of course, you can also choose the interpolation method.
The data here, our current voxel, very clean. No spike. Now we suddenly have a spike. Why is that?
AUDIENCE: Not the same part of the brain.
PRESENTER: Well, that's true for basically every voxel in the brain except where we're at. Since the coordinates-- no, no, no, it could be for where we're at, too. How does that-- I'll have to ponder exactly how that's working.
The goal is to keep the coordinates-- no, it should be like applying this. So we should be, more or less, very close to the same spot. But anyway, the main point I was going to raise was interpolation again. Our neighbor has corrupted us, quite likely, because--
AUDIENCE: Opposite-- the signal changes in the opposite direction.
PRESENTER: Well, we've got many neighbors. We've got at least eight of them. Yeah, that's true. That's going down. And I don't see-- this one goes up and down. So we've got a dual spike. Again, that's probably because the tshift widened it. Then we interpolate, and we get mixes of things. So maybe one was a bigger spike, and then the other? Many little things go on here.
The goal-- in AFNI, when you click on one location in one space, it tries to keep the location basically the same. But the rest of the brain may not match-- this has been rotated, and shifted, and all that. So the planar cut won't necessarily correspond, for example, if the subject had to be rotated like that. So it's a little harder to keep this in line.
So do we want to stay at this ugly time series voxel or move to a happier one? It's OK. We can shift later when we care more, or maybe when we talk about the linear regression.
Note, one little thing. I don't know if one would eventually want it, but we don't make a copy of the template in this directory. So you can't overlay this on the template unless you copy it in yourself, or unless, when you load AFNI, you include it at that point in time-- when you type afni on the command line, you can include the template directory.
But maybe someday-- we put so much data in the directory, that on one hand, you don't want to just wipe out all the disk space. And on the other hand, you want all these extra data sets there. But anyway, so the template is not here. So you'll have to do a template test outside of this step right now.
But at the command line, you can enter multiple directories. So if we had typed afni, space, dot (for this directory), space, tilde-slash-abin, then we'd have both directories in AFNI, and we could actually look at the combination. And with the combination, we could include all those data sets right in the display.
After babbling all that time, I could have just done it five times by now. But anyway, that's OK. It's not so important.
Going back to the script, then, now we've registered the data. Now, everything is-- for every subject that you've done at this point, your time series is in standard space. It's registered-- the time series is registered together, so every voxel location should be theoretically in the same place over time, and it should be in better alignment with the anatomy, and it should be in alignment with the template.
So all those steps are done right here. Again, if we had done non-linear registration, that would be applied, too. If we had done blip up, blip down, distortion correction, that would have been done here too. Those all get combined.
So the next step is to further blur the data. Just a quick reminder-- again, the first of the two basic reasons for blurring is that if you average your signal with your neighbor's signal, hopefully the good signal combines, or at least doesn't distort itself, and the noise cancels out to some degree. If the noise is whitish, it should cancel a bit. And hopefully, you'll get a cleaner signal.
The other reason for blurring is, especially in the days of affine registration, the lobes you're aligning to your template don't necessarily match well. Even with non-linear registration-- in one case, you could have four lobes; in another, three. And what do you do?
So registration across subjects isn't perfect. There's still anatomical variability. And so the other purpose of blurring is to make this blob and this blob bigger so the overlap gives you a better group result. But again, with the nonlinear registration, the better we do registration, the less you need blur for that reason.
So we're just sticking with a default 4-millimeter blur. That's a full width at half max. That means two millimeters away would get half the contribution of the central voxel.
Our voxels have been re-sampled to 2.5 millimeters, so the first neighbor is going to get, I'm going to guess, say 30% of the contribution of the central voxel. And then you'll add up all these fractions and then have your new values. So we can look at the effect of that blurring in AFNI.
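To check that guess (a sketch using AFNI's ccalc; for a Gaussian, sigma = FWHM/2.355):

    # weight of a neighbor 2.5 mm away under a 4 mm FWHM Gaussian
    ccalc -expr 'exp(-(2.5^2)/(2*(4/2.355)^2))'
    # -> about 0.34, close to the 30% guess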
So now, controller A on the top has been registered with the template. We're in standard space, and now let's change the controller B data. We can set the underlay to be the blur result, run 1, blur. So again, this is controller B that you set the underlay for.
Yep, lo and behold, that is blurry. So we've got some-- I mean, this is just one plane. This 3 by 3 box is 9 voxels here. So in three dimensions, you've got a bigger box. You're talking about 27 voxels, including first neighbors, and edges, and corners, and stuff-- so a central voxel plus 26 neighbors.
So that will be the effect of the blurring. If you resample again, you're resampling into the middles of voxels-- so you only have, say, eight neighbors at that point, because a new voxel coordinate is going to sit in the middle of eight voxels. But here, the voxels aren't moving, say: you have a central voxel, six first neighbors, and then the edge and corner neighbors. So 27 voxels are involved here-- and then you have your farther neighbors, too.
But anyway, so in 3D space, you could have more voxels with nice curves. And it's amazing-- you end up with quite a nice signal here. So there was basically nothing here before, and now you'll get a result. But that's what happens, of course, when you blur. You're going to expand the size of your results.
And even the boxes that had a nice curve-- this one actually looks a little cleaner than that one. Well, being subjective, it's hard to really say. But that's one of the hopes, right? You hope there's noise cancellation. You hope the resulting time series is cleaner.
And this is a small blur, too, remember. This is 4 millimeters. A lot of times, it's much more common to blur by 6 or 8 millimeters, and that will look distinctly more blurry.
So the next processing block is the mask block. That's where we create quite a few masks and then promptly ignore them, for the most part. The main mask that we make is this full_mask-- the main mask in some sense. We basically run 3dAutomask on each of the EPI runs, and then we take the union of those across runs, just so we don't get too tight.
I mean, you could want to go either way. So depending on what you want for a mask, you can ponder that, especially when you go toward the group level. If you want a group-level mask, it's very common for people to take these full masks across their subjects and then do some sort of intersection-- it could be a complete intersection, where if any subject doesn't have data at a voxel, we throw it out of the group mask. Or you could require 70% of your subjects to have data at a voxel, in which case you'd include it.
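For reference, the mask block and that sort of group mask come down to something like this (a sketch; the group-level subject list is hypothetical):

    # per-run automask, then the union across runs
    3dAutomask -prefix rm.mask_r01 pb03.FT.r01.blur+tlrc
    3dmask_tool -input rm.mask_r*+tlrc.HEAD -union -prefix full_mask.FT

    # group level: keep voxels present in at least 70% of subjects
    3dmask_tool -input full_mask.subj*+tlrc.HEAD -frac 0.7 \
                -prefix mask_group_70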
So you can ponder that sort of thing. But this data set is very commonly used for creating a group mask. Another one, though, that is more newly created is this-- where did it go? We created an anatomical mask here, just a binary mask from where the skull-stripped anat has data or not. But then we intersect that anatomical mask with the full-mask data set.
So here's the EPI mask as input, and there's the anatomical mask as input. And then these are intersected. That makes the mask tighter, to fit the brain of the subject. But being an intersection, it's getting smaller.
So anywhere-- if you don't have coverage down here, you won't have that in this mask_epi_anat data set. Also, your anatomical data set doesn't go out this way, so that will keep it tighter as well. So they have different attributes, but this seems like a more reasonable mask to use, to me. That's a new one that's created. We're not doing anything with it right now, but it's there.
I won't worry too much about all these things. But it's worth mentioning that we create a data set called group mask. I should have called it something different. This is just a binary mask made from the skull-stripped template.
Calling it "group mask" suggests that's what you should use for your group analysis. That's not the intention. It's probably a mask you'll never use for anything. But it's made off of the group template, so there you go. But you're better off using one of the EPI-based masks to create a group mask from.
So those are the masks that are created. And now, let's just take a brief peek and see what they look like using the GUI. How about I'll do this so you don't mess up your locations and stuff like that?
I'm going to save this coordinate by jumping to x, y, z. I'm going to throw away the old coordinates and just save this in my GUI, and then we can get to it later. I'll jump to 24, 94, 14.
OK, so now I'll set my overlay to be-- and again, you don't have to do this. Set the bottom one to full_mask.FT. And it's instantly hidden.
So there you go. There's the full mask data set on top of the EPI data. Looks great. Why would you not be happy with this? Well, let's look at it with respect to the t1 data set, the anatomical volume, and see how happy we are then.
And then you see the anat_final, or the anat_final with skull. Let's say the skull-stripped version. So now it doesn't look so great.
Well, missing this part, of course, is understandable, right? We don't have coverage down there. We wouldn't expect it. But you see, it drifts up here. And that's, of course, going to be due to the signal drop-out.
So what about the voxels at the edge here? Might you care about them? You might. Do they have actual signal in them that you could evaluate with respect to your experiment? They might.
Are we going to necessarily, then, just trust 3dAutomask to decide whether or not we should ever look at them? Not a great idea. So this mask doesn't get applied.
Also, remember, just for quality control reasons, we might rather not mask the data at all in the regression. If there are blobs outside the brain from motion, or if we see results all over the edge of the brain, we want to see that.
We don't want to just close our eyes, and mask it, and just assume everything's hunky-dory, right? Because it might not be, and you'd rather know. You can mask the data anytime you feel like. We don't need to do it now.
So we make these masks, and they can be used for various things. This mask may be used, for example, to compute the average temporal signal-to-noise ratio, to give us more QA measures. But we don't have to actually apply it to the data in our regression model.
I'll set my underlay-- I guess it was volreg before? And you can go, and you can [INAUDIBLE]. All right. And I will jump to x, y, z now, back to the same spot.
OK, so leaping back to the script, so now we've got a handful of masks that we can use or not use for various reasons. The next pre-processing block is scaling. So in our case, we're not doing some grand mean scaling or whatnot. We're going to scale every voxel to have a mean of 100.
So basically, here's a 3dTstat command that produces the mean time series value at each voxel. This is done per run. For each run, we compute the mean at every voxel over time, and then take the input from before-- the blurred result from the previous step-- and that's our A.
So data set A is the blur. So that's our A in the expression here. We'll take the blurred result, divide it by the mean.
We defined the mean to be data set B. That's the format of a 3dcalc command. This should look a little wonky if you've never seen it before-- is that an Australian term? It should look a little odd.
But 3dcalc uses A through Z to represent data sets, and then they're just applied in the expression. So data set A is going to be the blurred data set. B is the single volume mean.
So A is the time series; B is not. But still: you take A and divide it by the mean, multiply by 100. The effect is a constant scaling across time for each voxel, but the new mean is 100. Divide by the old mean, multiply by 100-- the new mean should be 100.
So now the values in the data set-- the shapes of the time series-- shouldn't really be changing here. But you have a mean that makes the values interpretable. So now, if you go from 98.3 up to 100.6, you can call it a 2.3% signal change. And the result of that, too, is that if you make your regressors unit-scaled in some way, then the beta weights can be interpreted as percent signal change.
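For reference, the scale block runs roughly this pair of commands per run (a sketch following the class naming; the proc script also caps scaled values at 200 and zeroes out voxels with no data):

    # voxelwise mean over time for run 1
    3dTstat -prefix rm.mean_r01 pb03.FT.r01.blur+tlrc

    # scale each voxel's time series to a mean of 100
    3dcalc -a pb03.FT.r01.blur+tlrc -b rm.mean_r01+tlrc \
           -expr 'min(200, a/b*100)*step(a)*step(b)' \
           -prefix pb04.FT.r01.scale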
OK, so let's take a look at the result of that. The input was the blurred data set. So I'll keep controller B as it is, and I'll just change controller A to be the scaled data. I'll set the underlay in the A controller on top to be the scale data set-- pb04, run 1, scale, OK?
And we just wiped out our data. That looks awful, right? There is the result. What's up with that? Why is that?
AUDIENCE: Because it's all set to 100?
PRESENTER: That's right. Every voxel now has a mean of 100. Back here, we're 893 here. So we're hovering around 1,000 here. And outside the brain, we're probably hovering below 100.
So there was a very clear gradient from brain area to non-brain area. Here, every voxel has the same mean, and so we've wiped out that contrast. Optimally, there would be no contrast here-- well, you'd have some contrast from the BOLD response, but that's about it.
Now, but how about the curves? Yeah, the curves are unchanged. They should look exactly the same, subject to any short integer truncation effects. But the shapes of the curve should be unchanged.
You might note, these look a little shorter than down here. That's just however the windowing is done-- plus, the bottom window is taller. Is it called hacking the data if I stretch my window? Anyway, now this looks a little more similar. So those curves should look unchanged.
But notice on the bottom, just to see something else: if I right-click in the graph window, I get this little pop-up, which you can't read. But in there, toward the bottom, it says the mean is 902 at this voxel. If I do the same up on top, the mean is suspiciously close to 100-- 99.9997.
So that's all the pre-processing. And remember, the purpose of all the garbage that has come before is just to get the data in a good place to be ready for the regression. So now we're ready to fit our model to the data and create beta weights.
So we're in standard space. Everything's registered together. We've done little things to, maybe, help out in some way-- T-shifting, blurring, scaling so we can talk about the beta weights more usefully. Those are all just little niceties to make the analysis work in a way that we like more. But now, we can actually run the linear regression.
Now, let's remind ourselves about the regression block before we look at anything up there. So back to the script: the regress block, of course, is where we do the linear regression. There are a few things to do first. 3dDeconvolve is where we either carry out the regression or prepare for it, if we do a 3dREMLfit later.
So we have a couple of things that are done first. The first thing is we de-mean the motion parameter files. We de-mean the motion parameters-- not as in ridicule them, as fun as that may be-- they'll just all have a mean of 0 per run. So each run is de-meaned.
And the little point of that is, they're going to go in the regression model. Now all the motion parameters have 0 mean, and basically all the polort baseline terms have 0 mean-- except for the constant term.
That wouldn't have a 0 mean. But that means, if you wanted to convert to percentage of baseline instead of percentage of mean, now, the baseline terms will be more usable down here. But basically, no one does that, so it's just a nicety.
So we de-mean them. Then, in the next step, we compute the motion parameter derivatives. Remember, you can apply the motion derivatives-- the first differences-- as regressors of no interest as well, alongside the motion parameters.
But we didn't ask to do that. They're still created, because it's just a little text file, in case you want them for some reason. But we didn't ask to include these.
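For reference, both of those steps are 1d_tool.py calls, roughly (names follow the class example):

    # de-mean the motion parameters, per run
    1d_tool.py -infile dfile_rall.1D -set_nruns 3 \
               -demean -write motion_demean.1D

    # first differences of the motion parameters (created, not used here)
    1d_tool.py -infile dfile_rall.1D -set_nruns 3 \
               -derivative -demean -write motion_deriv.1D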
The next step-- now we actually think about censoring. So we've got our motion parameter files. These two results we don't really care about too much. This step is going to take in that same dfile_rall.1D.
We tell it we have three runs, and we tell it we want to censor motion at a level of 0.3. That's a number we gave in the afni_proc command, right? We told afni_proc we wanted to censor at this level. And again, that basically means 0.3 millimeters-- a millimeter is clear for the three shift parameters.
But for the rotation terms: at about two-thirds of the way out from the center of the brain, a 1-degree rotation is about a millimeter. How should you scale these? I don't know. So it seems fine to just equate them, unless you go to a different species.
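The censoring step itself is roughly this (a sketch; names follow the class example):

    # censor TRs where the per-TR motion estimate exceeds 0.3;
    # writes motion_FT_censor.1D, motion_FT_enorm.1D, and a CENSORTR file
    1d_tool.py -infile dfile_rall.1D -set_nruns 3 \
               -show_censor_count -censor_motion 0.3 motion_FT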
So anyway, based on this motion level, we'll decide which TRs to censor. Let's look at the files that are created by this. So in the directory, we have these motion_FT files. Hard to see with the overlapping black terminals, huh? So we have a censor file, a CENSORTR file, and then an enorm file.
The censor file-- let me just briefly show that first. I'll just look at it with less, because I'm strange. It's a bunch of 1s. If I page down, now we get to a handful of 0s. That's how censor files look in AFNI: 1 means keep the time point, 0 means throw it away.
So this file is going to be given to 3dDeconvolve for censoring. And that's all it is-- a simple text file of 1s and 0s. The plot is no more exciting: it's flat, down to 0, back up to 1.
It's more interesting to look at this in context with the other files, which we'll see soon when we do the SS review driver. That's, again, the minimum quality control check that you should do with every subject. So that will be a better place to look at this.
The other file that's-- one of the other files that I mentioned here is this motion_FT_enorm file. Just to abuse you with the types of files here again, I'll look at it with less again.
This is just another text file. You notice, interestingly, it starts off at 0. How is this created? In the script, this 1d_tool.py command that does the motion censoring goes through a few steps with the motion parameters.
First of all, it takes the first difference of them-- the time point to time point difference. So this is six columns, six time series of motion parameters. At each time point, it computes the difference of the current value from the previous value, and it starts at 0 because of that: there's no difference at the first, 0th time point.
So it takes a first difference-- and that's the same as the derivative, say. It takes that first difference, and now you have six time series of first differences. If those numbers are big, that suggests the subject is moving.
Because if I had to move this far to register to the base volume at this time point, and then farther to register at the next time point, that means there's a shift between those subsequent time points, right? So that's per-time-point change in position by the subjects, according to our motion parameters, which are not perfect.
So we have the first differences, and now we take the Euclidean norm of those six numbers-- three rotations, three shifts. We take the Euclidean norm, the square root of the sum of the squares, and that gives us a distance, say, that we use as an estimate of the change in location across time. Why the root of the sum of the squares? Well, a distance of 1, and 1, and 1-- how far is that? It's the square root of 3, right? But now we're talking about six dimensions instead of three.
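In symbols, with d denoting the TR-to-TR first difference of each of 3dvolreg's six parameters:

    enorm(t) = sqrt( d(roll)^2 + d(pitch)^2 + d(yaw)^2
                   + d(dS)^2 + d(dL)^2 + d(dP)^2 )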
So what does this look like? So instead of using less-- whoops, I just blew my terminal away. Sorry, my fingers get wild sometimes here. So: cd AFNI_data6/FT_analysis. So, 1dplot motion-- oh, FT.results. Oops-- I hit Tab with FT and missed the .results. So: 1dplot motion_FT_enorm.1D.
So let's just look at this file. This file is shown to us, again, in the review driver script. But let's just look at it alone.
So there's our estimate of motion across time. It's an estimate, but it's a distance of sorts-- movement across time in some way. With a rotation, it's hard to say how much you're moving.
In the middle, you're not moving; at the edge, you are. And you have to worry about-- if I rotate and then shift, how does that play out in terms of real distance? It's hard to say. But it should be a reasonable estimate.
You notice we have two spikes here. Wasn't there just one motion? Why do we have two spikes?
That's right, it is actually two movements. They moved down and back up. It was two motions: just one volume was out of place, but that means they had to have come back to the original location. So that's your major motion right there, and then we have a couple of little spikes down there.
And we had set the censor at 0.3 millimeters. So if you draw a line, a horizontal line at 0.3 here, any place that you touch this line, then you have to think about censoring. There's one other little point here.
When you censor, if you've changed your position between this time point and the next one, if the position has changed-- if there's a difference, I should say-- when did they move? This time point, or that one, or between them?
We don't know. This is a difference; we can't tell when the motion actually happened. So what does afni_proc do? It censors both.
AUDIENCE: This is the output of this 1d_tool.py?
PRESENTER: That's right, that's right. But from the afni_proc command, you just say the 0.3, and that's what it uses. If you had asked for censoring of outliers as well, it would basically multiply this resulting censor file with the outlier censor file to get a new censor file that has, presumably, more zeros in it, OK?
AUDIENCE: So have we used the outliers which we counted in the beginning? At the very beginning, you counted outliers. What did we do with them?
PRESENTER: The outliers that we counted at the very beginning-- we're not doing anything with them except choosing our volume registration base. But if we asked afni_proc to, it could use those. We could say, censor any time points where the outlier fraction exceeded 0.05-- 5% of the brain being outliers. We didn't do that, but you could. Just another option.