15 - FMRI Analysis Start to End: Part 5 of 5
September 14, 2018
May 29, 2018
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
For more information and course materials, please visit the workshop website:
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience, since a lot of code and text is displayed that may be hard to read otherwise.
PRESENTER: Let me make a few comments about this again, just to hopefully strengthen the idea of what the linear regression is when we plot-- let's look at whichever; the no-censor window is fine. So here's our regression matrix again. Here's the model fit to the data.
At our current voxel, remember, you've basically told 3dDeconvolve, this is my regression matrix. So now it's 3dDeconvolve's job, or whatever is doing the linear regression, to fit this to the black data, the black time series, at this voxel location. It tries to find some multiplier for each of these regressor time series and add them together so that the fit is as close to the black curve as possible. And "as close as possible" is defined such that the residual time series has the minimal sum of squares. That's the purpose of the regression.
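To make that concrete, here is a small sketch of the same least-squares idea in Python. The regressors and numbers are made up for illustration; this is not the actual bootcamp design matrix, just the principle 3dDeconvolve applies at each voxel.

```python
import numpy as np

# Toy version of what happens at one voxel: find beta weights that
# scale each regressor so the summed fit is as close as possible to
# the data, minimizing the sum of squared residuals.
rng = np.random.default_rng(0)
n_t = 150                                # time points
t = np.arange(n_t)

baseline = np.ones(n_t)                  # constant term
drift = t / n_t                          # slow linear drift
task = np.zeros(n_t)                     # idealized task regressor
task[20:40] = 1.0
task[80:100] = 1.0

X = np.column_stack([baseline, drift, task])   # regression matrix
true_betas = np.array([100.0, -3.0, 2.0])      # e.g. a beta weight of 2
data = X @ true_betas + 0.5 * rng.standard_normal(n_t)

# Least-squares solution: betas minimizing ||data - X @ betas||^2
betas, *_ = np.linalg.lstsq(X, data, rcond=None)
fit = X @ betas
residual = data - fit
print(betas)    # close to [100, -3, 2]
```

Any other choice of betas would give a larger sum of squared residuals, which is exactly the sense in which the fit is "as close as possible."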
So it might have a beta weight of 99.3, 0.1, 0.002, negative 0.019, whatever-- one beta weight on each one of these regressors. And we can actually look at those beta weights. So at this voxel, let's look briefly at the beta weights for some of those regressors.
This time series is for this voxel, so let's just look briefly. I'll set my overlay to be the visual reliable coefficient. Well, it happens to be almost exactly 2. The beta weight is 2. So 3dDeconvolve's solution for this fit model had a multiplier of 2 times this yellow or gold curve added into the fit, so 2 of this comes out here.
And does that make sense? Well, if we go to the time series window here, if I just click somewhere, it's going to follow the black curve. So if my red dot is here, the value is 98.23. What should the value up there be? Probably just over 100, 100.11.
So this height is basically a height of 2, and this height is the beta weight at this voxel. It's the height of the gold curve applied in the fit. And you notice that this third bump is from the visual reliable task. These first two bumps are from the auditory reliable.
If we go to a different location in space-- let me just do this myself briefly. If I set my overlay to be the V minus A GLT t-stat, now I have to lower the threshold; 27 is pretty strict for that. I'm going to just very quickly clusterize this. Cluster, Set, Report, Jump.
Well, that voxel kind of sucks. That kind of sucks, a lot of garbage in here. So these are showing big differences, but they're not too exciting. Let me try to be a little more picky about the threshold-- I shouldn't be using a power of 3 there. That's better.
So this voxel somewhere-- where in the world are we now? If only we had a button that said, where am I? But, of course, that's not possible. So at this location of the brain, I'll go back to the All Runs underlay now that we see where we are.
We have this time series here, and you notice these first two bumps are smaller. These are big bumps. That's a little bump. So we can see the height of this is basically the beta weight for the visual reliable condition, and the height of this is the beta weight for the audio reliable. Can you see they're different? And they fit the data well, so it's significant.
So we see a big difference. What in the world is with this blue spike? That doesn't look good. Well, that's around that motion time. What happens in the regression model, and actually in the output of 3dDeconvolve, with censoring? Well, when you censor a time point, the residual, by definition, is zero.
Because of censoring, your residual value is set to zero, and therefore the fit value is actually equal to the data value. So your fit will be on top of the data at any time point that you've censored. That may look a little odd here, but that's why you get this censor spike in there.
I think that's enough from the GUI. I just wanted to do one quick group analysis from this. Does anyone have any last questions about this analysis? The first level analysis, single-subject analysis is fully complete here. Any last questions?
AUDIENCE: So [INAUDIBLE] censoring, and here do you include the censored frames?
PRESENTER: The censored time points are included because this is all 450 time points. So what that means is that at any censored time points, the residuals will be zero, and the fit will exactly match the data. So the blue is actually on top of the black here, though you can't see them.
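A small sketch of that censoring behavior, with made-up data rather than the bootcamp dataset: the censored time point is dropped from the regression, and in the output time series the fit at that point is simply set equal to the data, so the residual there is exactly zero.

```python
import numpy as np

# Hypothetical voxel time series with one motion-corrupted time point.
rng = np.random.default_rng(1)
n_t = 100
X = np.column_stack([np.ones(n_t), np.arange(n_t) / n_t])
data = X @ np.array([100.0, 2.0]) + rng.standard_normal(n_t)
data[50] += 40.0                      # motion spike at time point 50

keep = np.ones(n_t, dtype=bool)
keep[50] = False                      # censor that time point

# Solve the regression using only the kept time points
betas, *_ = np.linalg.lstsq(X[keep], data[keep], rcond=None)

fit = X @ betas
fit[~keep] = data[~keep]              # censored points: fit := data
residual = data - fit
print(residual[50])                   # 0.0 by construction
```

The spike no longer influences the betas at all; it just shows up as a point where the fit rides exactly on top of the data.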
So now we've done our single-subject analysis. Let's do a quick, yet arduous, journey of a group analysis. So you can just watch, nothing too exciting here, but I'll try to go quickly. So I'm going up to Directories, back into the AFNI_data6 directory, and then I'll go into this group_results directory here.
And you want a covariate or just a paired test? How about just a paired test? So we've got this-- script 5 is a paired t-test. In order to do a group analysis in AFNI, you don't necessarily have to move any of your stats datasets around. You can just leave them where they are. You can include volume index selectors or volume label selectors in your t-test command, so you can say, I want the Vrel coefficient data volume, the visual reliable one. So you can request it by name in the command. I'll get back to that.
But that's not what we did here. We just threw together a directory that had the ordinary least squares betas, the two betas for our set of 10 subjects, and then we had REML, using 3dREMLfit and the [INAUDIBLE] model for temporal autocorrelation in the noise. So we have REML results for our subjects, too, so you can trivially play with these things. Just to show you, running 3dinfo on one of these files: it's just the two volumes with the Vrel and Arel coefficients. So the beta weights were extracted with 3dbucket and thrown here, and, of course, you can see the 3dbucket command at the end of the 3dinfo output.
So let's run a quick t-test. This t-test script was generated with gen_group_command.py. If you just want to do a t-test, or 3d[INAUDIBLE], or [INAUDIBLE], or something like that, and you have 80 datasets, you don't want to type in a command that has 80 datasets-- typos, and it's just a lot of work. The purpose of gen_group_command.py is to let you give -dsets with a wildcard to specify all your datasets. It will expand this, and it will include the sub-brick selectors, so that you don't have to type that out 80 times. And the result of that is something that looks just like this.
So let's run a group t-test. Again, that's a long journey-- tcsh. How do you spell tcsh? Done. So now it's computed everything, and we can look at that overlay, the stat 5 t-test. That's the contrast and the actual beta weight of it. Where am I going? Let's threshold this at-- that's fantastic, about 3.01. Perfect. And I'll scale this to a magnitude of 1.
So this is a paired t-test. At every voxel, we have 10 subjects' audio reliable beta weights and visual reliable beta weights. So the paired t-test compares the 10 audio reliable values-- 10, just 10; it's 10 subjects now, not 450 time points-- against the 10 visual reliable values, and takes the V minus A contrast and the stats here. And that's what you're seeing.
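The same paired test can be sketched in a few lines of Python. The beta values below are invented for illustration, not taken from the bootcamp subjects; the point is that a paired t-test is just a one-sample t-test on the per-subject V minus A differences.

```python
import numpy as np
from scipy import stats

# Hypothetical Vrel and Arel beta weights at one voxel, 10 subjects
vrel = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.3, 1.9, 2.2, 2.4, 1.7])
arel = np.array([0.9, 1.1, 0.8, 1.0, 0.7, 1.2, 0.9, 1.0, 1.1, 0.8])

t_stat, p_val = stats.ttest_rel(vrel, arel)

# Equivalent by hand: one-sample t-test on the paired differences
diff = vrel - arel
t_manual = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
print(t_stat, t_manual)   # identical; df = 9 with 10 subjects
```

So the degrees of freedom here come from the 10 subjects (df = 9), not from the 450 time points of any single subject's scan.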
I'll just very briefly clusterize this again, just so we can see some clusters. And lo and behold, we have some clusters in the visual area and some clusters in the auditory area. Fantastic. For a real group test, of course, you'll be more picky about doing this well. But that's, more or less, all there is to it in this case.
After this point, you need to decide: based on the blur estimates of the data, you'll probably pick some cluster size that defines significance. So presumably 176 voxels would be fine. This actually gives a very clear group result, even with only 10 subjects. Normally, you don't have such huge BOLD responses with such clear beta weights, such clear results. But here, with 10 subjects, it's a piece of cake. And the distortions across subjects are not great. So even with that involved, we get a result.
Just as a reminder, when I showed the images for this, I used the contrast t-stat for my threshold, because I want to show in my paper voxels where the contrast between V and A is significant. So I set my uncorrected p-value to p of 0.01470. That's not very good. Let's be more picky than that. Can we go down to 0.001, to appease any crazy statisticians in the room, like Gang? So a little tighter now.
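The GUI is doing a simple conversion here: an uncorrected two-sided p threshold maps to a t threshold through the t distribution with the test's degrees of freedom. A sketch, assuming the paired test over 10 subjects (df = 9):

```python
from scipy import stats

# Two-sided p threshold -> t threshold, for a paired t-test with
# 10 subjects (df = 9). Half the p goes in each tail.
df = 9
for p in (0.01, 0.001):
    t_thresh = stats.t.ppf(1.0 - p / 2.0, df)
    print(p, round(t_thresh, 3))   # roughly 3.25 and 4.78 for df = 9
```

Dropping the uncorrected p from 0.01 to 0.001 raises the t threshold from about 3.25 to about 4.78, which is why the map tightens up so much.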
Anyway. So uncorrected p of 0.001, and you decide on what cluster size you need for significance at 0.05, or whatever. And then you get your results. Any questions about this? We'll babble more about that stuff later, but here's an example just doing it here.
OK, so after this time, you have basically three choices. You can run away screaming, or not screaming-- I don't know if that's two extra choices. Or we have assignments you can work on, just to help you practice running AFNI commands-- by yourself, or with your friends, or whomever-- typing in the AFNI commands and thinking about them. These are basically imagined questions your colleagues might ask you: How do you do this? What does this mean? And you're trying to figure it out.
So we have a handout with these questions, a separate handout with hints, and a separate handout with answers. Don't mess around, just open them all up. But first, read the question and think about it. Then read the hint, and think about it. Then read the answer. So you actually go through a progression; that's a very helpful way to practice.
The other thing you can do is-- we will just roam the room, chatting with people for a couple of minutes at a time. And you can also talk to us about your experiments or whatever data issues you may have. We will do this again tomorrow for the last class. One option during this time, too, is, if you feel so ambitious, you could bring your own data-- anat, EPI, and timing files-- and try to analyze it, you know? You can do that. So you can ponder.
But anyway, let me just show you the slides, the handouts that have the questions and whatnot. And then your time is your own. So I'll cd into AFNI_handouts. And these are called something, something with jazz involved. [INAUDIBLE]
OK, AFNI 19-- oh, the ones that are in green there, OK? I think green is accidental; they have execute permissions on them. So these are the handouts. AFNI jazz is just the questions, the hints are the hints, and then the answers are the answers. So you can just open them all and ponder them.
Of course, you can do them in whatever order you want. So the first one talks about running 3dbucket. Later on, one of them talks about understanding the X-matrix-- here, problem 5 talks about the X-matrix. So you can peek through these and see what's interesting to you. Otherwise, chat about your experiments, or just run away.
PRESENTER 2: Do you want to show them the classified help [INAUDIBLE]?
PRESENTER 1: Oh, sure. Sure. Just a little help page that Daniel wants to show you. So if I go to AFNI, under the documentation-- so again, just at the main AFNI, say, we have these tabs up top. The documentation is this Sphinx documentation that we have.
And Classified Program List-- I bet that's what you mean. So 2.5 here under Educational Resources. And what this has is these blue headings are types of programs. So remember, we babble about having 600 AFNI programs to keep you occupied with. What programs should I use to do this, depending on what your "this" is?
If you want to, say, do a registration-- I don't know-- that could be anywhere here, too. Do we have a registration section? Correlation-- resting state-- I don't know. Let's just pick one, OK? Edit dset headers-- so you've created your own datasets, but the source of the data was not good, and left and right is incorrect.
So the data claims that this is the left side of the brain, but it's really the right. And you want to fix that in the header, so you need to be able to edit dset headers. We can click on any of those. There's the caption, and then there are a bunch of programs.
And some of these programs have big numbers in front of them. Those might be programs we would suggest you use. There are a lot of programs that are somewhat obsolete, or antiquated, or otherwise not recommended. We haven't filled this all in perfectly, but here's a list that can actually get you to the right spot. So 3drefit is the one that really allows you to change an AFNI header. If you have a NIfTI dataset that you want to fix, nifti_tool would probably be your go-to resource.
So you may be interested, if you're going to use AFNI for an extended period of time-- hopefully, you're not done with AFNI as of this week-- if you use it for an extended period of time, you might want to be informed of updates that are, say, more than trivial. So every week or three, we send out an AFNI digest, just a little text note with a couple of noteworthy items, to keep yourself up to date.
So if you want to know about that, you can just send us an email to that AFNI bootcamp address. Or do they go to a website for that? Do they sign up for that?
PRESENTER 2: I don't remember.
PRESENTER 1: I don't remember either. So anyway, just contact us, and we can-- yeah, we can sign you up. It might be an NIH website where you apply for this, but you can figure that out.