04 - AFNI Interactive: Session 2 of 2
Date Posted:
July 31, 2018
Date Recorded:
May 28, 2018
Speaker(s):
Daniel Glen, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Daniel Glen, NIMH
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
DANIEL GLEN: All right. So this left-right issue has been a big deal. We found it with the OpenfMRI data, we found it with the Fcon 1000 data. The original user found it on one of the Fcon 1000 sites. We looked at it and found it on a few more by creating this kind of script that automatically checks for it. We found it on the ABIDE database. We keep finding this problem.
You can't trust data, OK? You can't trust it -- not even your own data. People send us data, and we find problems in everybody's data. But when it's vetted by national and international groups, we expect that it's going to be OK. It's not OK. You have to look at your data even if it comes from somebody else.
And hopefully these tools will help. We have made tools that find a lot of different kinds of problems, but we haven't found tools that find every kind of problem. You have to look closely and see your new problem. Then we can make a new tool. That's good.
All right. We go through a lot of rants during this class. Don't worry about it. We'll continue on with that. It's tradition. OK. So now I'm going to switch to-- what am I going to switch to? Oh, yeah. OK. So if I've got something on the screen that I like, I will continue on with this image viewer. I can open up this Disp button, and it brings up another long menu.
It has a lot of things in it. I won't talk about 90% of this menu. It's got nice things here like projections. You can do minimum and maximum, mean and median projections. All very useful. If you want to save your data in a different format, you see over here we have Save .jpg. If you want to save it as a GIF -- or "jif," however you want to pronounce it -- you can select Save to GIF there. And it changes that extension to the GIF extension.
And if you click on that button, you can say, I want to save this image, this axial slice. I want to show this on my web page, so I'll say, OK, this is my web page, and it will automatically end it with a .gif. And in my terminal, it's computed a GIF file for me. And it's there.
I do-- you can see it. There it is. And however you want to view GIF files is OK. You can look at it with AFNI, with the AFNI image viewer. We provide a little program for looking at GIFs and JPEGs and PNGs and so on. It's a handy little program to have. You may use [INAUDIBLE] or something like that, EOG. There are a thousand programs that will do this kind of thing. But this is one that we wrote, so here it is. If you have AFNI installed, you automatically get this for free. It's all free, so--
All right. Lots of different things in there. I'm going to have to close this because there's no space. Well, let's see. Down here, we have the Edge Detect and Flatten and Sharpen and the BG paint. OK. I won't talk about that. The Edge Detect is something I use occasionally. We'll look at that when we look at alignment. So that changes everything into an edge.
You could do the same thing by just pressing the E key over the image viewer. Let's see if I can get that. Yeah. I have to close the display menu to get it, but there it is. So the E key is a toggle for edge display. All right. OK. So close the display control. And now we're going to go on to the next thing. We need space. And also, these two aren't compatible with each other. You can have either the display or the Mont, which stands for montage. So montage.
So I'm going to montage this. I'm going to show you this montage. The montage is a way to show multiple slices all at the same time. OK. So if I want to show four slices across and three slices down and I want to show them five slices apart with a two-pixel border and that border should be colored red, I can do all that there. You could of course choose your own settings and then select Draw. And there you go.
So four across, three down, five slices apart. And you'll see that over here we have all these slices showing up on our sagittal view. So our axial slices are now in a montage; our sagittal slice is still just a single slice. They're still all connected to each other. I can scroll through my slices with the scroll wheel. If I want full brain coverage, I could use either more slices or slices farther apart. Say, 10 slices apart, and then Draw. Pretty much full brain coverage there.
So this is handy for creating reports. And I'll show you how to drive AFNI to actually get these kinds of reports automatically. Now you can have multiple montages in multiple slices. If you do that, the crosshairs will get confusing and distracting, so I recommend-- you could turn off the crosshairs over here if you want, or you could have just a single crosshair. And then you only see one set of crosshairs from the center slice of your montage.
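The coverage arithmetic behind a montage is simple enough to sketch. This is a hypothetical helper, not part of AFNI: a montage of `across x down` panels stepping `spacing` slices apart spans `across * down * spacing` slices, so full coverage needs the spacing below.

```python
import math

def montage_spacing(n_slices, across, down):
    """Smallest slice step so an across-by-down montage spans all n_slices."""
    panels = across * down
    return math.ceil(n_slices / panels)

# A 4x3 montage over a 60-slice volume needs slices 5 apart for full coverage.
print(montage_spacing(60, 4, 3))  # -> 5
```

The same idea explains the demo above: with 12 panels, going from 5 to 10 slices apart doubles the span you cover.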
I may have gone through that a little fast. Does anyone need a little help on that? It's OK. You do?
AUDIENCE: Where is the export GIF part?
DANIEL GLEN: Oh, the export GIF. So over here, this button can open up and show you the different kinds. You can also right-click on the Save button here. And every image viewer has its own Save button. So if you right-click on it, you can select the format there too. And you can still save animated GIFs and MPEGs too. Now, how could you save something animated of something that looks like a single slice? Well, what you can do is save all the slices in a row, and you can do that here. Let me turn the montage off first here.
So to turn off the montage, you go back to one by one -- back to your original -- and then you select Set or Draw. I'm going to do Set to close the menu. And I'll resize it here if I can grab it. So let's say I want to go through all the, I don't know, axial slices. Go back to the axials. I want to look at all the axial slices first.
If you hit the V key over the image viewer, that puts it in video mode. And you can see which slice it is on in the other viewers, the coronal and sagittal slices. It goes across, up through the top, and comes back up through the bottom of the head. And then it'll continue like that.
AUDIENCE: V should do it?
DANIEL GLEN: Just v. Lowercase v. There's no luck with v?
AUDIENCE: I'm not seeing anything. No.
DANIEL GLEN: No? You're getting it with-- you're on the Mac? Yeah. Just-- it's not shift-V, it's just--
AUDIENCE: I have the image menu opened.
DANIEL GLEN: It's on the-- you click over the image viewer. Now, to stop it, you can hit any key. So if you can find your any key, you can click on that. My any key is the spacebar, but you can click any key you like. Oh, I'm clicking the wrong one. OK. So each of these image viewers can be going around independently. You can set up recorders to record these. There's a record button that will eat up memory very quickly. We'll show that with Suma later, where it will eat it up even faster. So those are some ways to save your images.
If you select Save as an animated GIF, it will save all of them there. That takes a little bit longer, so I won't do it here. But you could save that video as an animated GIF. That could go in your Facebook feed, for instance, if you feel like it.
[LAUGHS]
All right. OK. So we talked about montages. Now, let's look at some other ways to look at more data, because more data is good. OK. So this is where we're going to use up our screen space. Big monitors are better. OK. So here, click on the New button on the left side of your AFNI menu. And you'll see another menu shows up. It's almost a clone of the first menu. And there are two important differences. So what are the differences?
AUDIENCE: Different logo.
DANIEL GLEN: Different logo. Yeah. That's one important difference. The second one says something about Suma box, which looks suspiciously like Starbucks but has nothing to do with it. It is not a copyright infringement. But that's there. There's one more thing that's important here. Up here in the caption of the menu is a little window decoration. It says B. And the first one says A. And then it says some other things about what's in the underlay and what's in the overlay -- we'll talk about overlay in a second.
And so we can have controllers A and B, OK? So controller B is not showing any images. This is how you show an image -- I somehow missed that one part, how to show an image. OK. So let's do that now with another data set. Let's click Underlay to select another data set. This time, we'll select the EPI R1 data set. This is our first run of our EPI data.
This is the data that we will analyze to death. OK? This is the one that we love. We like anatomicals, but we love EPI data because that's where our function is going to be.
I selected it and nothing is there. That's kind of disappointing, I know. If you want to see what it looks like, open up an image viewer. You can open up axial, sagittal, coronal. I'll just do axial and sagittal because there's not that much room here. OK? And wherever you click in one viewer, you'll see that the other viewer is updated to the equivalent location.
Now, the anatomical data is high resolution, around 1 millimeter -- it is 1 millimeter. The EPI data is about 3 millimeter resolution, much lower. And wherever I click, I get a coordinate in each one. So I click on the EPI data, I get this coordinate: minus 19, minus 79. And I get something close to it in the anatomical data. Not exactly the same, because they have a different grid. This is showing me the center of each of those voxels. It finds the closest voxel in the other data set.
But they're all-- these controllers are what we call linked together. The A and B controllers are linked to each other. And I can click on the anatomical and it will update the EPI data set. And I can do this for lots of controllers. So we can control the locking -- the linked controllers -- here in the Define Datamode menu. So click on Define Datamode.
If you don't see the center panel, you can click on the Etc button. That opens up this menu to the right. You'll see that we have these menus with right arrows on them; those open up another panel to the right. OK. And at the bottom of the Define Datamode panel, you'll see the Lock menu.
And you'll see that controllers A through J -- that's 10 controllers altogether -- can all be locked together. You can clear the ones you don't want locked. You can make some of them locked together and others not. You can lock time: if you're showing two EPI data sets, two subjects, you can see them simultaneously with time locked together. And you can lock the p-values.
We'll talk about thresholds. The thresholds can be locked together too. So there are a lot of controls there for looking at, let's say, multiple data sets at the same time. And this is for up to 10 data sets. Practically speaking, it's difficult to look at so many data sets in so many windows, so for myself, I've only done a maximum of about six at the same time. But it is a way to see whether things are roughly in the same place -- your alignment is roughly right. I click here, and that looks like it's in roughly the right spot over there.
Some other packages -- which sound like some other packages -- only give you this one way to look at whether two data sets are aligned to each other. We will look at lots of different ways to check for alignment. This is one way to look at a lot of data, but we'll show you some more. OK. So I just wanted to show you where that is and that the controllers are locked together. Let's see.
AUDIENCE: These images were already registered [INAUDIBLE].
DANIEL GLEN: These images were not pre-registered. I mean, they're pre-registered only by the scanner. The scanner reports coordinates and as long as one of them is not oblique, they should be in roughly the same place inside the AFNI viewer.
AUDIENCE: The lock-- so down the road, if we start adjusting these in their lock, these adjustments propagate [INAUDIBLE].
DANIEL GLEN: The locking is just a GUI part of it. It doesn't change the data set in any way. It just makes sure that the coordinates come as close as is available in the GUI. It doesn't do alignment. OK. So we've got the multiple controllers and the Etc button. Now we're going to get to the most interesting part of our data: the graph of the data.
So these image buttons are pretty prominently placed, but right next to them we have Graph. And if we select -- let's see, a sagittal slice over here -- and we select Graph, well, we get something that's not as exciting as I led you all to believe it would be. And I've got the crosshairs set in a weird way here.
OK. So I've got the center voxel's value in the anatomical data set. We don't use graphing so much for anatomical data sets; we use it more for our functional data sets, where we want to see something across time. So I'm going to close the graph and do the same thing on the EPI data.
OK. So here's my sagittal graph. And now I've got some interesting data. Now we're starting to look at the stuff that we're going to spend the whole rest of the week analyzing, these graphs. So we can click anywhere and we get the graphs of the voxels that are shown here. This box is 3 by 3 voxels, so we have 9 graphs corresponding to that 3 by 3 box. Every graph shows us the intensity across time.
So we have-- this is at time 0. It shows us our index is 0, and it shows us the corresponding time. The time is not going to be exactly 0, because we have slice timing -- we don't acquire all of our slices at the same time. AFNI knows that and shows it here. We have the pre-steady-state peaks. You'll see that virtually everywhere in our data we've got these pre-steady-state peaks or drops.
And before the signal stabilizes, we see these high or low intensities. Generally in our analysis, we're going to ignore those. And we can ignore them in the menu too, by clicking on the FIM button and selecting Ignore. You can bump it up and keep going back and forth. Or you can select how many you want to ignore. Generally, you ignore five or six seconds' worth. So with a three-second TR -- I'm not sure if it's three or two or whatever it is -- you could put three there. I think we use two in the class, so I'll go back and select two. And you'll see the peak is gone, and it won't affect our scaling.
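The "seconds' worth" to volume count conversion is just a ceiling division. A tiny sketch (the six-second figure is the rule of thumb from above, not a fixed AFNI default):

```python
import math

def volumes_to_ignore(seconds, tr):
    """How many initial volumes cover the pre-steady-state period."""
    return math.ceil(seconds / tr)

print(volumes_to_ignore(6.0, 2.0))  # -> 3 volumes at a 2 s TR
print(volumes_to_ignore(6.0, 3.0))  # -> 2 volumes at a 3 s TR
```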
So let's look some more at these graphs. I'm going to close this one for now to get some more room, make this a little bit larger. OK. So some of these have these nice up and down features. We're expecting to see something going up and down because we have this 15 second block stimulus and then a 15 second rest. And so we're going to expect to see something every 30 seconds in different parts of the brain for this very simple somatosensory stimulus. OK.
If you press, let's say, the M key -- uppercase M, shift-M -- over the graph window, you can increase the number of graphs. And as you increase the number of graphs, this box gets bigger. We can go up to, I think, 16 by 16 graphs -- 256 different graphs of voxels through time. It's a lot of data to look at, but it's a useful way to look for trends in large areas. And as I go through the slices, the graphs are updated.
So it's very interactive. I can go back down to a single graph by pressing lowercase m over the graph viewer -- bring it all the way down to, say, one. And you can click on other graphs and change. So here's one that's up and down. And you'll see, particularly if you go along edges, that sometimes you'll see these sharp peaks. And this is where you actually look for motion. It's a good way to look for motion anyway.
You look for motion by looking for sharp peaks, either up or down. That's showing that our intensity is suddenly changing across time, and the reason is that the subject has moved. You can click on the time point and it shows you that time point. And if you use the arrow keys, you can rock back and forth. It's generally easier to see on the sagittal slices.
Mostly, people in scanners will rock their heads a little bit. If you've got animals that are head posted, there's a different set of problems. They don't rock their heads like that, but for humans it's generally like that. And here, this is a pretty good subject. We really only have one major time point that has motion. And we even had to synthesize it to make this. So the motion is a little bit simpler than what it really is in real life.
On a graph window, you can also press the V key, just as we did in the image viewer. If you do that over the graph viewer, we can now travel through time. This is incredible, isn't it? Traveling through time, just with AFNI. OK. So we're traveling across all of our time points. You see the little bouncing red ball is taking us through time -- our own little time machine. So we can see what happens at each time point.
And we recommend that you do this for your subjects. Look to see what's happened across time, because this is your data. This is the data that we're analyzing. And if you look closely, you will find problems. You just have to be sure that they're not affecting your analysis and your interpretation. And to stop, you press any key -- spacebar -- and then you've stopped at that point in time. It also works with scroll wheels and arrows and that kind of thing. Any questions about time travel?
OK. That was easy. All right. Oh, I have to do some ideal-- yeah. So we're going to be looking at an analysis that has that stimulus timing. And we want to look at the best case of matching our model. We can look at that by selecting FIM and then the Pick Ideal button. Pick Ideal brings up this list, and the analysis -- AFNI -- will generate this EPI R1 ideal time series for you.
These files that end in .1D are text files. You can open them up in any editor; they're just a single column of numbers. This one has 152 numbers in it. Then just select Set. And you can see -- the ideal voxel will have this up-down pattern that looks just like that. Real life will not give us that. Real life gives us something like-- let's jump to a location that I wrote in my notes. You can find a lot of similar locations.
Minus 22, 72, 18, and then Set. OK. So that's a good voxel. I know it looks noisy -- it doesn't look as nice as the red one -- but that's a really good voxel. It's a really good fit. This is a giant percent signal change; we'll look at that later. And if you click somewhere else -- just to mention this -- you click someplace else over here and go, oh, I missed that great voxel I was just on, you can right-click and select Jump Back. This only works one level deep. If you click twice, you can't jump back, or you might jump back to something else that you don't like. All right.
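Since .1D files are just one number per line, the block structure of an ideal like this is easy to sketch. This is only the raw 15 s on / 15 s off boxcar; the real ideal time series also folds in the hemodynamic response and timing details, and the filename and TR here are assumptions for illustration.

```python
def block_ideal(n_vols, tr, on_s=15.0, off_s=15.0):
    """Raw 0/1 boxcar: alternating on/off blocks, sampled once per TR."""
    period = on_s + off_s
    return [1.0 if (i * tr) % period < on_s else 0.0 for i in range(n_vols)]

ideal = block_ideal(152, 2.0)  # 152 time points, like the file above
with open("ideal.1D", "w") as f:
    f.write("\n".join("%g" % v for v in ideal) + "\n")
```

The resulting file opens in any text editor and can be plotted or used as a reference regressor.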
Got to make some more time here. So I am going to-- I think I'll close-- yeah. Let me close the second controller. And I'll go back to the first controller and now we'll talk about the overlay. So click overlay, and we'll select the same EPI data we were just looking at, but not as underlay. This time as overlay.
And now we're looking at it in color. The overlay is showing us color over our background underlay data set, which is why we call them overlay and underlay. And it looks very different here. You can see the anatomical data set has a much larger coverage. And we have a new menu that popped open, this Define Overlay panel. So let's look at this Define Overlay panel. We can change some things here.
Well, first of all, the overlay button over here shows you what overlay you're looking at. Here we're looking at overlay number 0 -- we start counting in AFNI at 0 -- so this is the first time point. If we want to look at other ones, we can click on that button and it shows 0 to 151 in this long list. We can also -- this is another kind of hidden menu -- right-click where it says Olay, and you'll get a list that you can scroll through. And you can pick a time point that you like.
Oh, I forgot to mention one thing about the pre-steady state. Some scanners -- many scanners -- throw that away. Don't throw that away. These things that they call dummy scans, you will sometimes need, because they have excellent spatial contrast. They're good for alignment. They're not so good for our functional analysis, but they're good for alignment, so keep those.
If you have a low flip angle, you will have spatial contrast that's not very good, but your pre-steady state may still be good, so keep those. You can use them for alignment. All right. OK. So we can look at data like that. Right now, it scales from the minimum to the maximum.
And this minimum and maximum is 0 to 2,889 on this particular subject, subject number 21. If you want to change that, deselect the Autorange button and let's put in something lower. So instead of 2,889, maybe I want 1,500. All right, 1,400 -- doesn't matter. 1,400 is too low, so let's try 1,800. Starts to look more reasonable. OK, I'll go with 1,800. This is all positive, so maybe I'll click this positive-only button below the color bar.
And again, it's sitting right on top of our data. So we have these two data sets that are supposedly in alignment. We can see whether they are by changing the opacity -- that numbered control to the right of the image viewer. Every image viewer has a little control with a number next to it that sets the opacity. This is like transparency, but in reverse: the higher the opacity, the less transparent.
So let's make it less opaque. I change it down to three, and you can see through it. You can see through your data set. And you can see that this data set is roughly in alignment. It's not exactly in alignment, and we'll see some ways to look at that in more detail. In some areas, the intensity has dropped off. Down here, in this inferior section, you'll find that some areas are not covered well by the EPI data. And if you're interested in those areas, you have to scan your data differently.
So you have to accommodate for that. The magnetic susceptibility dropout is causing this reduction in intensity. OK. All right. Let's continue on. Oh -- to turn the overlay on and off, you can press the O key. That's just a toggle. If the overlay is off, you can press the U key. Both of these keys are good ways to see whether your data is in alignment. If it seems to move between the two, it's probably not in alignment; if it looks like a crisp match, it is. You can also use the E key -- that's the one I showed you before. It's more interesting with an overlay data set on top.
Let's see what else. OK. I didn't cover a couple of topics here that I wanted to mention. And I want to go back and turn off the edge display. If you've changed your contrast -- I was playing around with the contrast and the brightness and did some funky things to it -- and you don't like it, you can just click Norm, which in Boston is pronounced "Nahm." OK? That's from an old TV show -- well, it depends if you are from another age or another place. And I think that's just a few blocks that way. That way? I don't know which way I'm facing. OK. So that's the norm.
Down here -- I didn't mention this -- the underlay is automatically scaled between the 2nd percentile and the 98th percentile. I'm going to turn off the overlay for a second so you can see this a little better. If you press the M key over the image, it will be scaled from minimum to maximum. That's the default way most software packages would show you this kind of thing, but we find that if we set the bottom to the 2nd percentile of the intensity and the top to the 98th percentile, you can see your data a little bit better.
But there are controls for that. You can even do Ctrl-M, and that toggles between minimum and maximum for the whole data set. You'll notice that the 2% to 98% percentile range is based on the particular slice you're looking at. So if you look over here at a slice near the top of the brain -- well, let's click over here, an axial slice, so you can see that. We're pretty much outside the brain and it's still very bright. If you press M, it gets much dimmer.
So if you want the scaling to be over the whole data set, this is min and max of the volume. And if you have multiple volumes, like in an EPI data set, and you want to see everything scaled the same way, you can press Ctrl-M again, and that scales min to max across the whole data set. There are more controls for that under the grayscale bar. All right.
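The 2%-98% display scaling amounts to clamping intensities to a percentile window before mapping them to gray levels. A minimal sketch, using a simple nearest-rank percentile (the function names are made up; this isn't AFNI's exact interpolation):

```python
def nearest_rank_percentile(values, pct):
    """Simple nearest-rank percentile, good enough for display scaling."""
    s = sorted(values)
    k = round(pct / 100.0 * (len(s) - 1))
    return s[k]

def clip_for_display(values, lo_pct=2, hi_pct=98):
    """Clamp intensities to the [2%, 98%] window, as the viewer's default does."""
    lo = nearest_rank_percentile(values, lo_pct)
    hi = nearest_rank_percentile(values, hi_pct)
    return [min(max(v, lo), hi) for v in values]
```

Because the window comes from the data itself, a slice with a few very bright outliers still displays with good contrast, which is why this beats plain min-to-max scaling.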
Let's go back -- change back to the 2nd to 98th percentile. And let's change our underlay data set to the EPI data set we were looking at before. And let's change the overlay to our functional analysis, which should be here somewhere -- func_slim. This is a slimmed-down version of our functional analysis. And we're going to look at some other parts and change this.
OK. So let's turn the overlay back on. We see a lot of stuff. And I select our overlay sub-brick -- let's pick this Vrel 0 coefficient. This is the reliable visual stimulus and the beta coefficient for it. And instead of the autorange or our scale of 1,800, this data has been scaled during processing so that the intensities correspond to percent signal change at the voxel level. So we have effect estimates at the voxel level.
So this is the thing that we like to see. This is how big of an effect we're seeing at every voxel. So let's change that so red will be shown as 2%. And we're expecting both positive and negative effects, so let's set that there. Now, we do the analysis over the whole data set, even out in space here outside the head. We do it over everything because we want to see where there are artifacts, where there's ghosting, what's going on in this data set. We can mask it later -- everything is done as a massively univariate problem, so we're just looking at everything at the same time in the same way. We can mask it in a later part of the process.
OK. So this is our percent signal change. And now we'll talk about something we call thresholding. We can threshold our data -- let's threshold by the t-stat for that percent signal change. How well does the reliable visual stimulus part of the model fit the data? How well does it fit that ideal curve?
So we're going to threshold by some level of significance -- the t-stat. And here we have the threshold bar. This is a slider, and as we raise it up, we get rid of some of the voxels here. And you'll see there's a p-value that shows up too. Here, when we raise it to the top, the p-value is 0.31 and our t-stat is almost 1. We want to raise it higher than that. We can change this with the power-of-10 control, which sets the range the threshold slider goes to.
So let's change it to 10. Now instead of 0.56 we have 5.6, and most of our voxels have been thresholded away. And our voxel-level p-value there is at most 3.1 times 10 to the negative 8.
One of the trickiest parts of understanding the AFNI GUI is how this threshold slider works. We have the overlay, which we just spent some time looking at, which controls the colors of what you see. And then we have the threshold, which thresholds on something else -- usually some level of significance, something that shows us a p-value, how good the fit is. So we can reduce the amount of data that we're looking at by thresholding with a voxel-level p-value.
So the threshold -- the selection here, our threshold sub-brick -- says whether we see any data at all. And the overlay says what color the thing we see is. So we're looking at the effect size as the overlay, and the threshold is our t-stat. We threshold by the t-stat to see our effect size. This is different from other software packages, where they may look at just the t-stat. We like to look at effect size because we think that's what you're interested in.
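The overlay/threshold split can be stated in one line of pseudocode-like Python. This is only a conceptual sketch of the rule described above, with flat lists standing in for volumes: the threshold sub-brick (t-stats) decides visibility, the overlay sub-brick (betas) decides color.

```python
def displayed_overlay(betas, tstats, t_thresh):
    """Visibility from the threshold sub-brick (|t| >= t_thresh);
    color from the overlay sub-brick (the beta). None = not shown."""
    return [b if abs(t) >= t_thresh else None
            for b, t in zip(betas, tstats)]

# Three voxels: two survive a |t| >= 5 threshold, one is hidden.
print(displayed_overlay([2.3, -1.0, 0.5], [5.6, -6.0, 1.0], 5.0))
# -> [2.3, -1.0, None]
```

Note the color shown is the effect size (2.3% here), not the t-stat that got the voxel past the threshold.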
So if I click someplace, I can see what the effect size is. This is a 2.3% effect size here. These are gigantic numbers for fMRI, OK? 2.3% of the signal comes from the stimulus here. And you can see that in blue we have negative effects too -- we have deactivation. You can decide what all this means for yourself. There is negative, there is positive; we have both things going on at the same time.
Let's see. Are there questions up to this point? Because this is a little bit complicated in this part.
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: That's right. We just wrote that thing that we put up on the preprint archive about two-tailed tests. OK? So we've discussed this a lot among ourselves, so it's really at the top of our minds: we're looking at a two-sided test. We're looking at positive and negative results. We don't look at just the positive side. And we accommodate how many tails we think we're looking at -- we're looking at two tails.
So it's fairly straightforward. If you set up for one tail but you're also going to look at the negative tail, you have to change your threshold by a factor of two. So if you want a 0.05 false positive rate, you need to look at 0.025 on each tail. It's fairly straightforward, but surprisingly it's not done anywhere else in fMRI. Yeah, we find a lot of surprises.
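The tail arithmetic can be checked numerically. This sketch uses a normal approximation to a large-degrees-of-freedom t distribution (an assumption for illustration; AFNI's p-values come from the actual t distribution):

```python
from statistics import NormalDist

def two_sided_p(stat):
    """Two-sided p-value under a normal approximation to a large-df t stat."""
    return 2.0 * (1.0 - NormalDist().cdf(abs(stat)))

# For a 0.05 two-sided test, each tail gets 0.025, so the critical value
# is about 1.96, not the one-sided 1.64.
print(round(NormalDist().inv_cdf(1 - 0.025), 2))  # -> 1.96
```

In other words, thresholding both tails at the one-sided cutoff would double your false positive rate, which is exactly the factor-of-two correction described above.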
Another issue that we need to discuss is multiple comparisons. Well, this is a larger effect -- multiple comparisons. We're looking at 100,000 voxels. If we threshold each one at 0.05, which is somewhere around there, you see we have a lot of voxels. Most of these, you can see, are outside the brain; they're probably not real effects. 100,000 voxels at a 0.05 p-value gives you 5,000 voxels that are false positives just by chance.
So you can handle false positives in different ways. One is to use the false discovery rate, and we show that here as the q value. This is the fraction of voxels that you expect to be false, given the distribution of p-values in your data. And so you can raise that up until you get an FDR of 0.01 or 0.05 or whatever you like. This is controllable with a scroll wheel or with arrows. You can even right-click on this -- it's another hidden menu -- and set the p-value and q-value directly.
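The standard machinery behind FDR control is the Benjamini-Hochberg procedure, sketched minimally below. This is the textbook algorithm, not AFNI's exact implementation, which works on the full p-value distribution of the dataset to produce the q values shown in the GUI.

```python
def bh_fdr_mask(pvals, q=0.05):
    """Benjamini-Hochberg: keep p_(i) <= q*i/m for the largest such rank i."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff_rank = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= q * rank / m:
            cutoff_rank = rank          # largest rank passing the step-up test
    keep = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= cutoff_rank:
            keep[idx] = True
    return keep

# Three small p-values survive; the large one does not.
print(bh_fdr_mask([0.01, 0.02, 0.03, 0.5], q=0.05))
# -> [True, True, True, False]
```

Unlike a fixed 0.05 voxelwise cut (which would pass ~5,000 of 100,000 null voxels by chance), the step-up threshold adapts to how many small p-values the data actually contain.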
Another way to do this is to look at what's called the familywise error. And one way to do familywise error correction is to look at clusters of voxels. This was another series of controversies over the past couple of years -- how to cluster voxels. We will talk, I would say ad nauseam, about clustering voxels. But there's a nice interactive feature for looking at clusters of voxels, the Clusterize plug-in, so you can click on Clusterize.
So with Clusterize, you first say how close voxels have to be to count as a cluster. By default, the voxels have to be touching face-to-face, not by edges, not by corners. But you can change that to one of three choices: 1 for faces, 2 to include edges, or 3 to include corners. Then you choose how many voxels have to be in a cluster.
Now, this will change depending on your voxel resolution. You can't just say 3 voxels or 20 voxels or 80 voxels and keep that number across different resolutions of your data set. At a finer resolution, you need more voxels to cover the same volume. Pretty simple. But we will give you a formal way to compute all this. Several ways, actually.
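The resolution dependence is just volume arithmetic. This small helper (a hypothetical illustration, not an AFNI function) converts a minimum cluster size in voxels between isotropic resolutions while keeping the physical cluster volume fixed:

```python
def matched_cluster_nvox(nvox, old_vox_mm, new_vox_mm):
    """Convert a minimum cluster size in voxels from one isotropic
    resolution to another, keeping the cluster volume in mm^3 the same.
    (Illustrative helper, not an AFNI program.)"""
    old_vol = old_vox_mm ** 3   # mm^3 per voxel at the old resolution
    new_vol = new_vox_mm ** 3   # mm^3 per voxel at the new resolution
    return nvox * old_vol / new_vol

# 20 voxels at 3 mm isotropic is 540 mm^3 of tissue;
# at 2 mm isotropic, the same volume needs 67.5 voxels.
print(matched_cluster_nvox(20, 3.0, 2.0))
```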
So let's say I want to use 200 voxels in my cluster. Fairly large clusters for this data. And I want bi-sided thresholding, because I'm expecting both tails, both positive and negative results. Bi-sided means that I will treat negative voxels and positive voxels separately: a cluster has to be either all negative or all positive. But I will accept either kind of cluster. I just won't take mixtures of signs within a cluster.
It's a nuance, and an important one, of clustering. And then I select Set. And watch what happens to the overlay colors as I do that. All the things outside the brain disappear with clustering, because the chance that false positives happen together in a cluster is much smaller. And that's what this is all about. Then I can go to the report, and this shows me a report of the clusters that are there.
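The bi-sided, face-connectivity rule Glen just described can be sketched as a small flood fill. This is a toy sketch of the idea, not AFNI's actual implementation: voxels over threshold are grouped through shared faces (the NN=1 default), and a neighbor only joins a cluster if its statistic has the same sign.

```python
from collections import deque

def bisided_clusters(values, threshold):
    """Group supra-threshold voxels into clusters with face (NN=1)
    connectivity, keeping positive and negative voxels in separate
    clusters, as in a bi-sided test. `values` maps (i, j, k) -> statistic.
    (Sketch of the idea, not AFNI's implementation.)"""
    above = {v: x for v, x in values.items() if abs(x) >= threshold}
    seen, clusters = set(), []
    faces = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for start in above:
        if start in seen:
            continue
        sign = 1 if above[start] > 0 else -1
        cluster, queue = [], deque([start])
        seen.add(start)
        while queue:
            i, j, k = queue.popleft()
            cluster.append((i, j, k))
            for di, dj, dk in faces:
                nb = (i + di, j + dj, k + dk)
                # Only join neighbors over threshold with the SAME sign.
                if nb in above and nb not in seen \
                        and (1 if above[nb] > 0 else -1) == sign:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters

# Two face-adjacent positive voxels next to one negative voxel:
vals = {(0, 0, 0): 3.5, (1, 0, 0): 4.0, (2, 0, 0): -3.8}
clusters = bisided_clusters(vals, 3.0)
print([sorted(c) for c in clusters])
```

With a plain two-sided rule, all three voxels would merge into one mixed-sign cluster; bi-sided splits them into a positive cluster of two and a negative cluster of one.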
So I can go through. I can jump to the peak voxel in each cluster, and I can flash that on and off. I'll make the overlay a little less opaque so the flashing is easier to see. So I'm going to flash that, go down to the next one. Jump and flash, jump and flash, jump and flash. So lots of clusters will be showing there. I can increase my threshold and see it in different ways.
So the clustering will happen differently, completely differently, as you change your threshold. As you go up and down through your threshold, the clustering is interactive. It updates the cluster table and the cluster results. And it's using a mask from our computation, so this is actually masked to the inside of the brain. And you can see what happens without the mask by clicking that off. It's not a very big difference, but there is a difference.
OK. Any questions so far about that? I have four minutes left. Yes?
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: So the color bars are flipped by just clicking on them. And by the way, if you want to choose other color bars, there is a large selection here by right-clicking and choosing a color scale. And you can add your own color scales. OK.
In here, we can select another data set. So I can select an auxiliary data set. Let's say I want to look at my EPI data, this EPI data that we've been looking at. OK. I selected one. I can select multiple ones like that. And I select Plot for the first cluster. And you can see this is the average over that cluster, the mean across the voxels in the cluster. This has not had its pre-steady-state data removed, so it has that large initial peak, but then it has that up and down for the rest of it.
I can set it to start at the third time point up here if I want to. And I can save that time series as a 1D file. I can write out this cluster as a separate data set. Or, up here, I can see the equivalent 3dclust command for the command line, for the terminal. Or I can save all the clusters with the Save Mask button, and it will save it as Clust here.
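What the Plot button shows, the mean over the cluster with the option to skip the initial time points, can be sketched like this (an illustrative example, not the AFNI code):

```python
def cluster_mean_timeseries(timeseries, start_tr=0):
    """Average the time series of all voxels in a cluster, skipping the
    first `start_tr` pre-steady-state time points. `timeseries` is a
    list of equal-length per-voxel series. (Illustrative sketch of what
    the Clusterize plot shows, not AFNI's implementation.)"""
    n_tr = len(timeseries[0])
    return [
        sum(ts[t] for ts in timeseries) / len(timeseries)
        for t in range(start_tr, n_tr)
    ]

# Three voxels, 5 TRs each, with an artificially high first point
# standing in for the pre-steady-state signal.
voxels = [
    [100, 10, 12, 11, 13],
    [102, 11, 13, 10, 12],
    [ 98, 12, 11, 12, 11],
]
mean_ts = cluster_mean_timeseries(voxels, start_tr=2)
print(mean_ts)
```

Starting at a later TR simply drops the initial high values from the plotted average, which is what removing the pre-steady-state peak amounts to.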
So just to show you, I'll do that here. Save Mask. And if I find my terminal, you'll see that it wrote a 3dclust command. I'll show you something a little bit like this on Wednesday with a new program called 3dClusterize, which is very similar, but its syntax is a little bit easier.
OK. So were there any questions about clustering? We'll talk about it a couple more-- two or three more times.
AUDIENCE: If I understood it correctly, if you don't choose bi-sided, the cluster can have positive or negative voxels, [INAUDIBLE].
DANIEL GLEN: Yes. It'll be a two-sided mixed cluster.
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: If you have pretty significant p-values, then it won't have a big effect. All right. I'm going to finish with the clustering-- if you want to turn off clustering, you have to clear it. So I'm going to do that here. I'll show you the difference. Clear it, and all those voxels outside the brain and throughout come back. And now you're looking at your original data again.
And just quickly, to show you what else is there in the AFNI menu: you have Define Datamode, and we have things called plug-ins. OK. And we'll talk about the Draw Dataset plug-in. There's a very nice renderer in there. We have ways to do scatter plots and histograms that are very useful. We have the Vol2Surf plug-in for sending data to SUMA.
These are all very useful. We'll use the Dataset#N plug-in to show lots of graphs all together. So that's hiding under the Plugins menu. And I'm going to close that. If you want to quit AFNI, you click Done. If you don't click again within 5 seconds, it will assume you're not really done, that you didn't mean to do that. If you do click it again, it will close that controller. If you Shift-click, it will close all of your controllers. So it depends on what you want to do.
Yeah. I don't know if I have time for driving AFNI. I can do drive AFNI later in the week, too.