11 - FMRI Analysis: Start to End: Part 1 of 5
Date Posted:
August 7, 2018
Date Recorded:
May 29, 2018
Speaker(s):
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and type is displayed that may be hard to read otherwise.
PRESENTER: This class will do a full single subject analysis, and then, if we have time, we'll do a quick group analysis at the end. It continues after lunch. So we'll do our first half in the morning, and then the second half at 1:30. The slide set for this is afni16. I use [INAUDIBLE], but whatever you use. afni16, start to finish. I won't follow the slides that much, so you don't really need to care about them, but just so you know.
And the software pictured in the slide is called Uber subject (uber_subject.py), which we do not really take the time to present anymore. However, I think I'm just going to give you a very quick demo of running this, and then promptly ignore it after that. But Uber subject is a graphical interface that will run AFNI PROC for you. And AFNI PROC (afni_proc.py) is the program I'd rather you become comfortable with. But for a first pass, you might be interested in this. So I'll just do a quick demo of running an analysis with Uber subject.
So I'm just going to-- and don't do this, just watch. I'll just show you. I'll just run Uber subject from the command line. I mean, whatever, it doesn't matter what directory. And basically, you're going to give it your data-- EPI, anatomy, and stimulus timing files-- and then any processing decisions you care about. For the most part, we'll basically stick to whatever is default in here. So our subject ID is steve, group ID horses. So one of our horses; we have a big scanner for them.
So then we'll just pick our anatomical data set under AFNI_data6/FT_analysis, and then subject FT. That's the same data we processed yesterday, but the full set of it. And then, we'll grab the anatomy from there. And same for the EPI. Now, we've already specified that directory, so it will look there for the EPI. Scroll down and get the stimulus timing files. They are also in the same directory, so they're easy to find.
It automatically finds the labels from the file names because I've named the files nicely. But it doesn't know the basis function. So remember these are 20 second stimulus presentations. So we're going to use BLOCK(20,1), just like we used yesterday. So again, that's a 20 second box car for the duration of the stimulation being convolved with this 15-ish second block function. BLOCK is a specific function shape. You convolve them, and you get a 35 second response function that we'll use to model every response.
And again, that will be identical. We'll assume an identical response to every stimulus per stimulus type. So all the visual responses will be the same and all the auditory responses will be the same. Anyway, and then I can specify a GLT if I want to take a difference, for example. I'll hit this "init with examples" button, and it includes the contrast as one of them. We're going to remove two time points. And we'll do motion censoring. A blur of 4 millimeters. Wonderful.
I'm going to tell it I have four CPUs on my laptop. That only applies to the regression here. I'll regress the motion derivatives. I'll execute 3dREMLfit, and that's good enough. Extra align options: lpc+ZZ. So I'm just walking down the GUI and choosing some things. If I hit this first button to apply everything that I've done, it shows in one window an AFNI PROC command.
And again, this is the command I'd rather you grow to understand. Using the GUI is just maybe for the first time you play around. The GUI has a very limited number of options to apply to AFNI PROC. AFNI PROC itself has, I don't know, 200 some options to it. So there's a lot of control there. I don't put everything in the GUI. If you want something in the GUI, you can let me know. But for the most part, AFNI PROC is a better way to go.
You can actually edit this right here and use your edited version when you do the analysis. But that's a little hit and miss. And just so you know, there's another window that reminds you of what things you've changed from the default. So we're using four CPUs, [INAUDIBLE] exec, and removing two time points from every run.
And then, we can just tell it to get to work. Now it's off and running. So this is-- in that time, I fired off a single subject analysis. And you can duplicate this basically for all your subjects if you want. But again, the recommendation, don't do this. So I'm just going to trash this right here and now. It's dead.
So what we'd rather you do is do this with AFNI PROC. So we'll talk about that as we continue. But our goal for this course-- or a common goal you might have if you enter a research team or something like that-- is, you get a bunch of data dumped in your lap and your boss says, OK, analyze it. Make a group result. Whatever that might mean.
In our case, we're going to assume our goal is to run a group analysis on the single subject response magnitudes. And remember what that means. These are the beta weights from our single subject analysis. In our example, we have audio reliable and visual reliable stimuli.
And the beta weight is just the magnitude of the bold response. How big was the bold response at each voxel in the subject's data set, or in the brain presumably. But we can be more general than that. And then, you want to hand all these-- that you do that for every subject, and you want a t-test or something like that. So we'll keep this pretty simple for now.
So that's our goal: to run this group analysis-- presumably, in this case, just a t-test, or we could even do [INAUDIBLE]. So how do we get there? We want to create beta maps for each subject, and they should be aligned, presumably, to some well-known template. Which template should you use? Daniel has talked a lot about which is appropriate for you.
My general suggestion is to-- most people care about an Atlas. They want to use an Atlas for something with respect to their data. You might consider what-- if you're in that boat, decide on the Atlas that's appropriate for you. And then, find a template that is aligned to that Atlas. And now, you can align all your subjects to the template, and therefore, they will be aligned to your Atlas.
What is better, Talairach space or MNI space? That doesn't make a whiff of difference there. You just pick a space that you actually want to talk about. So once you get your data in that space, then you can just run your group analysis program. Gang will have lots to say about that. So how do we get there? How do we create these aligned beta maps?
To do that, we write a single subject analysis processing script. In our case, we'll do one script, from pre-processing through linear regression. The inputs to that script will be the anatomical and EPI data sets, along with stimulus timing files to tell the regression program when the stimulus events occurred. And along with that, we will tell it how to model our stimulus responses.
And then, you have some controls-- your processing decisions, like the blur size, and there are lots of options, right? But if you start with some data, even if you've got scripts written by some monk trapped in a wine casket for 100 years, don't just accept these as gold and go with them. Try to think about all the choices that are being applied in the processing, and whether they're really appropriate. Or at least, understand them. Even if you accept them, try to understand them.
If you've got existing scripts, I highly recommend running the analysis through AFNI PROC, too, and then just comparing. If you see big differences, then you can worry about why you might have them. It's very common that older scripts have issues-- some choices that were maybe appropriate before, but aren't so appropriate now.
Why do I talk about scripts all the time? Again, so you have documentation as to exactly how your data was processed. If you're just clicking on buttons and you get a pretty result and you babble about it, so exactly how was this step performed? What was the blur size? Whatever. And you don't want to say, I don't know. I hit some-- I hit these buttons and I got this result. That's not good enough. You want to know exactly what was done, and the scripts are a record of that.
So the output from this script would be those beta weight volumes. And of course, you get t-stats and all sorts of information along with it. And quality control information. So writing these scripts used to be an arduous journey. But now, typically, we'll suggest using AFNI PROC. And that's its job. You tell it where your data sets are, you tell it what your processing options are, and then it does the work for you. But its job is to basically write a processing script. You can also say, please execute the thing. That's a trivial add-on. But it's basically generating a processing script.
It's a nice idea if you do this for your analysis to even include the AFNI PROC command and the AFNI version in the methods, or in the supplementary information. And that's a very specific description of how your analysis was done because someone could look at that command and they know, therefore, what the processing script should look like.
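As a concrete reference, an AFNI PROC command along these lines looks roughly like the following. This is a sketch patterned on the class example, not the exact class script; the paths, data set names, and option list for your own study will differ:

```shell
# Sketch of an afni_proc.py command for one subject (file names illustrative).
afni_proc.py                                                     \
    -subj_id FT                                                  \
    -blocks tshift align tlrc volreg blur mask scale regress     \
    -copy_anat FT/FT_anat+orig                                   \
    -dsets FT/FT_epi_r1+orig FT/FT_epi_r2+orig FT/FT_epi_r3+orig \
    -tcat_remove_first_trs 2                                     \
    -volreg_align_to MIN_OUTLIER                                 \
    -volreg_align_e2a                                            \
    -volreg_tlrc_warp                                            \
    -blur_size 4.0                                               \
    -regress_stim_times FT/AV1_vis.txt FT/AV2_aud.txt            \
    -regress_stim_labels vis aud                                 \
    -regress_basis 'BLOCK(20,1)'                                 \
    -regress_censor_motion 0.3                                   \
    -regress_opts_3dD -gltsym 'SYM: vis -aud' -glt_label 1 V-A   \
    -execute
```

Dropping a command like this (plus the AFNI version) into a methods or supplementary section is exactly the compact description of the processing the presenter is suggesting.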
Again, general suggestions with this. You've got some data and analysis dumped upon you, and it's your job to go through it. Again, what's the best way to do the analysis? There is no single correct way. We don't even know what the correct way is. So all we can do is bicker incessantly. But you, as the researcher doing the analysis, you want to understand the steps well enough so that you will feel they are justified and that the results that you generate make sense and you think you've accounted for whatever you need to account for.
If you're just hitting buttons-- we're registering to some template. I don't really know the details. It shouldn't be magic. Don't just rely on the software doing its thing and assume that everything is good. Usually it is not. So here, just focus on understanding the processing steps. Practice the good habit of reviewing the results. That's one of the biggest things to-- that's a change in your mental focus. Actually looking at the data and seeing if it seems reasonable.
No software is going to, in general, be able to figure out whether or not everything is good-- whether there is a problem in this way or that way. The software isn't that great at it. And someone has to recognize problems. And, oh, let's try to detect this problem. Let's look at the dice coefficient between the masks. Oh, the dice coefficient is not that high. Did I screw up registration? Well, no, it's OK. My T1 goes way down and includes the cerebellum. My EPI data is actually pretty high up. So the overlap shouldn't be that good. Looking at it visually, I can see registration is good.
You with your eyes can tell a lot more than the computer can. Some things the computer is good at, some things we are better at. But looking at the data is where you will get comfortable and you will be able to distinguish things good from bad pretty well.
So in here, we'll review the processing steps along with the data. As you acquire data, say you're starting a new study, I suggest the first couple of subjects you scrutinize very well. Say do what we are going to do here, go through the processing steps one by one, and make sure you are happy with it. After you've collected a couple of subjects and you're comfortable with a processing stream, then it's-- you can back off and just do more simple quality control with the newer subjects just to make sure nothing looks terrible.
But at least, in the beginning, make sure you are comfortable with the whole stream. So babbling about the scripts, again. The scripts are records of how the data was processed. They're easy to apply to new subjects and easy to repeat. You should expect to reanalyze everything from even simple things like-- our new data is acquired on the 7T. We have small voxels.
The voxels are 1.7 millimeters on a side. You shouldn't be dropping an 8 millimeter blur on this data; you're just wasting your high resolution. Oh, so let's change the blur to be 4 millimeters, full width at half max. So now you have to reanalyze all the data. So you should expect things like that. And so the basic point there is you want to just be organized up front so it's easy to plug in a new subject.
Another reason you might have to reanalyze everything. Again, you can either start your analysis with a version of the software and continue it for two or three years as you collect data each new subject, and then you finally have everyone and you do your final analysis and write a paper about it. But now, you're using three-year-old software. Have we improved in three years? I hope so.
So maybe you would rather use newer software for your analysis. Maybe registration can be a little better than it used to be. That will improve your group results. Little things. So you might even consider keeping your software up to date throughout the years. But then when you get to the point where you're writing your paper, you've got all your subjects, you really ought to not blow away, but hide the old-- move the old result and do a full analysis so that every subject is analyzed with the same data-- with the same software as well as with your group analysis. So you don't want to have a bunch of versions of software going into your final result.
So a little tedious, a little extra work, but something to think about. So, a brief review of the stimulus conditions. This is exactly the same as what we talked about yesterday. We have a speech perception task. There are just two conditions. Very simple here. A person speaking words for, say, 20 seconds. And there is an audio aspect and a visual aspect that are present in both stimulus conditions, but one is degraded.
In the auditory reliable one, the visual aspect is degraded. In the visual reliable one, the auditory aspect is degraded. But otherwise, every stimulus has both the visual and audio aspects presented to the subject. They're not silent or something.
So we have three runs of 10 pseudo-randomized blocks; there are five blocks per condition type in each run. So we have a 20 second stimulus of one type and a 10 second fixation. And 20 seconds of possibly the other type or not, it's random. And another fixation, et cetera. So we do this across three runs. And then, I guess that's good enough for this. Two anatomical data sets were collected. Again, in their case, they were going to do a surface analysis. We can actually run a surface analysis in here on Thursday just to see the difference.
The EPI data sets: we have 33 slices and 152 volumes per run. We will remove the first two volumes as pre-magnetization-steady-state data. Usually, you'll remove more than two, but we didn't have any pre-steady-state data. Again, we just faked that. So we only bothered to fake two time points. So that's what we'll remove. Sample size, 10 subjects. Not nearly enough to get published right now, but this is fine for a group analysis in class.
So, AFNI PROC. We've babbled about this quite a bit already. So this is basically going to generate a processing script. You don't have to master the script it generates-- the scripts can be 300 to 500 lines long. They're actually fairly long and detailed. And I don't recommend you ignore them, but you don't have to master all the shell syntax. But you want to understand what they're doing. So you should actually look through them to follow the steps that are being applied.
If you want to really learn things well, you go off on your own, you type commands on your own following the script. Try this command, see what it does. Look at some of the other options that you might apply for that command. Move on. And then, when you're doing an analysis, you don't do anything by hand. You do everything with the scripts. Yes.
AUDIENCE: Where is AFNI PROC stored? Are you saying you have to open that in an editor and look at it there?
PRESENTER: Right, you will write a command in an editor. So you'll just say AFNI PROC and give it some options. And the help has examples that you can start with, as well as the class demo.
AUDIENCE: You never have to open AFNI PROC by itself?
PRESENTER: Right, you run it. Just like 3dcalc. It's just a command line tool. Uber subject is the graphical tool that will write an AFNI PROC command for you. But AFNI PROC itself is just a command line tool.
So what will AFNI PROC do? The generated processing script will copy all your inputs into a new results directory. So if you have to rerun it, for whatever reason, you can just remove that whole results directory and start over. It only has copies; your original data isn't in there.
And then, in that directory, we'll do all the processing, time shifting, align, et cetera. We will leave the results in place to allow for review of processing. We don't delete much of the data as we're going through. We delete some things that are just considered too trivial, but most of the data is left in there for you to have the option to review it and look for-- just to see if you're satisfied. Are there problems? If there was a problem at the end, you want to be able to trace back and see why-- what led to this? What do I need to do to fix it? So you want the data there for review.
It also generates some quality control scripts that you can run. It generates one called @ss_review_driver, which you run, and it will walk you through some basic quality control for that subject. And you should do that for every subject. That script is the minimum quality control. We'll get to that. The scripts are written in tcsh syntax. Again, tcsh is a weaker language than bash, say, but it's more readable. So if you don't care to go too deeply into a shell language, this is just easier to read.
The generated PROC scripts are written to be easily read and modified. But you shouldn't modify them unless you absolutely have to. Again, these scripts are three, four, five, six hundred lines long. If you change something up here, are you really going to check whether that change affects anything down here? Are you going to trust that you caught everything?
You'd rather change the AFNI PROC command to do something. If there's something AFNI PROC doesn't do, let me know. Once in a while, you're stuck modifying a PROC script, but that should be rare. Typically, when you're doing this across groups, some people generate a PROC script and then run that PROC script per subject.
I don't recommend that. I recommend having a loop going across your subjects to run AFNI PROC per subject, and then you have a processing script per subject. And it just keeps everything separate and then you can-- it's easy to track what happens that way. Keep all your results together.
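That per-subject loop can be sketched like this (POSIX sh here, though the class scripts use tcsh; the subject IDs and paths are made up for illustration). We just echo each command, so nothing runs; remove the echo to actually fire off the analyses:

```shell
# Run afni_proc.py once per subject, giving each subject its own
# processing script and results directory (IDs/paths are hypothetical).
for subj in FT FT2 FT3; do
    echo afni_proc.py -subj_id "$subj" \
         -script "proc.$subj" -out_dir "$subj.results" \
         -copy_anat "$subj/${subj}_anat+orig"
done
```

Because each iteration names the script and output directory after the subject, every subject's processing stays separate and easy to track, which is the point being made above.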
For the remaining steps in here, we're going to cd under AFNI_data6/FT_analysis. FT is the subject ID for the data we're going to analyze here. Then, we're going to review the contents of that directory and see what the input data looks like. We've already perused this, so we don't have to look much. We'll review the AFNI PROC command that will run the analysis for us. That's in the script s05.ap.uber.
Then, we'll just run that. Running this script will actually do the analysis, then we'll go look at the results and run the quality control driver script. And then, hopefully, we'll have time later on to run a t-test. To actually do a group analysis. I don't usually spend more than a few minutes on that. So any questions before we get started with this?
From my home directory-- from the home directory, going to AFNI_data6, as you did before. Can you see the blue here? Should I get rid of that blue and just make it white? Is the blue readable? What, are you nodding? Yes? White? OK. So at least in this terminal. OK, so we'll cd into AFNI_data6. I'll do it one directory at a time. Well, I've got some extra garbage in here. Let me remove those data sets at least. I've been goofing around in here, so I've got extra stuff.
So under the AFNI_data6 directory, we have FT_analysis. So let's go into that directory this time. So that's a different one than before. So I'll cd, type FT, and hit Tab. We'll use file completion.
So in that directory, we have a handful of things. We've got the FT directory. This FT directory-- if I can click-- this FT directory has the raw data: the anatomy, EPI, and timing files. That's what I accessed when I ran Uber subject at the beginning to fire off an analysis. Then, it has a handful of other things. It has some [INAUDIBLE], some results.
But then, it has a bunch of s scripts: s0-something and s1-something. The s0 scripts are AFNI PROC commands for various examples. The s1 scripts are the processing scripts that AFNI PROC generates. So the s0s correspond with the s1s. So for example, in here, we're going to run s05.ap.uber. And the "uber" is because this is an AFNI PROC command that was generated by that Uber subject program.
So I just dump that on the terminal window. It's a little tall. I will scroll. That's good enough. You can see it on your laptops probably better. So this is just a little script. Basically, it's one command, but setting a couple of variables up top. So this just sets a subject ID and a group name. This was subject FT and our group was horses. And I still don't know how to scroll on my laptop. Move it a little to the right.
We note a directory where-- a top directory where we receive the data from. In this case, that's just our subject ID FT. The data is in the FT directory. But it's nice to do these sorts of things when you're writing your own scripts. Specify this is the top of my data directory. Underneath there, I will have raw data or whatever you have.
So you'll have your [INAUDIBLE] and then subject directories under there. So organize your data; then it makes writing these scripts very easy. So anyway, here's the AFNI PROC command. And whose job is it to create this? Really, this is on you. So this is what you want to get right. This is the command that's going to do the analysis-- or rather, write the analysis script. So you give it your data sets, you give it your timing files, and you tell it how to process the data. And then it will write the script for you.
So we give it our subject ID. Give it the name of the script. Most of these are optional. By default, it names the script proc.subjectid, so this is unnecessary, but it's in there just for clarity. We overwrite the script in case we've run this before. I don't even really like doing that, but anyway, it's in here.
The blocks option: this is a basic list of processing blocks that the program will go through. It adds a couple of extra ones that are automatic. But we are telling it, and in this order, to do time shifting with the tshift block, and an align block that aligns the EPI and the anatomy together, in one direction or the other, depending on what you tell it. Then tlrc: that's the standard space block that aligns the T1 data to the template, whatever template you specify.
Then the volreg block aligns the EPI data together. The volreg block actually comes last here because it relies on the align and tlrc blocks for their transformations. So in the volreg block, all the transformations will be put together and then applied to the EPI data in one shot, so that you don't keep resampling the EPI data.
Then we blur and mask. The mask block doesn't actually-- it just creates a bunch of masks, but then, for the most part, it promptly ignores them. It doesn't actually mask your EPI data unless you tell it to, which we are not. And then, we scale the data. Again, we scale every voxel to have a mean of 100, and then the regress block does the linear regression to create the beta weights. So we are specifying what to do here and the order to do it in.
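That scaling step is easy to illustrate. In the generated script it is done per voxel with 3dTstat and 3dcalc (with scaled values capped at 200); the arithmetic itself, on made-up values for a single voxel's time series, is just:

```shell
# Scale one voxel's time series to a mean of 100 (toy values).
printf '980\n1000\n1020\n' > voxel.1D

awk '{ v[NR] = $1; sum += $1 }
     END { mean = sum / NR
           for (t = 1; t <= NR; t++) printf "%.1f\n", 100 * v[t] / mean }' voxel.1D
# prints 98.0, 100.0, 102.0
```

After scaling, a value of 102 reads directly as a 2% signal change, which is what makes the beta weights comparable across voxels and subjects.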
Many of these blocks you can actually shift the order of. Yeah, I won't mention too many other options with this. But again, AFNI PROC has a lot of options. You can do the blip up, blip down correction. 200 some options. You have a lot of control here. You can also use this to do very short things. If you just want to run a linear regression, you can say, just do the regression, and then it makes a 3dDeconvolve command for you and stuff like that. If you just want to do volume registration, you can do that.
So now we give it the data sets. The order of these options does not matter, but I tend to write example commands in a somewhat chronological fashion so they're easier to follow. So with copy_anat, we give it the anatomical data set, which is [INAUDIBLE]. The EPI data sets are under the top [INAUDIBLE], EPI runs 1, 2, and 3. We tell it to remove the first two time points from each run for the [INAUDIBLE]--
AUDIENCE: Yesterday, we looked at everything as one block and this time [INAUDIBLE].
PRESENTER: Yeah, yesterday's analysis was with one data set that had all three runs in it. That's equivalent to what this will do, because we told 3dDeconvolve there were three runs. But even up through the 3dDeconvolve command here, it will use the different data sets. It will concatenate them after the fact, just for visualization. Then, min outlier: so again, we're going to register everything to a time point that probably doesn't have much motion in it. So that will vary, of course, per subject, per scan.
We're going to align the EPI to the anatomy. The direction really doesn't matter much, because we're going to go to standard space, too. In the volreg block, we're going to go to standard space via volreg_tlrc_warp. We're going to apply a blur to the EPI data, a 4 millimeter full width at half max blur. The stimulus timing files are the AV1_vis and AV2_aud text files.
We'll look at these again before we run the analysis. And then, we have labels that correspond with those timing files, so vis and aud. And then the basis function: we're giving one basis function here, and it will apply that to both files. If you had varying basis functions, you could list them, but then the option would be regress_basis_multi.
Censor motion: we're going to censor any time points where the subject is moving too much.
AUDIENCE: [INAUDIBLE]
PRESENTER: That 0.3 is basically in millimeters. It's a combination of millimeters and degrees. But a one degree rotation, say, 2/3 of the way from the center of the brain out towards the edge, is about a millimeter. So whether you think about it at the cortex, at the edge, or averaged over space, for the most part, I figure a millimeter and a degree are about the same. If you're dealing with rats, that doesn't quite apply anymore.
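In the generated script, this limit is applied with AFNI's 1d_tool.py on the volume registration parameters (roughly, `1d_tool.py -infile dfile_rall.1D -set_nruns 3 -censor_motion 0.3 motion_FT`; exact file names vary). The quantity being thresholded, the Euclidean norm of the TR-to-TR differences of the six motion parameters, can be sketched in awk on toy numbers:

```shell
# Toy motion file: 3 TRs x 6 params (roll pitch yaw dS dL dP).
printf '0 0 0 0 0 0\n0.1 0 0 0.1 0 0\n0.1 0.5 0 0.1 0 0\n' > motion.1D

# For each TR after the first, compute the enorm of the parameter
# differences and flag it for censoring if it exceeds 0.3.
awk 'NR > 1 {
       ss = 0
       for (i = 1; i <= 6; i++) { d = $i - prev[i]; ss += d * d }
       printf "TR %d enorm %.3f %s\n", NR - 1, sqrt(ss), (sqrt(ss) > 0.3 ? "CENSOR" : "keep")
     }
     { for (i = 1; i <= 6; i++) prev[i] = $i }' motion.1D
```

Here the 0.5 degree jump in pitch alone pushes the enorm past the 0.3 limit, so that TR would be censored; the 0.14 step at the previous TR is kept.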
And then, any GLTs that we want to do. For example, we'll do this vis minus aud, just a contrast between the two main conditions. Compute the fitts: that's just to save RAM on your laptops for the classroom, or possibly even on your cluster if you use a lot of RAM. This just makes the 3dDeconvolve command use a little less. Otherwise it's irrelevant.
Compute an ideal. I'll talk about that later. Estimate the blur in the EPI time series and in the residuals. The blur from the residuals is what we'll actually use later on as part of the correction for multiple comparisons. So you've analyzed 100,000 voxels; you'll use this as part of the clustering routine.
We're not going to run cluster simulation in class just because it takes a little longer. It takes 10 minutes or so on most laptops. You could do that if you want to do it. But it's off now. And execute just says, don't just generate a processing script, run it.
Any questions on the options here? So again, you're not mastering anything here. This is just an example. You can ponder it more later.
AUDIENCE: For [INAUDIBLE] that's where you're specifying the Atlas to work too?
PRESENTER: We didn't actually specify the template here. Therefore, it's TT_N27. So it'd probably be better to put that in there, even if we're using it as the default. So you can specify which template to use; we just don't have the option in this example. The examples in the AFNI PROC help actually do include that. And as long as I'm on that, let me just point you at that briefly, because I don't think we looked at those help pages yet. If we go to the-- you don't have to do this, just watch a second. But if we go to the AFNI website-- you can type afni_proc.py -help, but it's going to show you-- I forget how many thousands of lines long that is. It's a lot of help to search through.
But on the website, if you go to the Documentation tab, under there we have-- part three here in the table of contents is all program help. So that has the help output from all of the programs. And the format of this-- it's the same text, but it's got a kinder and gentler format for you. So here are all the programs. And let me find AFNI PROC. Numbers come first, then @ symbols, then upper case comes before lower case. So the ordering of files may be a little confusing.
So now we have lower case. AFNI PROC is here. So here is AFNI PROC. I'll just click on this. And down here, the help is broken up into these sections because we just put these tiny characters in the help that let us automatically break this up. So this is the exact same text that you'll see on the command line, but it's a little easier to go through.
And in fact, you've got a default. You've got all these examples here. If I go to example 6, say, and click on that, it takes me right down to that example. And with tlrc_base, we're specifying the MNI 2009c template. So this is a nice way to look at the help.
Not all programs have the nice formatting yet. Some of them are just a pure dump of the help. But the more complicated ones we will try to format nicely over time. So before we start the analysis, I want to just briefly review the input data, and that's under the FT directory. How do you spell FT? So we've got our timing files here. I'll just [INAUDIBLE] one of them.
Remember, we have three runs, so there are three rows of numbers. And these numbers are not necessarily integral-- except in this case-- but real numbers that are seconds from the beginning of each run. So this is the auditory reliable condition. So time zero: that first event was at the beginning of the run. And remember, these files that we give to AFNI PROC need to account for the fact that we're going to remove two time points.
So at the scanner, since we're deleting four seconds here, the events may have been at, say, 4, 34, 154; here we remove two time points. We have a timing tool program that can subtract 4 seconds or 12 seconds or whatever very trivially for you. So those are the timing files. And then, we have the FT anatomy and then the EPI.
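That timing program is timing_tool.py (something like `timing_tool.py -timing AV2_aud.txt -add_offset -4 -write_timing shifted.txt`; option spelling from memory). The arithmetic is just a constant shift of every onset, shown here in awk on made-up onsets:

```shell
# Shift every onset in a timing file back by 4 s (two removed 2 s TRs).
# One row per run; the onset values here are made up for illustration.
printf '4 34 154\n64 94 184\n' > onsets_scanner.txt

awk '{ for (i = 1; i <= NF; i++)
         printf "%s%s", $i - 4, (i < NF ? " " : "\n") }' \
    onsets_scanner.txt
# first run's events now start at 0, 30, 150
```

After the shift, an event that occurred 4 seconds into the scan lines up with time zero of the truncated EPI data, matching the file we just looked at.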
So I'll just briefly run AFNI to see how reasonable it is. You don't have to even do this. Look at the overlap. The alignment is good enough. The alignment hasn't happened yet, but you want them modestly in the right ballpark. If the anatomy is here and the EPI is over here, you're going to have to work harder to get them to find each other.
So this is good. If I set my underlay to be the EPI, and I'll turn off the overlay and just look at the graph briefly, the EPI looks OK. I see contrast in it. It's not fantastic, it's OK. Open a graph window. Time series looks decent. You can see the red dot. The current data point is way up high. Again, that's an indicator that you've got your pre-steady state data in here. So you'll want to remove time points.
If you don't see that here and you are removing time points, you'd better make sure of what you're doing, or vice versa. OK, so the data looks good. You spend a lot of time-- an analysis, a full study, can take years, right? Even to collect one subject's data takes time: you contact them, set up the scheduling, get them down to the scanner; multiple people have to worry about the scanner. It's very expensive. You collect the data. Spend a few minutes looking at it.
The few minutes you spend verifying that you've got good data are well worth it, given the investment in collecting all of this and writing a paper that you're going to put your name on. So look at the data.
So before it's lunchtime, let's get this thing fired up. So remember, that PROC script is s05. It generates and executes the PROC script. So we can just run this and then run out of here for lunch. So: tcsh, then the script name. We'll just run that command that we looked at earlier.
Again, this is the AFNI PROC command. AFNI PROC is going to write a PROC script and then start executing it. And we can babble about that when we get back. So your laptops-- you'll probably want them to do some work over lunch. So we're going to get this off and running. This will take probably 10, 12, 15 minutes, depending on your laptop. So give it at least that much time with the laptop lid open.