01 - AFNI, FMRI and Dimon 1 of 2
Date Posted:
July 31, 2018
Date Recorded:
May 28, 2018
Speaker(s):
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Rick Reynolds, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and type is displayed that may be hard to read otherwise.
RICK REYNOLDS: This is the AFNI bootcamp. Our weeklong course will focus mostly on FMRI and doing it with AFNI software. I don't know how much you've worked with any of this. I guess I can't show you too much right now. I can dazzle myself here for a moment, but that won't be so useful to you.
But hopefully, in your home directories, you've got the data directories like AFNI_data6 and AFNI_handouts. So the AFNI_handouts directory has PDFs for classes. If we don't remember to mention what PDF we're using, just let us know.
So anyway, this particular class-- we'll do the AFNI_01 handout. OK, so for example, here I am in my home directory. So in my home directory, I have a bunch of garbage here. AFNI_data6 is the directory that we'll mostly work within in the earlier courses. And then, again, AFNI_handouts is where all the PDFs are. So ls on AFNI-- for example, I'll type AFNI_h and hit the Tab key. You should get file completion with that. But anyway, here are all the PDFs. Or of course, you can browse into the directories, but I'm afraid of mice.
So for this class, my program is [INAUDIBLE], so do whatever you want done. On a Mac, you'll probably just go and use your GUIs and go into the directory and double-click on it, or you can type open space, afni_handouts/afni01. Yes. So good. Everybody have that open and stuff? If you don't, just raise your hand, and Daniel and Gang can come by and make sure things are good. I'm not going to use the PDF. I'm going to use a PowerPoint, since Bob has already set it up to ignore some of the slides. So anyway, my name is Rick Reynolds. I work with the software here. I work under Bob Cox, who originated this stuff in 1994. Daniel Glen will follow me, and then Gang Chen, hiding in the corner. He's a statistician, so he'll handle most of the statistics-type courses. Our group is not very big. We have about six developers in it. People come and go a little bit, but not too much.
So we focus on writing software for things we are more confident in. And we don't immediately try to spit out new software for every newfangled application that people think about. So we try to focus on things we consider a little more robust and defensible, so we don't have programs to do everything under the sun. On the flip side, we do have about 600 programs. So we've written a lot, but we don't do everything.
So here you see, Bob has found lots of logos to show you. That's the main purpose of that slide, I think. So the purpose of the software, again, is mostly to provide an environment for FMRI analysis and for development of new software. If any of you are software developers, it's pretty easy to write a new piece of software that works with AFNI, even a plug-in that connects to the AFNI GUI. It's very simple to write that aspect, and then you can do whatever you want with the data that you access via the plug-in.
Do we have a laser pointer up here, or should we use a mouse? Mouse, you'll record better. OK. So of course, if I click, that's going to go to the next slide.
So perhaps the most important principle we live by is to allow people to stay close to the data. Look at your data. Don't expect to start up some software, click a few buttons, get a pretty picture, and write a story about it, because we don't really view that as terribly scientific. To a fair degree, you want to understand the steps that you were running to get the pretty picture that you want to write your story about.
You should understand the steps well enough-- nobody understands top to bottom of everything. Nobody does. But you want to understand well enough that you can justify the processing steps that lead up to your final results and so that you feel confident. Yeah, I know we could have done this or this or this. I mean, there's an infinite number of choices you can bicker about, but you want to at least understand well enough to defend what you're doing. So looking at your data is a big part of that, not only to look for things that seem like they might be anomalies or problems in some way, but also just the more you look at the data, the better you understand what seems like good or at least typical data in your environment, and what seems like something's wrong.
So we give a lot of ability for assembling pieces to make customizable analyses. We have examples of how to just run an analysis. But for the most part, we give you a tremendous amount of flexibility to analyze the data as you see fit, because it's not one size fits all. Your voxel resolution, the intensity of the data coming right out of the scanner-- these simple things-- the type of subjects you have-- these things affect the processing stream and might affect your decisions on how to analyze the data.
So you have a lot of choices. We give you choices. We try to overwhelm you into inability to do research with choices. That's really the main thing. So you want to try to understand the analysis and the tools.
Providing mechanism, not policy-- that's not exactly true anymore, because there is a lot of arguing in FMRI, so we've had to step up and argue, too. So we do give suggestions on ways to do things, but again, in the end, the responsibility is on the researchers. But having said that, that doesn't mean we try to make things difficult. We'd like to do that, but we don't always try. But again, you want to understand as well as you can.
Some random principles we live by: we're pretty responsive about fixing bugs. Release early and often. We don't have a release schedule, except to say we release as often as we add something that we think should go out to someone. So we may create a new set of binaries three or five times a week, or maybe it'll take, oh, a couple weeks between if we're traveling or something.
So if you ever wonder if your AFNI binaries are out of date, the answer is yes. They'll almost always be out of date. So what we typically recommend is, on your personal level, when you're doing your work, you can set up some cron job or something to update AFNI every week or two, or even every day. If there are no new binaries, it's not going to download everything.
But you have choices when you're running an analysis that takes a few years, right? So typically, a study is going to take a couple of years unless the data is just sitting there, ready for your full analysis. Otherwise, you have a choice. If you want to stick to one snapshot of software, that means you're going to analyze that for two or three years using that software, and write your findings.
That means after two or three years, you're using two or three-year-old software. Hopefully in that interim, improvements will be made, and you're not using them. So there's a trade-off. If you just stick with one version, you don't get the enhancements, whereas if you update all the time, the flip side is when you get to the end of the study period and you have all your data, and you're getting closer to writing things up, you really have to reanalyze all your data now, because you don't want to write a publication based on 12 different sets of software.
But still, that's a nice way to go. You just analyze the data as it comes, keep your software up to date, and then do a full analysis once you have all your subjects. You don't have to do it that way. But that's the way to stay the most current with the software, if you want to. A lot of people just use one version and stick with that, too. Either way is OK.
For helping people, we have our message board. You can go to our website, and you can post questions. You need a login for that. But once you have a login, then you just post your questions. You can search the message board without a login. We have many thousands of messages on there over the years. So it's likely that whatever you're wondering about has already been asked, so you can just search on some keywords.
AFNI has many programs and options. Putting the programs together into a fairly unique analysis stream is tedious and time consuming. And most people don't want to do it. However, if you really want to get good and understand the analysis in depth, it's a nice idea to wade into those waters, run individual commands yourself, and to understand the options and get a good feel for things that way.
Most people don't want to take all that time. And it's OK if you don't choose to do so. But that's really where you will get the most understanding-- typing in commands yourself and looking at the results and thinking about how the different interpolations or whatever affect the results.
Having said that, once you go to do your own analysis, you don't run your own analysis that you're going to publish by typing in commands. You don't do that. You have a fixed set of scripts that you run. And then, in seven months, when your boss questions you about, so how did you do this, you don't say, I don't know. I ran some program. I don't remember its name, and it had options.
You want to have a record of exactly what you do for an analysis. So the main point of that is really to try to script absolutely everything. You can do a lot of things without scripting. But then, you don't have the record of exactly what you did. Some things are automated for you. They write the scripts for you so you have them, but it's just a good way to think.
afni_proc.py-- we'll focus on that tomorrow. That's a program to write a single subject analysis script for you, and of course, you can tell it to run it, too. But that does all the pre-processing at the single subject level up through linear regression. And you can get beta weights, beta maps, and stuff like that from this.
There's uber_subject.py. That's a graphical interface on top of afni_proc.py you can use if you want to. That doesn't have nearly the set of options that afni_proc.py has. So if you like graphical interfaces and you like the comfort they provide when setting up an analysis, you can run that as a practice step, but that's going to generate an afni_proc.py command. And I suggest that you look at the afni_proc.py command and try to make the options suitable to your analysis. We'll talk about that much more tomorrow.
And then align_epi_anat.py down here, that's a very nice program. That's from Daniel, to do registration. It obviously was written to align anatomical data with EPI data, but you can align any two data sets like that. You have a lot of control over cost functions and the little details that often make life difficult. But it's pretty easy to do different types of registration with this program.
What is functional MRI? So if you've got, like, five seconds of activity, there's a delay. If you start using an area of your brain, the blood will start being fed to that area. And then, there's a delay. And then, you'll see the hemodynamic response arise. It shows a five-second plateau. I don't know if we used to think like that or what. That's not really how we work with this.
We can draw our ideal hemodynamic response curve, which doesn't have any plateau. It's just a Gaussian type of curve-ish. But then we can convolve that with a boxcar function, that is, five seconds of stimulation-- convolve the incomplete gamma variate curve with that, and you get a bigger incomplete gamma variate curve.
That will not plateau until your stimulation exceeds the duration of the basis function, the simple curve. So if your simple curve is 12 seconds, you don't get any plateau until you have 12 seconds of stimulation, or more than that, say. So this isn't really how we analyze data.
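The convolution and plateau behavior described above can be sketched numerically. The impulse-response shape and its parameters below are illustrative stand-ins, not AFNI's actual basis function:

```python
import numpy as np

# Sketch of convolving a stimulus boxcar with a gamma-variate-style
# impulse response; the (8.6, 0.547) parameters are illustrative,
# not AFNI's actual basis function.
dt = 0.1                                   # time step, seconds
t = np.arange(0.0, 30.0, dt)
h = t**8.6 * np.exp(-t / 0.547)            # incomplete-gamma-style curve
h /= h.max()                               # peak normalized to 1

def response(stim_dur):
    """Convolve a stim_dur-second boxcar with the impulse response."""
    boxcar = (t < stim_dur).astype(float)
    return np.convolve(boxcar, h)[: len(t)] * dt

resp_short = response(5.0)    # shorter than the curve: one bigger hump
resp_long = response(20.0)    # longer than the curve: a flat plateau
```

With the 20-second block, the response saturates once the stimulus outlasts the impulse response, which is the plateau behavior described above; the 5-second block just produces a single, larger hump with no plateau.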
Then, there's a fall. There may be an undershoot. We have different basis functions you can choose from. Again, the choices will come back. Some have undershoot. Some do not. Curves with undershoots are not typically the first ones we recommend, but you can use whatever you like.
So how do you do an FMRI experiment? How many people have run experiments or analyzed FMRI data? OK, great. So we don't have to babble too much about this. But the main thing is when you're running an experiment, functional MRI analysis is based on contrast.
So if you stick someone in the scanner for 30 minutes and give them visual stimulation the whole 30 minutes and then analyze your data, are you going to see anything in the visual cortex? No, not a thing. It'll be just like they're dead or something. I guess there's probably blood and breathing and stuff like that, so they're probably not dead, but maybe sleeping. But you won't see that the visual cortex is active, even though it is the whole time. It's because it never goes down.
So you need a contrast. You need it to be active, and then, you need it to be on and off and on and off. And you need this variance in the data, in the MRI signal that you analyze to detect anything. So it's all about having some contrasts.
Some people have experiments they design where it's different types of stimuli that last the whole experiment. And if there are related stimuli, like hearing different stories or, again, different types of visual stimulation, and they don't get periods where they're not getting anything-- if you don't have the fixation cross or something like that, there's no, say, baseline condition. So again, a brain location that is responsive to all the stimuli will look the same as a location that doesn't respond to anything, and you can't tell the difference. So it's good to have a contrast in your experiments, where there's some down time for any BOLD responses to start dropping back towards baseline, even if they don't get all the way there.
So most of these, you're probably comfortable with. Just an example of what some data looks like-- this is actually fairly clear data. Most of your-- if you're running experiments right now and looking at data, it doesn't look like this. You don't see these big on and off periods. Sometimes, you might, but the vast majority of experiments have a lot of different stimulus classes, and the stimulation actually lasts, say, 500 milliseconds or possibly as long as two seconds.
But they're all fairly short. And these short stimulus events, you can't really see them in the data. It's only once you get longer events that you see a more consistent signal. And why is that? Because the noise in FMRI data is as big as the signal.
So the signal has to go up for a little while and have the noise up here for you to visually tell that there's a rise in the signal. And here, it falls back down. So this is actually quite nice data, though. This is also very old-- 27 seconds on, 27 seconds off.
Here, the red curve is the ideal. The black curve is the actual data. You see how noisy it is. And the blue curve there is the ideal fit to the data, plus some polynomial trends. So you'll notice when you've looked at the data, sometimes at a voxel, you'll see it going up a lot, even with the signal on top of it, or down, or even some quadratic or cubic trends in the data.
So this, you can actually see a fairly clear, at least quadratic drift in the data, going down, and then it curves back up a little bit at the end. That's quite common due to the scanner heating or slight motion effects there. We don't have a great handle on all the causes of that, so modeling it well is difficult. But even if we know the cause, like slight subject motion, it's very difficult to model it perfectly. So we approximate it.
In AFNI, we approximate it with polynomial regressors. Or you can use sinusoids like SPM and FSL do. Either way is OK.
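The polynomial-drift idea can be sketched with plain least squares on made-up numbers (the drift shape and noise level here are invented for illustration; AFNI folds such regressors into the regression model itself rather than detrending separately, but the arithmetic is the same idea):

```python
import numpy as np

# Simulated voxel time series: baseline + quadratic drift + noise.
# All numbers here are made up for illustration.
rng = np.random.default_rng(0)
n = 150
t = np.linspace(-1.0, 1.0, n)                # scaled time axis
drift = 30.0 * t**2 - 10.0 * t               # slow quadratic scanner drift
data = 1000.0 + drift + rng.normal(0.0, 5.0, n)

# Fit polynomial terms up to degree 2 and subtract the fit, leaving
# only the noisy fluctuations around the (removed) baseline and drift.
coefs = np.polyfit(t, data, deg=2)
detrended = data - np.polyval(coefs, t)
```

After the fit, the slow curving trend is gone and the residuals sit around zero, which is what the polynomial regressors accomplish inside the full regression.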
So for most of the experiments you see-- some of the open data has longer stimulus events, but most of the experiments you'll probably work with yourself have short stimuli. Again, a good voxel, a good time series may just look like garbage to you. But it's only with many repeated events and doing the regression with a long time series that you can get a beta weight, a magnitude of the BOLD response that you can rely on a bit. But that's why you need many events, because the data may look like garbage, but over many, many stimulation events, you can see, well, on average the signal looked like this. You can figure that out, or the computers can figure that out.
Some fundamental concepts about AFNI or in general-- the basic unit of data in AFNI is the data set. That may have to change-- I don't know-- because some of the data that's put out for open analysis, they use the term data set to mean the whole experiment, the collection of data. We'll more often refer to data set as one single file or a pair of files, but one collection of 3D plus time data might be a data set. So how that's used might vary. There's a lot of jargon.
You can have a collection of one or more 3D arrays of numbers. You can actually run most of our programs with a single time series, too. You don't even have to have volumetric data in your hand. You can have data at one voxel or one electrode recording. You can run these simple data types through a processing stream, too.
Each entry in the data array is at a particular spatial location in a 3D grid. So typically when you strap someone down in the scanner, you're not collecting volumes. Typically, you're collecting slices. So if you're collecting axial slices, an axial slice will go like this.
So you may collect, say, inferior to superior. You might collect a bunch of 2D slices. If your TR is two seconds and you have interleaved acquisition, you'll collect slice 0, 2, 4, 6, 8 et cetera, and then, you'll come back and collect slice 1, 3, 5, 7, 9. So that's how the two seconds will be consumed by the scanner, though-- I don't know, 40, 50 milliseconds per slice or something. And then, over the 2 second TR, you'll collect your 30 slices or whatever.
So that's how it's collected. And then, you'll put these slices-- you'll stack them into individual three dimensional volumes. So the single slice may be 96 by 96, and then you'll have 33 slices or something like that. That's your three dimensional grid of data points. And at each location on this grid, you'll collect a whole time series of data.
The storage of them is usually spatial first, in that you'll have all of this direction's points. Then, you'll move back along your axial plane. And then, you'll collect the next set of points. So this is how they're stored on disk. And then, you move to the second time point, and you start over. That array is how they're actually stored linearly on disk, but you'll visualize it in this three dimensional volume.
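That storage order can be written down as a little index function; the grid sizes here are just the example numbers from above:

```python
# Linear on-disk offset for the layout described above: x varies
# fastest, then y, then z, and whole volumes are appended over time.
# The grid sizes are just the example numbers mentioned in the talk.
def flat_index(i, j, k, t, nx=96, ny=96, nz=33):
    """Offset of voxel (i, j, k) at time point t in the 1-D array."""
    return ((t * nz + k) * ny + j) * nx + i
```

So the first point of the second slice of the first volume, `flat_index(0, 0, 1, 0)`, lands at offset 96 * 96 = 9216, right after the first full slice, and each later time point starts another nx * ny * nz values down the file.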
When we talk about coordinates for a voxel-- a voxel is a volumetric pixel, say, so one of these little boxes in your 3D array-- the coordinates for it will be the center of the little box. That's the actual coordinate for a voxel.
It's unambiguous. If you assign it to a corner, which corner? There are eight of them. Panic ensues.
We have a four dimensional collection of data here. Originally, when Bob Cox started writing AFNI, all the data analysis was done one slice at a time. And so going to 3D was a big deal back then.
That's why half the AFNI programs are called 3d-something, because he wanted researchers to know, OK, now you're in the 3D domain-- to remind you. Of course, that's ancient history now. So that's the reason for some of the naming.
So he made the data sets. One data set was called a brick file, just again, to remind people that it's three dimensional, a brick, solid object, box. But then, it went to 3D plus time. Now, it became a four dimensional data set. But it's still named brick in the file system.
So what do you call one volume out of this 4D [INAUDIBLE]? So he started calling it sub-brick. So that's a little weird. So if you hear sub-brick, that just means one volume out of a four dimensional array of volumes.
I will try to use the word volume. I'm trying to let go of that jargon, because it's confusing. But anyway, just so you know, when the data is stored on disk-- and I'll take you here in a moment just so you can see. But I'll go into AFNI_data6/afni. You don't have to do this right now.
But for example, we have a data set here, an anat+orig.BRIK and an anat+orig.HEAD. So those two files in AFNI comprise one data set, one three dimensional data set. This is an anatomical one, just one volume.
So when we look at this-- the anat, or the EPI run 1, or rall_vr-- this piece of the file name is the prefix. So when you tell some program to write a new data set, you'll often give an option to say what the new data set should be named. You'll use -prefix. So the prefix in this case would have been rall_vr-- so, all runs, volume registered. And that was the naming that was used in this random example.
And then, this +orig part was used to tell you basically what type of space it's in. ORIG means original scanner coordinates. So you haven't aligned it with any template. In the olden days, people-- actually, the reason the AFNI graphical interface was created by Bob was to take your data and warp it into alignment with Talairach space in some sense, just like Talairach told us never to do, and we immediately went out and did it.
So the purpose of the GUI was to define where the AC-PC line and where left and right were, and to define a box that the brain fits in tightly, and then to use these things to rotate and stretch slightly, and to do this 12-piece affine registration to be in Talairach space. That was the reason for the GUI.
But anyway, so that's orig. And because of that, there was an ACPC piece to the name here that said you've done the AC-PC registration. That means you've rotated the brain so that there's like a midplane cut in the middle. And the AC-PC line is right in the center. And the anterior commissure would actually be at coordinate 0 0 0. That would be the AC-PC step, but we don't do that anymore. So you won't see those files often.
And then, the other type is TLRC. You should have some here. ANAT has a TLRC. And that used to mean Talairach space, but now it's just any template space. That means you've registered to some template, whatever template you have.
In AFNI, we don't really care what templates you use. We provide a handful of choices you can use. You can use your own. You can make one from your own subjects.
It's pretty easy to do. Daniel can talk about that a bit. But once you've registered all your subjects, whatever they are, human or whatever to a template, the files will say TLRC at that point, to remind you they're in some standard space.
What is in that? So there is that brick file and the head file we saw on disk, that I now have promptly obscured. The brick file is the actual brick. That's the data. Bob used B-R-I-K to be the file that has the actual 3D or 4D data in it. So that's nothing but the raw numbers in whatever format you've chosen to save it in. Often, it's in short integers, which is two bytes, 16-bit integer numbers to hold each value, possibly scaled.
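The "possibly scaled" part can be sketched like this; the scaling scheme shown is a generic one for illustration, not necessarily byte-for-byte what AFNI writes:

```python
import numpy as np

# Generic sketch of storing float values as scaled 16-bit shorts;
# not necessarily byte-for-byte what AFNI writes, just the idea.
values = np.array([0.0, 1.25, 3.7, -2.2], dtype=np.float32)
scale = float(np.abs(values).max()) / 32767.0       # map onto short range
stored = np.round(values / scale).astype(np.int16)  # raw numbers on disk
recovered = stored * scale                          # what a reader applies
```

Two bytes per value instead of four, at the cost of a small quantization error; the scale factor then has to live in the header so readers can undo it.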
Anyway, so the brick file has the raw numbers, and then, there's a head file, that has header information. That tells you where the Voxels are in space, the resolution-- how big are the voxels? What space are you in? What's the orientation of the data?
So if I've got this box of data in here, and I want to access this voxel's location, where is it? How do I do that? Well, you have to know that it's in LPI orientation. So you have to know that the data values go left to right, and then posterior to anterior, then inferior to superior.
And with those pieces of information, you know at index 17, 3, 26, I'm at this coordinate in space. And this is my piece of data for that location. So that's going between coordinates and, say, voxel indices in three dimensions.
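In the simplest, non-oblique case, that index-to-coordinate step is just origin plus index times voxel size. The origin and voxel sizes below are made-up example values:

```python
# Simplest non-oblique case of index -> coordinate: the header stores
# an origin (the center of voxel 0, 0, 0) and per-axis voxel sizes.
# The origin and sizes below are made-up example values.
def ijk_to_xyz(i, j, k,
               origin=(-110.0, -110.0, -50.0),   # hypothetical origin, mm
               delta=(2.75, 2.75, 3.0)):         # voxel sizes, mm
    """Center-of-voxel coordinate for index (i, j, k)."""
    return tuple(o + n * d for o, n, d in zip(origin, (i, j, k), delta))
```

Real headers also carry the orientation and possibly an obliquity transform, so the sign conventions get handled for you by the software rather than by arithmetic this simple.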
So how you do these things, that sort of information is stored in the header. You might have some temporal information, like what's the slice timing? At what point in time did I acquire slice 7? You need to know.
If you're going to do slice timing correction, you'll need to know. So you'll need to know that-- OK, you can at least know that my TR was two seconds. I acquired slice 0, 2, 4, 6, 8 and then 1, 3, 5, 7. So from that, you can compute that 7 is a little more than halfway through the TR. Maybe it's at time 1.16 or something. So you can figure out the timing, but that has to be stored somewhere in the header.
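That back-of-the-envelope timing computation can be written out. This assumes an evens-first interleaved order and the TR divided evenly across slices:

```python
# Slice acquisition times for an interleaved sequence (evens first,
# then odds), assuming the TR is divided evenly across the slices.
def slice_times(n_slices=33, tr=2.0):
    order = list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))
    dt = tr / n_slices                 # time spent per slice
    times = [0.0] * n_slices
    for position, sl in enumerate(order):
        times[sl] = position * dt
    return times

times = slice_times()
# With 33 slices and TR = 2 s, slice 7 is the 21st slice acquired,
# so times[7] comes out to about 1.21 s: a bit past halfway through
# the TR, as described above.
```

The same arithmetic gives slice 1 a time of about 1.03 seconds, which matches the header output shown later in the talk.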
And then, when you get some tstat or fstat, or whatever results or correlations, or what have you, you want the statistical parameters, of course, to be stored in there, too. So there's lots of garbage in the header files. Then, in AFNI, those are just text. You can read them if you choose to, though they're not written to be nicely read. But they are text.
AUDIENCE: This is a bunch of garbage text. How do we interact with it if all that information is there?
RICK REYNOLDS: So you'll use programs to get at the pieces of information. Like if you want to know how many time points there are or things like that, you can use 3dinfo -nt, for the number of time points. Well, we would do that anyway, but why don't we do that right now, since we're talking about it? So open a terminal window, please, and let's get a little information from a data set.
So I'm in my home directory-- anywhere, wherever you stuck AFNI_data6. So if you type ls, you want to be able to see this. Hopefully, it's in your home directory if you're just learning to navigate now.
OK, so I'll do this in two steps, but usually, once you get comfortable, you can jump many directories at a time. But let's cd into AFNI_data6. And as you're learning to navigate a new directory structure-- like, for example, if you're not familiar with our class data, you don't know what's in AFNI_data6-- anytime you type cd, just follow it up with an ls.
ls lists the contents of the directory, what's here. Just do it every time. And then, you get familiar with where everything is.
So under AFNI_data6, we've got a bunch of directories, some DICOM files-- I don't know if those are DICOM or GE I-files. I don't remember. But we have a bunch of directories under here.
We use the word AFNI way too much. So AFNI can mean many things. Capital AFNI is, say, the name of our software package. Lowercase afni is the name of the graphical user interface you can use to look at volumetric data, but lowercase afni in the directory structure is where we often put some of the AFNI data.
We didn't start doing that. Some users started doing that, but we took it on. And now we've got AFNIs floating all over the place. So that'll be a little irritating and confusing, perhaps.
But anyway, let's cd into that afni directory. So I went into AFNI_data6 and then into afni. How many people are fairly comfortable with typing commands and navigating with a keyboard? Oh. Well, we're in good shape, so I won't go too slowly with this stuff, then.
So now, we're back in that same directory that I was looking at before. And let's look at, say, the EPI data. That's a little more interesting. What we can do is things like 3dinfo. That's a 3d program, and info-- we're just getting information about a data set, epi_r1+orig.
We have a few epi_r1 files here. But once you type the plus, then you can hit Tab for file name completion. And you can follow this up with the dot. If I hit Tab, it includes the dot. You can include the .HEAD or .BRIK. Those are all fine.
But if I simply type 3dinfo on this data set-- I'm going to make this slightly smaller just to see if we can fit on the screen. OK, fantastic. So here's my command at the top, 3dinfo on the data set. And this just gives some of the information. That's not everything, but the most common things.
The data set name, some unique identifier. We're in orig space. It says the space of the data. Most of that up there, I won't focus on too much.
The data is plumb. That means it's not oblique. It wasn't collected at an angle.
When you collect your data at the scanner, a lot of people collect the slices so that they're parallel to some axis on the subject. So if the subject is in there a little crooked, they may try to do the slices to cover some area well or to follow some anatomical structure in the subject. So as far as the scanner's concerned, they're not flat with respect to the scanner. They're at an angle in the scanner. So those will be oblique slices or oblique data, and then, we'll have to keep track of the fact that they were acquired like that.
But anyway, so when you see plumb, that means it's not oblique. It's not at an angle. It's on the cardinal grid, we might say.
And then, the orientation, this data was collected right to left, anterior to posterior, inferior to superior. So in AFNI, we'll call that orientation RAI. For your confusion pleasure, some other software packages may refer to this exact same storage structure as LPS.
And whether right is negative or left is negative-- in this case, that depends on the software, too. So not only do you have to have the orientation in your paper, but that doesn't tell you exactly-- if you see coordinates in the paper, you don't necessarily know if they're right or left, A or P. It's very messy. The field is not uniform that way, so we have to suffer a bit.
So to some degree, in your paper, having a coordinate like negative 17, 20, 31-- you might not list all the coordinates like that. It's unambiguous to say 32 left, 14 anterior, 15 superior, whatever. So if you put something in there that says exactly, this is left, rather than negative, then it's not ambiguous.
Anyway, moving on. So the next few lines talk about the extent of the data. What's a bounding box for how this is located in space? So it gives a range of, say, right to left coordinates. You can see the axial voxels here are 2.75 by 2.75 millimeters in-plane. But then, the slices themselves are three millimeters apart.
We have 80 by 80 voxels and 33 slices in this case. Beyond that, we have 152 time points. We have a TR of two seconds, and then, some details about the data.
Let me add an option here. You can just watch. You don't have to catch up with this. There's a lowercase -verb option and an uppercase one, just depending how verbose you want the details to be.
This is going to include details about every volume in the data set, and there are 152 of them. So I don't want to have this just flash by the screen. So I'm going to get a little Unixy. I'm going to pipe this through less. But do that if you choose to. You can just watch, too.
So now, we'll see everything. And we can scroll down and up through the output. But this includes the time offsets per slice.
So you can see that slice 0 was acquired at time 0. Slice 1 was acquired at time 1.03 seconds. So that suggests that you're using interleaved slicing starting at slice 0. Interleaved doesn't always start at 0. On some Siemens scanners, if there's an even or odd number of slices, depending on the parity-- I don't recall which-- it'll start at slice 1 and then come back and do 0.
And a lot of software doesn't start at zero. In AFNI, we start counting most things at zero, like a temporal offset or a spatial offset, or like the C programming language. In MATLAB, for example, you start at one. So all sorts of little things to confuse you with. It's very fun.
So anyway, you can see that slice pattern there. And you can see how the two second TR is divided up across the slices. And then, for each volume in here, you see volume zero, the data type is short. That's the 16-bit signed short integers. And you see the range, 0 to the 3700, 32 72.
So you can even note that that first volume is higher than the rest of them. And the second volume is still pretty high. This is like pre-steady state acquisition. The initial volume is at higher intensity.
Then, it goes down. And then, you reach a steady state. So you'll probably throw away those first couple of volumes. Yes?
AUDIENCE: This is a small data set, right? There's only 152.
RICK REYNOLDS: This is, say, one run. So 152 is not terribly small from what I've seen. People sometimes go down to 100 to 300, say. But it depends on how fast your TR is.
So here, the time step is two seconds, so we've got about 300 seconds; this is five minutes of data. That's reasonable. But you might have six runs to put together, so you could have around 900 data points that you might analyze.
Especially in the older days, the scanners couldn't run for too long. They would heat up. So a five-minute scan would be fairly normal back then, because you'd start getting artifacts in the data as the scanner got too hot. But on top of that, even though scanners are manufactured better now, asking subjects to be still for too long is asking a lot. I bet if you haven't been in the scanner, you'll learn that quickly.
Anyone have an idea what's a big time point to time point motion? If the subject moves this much, we may have to throw out that volume of data. What's a big motion? Between one two-second time point and the next?
1 millimeter is big. Even 0.3 millimeters, we'll often call big. If a subject moves 0.3 millimeters over the course of two seconds, we may have to start throwing out data. So how good are we at sitting still and not moving 0.3 millimeters?
You have to stop breathing. Oh, you can't stop breathing, because you're going to be in the scanner for two hours. You have to stop your heart because your heart keeps sending your brain up and down, so you've got motion. You can't stop your heart. So we have all these little issues to deal with.
And so it's not too difficult to have motion big enough to have to censor it out or something like that. So that's the bane of FMRI, really. Subject motion is almost impossible to deal with, so you do your best to get the subjects not to move up front. I've gone off on a long digression, so I don't remember what the point of it was, but anyway, so a little motion is actually a big deal for the data.
AUDIENCE: As a rat person, we collect five minutes of nothing before we start our experiment.
RICK REYNOLDS: Sure.
AUDIENCE: That threw me off.
RICK REYNOLDS: Right, right.
AUDIENCE: [INAUDIBLE]
RICK REYNOLDS: So for rats, for example, they're fixed in a rigid position and maybe anesthetized, so they're not going to move; you don't have to worry about that too much. With humans, people start whining if you bolt them into the scanner, and it's a terrible problem. So it's hard to keep still for that long if you're collecting human data. That's another reason the runs are a little shorter.
At the bottom of this, if I scroll all the way down past the data, we also have a history of the commands that were used to create this data set. So if you've done a lot of pre-processing on this data, getting up to doing a little regression, 3dinfo might show a whole slew of commands in the history section; this could be megabytes long in some cases.
You can also use 3dinfo to get little pieces of information. So for example, with -nt, I can just get the number of time points, 152. So you can use this in a shell script to grab that piece of information and use it for further processing.
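The shell-scripting idea can be sketched like this. It's a minimal sketch: the dataset name epi_r1+orig is hypothetical, and since 3dinfo needs an AFNI install, the second half parses a saved line of verbose 3dinfo output instead, so the shell plumbing itself runs anywhere:

```shell
# In a real session you would capture the count directly (hypothetical name):
#   nt=$(3dinfo -nt epi_r1+orig.HEAD)
# Here we parse a saved line of 3dinfo -verb output instead.
line='Number of time steps = 152  Time step = 2.00000s'
# Split on '=', take the text after the first one, keep its first word:
nt=$(echo "$line" | awk -F'=' '{print $2}' | awk '{print $1}')
echo "this run has $nt time points"
```

Either way, the value lands in a shell variable that later processing steps can use.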
Just as an example, you can get the prefix. Why would you include the prefix? Well, because we're going to add more: how about -o3 and -n4? That's good enough. And *.HEAD.
So I am asking for the orientation, or whatever -o3 happens to be; you'll find out soon enough. -n4, whatever that happens to be, is the number of voxels in four dimensions. And *.HEAD will expand to match every .HEAD file in the directory: the EPI, the statistical data, the anatomy.
Let me make my screen a touch wider. And now you have a table of the details. So for the anats, one is in orig space and one is in standard space. There's EPI run 1; func_slim is a statistical data set with beta weights and t-stats and such; and some mask. So it gives this information for all of these.
So you can see that -o3, wherever it went, is actually origin information: these are the coordinates where each of the three axes starts. And then, the number of voxels in each dimension. You can see some of them have a fourth dimension; some do not.
So you get the 175, whatever, and then the ones. And then you get this extra fourth dimension of one. Why do we get the ones for those? And why does EPI run 1 only have three numbers on that line? Oh, the spacing is screwed up here; that should be 80, 80, 33. I don't know why the spacing is messed up.
So when in doubt, I'll blame Gang. Let me just run that again. Yeah, it was when I widened my screen. The spacing is good if I make the window bigger first and then run it again.
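The table just described can be sketched as follows. The 3dinfo call itself needs AFNI, so it appears as a comment; the runnable part pulls apart one illustrative row (the origin values are made up, but the 80 x 80 x 33 x 152 grid matches the EPI run discussed above):

```shell
# The actual table command (requires AFNI; *.HEAD matches every header file):
#   3dinfo -prefix -o3 -n4 *.HEAD
# Each row holds: prefix, the x/y/z origin, then nx ny nz nt
# (nt shows as 1 for datasets with no time axis).
# Parse one illustrative row with shell word splitting:
row='epi_r1 119.4 -119.4 -54.0 80 80 33 152'
set -- $row
echo "$1: $5 x $6 x $7 voxels, $8 time points"
```

Positional parameters ($1, $5, and so on) are a quick way to pick columns out of such a table inside a script.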
So anyway, now you see the four dimensions there. Anyway, so that's some information about the data sets. Any questions about this?