29 - PsychoPhysiological Interactions (PPI)
February 15, 2019
May 31, 2018
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
For more information and course materials, please visit the workshop website:
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
RICK REYNOLDS: Let's talk PPI. I have a little handout for PPI here, under Events, AFNI, handouts, strategically called, in all caps, PPI, dot, lowercase pdf. While I'm doing the hands-on stuff, handouts aren't as important. If you're doing a lecture, then of course you just follow the handouts. Me, I try to ignore them as soon and as completely as possible.
PPI: psychophysiological interaction. So what is it? It's the interaction between your tasks, the conditions you're stimulating your subjects with, and the physiology, which is, say, the BOLD signal. The physiology here would be the signal at some voxel location, at some location in the subject's brain.
So you're looking for an interaction: at every voxel in the brain, you want to evaluate the interaction between the different tasks and this location in the brain. It's a little bit like a seed-based analysis, a seed-based resting state analysis, but you want to think about how the correspondence varies among the task conditions. That's basically what it is.
An example of what you might ask here is: where is there a larger component of the left auditory time series during the auditory reliable condition as opposed to the visual reliable condition? So you make a time series in the left auditory area somewhere, an ROI average or just a single seed voxel. That time series is the physiology aspect of this.
And then you want to see, for every other location in the brain, what has a larger or smaller component of this time series? How does the contribution of that time series vary among my tasks? Here, say, between the auditory and visual reliable tasks.
We want to measure this above and beyond the main task effects. If you included the main task effect, this might just be a normal interaction in the beta weights, comparing the beta weights elsewhere to the beta weight at this voxel. You could do that with 3dttest++ if you wanted: not compared to zero, but compared to the beta weight magnitude at this location. Of course, you'd probably get a lot of negative results; if you have a positive beta weight here, all the zeros across the brain might show something. But anyway, that would be an easy thing to do. You don't need a fancy method to get such an interaction.
So we want to measure this above and beyond the main task effects. Let me leave this here for now and leap over to my favorite place, the terminal window, and you can come along. Let's go to the home directory again, so we'll type cd and then ls. I want to look at some time series to get a better idea of what PPI is going after here.
So let's go back into AFNI_data6, and we'll stay in the FT_analysis subdirectory. This is where we just ran our surface analysis scripts. Within this directory, there is a PPI directory, so let's go there. This README here, actually, is an analysis script: the README script calls these command scripts. We'll save that for later.
But what that is is a set of scripts to run a PPI analysis based on our FT results. We've run the basic volumetric analysis on subject FT, so we have the FT.results directory. Maybe now we want to run a PPI analysis based on that, and these scripts are set up to do so.
But before we worry about that, I want you to run this run PPIExample.txt script. It's not so great to look at, so I'm just not going to; let's just tcsh the thing and look at what it shows us. So: tcsh run PPIExample.txt. By the way, this is a nice projection system; this looks fantastic. We've got high resolution, so we can show a lot of stuff. That SUMA session was encouraging me to show the extra garbage in SUMA because we can see it all, and see it well, without stacking things on top of each other. This is nice. I noticed that here, just putting these windows side by side; we usually don't have the space to do such a thing.
Anyway, I digress. Let me switch these around: I'm going to put the window titled Ideals on the left, Target in the middle, and Seed on the right. It doesn't really matter, but that's maybe an order that makes more sense. So what do we have here? On the upper left, we have two voxels. I have perhaps no idea which is which; I don't remember what I did for this, but I should be able to figure it out from here.
But anyway, the blue and the purple on top: one of these time series is at our seed location, and one is at some random target location that we want to compare to our seed. The seed is the physiological piece of the PPI. We want the contribution of this seed at each target location, and we want to see how that varies across our conditions.
So I don't know which is which. Let's call the blue one the seed; hopefully that's correct. I suspect it's not, but who cares. So we've got a seed time series in blue and a target time series in purple, and we want to decompose these things somehow. First of all, we don't want the task main effects to affect our PPI.
The task main effects are at the bottom in black and red. You recognize those curves, right? The black is the visual reliable ideal response curve, and the red one is the auditory reliable ideal. We don't want those to affect our PPI. The green here is a sinusoid. I didn't know what else to use, but it's some function that is zero mean over time; it stands for whatever is going on in our seed region beyond the task, and how much of it there is may vary over time.
So the seed region has some amount of this over time, and the target region has some different amount of it; the target contains some amount of the seed signal over time. And we want to look at the contribution of the seed at the target location, and for every target location in the brain, for that matter. So--
AUDIENCE: [INAUDIBLE] frequency of a sinusoid is just--
RICK REYNOLDS: Just to throw something in there. The point is, it has to be zero mean across all time and zero mean with respect to our tasks of interest. I didn't know how to fake that well, so I just made a sinusoid and had it go up and down fast enough that I figure it's about zero mean over any of these durations.
But it doesn't have to be. Let me give you a context to picture the PPI in, a better one than this. Say I run an analysis where my two main conditions are images of puppies and images of spiders, and I'm comparing the brain response to these two types of images. So I run my analysis, I publish my results, and lo and behold, I'm in Nature: there is a difference between looking at puppies and spiders.
But I wasn't very picky. I didn't take note of the fact that some spiders are kind of ugly and some are really cute. And some of the puppies are OK, but some of them are really scary, right? Puppies can really scare people, and spiders are adorable. So there's a cuteness factor for the spiders and a scariness factor for the puppies that I haven't accounted for.
So you could imagine that as your subjects see this puppy, then this puppy, then this puppy, maybe the magnitude of the BOLD response varies across stimulus events. Or maybe the puppies come in a block condition, say 10 puppies at a time, and their scariness factor goes up and down, so there are a lot of little fluctuations in the BOLD response based on these quick stimuli. And maybe that's actually in the data, at the seed location or at any location. Maybe the same thing holds for the spiders: I see kind of an ugly spider, then an adorable spider.
And at some different parts of the brain, you see fluctuations due to that. You didn't account for this in your analysis, but the variance is in the data; the subject is responding to these cute spiders. A PPI could capture that, because in your main analysis all you've got are these big black and red curves. All you account for is the average BOLD response to the spiders and the puppies.
But maybe on top of that, based on how cute these things are, you're seeing additional fluctuations that just aren't in your model. Perhaps you should be using amplitude modulation, which would be appropriate in the context of what I'm describing. But let's ignore that and look at an alternative way of doing this, in the context of a PPI.
That does bring something into question, to some degree, in my mind: often the only thing a PPI analysis can really capture is something you didn't think of capturing in your original model. Another thing you might find is that the way to get a stronger PPI result is to do a lousy job in your original model. The worse you model your data up front, the more is left for the PPI to find, because PPI is more of a correlation type of analysis. But you've got tasks in there, not just someone taking a nap in the scanner.
Actual tasks tend to mean bigger BOLD responses, and so you'll capture the variance of these things in the PPI analysis rather than in the task model. So if you do a bad job up front, your PPI may look better. Make of that what you will; you'd rather do a good job up front.
So there are these aspects that you either don't understand or have failed to model in the data, and that's what the PPI is left with. Again, this green signal is just some cuteness rating or something like that, something happening in our voxels of interest, but on top of the task.
So this blue curve right here in the first graph may be some combination of these five curves. And what are the five curves? Well, the black curve is the visual curve, or let's call it the puppy curve, and the red curve is the spider curve. The green curve is the additional fluctuation that isn't varying with task at our seed voxel; it's just whatever is going on in our seed voxel more or less over the whole run.
And then the blue is an additional fluctuation that's happening during the-- now I forget. Is this the spider time? So this is additional fluctuation during the spider time, and the purple is additional fluctuation during the puppy time. And the same thing at the-- well, I've got target and seed here; I should reverse those. Sorry for throwing you off, but again, it's more logical to talk about the seed first, then the target. So we've got these components in the seed and the components at the target.
And so the PPI asks: how big a component of the seed time series seems to be present in each target time series in the different conditions, and what is the interaction? You might get a beta weight for each condition. The beta weight for my spiders might be the contribution of the seed that's unique to the spider condition, and the beta weight for puppies the contribution specific to the puppy condition. Plus there's a contribution that's more or less consistent over time, a single beta weight: that is the overall effect of the seed at the target location.
A little bit confused? You should be. PPI is not a simple, obvious thing. The original way it was done was more or less: in one condition we have this magnitude, in the other condition that magnitude, and how does the difference between them compare between a seed and a target? If you only have two conditions in your entire time series, you can do something like that.
How many people are doing an analysis with two conditions, where rest counts as one of them? So you are? If you're doing that, maybe you could do more of a standard PPI, and it would be a little more comprehensible to talk about. But if you've got 17 conditions, like most people have, you can't do something like that. This approach should reduce to the same thing as the old PPI, but it handles more than two conditions.
So you'll effectively get a beta weight, a PPI value, per condition, and the interaction would be, say, the difference of two of them. The difference of two would show a change in the contribution of your seed at the target location between condition puppy and condition spider. Effectively, we'd like to just remove the main ideal time series from the data, so that the evaluation of the PPI is not affected by these big curves; they would just dominate everything, and you've already done that analysis. You don't need to see the same result come out in your PPI.
So we basically try to keep the PPI results orthogonal to that, or based on orthogonal curves.
AUDIENCE: So [INAUDIBLE]
RICK REYNOLDS: You choose the seed in much the same way you would in a resting state analysis: whatever the heck you care about, you know? Who's to say? In this example, we said we cared about the left auditory cortex. You've got a region that is interesting in some way, and you think it has some connectivity with some other regions--
AUDIENCE: How do you [INAUDIBLE]
RICK REYNOLDS: The target is everywhere; each location becomes a target. But there's one seed location, say. It's this area that you care about, and how it connects to everything else.
AUDIENCE: Does it make sense if you [INAUDIBLE] and you get the [INAUDIBLE] take that voxel?
RICK REYNOLDS: You could. You could pick the seed that way. The PPI results, the way we've described them, use time series that are somewhat orthogonal to the main results, so that in itself gives you a lot of justification for choosing the seeds however you want. But that brings up a question: are your seeds going to be single-subject based or group based?
So there's some difference there too. But for the most part, I don't see any reason why you would be restricted in choosing a seed, because even against the main result we're using time series that are more or less orthogonal. So it's not quite like the double dipping thing, choosing a seed from your main results and then doing a new analysis based on it; it's not even quite double dipping, it's sort of a different thing. I think you can justify using whatever you want.
RICK REYNOLDS: Here, let me leap back to the slides. Is this the right page? Yes. So here, we are going to actually use beta weights as measures of effect. The PPI result is going to be not a correlation or a normalized correlation; we're going to use a beta weight. And the beta weight is: here's my seed time series, with the main effects removed.
Now, what's the contribution, the magnitude, of my seed time series during puppies versus during spiders? These are beta weights, effectively: the magnitude of some time series, the incremental effect of this seed time series at the other locations. You could do it as a correlation analysis; however, here is one example of the trouble with correlation. Suppose two areas are correlated only during the puppy condition and not the rest of the time.
Even if they're more or less perfectly correlated then, the correlation value is going to be essentially scaled down by the fraction of time that they are correlated. So if the correlation is zero during the whole run but high during 10% of it, your R squared should be around 10%, not 100%. It basically gets scaled down by the duration.
And that's a goofiness now: how much of my run did I use for cute spiders? How much for scary spiders? Is that the same? What about the puppies? How much for puppies versus spiders? Did I stimulate for as long? Now you have to worry about these things, because they will affect the correlations. But they won't affect the beta weights,
assuming you have enough time to estimate the beta weights, which you should, because it's the same task conditions. So beta weights, in those ways, are a little easier to handle. And then beta weights just go to a group analysis the same way beta weights do now: puppies versus spiders, but now we can use PPI puppies versus PPI spiders.
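The dilution point is easy to see with toy numbers. This is just a numpy sketch, not AFNI code, and the 10% "puppy block" is made up: the seed and target agree perfectly inside the block and are independent noise elsewhere, so the whole-run correlation shrinks toward the active fraction even though the in-block correlation is perfect.

```python
import numpy as np

# Toy illustration (not AFNI code): seed and target agree perfectly during a
# "puppy" block covering 10% of the run, and are independent noise otherwise.
rng = np.random.default_rng(0)
n = 2000
active = np.zeros(n, dtype=bool)
active[:n // 10] = True                    # the condition occupies 10% of the run

seed = rng.standard_normal(n)
target = np.where(active, seed, rng.standard_normal(n))

r_block = np.corrcoef(seed[active], target[active])[0, 1]   # 1, by construction
r_full = np.corrcoef(seed, target)[0, 1]   # diluted by the 90% of "off" time

print(f"r within the block:  {r_block:.2f}")
print(f"r over the whole run: {r_full:.2f}")   # roughly the active fraction
```

A beta weight estimated only from the in-block samples would not suffer this dilution, which is the argument for betas over correlations here.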
So, again, they're a little more like task effects, not affected by the relative cumulative durations, like I mentioned. The way we do it is we compute [INAUDIBLE] task conditions at once. However many tasks you have, you will compute a beta weight for the contribution during each one of those conditions: puppies, spiders, pizzas. For each of your conditions, you'll get a beta weight that shows the relative amount of your seed that is present at each target location, so you get these magnitude values. And you can do a group analysis just the same way you would otherwise.
So let's think about how we can actually do this. How do we get these magnitudes? One difficulty: suppose my puppies are displayed for 10 seconds here, and say we wipe out the main task effects; they're gone, so let's not even talk about them. I see some fluctuations during the 10 seconds of my puppy response. Can I just compare those between the seed and target over that 10-second period? No.
Especially, what if it's not 10 seconds? How many of your conditions are 10 seconds long? For most of you, they are one second long. You're going to compare one second to one second? Let's ponder one or two seconds, actually; that makes it more obvious. If you compare, say, two seconds to two seconds between seed and target, how do they compare?
Well, suppose we present a big stimulus in these two seconds; the cutest spider ever seen is presented. You expect a big result, a big BOLD effect from that, right? What does the BOLD response to it look like? It looks like this: it's all over here. What's the effect in that two seconds? Nothing. There's almost no BOLD response in there at all. It takes about two seconds for the BOLD response to even get off the baseline.
It peaks four or five seconds later and then goes back down after 12 or 14 seconds. You're nowhere near your stimulus period. That's the BOLD effect, right? This makes life a little more difficult. Now we have to talk about deconvolution; we have to distinguish MRI timing, or BOLD timing, from some sort of neurological timing. What really happens in the brain during the two seconds? Forget the BOLD response; we just want what's happening in that two seconds, and the BOLD response is just an effect of it.
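You can see the timing mismatch numerically. This is a rough sketch: the HRF here is a generic gamma variate peaking around 5-6 seconds, an assumption standing in for AFNI's BLOCK shape, not the actual basis function. The point is only that a 2-second event produces almost no signal during its own 2 seconds.

```python
import numpy as np

# Sketch of why a 2-second event gives almost no BOLD signal *during*
# those 2 seconds.  The HRF is an assumed gamma variate, not AFNI's BLOCK.
dt = 0.1
t = np.arange(0, 30, dt)
hrf = (t ** 6) * np.exp(-t / 0.9)          # gamma variate, peak near 5.4 s
hrf /= hrf.max()

stim = (t < 2.0).astype(float)             # 2-second event starting at t = 0
bold = np.convolve(stim, hrf)[: len(t)] * dt

peak_time = t[np.argmax(bold)]             # response peaks well after the event
during = bold[t < 2.0].max()               # signal while the stimulus is on

print(f"BOLD peaks near t = {peak_time:.1f} s")
print(f"signal during the stimulus: {during / bold.max():.0%} of peak")
```

So comparing seed and target over just the stimulus window would mostly compare the tail of whatever happened earlier, which is exactly the motivation for deconvolving first.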
So life is hard. What do we do? We're talking about deconvolution here. Here we whine about the kind of task involved with the [INAUDIBLE] function. Let me just leap down to the processing steps. What do we actually do? Maybe things will be clearer this way. First of all, we generate a seed time series.
But again, we don't want the main effects to influence this. So where do we get this seed time series from? Do we want motion to affect it? Do we want known drift to be in it? No. That's all just muddying the waters, right? So if we don't want any of this garbage affecting our seed time series, where might we get it from? The residuals. That's right: you get it from the single-subject analysis residuals. You've already removed the main task effects and all the garbage terms we won't talk about. Life is a little harder when you worry about censoring; let's table that aspect for now. But you get it from the residuals.
So now you have a seed time series without any main task effect; you've already regressed out the main task effects and motion and whatever. You generate your seed time series from an ROI average or a single seed voxel, or you drop a sphere down, a 5 millimeter radius ball, whatever you do. And then you want to deconvolve this seed into neural timing, in some sense.
Remember, though, the BOLD response is slow and sluggish. You've got a two-second event, but you don't want the thing you're comparing to come from an earlier event; you want it from the cute puppies, the cute spiders. You don't want it from the pizza task that came before; you may actually be in the middle of the peak for your pizza stimulus. You don't want pizza. You want spiders.
So you want to deconvolve the BOLD signal in some sense. Instead of this BOLD curve, you get more of a neurological magnitude. Of course, we can't do that accurately, but we can more or less estimate some sort of neurological signal by deconvolving the MRI signal with the same block basis function you used for your regression.
So if you used BLOCK for your main analysis, for your main regression (which we did, we used BLOCK of 20), or if you used BLOCK of 2, or BLOCK of 7, who cares? You're using BLOCK. If that's what we think the BOLD response looks like, we can deconvolve the signal with this function, and now we have some sort of neuronal timing signal that we can partition over stimulus time: fixation, pizza, puppy, spider. We can break our time series into pieces, with zeros that wipe out everything but each condition of interest. And now, in some sense, you have neural activity during your condition of interest.
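The partitioning step itself is just elementwise multiplication by 0/1 condition masks. A minimal sketch with made-up data (the "deconvolved seed" here is random numbers, and the condition windows are invented):

```python
import numpy as np

# Sketch of partitioning a deconvolved ("neural") seed by condition:
# zero out everything except each condition's own time points.
rng = np.random.default_rng(1)
n = 100
neural_seed = rng.standard_normal(n)       # stand-in for the deconvolved seed

puppy = np.zeros(n); puppy[10:30] = 1      # 0/1 masks of when each condition was on
spider = np.zeros(n); spider[50:70] = 1    # (hypothetical windows)

seed_puppy = neural_seed * puppy           # fluctuations only during puppy time
seed_spider = neural_seed * spider         # fluctuations only during spider time

# Each piece matches the seed inside its own window and is zero elsewhere.
print(seed_puppy[10:30].std(), seed_puppy[spider.astype(bool)].sum())
```

Each partitioned piece then gets reconvolved with the block shape before going back into the regression, as described next.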
You might even be thinking right now: well, shouldn't we do all of our analysis like that? What if we deconvolved our whole 4D time series and then ran our linear regression model; we could do the whole analysis like this. That might even be reasonable, if we could deconvolve well. But the cost is that it would take a long time. Deconvolution is an expensive step; it takes a lot of computation. It's easy to do in one voxel modestly quickly, but to do it on 100,000 voxels takes a long time.
Anyway. So we take our seed time series and partition it across the stimulus classes. Now we have one seed time series per condition: zero everywhere except for fluctuations during the condition of interest, spiders or puppies. And we would hope to see cute spider, ugly spider, very cute spider, moderately cute spider, and similarly for puppies; some sort of fluctuations like this.
Now that we've partitioned this one seed time series, we have a piece of it per condition: zero, puppy, zero, puppy, zero, other puppy; zero, spider, zero, cute spider, medium spider. It's all zero except during the condition of interest, and then we see the fluctuations. These are fluctuations that we think might be related to our stimuli, because we deconvolved the signal back to neuronal timing as well as possible. Hopefully it's not grossly incorrect.
And then we reconvolve that back into MRI timing with our BLOCK function, now that we've separated our puppies and our spiders at temporally appropriate times. Once you reconvolve, you throw these into the linear regression: you put the seed in the regression along with all your PPI terms, and you just get beta weights out. You don't care about the beta weight for the overall seed; you want the things that are specific to a condition. So you get your beta weight for puppies and your beta weight for spiders, and their difference might give you the interaction. And the final step is blaming your confusing results on literature-generated seed locations. That's the most important step.
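The end-stage regression can be sketched in a few lines. Everything here is simulated: the condition windows, the "true" contributions (0.5 during puppies, 1.0 during spiders, 0.2 overall), and the noise level are all invented, just to show that per-condition PPI regressors yield per-condition beta weights whose contrast is the interaction.

```python
import numpy as np

# Hypothetical end-stage regression at one target voxel: estimate the
# condition-specific contributions of the seed (the PPI beta weights).
rng = np.random.default_rng(2)
n = 200
seed = rng.standard_normal(n)
puppy_on = np.zeros(n); puppy_on[:50] = 1          # invented condition windows
spider_on = np.zeros(n); spider_on[100:150] = 1

ppi_puppy = seed * puppy_on                        # per-condition PPI regressors
ppi_spider = seed * spider_on

# Simulated target: the seed matters twice as much during spiders as puppies.
target = (0.5 * ppi_puppy + 1.0 * ppi_spider + 0.2 * seed
          + 0.1 * rng.standard_normal(n))

X = np.column_stack([ppi_puppy, ppi_spider, seed, np.ones(n)])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(betas[:3])                 # close to the simulated [0.5, 1.0, 0.2]
interaction = betas[1] - betas[0]  # the spider-vs-puppy PPI contrast
```

In the real pipeline, these regressors go into the full single-subject model rather than a stripped-down one, as discussed just below.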
RICK REYNOLDS: To some degree, you could just regress your PPI terms against the errts, the residuals, and we could almost do an insta-PPI like that with 3dDeconvolve, except then you'll get the program whining at you: you're not accounting for all the degrees of freedom. It's better to put it in the full model, so that's what we suggest. Throw all of this back into the original model, and then you have all your degrees of freedom and so on.
And I've looked at the difference between the results of those two approaches. They're very similar. I haven't done any extensive analysis, but I did a quick look.
RICK REYNOLDS: Say that again?
AUDIENCE: Like the noise [INAUDIBLE]
RICK REYNOLDS: Any noise is going to affect this. We hope that our PPI beta weights depend only on the cuteness of the spiders, but any noise that affects your target is going to affect this; any stimulus can affect this. But that applies to your original analysis too.
This is the difference between task and rest, too; it's a similar difference. For an external disturbance to affect your main task analysis, its timing has to be somehow a little synchronized with your task. Motion might often qualify, but some random thing probably won't. However, random things can hit your whole brain at once, and that means they affect the seed and the target. That's more of a resting state type problem: you can see resting state correlations because of this random thing. So PPI can, again, pick up anything in a similar sense that resting state [INAUDIBLE].
So just to note, in this PPI directory you've got, again, this readme.txt and commands one, two, three. This runs a PPI analysis, and it is really the documentation, such as we have, on how to do it. You could just modify it to suit your data. That's a non-trivial thing, but I tried to make the scripts reasonably comprehensible and commented.
Let me just mention: to generate your seed, first you run the original regression, but without censoring, because we don't know how to handle censoring when we're doing a deconvolution and reconvolution step. So motion spikes and whatnot, we figure, get passed back and forth. On the flip side, if we used the censored data, we'd effectively be deconvolving and reconvolving zeroed residuals, zero values at the censored time points, rather than the motion spike. You can bicker about that.
So, in a somewhat analytically conservative way, we say: your main analysis is censored; do another pass where you don't censor. You can just run 3dDeconvolve or 3dTproject with the same X-matrix, minus the censoring. It's trivial to redo that with 3dTproject, projecting your no-censor X-matrix out of the data.
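What "projecting out the X-matrix" buys you can be shown in miniature with ordinary least squares: the residuals are orthogonal to every column of the design matrix, so task, drift, and motion effects are gone from them. This toy uses random stand-ins for the X-matrix and one voxel's time series, not real AFNI output.

```python
import numpy as np

# Residuals from an OLS fit are orthogonal to every regressor (toy data).
rng = np.random.default_rng(3)
n, p = 120, 4
X = rng.standard_normal((n, p))            # stand-in for the no-censor X-matrix
y = rng.standard_normal(n)                 # stand-in for one voxel's time series

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta                       # what 3dTproject-style projection keeps

# No regressor has any remaining component in the residuals:
print(np.abs(X.T @ resid).max())           # numerically zero
```

The real seed then comes from averaging these residuals over an ROI.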
And now you have the residuals. You send your residuals through, say, an ROI average to make your seed time series, and then you can send that to basically the second script. Here we use 3dDeconvolve: since our main basis function was like a block and not like a gamma, we ran 3dDeconvolve just to generate a BLOCK ideal time series.
One little aspect I didn't mention: we actually oversample the data before we do the deconvolution. A nice additional effect of the oversampling is that we can handle stimulus events that are not TR-locked. We haven't talked about when the cute spiders were displayed yet, right? That might not be TR-locked. Do you need one TR, two TRs, or pieces of them? What if your stimuli last half a second, starting in the middle of TRs?
So what we do is oversample the timing down to 0.1 seconds. Now we can have actual blocks of time for our stimulus conditions, even if they're not TR-locked; they're just locked to this 0.1 second grid. We oversample the timing, we have the temporal intervals of our spiders, we deconvolve the time series, and now we can grab those spider intervals at the 0.1 second resolution and then reconvolve.
One interesting thing to note about this: if you oversample like this, the deconvolution and reconvolution steps are almost inverses. If you take your time series, oversample it, deconvolve it, and reconvolve it, you get almost the same time series back.
Other software packages may restrict themselves to BOLD-like signals; we don't go after that. It's kind of nice to have this invertibility; it gives a robustness to the method. So we're going to rely on the censoring and the motion parameters and such to work as they already have, and we'll just let the data be what the data is, pass it through this PPI filter, and hope it comes out well.
Anyway. So we oversample everything to this 0.1 second grid, and we make a 0.1 second resolution BLOCK response ideal. The duration of this block response is 0.1 seconds: basically, we're generating an instantaneous response curve with the shape of the block. Then we can convolve or deconvolve this at any duration, since it's an impulse response function, an IRF.
So the first thing we do in this script is use timing_tool.py and give it our stimulus timing file. timing_tool.py will break this into events sampled at our upsampled TR of 0.1 seconds. It's going to generate a time series of when each event was happening: when were the spiders shown, cute or not, at a 0.1 second resolution.
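The timing step in miniature looks like this. The onsets and duration here are made up, and timing_tool.py does the real work in the scripts; this just shows how non-TR-locked onsets become a clean 0/1 vector on a 0.1 second grid.

```python
import numpy as np

# Turn stimulus onsets + durations into a 0/1 vector on a 0.1 s grid.
dt = 0.1
run_len = 60.0                              # seconds (hypothetical run)
onsets = [4.3, 20.7, 41.0]                  # need not be TR-locked
dur = 2.0                                   # seconds per event

grid = np.zeros(int(round(run_len / dt)))
for on in onsets:
    i0 = int(round(on / dt))
    grid[i0 : i0 + int(round(dur / dt))] = 1

print(grid.sum() * dt)                      # total "on" time in seconds
```

Multiplying such a grid by the upsampled, deconvolved seed gives the per-condition seed pieces described above.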
I will skip that step. Then we take the seed time series and upsample it; fantastic, you can imagine that. And then we use 3dTfitter on our upsampled seed time series to deconvolve it. The big BOLD responses, not that we'll see any, should become little neuronal responses in our 2 second box rather than in the 14 second time window.
And now we can just multiply the 0/1 spider timing by this deconvolved seed and get a spider seed time series, and multiply the 0/1 puppy timing by the deconvolved seed and get a puppy seed time series. Then we reconvolve these, so that above and beyond the main effect we have a puppy seed time series and a spider seed time series in MRI time, in BOLD time, because we've reconvolved them. That's what we do down here. And then we downsample them back to the original TR and run our linear regression.
That's enough. That's enough suffering with this. I am seeing tears. I hate to see people cry. So any last questions about this?
AUDIENCE: Is it OK to compare the runs across days? [INAUDIBLE]
RICK REYNOLDS: These are runs in the same day. Well, in the same scanning session, yeah. If you have multiple runs, the deconvolution is done per run because-- where is our deconvolution? Somewhere. Here we go. For each run index, we run 3dTfitter, and that's because BOLD responses do not cross run breaks. You don't want run breaks affecting the deconvolution, so you do it per run and then concatenate later when you make your multirun regressor.
AUDIENCE: [INAUDIBLE] to compare the [INAUDIBLE]
RICK REYNOLDS: You put these back in the full model: with censoring, with motion, with the polynomial drift terms, with the main regressors. So they're additional regressors in the main model.
AUDIENCE: [INAUDIBLE] does this make a strong basis to then guess the unknown responses? So [INAUDIBLE] puppy versus spider and then [INAUDIBLE]
RICK REYNOLDS: Yeah. Yeah. I would think so.
RICK REYNOLDS: Yeah. So if you've got multiple levels to some condition, you could use this as a way to guess at what the levels were for subjects. If that's what you mean.
AUDIENCE: Yeah. That's what I--
RICK REYNOLDS: OK. Wait a sec-- say that one more time, please.
RICK REYNOLDS: Well, we should have results, except I didn't do one little thing. [INAUDIBLE] Now we'll get results. That won't take terribly long, just a few minutes. And it will generate results that you'll have trouble understanding.
AUDIENCE: What's the difference between 3D deconvolve and [INAUDIBLE]
RICK REYNOLDS: 3dTfitter? 3dTfitter is solving a potentially different equation. With 3dDeconvolve, you're looking for a linear combination of regressors that adds up to fit your data. The closer comparison is using 3dDeconvolve with TENT functions, where you know when the events happened but you don't know the shape of the response. So you use 3dDeconvolve and TENT functions to estimate an IRF, or an HRF, since it's not necessarily an impulse; it could be longer events. But some sort of model of the hemodynamic response function.
In this case, it's different: we're assuming the response to any event is block-shaped, but we don't know how big the responses are from moment to moment. We care about these time windows, but we don't know the cuteness level; we don't know what was happening neurally, in some sense.
So 3dTfitter is looking for this shape in the data, converting this response to a little bump and that bigger response to a bigger bump at a different time. Optimally, this 14 second BOLD response would get mapped back to a two second bump, because that's what happened; that's how long the stimulus lasted.
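The 3dTfitter idea can be sketched in miniature: with the response shape known, deconvolution is just solving a (lower-triangular) convolution system, and the long BOLD-like response maps back to the short neural bump. The IRF here is a simple decaying exponential standing in for AFNI's BLOCK shape, purely for illustration, and there is no noise or regularization, unlike the real program.

```python
import numpy as np

# Known IRF, unknown per-time-point "neural" amplitudes: a 2 s bump is
# smeared into a long BOLD-like response, and solving the convolution
# system maps it back.
dt = 0.1
t = np.arange(0, 30, dt)
irf = np.exp(-t / 2.0)                     # assumed stand-in IRF (not BLOCK)
n = len(t)

C = np.zeros((n, n))                       # convolution matrix: bold = C @ neural
for i in range(n):
    C[i, : i + 1] = irf[i::-1]

neural = np.zeros(n)
neural[50:70] = 3.0                        # a 2 s neural bump at t = 5 s
bold = C @ neural                          # long, sluggish BOLD-like response

recovered = np.linalg.solve(C, bold)       # "deconvolve" with the known IRF
print(np.abs(recovered - neural).max())    # essentially zero
```

This also shows the near-invertibility mentioned earlier: convolving the recovered signal with the same IRF reproduces the input.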
AUDIENCE: So like [INAUDIBLE]
RICK REYNOLDS: Yeah. Yeah. Just putting more dots in the middle: instead of dot, dot, dot, dot, having in this case 20 points per one, 20 oversampled points.