GLMsingle: a toolbox for improving single-trial fMRI response estimates
May 2, 2022
April 28, 2022
Jacob Prince, MIT
Advances in modern artificial intelligence have inspired a paradigm shift in human neuroscience, yielding large-scale functional magnetic resonance imaging (fMRI) datasets that provide high-resolution brain responses to tens of thousands of naturalistic visual stimuli. Because such experiments necessarily involve brief stimulus durations and few repetitions of each stimulus, achieving sufficient signal-to-noise ratio can be a major challenge. This tutorial will introduce GLMsingle, a scalable, user-friendly toolbox available in MATLAB and Python that enables accurate estimation of single-trial fMRI responses (glmsingle.org). Requiring only fMRI time-series data and a design matrix as inputs, GLMsingle integrates three techniques for improving the accuracy of trial-wise general linear model (GLM) beta estimates. First, for each voxel, a custom hemodynamic response function (HRF) is identified from a library of candidate functions. Second, cross-validation is used to derive a set of noise regressors from voxels unrelated to the experimental paradigm. Third, to improve the stability of beta estimates for closely spaced trials, betas are regularized on a voxel-wise basis using ridge regression. Validation analyses show that GLMsingle substantially improves the reliability of beta estimates across visually-responsive cortex, and that these improvements translate into tangible benefits for higher-level analyses relevant to systems and cognitive neuroscience. This tutorial will provide a practical overview of GLMsingle, aimed at making the toolbox accessible to a wide range of fMRI users. We will review key concepts in GLM modeling of fMRI data, discuss how the different components of GLMsingle seek to optimize different steps in the signal estimation procedure, and walk users through a Google Colab demo illustrating the ease of applying GLMsingle to example fMRI data.
GRETA: Jacob is now working in the Vision Lab with Drs. Talia Konkle and George Alvarez. Without further ado, we are excited to hear about GLMsingle.
JACOB PRINCE: Thanks so much, Greta. Thank you, Fernando, for hosting. It's really a pleasure to be here. I appreciate that everyone came out both in person and virtually.
I hope everyone can hear OK on the Zoom. As mentioned, I'll be talking about GLMsingle, which is a new toolbox we've developed for accurate single-trial fMRI response estimates. This is collaborative work we've done with Ian Charest, Jan Kurzawski, John Pyles, Mike Tarr, and Kendrick Kay, and Kendrick is with us here over Zoom as well.
A brief overview. First, I'm going to tell you why we developed GLMsingle and provide a bit of detail on how it works, and then we have a hopefully fun Google Colab demo so we can all try out the tool for ourselves. And then I'll wrap up by telling you how we've tried to validate the technique, talk about implementation details, and provide some tips and guidelines, and then we'll close with discussion-- and we'll definitely leave time for questions.
So, why are we spending our time on this? We as neuroscientists are in the midst of a major paradigm shift toward data-hungry approaches, and this has brought a rapid increase in the scale and depth of fMRI data sets. Traditional fMRI studies maybe had a few hours of scanning per subject, maybe up to a few dozen subjects, but now we're living in this regime where we have things like the Natural Scenes Dataset, with tens of thousands of different stimuli presented to each subject at high resolution. And you can use these kinds of data sets to answer many different questions.
You could study representations. You could build encoding models. You could try and characterize individual differences. Really, a huge list of different questions we could ask and answer with these data sets and these resources, but we run into the fundamental limitation of noise.
This is familiar to anyone who's done fMRI. Noise really is a major obstacle that limits the robustness and replicability of our fMRI analyses. I should be more specific about what I mean when I'm talking about noise for the sake of this talk. Here I'm assuming that we have some fMRI data, and our goal is to fit a general linear model, or GLM, to estimate BOLD responses for each condition or each trial in the data set. And if you're not too familiar with GLMs, I'll provide a refresher in a few slides.
For the time being, imagine we had a toy fMRI study where for each voxel in the brain we're trying to estimate a beta value that reflects the degree of activation in that voxel. And this toy example has three different conditions inspired by some of my favorite hobbies. So we have basketball, piano, and poker chip. We can just imagine that these are the images we showed our subjects, and each one has a theoretical ground truth activity level.
In a parallel noiseless universe, we might be able to put a subject in the scanner and just measure what the ground truth is, but of course this is impossible. And in our noisy reality, we might measure something, estimate responses, and get something like that. And if we repeated the experiment a few times, we might get something that looks like that.
This is actually a pretty good voxel. Most voxels don't necessarily look this nice. And so when I say "noise," I'm really referring to variability in response estimates that we're getting over repeated presentations of the same trials or conditions.
Of course, there are many different sources of noise that are all baked together into a huge mess-- there's head motion, physiology, scanner noise, thermal noise, there's actual neural variability, like the ground truth could change from moment to moment or from trial to trial, and we would never know that. The fact that these things are all baked together means that estimating responses is pretty much a statistical nightmare, especially when we're moving into these more condition-rich regimes, where we might try to pack more and more conditions into our experiment. You inevitably run into this trade-off where, as we increase the number of conditions per subject, we can show each condition fewer and fewer times. That makes us sad because we have fewer pink dots to try to average together to get better estimates of the ground truth response.
So the question is how we can combat noise to harness the potential of existing and future fMRI data sets in this really exciting moment for the field. That's pretty much the goal of GLMsingle. So we've tried to come up with a flexible and data-driven approach to denoising and signal estimation. The key insight, I think, is to acknowledge that noise structure differs not just between subjects but also between brain regions and between individual voxels, and that our strategy should be to try to efficiently maximize data quality at the level of individual voxels and individual subjects. Then hopefully whatever analysis we're trying to do, whether we're analyzing how brains relate to one another, how our brains relate to models, or how brains relate to behavior-- hopefully, improving the quality of our signal estimates will positively impact our downstream analyses.
So I'm going to provide a high-level overview of GLM analysis, and this will be a review for some of you. Oh, got a question. Yeah?
JACOB PRINCE: Yeah. Yep.
JACOB PRINCE: Right. So we can't know the ground truth. We're trying to get an accurate estimate of the ground truth by repeatedly presenting these conditions and measuring brain responses. When I'm talking about noise, you alluded to the fact that subjects also contribute something we could call noise in the form of individual differences. Different people might vary in their responses.
For the purposes of this talk and this technique, I'm going to be focusing on the idea of within-subject noise-- just the variability in the responses you'd estimate within a voxel over repetitions of the same stimulus. Moving on.
So a high-level overview of GLMs. Given some fMRI data, the goal here is to describe the time series from each voxel, y, as a linear combination of regressors in our design matrix, x. And the coefficients we estimate, the betas, are going to reflect the activation magnitude of the voxel, as I mentioned before.
So I'll walk through the process of how the design matrix will be constructed for one run of our toy experiment. The starting point is a matrix defining the onset times of each condition. In this run, let's say we have five instances of each condition, so 15 trials total. Now, there are different ways you could model this data, but for the sake of this example, we're going to try and go for what we'd call a single-trial response estimation, where we're estimating a separate beta per trial.
That means that our design matrix will start as a matrix with 15 columns, one for each trial ordered chronologically. So now we've gone from a matrix of conditions by time to a matrix of trials by time, and we don't expect brain responses to be 0s and 1s. So this is where we'd want to introduce a hemodynamic response function or the expected response profile of the voxel when it's stimulated. Then the typical thing is to take this function and convolve it with each column of the design matrix, yielding something that might look like that.
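To make that concrete, here is a minimal NumPy sketch of building a single-trial design matrix for a toy run like this one. The trial timing, the HRF formula, and all the numbers are illustrative assumptions, not values taken from the toolbox.

```python
import numpy as np

# Toy single-trial design: 15 trials (5 per condition) in a 200-volume run.
n_vols, tr = 200, 1.0                     # volumes and repetition time (s)
onsets = np.arange(10, 10 + 15 * 12, 12)  # one trial every 12 s (hypothetical)

design = np.zeros((n_vols, len(onsets)))  # time x trials, one column per trial
for j, t in enumerate(onsets):
    design[int(t / tr), j] = 1.0          # stick (delta) at each trial onset

# A canonical-looking double-gamma HRF sampled at the TR (a rough sketch,
# not the library GLMsingle actually uses).
t = np.arange(0, 30, tr)
hrf = t ** 5 * np.exp(-t) / 120 - 0.1 * (t ** 15 * np.exp(-t) / 1.3e12)
hrf /= hrf.max()

# Convolve every stick column with the HRF and truncate to the run length.
X = np.apply_along_axis(lambda c: np.convolve(c, hrf)[:n_vols], 0, design)
print(X.shape)  # (200, 15)
```

Each column of X is now the predicted BOLD time course for one trial, which is what the betas get fit against.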
And now by solving a linear equation that maps these predictors onto our observed neural time series, we can estimate a beta weight for each trial in the experiment and store those betas for further analysis. As mentioned, the data are super noisy, and for this reason it's a common step to include what are called nuisance regressors or noise regressors. For example, you might throw some polynomials like this into the mix to try to model low-frequency fluctuations in the raw time series that might arise due to something like subject motion.
So now the design matrix would have a few extra columns. And if we assign weight to those, hopefully we can clean up or denoise our signal estimates. So this is a generic framework for single-trial response estimation using GLMs, and I'll be referring back to the schematic as I talk about the different ways that GLMsingle tries to improve on this framework.
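The estimation step with nuisance regressors can also be sketched in a few lines of NumPy. Everything here is fabricated for illustration (a random stand-in design and a simulated voxel); the point is just that the trial betas come out of a single least-squares solve over the combined [design | nuisance] matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a (time x trials) convolved design matrix.
n_vols, n_trials = 200, 15
X = rng.standard_normal((n_vols, n_trials))

# Polynomial nuisance regressors: constant, linear, and quadratic drift.
t = np.linspace(-1, 1, n_vols)
nuisance = np.column_stack([np.ones(n_vols), t, t ** 2])

# Simulate one voxel: true betas plus a slow drift plus noise.
true_betas = rng.standard_normal(n_trials)
y = X @ true_betas + 0.5 * t + 0.2 * rng.standard_normal(n_vols)

# Solve the full GLM [X | nuisance] with ordinary least squares; keep only
# the trial betas, and let the nuisance weights soak up the drift.
full = np.hstack([X, nuisance])
coef, *_ = np.linalg.lstsq(full, y, rcond=None)
betas = coef[:n_trials]
print(np.round(np.corrcoef(betas, true_betas)[0, 1], 3))
```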
Right off the bat, I'll say the only things the tool needs as input are the time-series data and the matrix of condition onset times, and everything else happens like magic under the hood. And even though the tool was designed with an eye toward these rapid event-related designs, I should say that the same logic applies for block designs, too. So you can imagine each little black trace could refer to the onset of a block of stimuli. And continuous designs like movie watching or listening to a story are also theoretically compatible with this tool, as long as discrete events within the design are coded as repeated instances. In a movie, for example, that could be the repeated appearance of a character on screen.
So before I dive in and talk about how GLMsingle tries to improve the quality of our betas, I'll just pause and see if there are any questions so far. So, how does GLMsingle work? We've basically implemented this as a single function that wraps all the important procedures together, and we have both MATLAB and Python implementations for convenience.
As I said, it takes as input your fMRI time-series data and an event matrix where each column indicates the onset times for each condition. The fMRI data can be in either volumetric or surface space, and it's going to return estimates of our single-trial betas. And I'll go into these outputs a little more later on.
Under the hood, GLMsingle incorporates three core techniques designed to optimize the quality of our betas. The first is the use of a library of hemodynamic response functions to identify the best-fitting HRF for each voxel. The second is a method called GLMdenoise, where data-derived nuisance regressors are identified and used to remove noise from our beta estimates. The third key step is an efficient implementation of ridge regression called fractional ridge regression, as a method for dampening the noise inflation that can be caused when the predictors in our design matrix are super correlated with one another-- for example, if events are close together in time.
Applying these procedures in conjunction, I hope to convince you, has a surprisingly large impact on how things turn out downstream. I think each procedure tackles a different component of the GLM framework that I just laid out on the previous slide. So I'll walk through those different components in a bit more detail.
I should say that as we're going through this, you might be thinking of your own designs or your own experiments and whether they might be compatible with this kind of technique. And you might be thinking of questions like, I'm not sure whether each of these procedures is appropriate for my particular design. I just want to say right off the bat that the way we've implemented it makes it very easy for users to turn on or off different parts of the pipeline depending on what their particular needs and interests are.
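As a rough sketch of what turning stages on or off looks like in the Python version, the snippet below just builds an options dictionary. The flag names follow the toolbox's documented options (wantlibrary, wantglmdenoise, wantfracridge), but treat the exact call signature as an assumption and check glmsingle.org for the current API.

```python
# Hypothetical configuration sketch for the Python toolbox.
opt = {
    'wantlibrary': 1,     # fit a library of HRFs per voxel (step 1)
    'wantglmdenoise': 1,  # derive data-driven noise regressors (step 2)
    'wantfracridge': 1,   # ridge-regularize betas per voxel (step 3)
}
# Setting any flag to 0 turns that stage of the pipeline off, e.g.
# opt['wantfracridge'] = 0 skips the ridge-regression step.

# The actual call would look roughly like this (requires the glmsingle
# package, fMRI data as a list of time x voxels runs, and designs as a
# list of time x conditions onset matrices):
#
# from glmsingle.glmsingle import GLM_single
# results = GLM_single(opt).fit(design, data, stimdur=1.0, tr=2.0)
print(sorted(opt))
```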
OK. Before we get into each of these three components, I'm just going to provide a very high-level overview of the findings from our manuscript, which is now a preprint on bioRxiv, showing what happens from applying GLMsingle for the downstream analyses that might be relevant to cognitive and systems neuroscientists. Just for these graphs, focus on the blue versus the red. So the blue will be a baseline GLM, or beta version 1, and the red will be the final output of GLMsingle, beta version 4. And we find that, first of all, you just see improved reliability of our response estimates across the board. We also observed that GLMsingle has a really substantial effect reducing the temporal autocorrelation we see in our beta estimates. So each of those graphs is a temporal similarity matrix and just shows a reduction in correlations that extend forward in time.
For people who do representational similarity analysis, or RSA, we also applied those approaches and found that applying GLMsingle tended to enhance the representational similarity between subjects both within and across data sets. We also found that when we tried to decode single images, our accuracy was significantly higher after applying GLMsingle compared to baseline. I'm not going to go too deep into these results. If you'd like to read more about them, they're in the paper, and we can circle back to them if people have questions at the very end.
Now that we know what the impact of the technique is, let's talk about how each component works. First, the library-of-HRFs approach. The typical approach in fMRI, as mentioned previously, is to assume that the response profile of the voxel follows the shape of the canonical HRF. And this is a pretty strong assumption, actually. It's definitely possible that different parts of the brain have different HRFs, and an issue is that if we mismodel the HRF, that could add noise to our signal estimates.
So GLMsingle relaxes the assumption that we're going to have the same HRF everywhere in the brain and instead uses a simple HRF selection strategy, where we take this library of 20 HRFs, which is shown on the left, and identify the optimal one for each voxel as follows. First, we iteratively take the input data and fit the data each time using a different HRF from the library. Then for each voxel, we just choose the HRF that yields the highest variance explained. It's a very simple strategy, and the user can enable or disable this step or even add extra HRFs to the library if they are interested.
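That selection loop can be sketched in a few lines of NumPy. The "library" here is just random stand-in shapes and every dimension is made up; the point is the fit-each-HRF, argmax-variance-explained logic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated shapes: a run of 200 volumes, 15 trials, 50 voxels, 20 HRFs.
n_vols, n_trials, n_voxels, n_hrfs = 200, 15, 50, 20
sticks = (rng.random((n_vols, n_trials)) < 0.05).astype(float)  # stand-in onsets
library = [rng.random(15) for _ in range(n_hrfs)]               # stand-in HRFs
data = rng.standard_normal((n_vols, n_voxels))                  # time x voxels

r2 = np.zeros((n_hrfs, n_voxels))
for i, hrf in enumerate(library):
    # Convolve the stick design with candidate HRF i, then fit all voxels.
    X = np.apply_along_axis(lambda c: np.convolve(c, hrf)[:n_vols], 0, sticks)
    betas, *_ = np.linalg.lstsq(X, data, rcond=None)
    resid = data - X @ betas
    r2[i] = 1 - resid.var(axis=0) / data.var(axis=0)  # variance explained

best_hrf_index = r2.argmax(axis=0)  # argmax over the library, per voxel
print(best_hrf_index.shape)  # (50,)
```

The per-voxel index map produced this way is what gets visualized as the rainbow slice on the slide.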
I'll quickly pause if there are any questions on this step. Yes?
AUDIENCE: Yeah. Would it be between the red HRF on the far left and [INAUDIBLE] on the far right? What's the time difference between those two things?
JACOB PRINCE: Good question. Yep. The question was what is the time difference between the peaks of the red HRF on the far left and the blue on the right-- so the red is the fastest time to peak, and the blue is the slowest.
It's a good question. I'm guessing it's a few seconds, but maybe Kendrick-- do you want to chime in? I'm not positive.
KENDRICK KAY: That's a good question. I'm actually just plotting them right now on my end, so bear with me. It's several seconds. Let's see.
Well, the selection of the library, which Jacob didn't get into-- which we can certainly get into if you're curious-- it's based on empirical data. And there was some choice as to what's the beginning and end. In theory, there may be a few voxels that are even more rapid or even more delayed than what was selected here. So it was deliberately quite inclusive, but anyway. The current library that comes by default has 4 seconds of peak-to-peak difference between the earliest one and the last one.
JACOB PRINCE: Thanks, Kendrick. Yeah, it's a good question. The question was looking at this plot of the HRF indices on the left for an example slice of data-- this is from an NSD subject-- wondering about the fact that there seems to be the structure in the optimal HRFs, where we see this kind of bias or predominance of yellow and green indices chosen in the back of the brain and the red up front.
It's a really good observation. It does seem to suggest that there is meaningful spatial structure in how the voxels differ in their optimal HRFs. This is low-hanging fruit for ongoing analysis or future kinds of things we could do with this tool-- we could actually study that. So I don't have much to say about why we see that structure at this point, other than to say that the fact that there is that structure, which you pointed out, I think speaks to the importance of including the step. The fact that you see all this rainbow patterning implies that if you were to just choose one color, you would be systematically mismodeling response profiles for much of the brain.
AUDIENCE: Are you fitting the HRFs [INAUDIBLE] scan or [INAUDIBLE]?
JACOB PRINCE: The question was are we fitting the HRFs during a resting-state scan. Not quite. First off, this framework-- and maybe I should have been more clear-- is really not compatible with resting-state data because there's no event structure in that kind of experiment. At least it wasn't designed that way. Kendrick alluded to the way that the library itself was derived, which was using task fMRI data from a bunch of subjects in NSD.
Then in terms of actually identifying optimal HRFs in the data sets for which GLMsingle is compatible, it comes down to this iterative fitting approach, where you just go through the HRFs, you fit the whole brain one time with each of them, and you measure variance explained between the predicted and actual time series. And you just argmax over the levels of prediction accuracy. So it would be with whatever-- it's derived from your data, basically. Yeah.
KENDRICK KAY: Maybe I'll just chime in just briefly, unrelated to those questions, assuming you can hear me. [CHUCKLES]
JACOB PRINCE: We can hear.
KENDRICK KAY: The structured HRF indices you see in the figure-- we know at least in part it's caused by variability in hemodynamics. Essentially, the larger veins that are draining from the capillaries are delayed in their temporal time courses. And the question about how you estimate-- you can think of this library as a prior on the set of reasonable HRFs that you might expect in any given subject. And it's a very simple selection process, right? There's a fixed set of 20, and there's no attempt to try to find anything more fine-grained than that, primarily due to computational efficiency reasons.
GRETA: And then someone in the chat says-- [INAUDIBLE] says I have a similar question about the HRF indices. In your experience, how variable do the HRF indices tend to be in a rather small volume? And what would that then mean for the validity of this method? For instance, if you find a high level of variability in closely neighboring voxels.
JACOB PRINCE: Mhm. It's a good question, asking about what we've observed in terms of the variability in the HRFs and also what it would mean if you had high variability in a local neighborhood. Yeah. In terms of the data sets we've applied this technique to, generally speaking, what you see in the slice is actually pretty representative of what I've found in my own analysis of several data sets, where there is a bit of variability over the brain, but it's also structured.
I think the question of what it means if there were variability in local neighborhoods relates to Kendrick's point. I mean, I'm not an expert on how veins interact with the unit of a voxel. But if the indices are partially tied to venous phenomena, then it could certainly be possible to have rapid changes in the optimal indices depending on where the vasculature is, and have that not be a sign of instability but rather just actually what is in the data. The other point is that even if you see this high-frequency structure in certain parts of the brain, that's in the context of the fact that the library is a reasonably constrained set of functions to begin with. So going from index 2 to index 18 and back even in a local neighborhood doesn't necessarily imply that you're dramatically altering the shape of the response.
I'm not sure if that answers the question very well. Kendrick, feel free to chime in if you have other thoughts on that.
KENDRICK KAY: Yeah, I think that's accurate. I will just add a little bit to this. I think, well, it depends on the data set, in the sense that if the SNR is so horrible that the data are unreliable, then, practically speaking, you may do better assuming a fixed HRF. That's sort of at the heart of this entire technique-- you need data in order to use fancier techniques that are data-driven.
So if you have really great data, you can and will notice very rapidly changing temporal characteristics in these HRFs. And, again, as I mentioned, it has to do with the venous structure in the brain that you're measuring. So I guess it's neither a good thing nor a bad thing if you see it change rapidly. It's good in the sense that you can capture it effectively and use it to your advantage in terms of analysis, but, again, if your data are really horribly noisy, it may fail to be estimated well.
Well, hopefully that's helpful, but it is a complex set of interacting factors.
JACOB PRINCE: Yes. One more question, and then we'll move on. Yeah. Question about the fact that we're assuming a uniform prior over these HRFs and just choosing the one that maximizes variance explained and whether if we studied different subjects and found characteristic structures that were shared across all of them. Maybe we could adjust that prior or even-- I'm assuming you're also angling toward the possibility of expanding the library to include even more shapes. But you can imagine if you have a certain prior over certain parts of the distribution, then you might be able to cut down the computational cost of exploring even further through HRF space.
Yep. Yeah. I think as a first attempt, I mean, this is a pretty new technique. So I think the uniform assumption is justified at least off the bat, and you're completely right that it's a very interesting path for further analysis. And we might be able to answer that kind of question soon.
OK, I'm going to move on.
GRETA: If we can get back to the discussion--
JACOB PRINCE: Yeah, definitely. We can circle back to all these issues, and thank you for the insightful questions. Please keep them coming.
So, the second component. Onto GLMdenoise. As we've said, our data are going to suffer from very large amounts of noise, and oftentimes the noise is going to be spatially correlated, too-- for example, due to head motion. To account for this, as we said, it's common to include nuisance regressors. But rather than presupposing what these nuisance regressors are going to look like, GLMsingle takes a different approach, where we instead rely on an existing technique called GLMdenoise.
The insight is to try and directly model components of the data that seem to be noise with respect to the experiment and then include optimized noise regressors in your GLM that are hopefully going to do a better job cleaning up the signal than if you had assumed what the shape of the noise regressors would look like. So this is a previously published technique, and at a high level the procedure works as follows. We first fit what we'd call an on-off GLM. This is a GLM where all the conditions in the experiment are collapsed into a single predictor, and it's basically going to identify parts of the brain that are active in response to your experiment and also parts that are inactive.
And then we're going to take the non-active voxels and treat them as what we'd call the noise pool. For the noise pool, we're saying there doesn't seem to be signal here relating to our stimuli or our experiment. So we take a pretty simple approach, which is to apply PCA to the time-series data from just the noise-pool voxels, and then, critically, use cross-validation to iteratively add the top principal components from the noise pool to our GLM, one at a time, as candidate noise regressors-- and continue doing that until we've maximized cross-validated variance explained, and stop there.
Basically-- I've just plotted a few different kinds of PC regressors that would come out of GLMdenoise, just two examples. Definitely very different looking than the polynomials, and this is again under the philosophy of letting the data speak and really taking a data-driven approach to this important step of denoising. This technique has been previously shown to outperform other popular approaches across a range of experiments. That was the original paper in 2013, and more recently Jan published some work showing that GLMdenoise improves outcomes in RSA and decoding analyses.
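A minimal sketch of the noise-pool/PCA idea, with fabricated data and an arbitrary on-off threshold; the cross-validation loop that decides how many PCs to keep is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated data: 200 volumes x 100 voxels, plus a stand-in for the
# variance explained by an on-off GLM at each voxel.
n_vols, n_voxels = 200, 100
data = rng.standard_normal((n_vols, n_voxels))
onoff_r2 = rng.random(n_voxels)          # stand-in on-off variance explained

# Voxels whose on-off model explains little variance form the noise pool
# (the 0.3 cutoff is arbitrary for this sketch).
noise_pool = data[:, onoff_r2 < 0.3]
noise_pool = noise_pool - noise_pool.mean(axis=0)

# PCA via SVD of the (time x noise-voxels) matrix; the columns of U are
# the principal-component time series.
U, s, _ = np.linalg.svd(noise_pool, full_matrices=False)
noise_regressors = U[:, :5]              # keep the top 5 PCs as a sketch

print(noise_regressors.shape)  # (200, 5)
```

In the real procedure, the number of PCs is not fixed at 5; it is chosen by adding PCs one at a time until cross-validated variance explained stops improving.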
So this is the second step happening automatically under the hood when you run GLMsingle. The third component that we're going to discuss is this idea of ridge regression. At this point, we've identified optimal HRFs, and we now have a set of noise regressors that we're going to include, and we have to actually estimate the final betas. The typical approach would be just to use ordinary least squares to solve your linear equation, and there are your estimates. But because the time course of the HRF is so slow, lasting up to around 30 seconds, our predictors will oftentimes overlap substantially in the design matrix. This means that ordinary least squares estimates are going to be noisy due to the high degree of correlation, or multicollinearity, between the predictors in our design matrix.
I'm going to use that schematic from earlier to walk you through an example of how and why this happens. You can imagine we have just our first two columns of the design matrix, so beta 1 and beta 2, reflecting two different trials-- maybe a basketball and a piano. Let's say for the sake of example that these two predictors are super correlated, so r equals 0.8. So now we can plot the space of what our estimates of beta 1 and beta 2 would be, and let's just say for the sake of example that the ground truth is somewhere there, so that we know for a fact that the true response is 4 for beta 1 and 3 for beta 2.
And then we run our experiment, and we actually get to make a measurement and estimate some betas, and we land there. This is where the issue with OLS comes in because when you have correlated predictors, in order to minimize least squares, if one of the betas goes up, then the other has to go down. So if you're trying to explain the same outcome using two things that look very similar and you put a lot of weight on the first one to preserve the same amount of error, you have to put less weight on the second one and vice versa.
That's unstable. So what it means is if we were to show these stimuli again and assume independent noise for the second presentation, we'd land somewhere else in this space with our estimate, and we could show these two stimuli again and again. And if we were to repeat this experiment a whole bunch of times, we'd end up with a point cloud like this.
You'll notice that because of that degree of dependence between these correlated predictors, you'll end up with this anti-correlated cloud of estimates. What this means is that OLS is systematically mismodeling the brain responses. So we want to regularize, and the goal of a regularization is to shrink the betas or constrain them in a manner that hopefully will bias our estimates toward the ground truth.
So we don't want to just shrink our betas toward the origin directly, and this is where ridge regression comes in. Theoretically, what ridge regression achieves is this bias that will hopefully bring our estimates closer to the theoretical ground truth. And we can do simulation to see if this is actually the case.
The key thing, of course, is to regularize just the right amount. Right? So you can see these little comet trajectory things. You want to stop right when you're as close as possible to the theoretical ground truth.
So you can simulate this kind of context, and say we run 200 experiments. If you do this and you run ridge regression, you'll actually end up with a graph that looks like this. It's a lot to take in, but the green is again the known ground truth, and each ridge trajectory is basically reflecting different amounts of regularization or different amounts of shrinkage that are applied. What you see is that what the technique achieves is this bias where you shrink your estimates such that if you achieve the correct amount of regularization, they move closer to the theoretical ground truth in simulation. So this is just intuition for why ridge regression works well in the case of correlated predictors.
Right. So we need to get the correct amount of regularization, and then everything will hopefully work out. The way GLMsingle does this under the hood is by implementing a technique called fracridge, fractional ridge regression, which was published a few years ago. It's an efficient way to learn the optimal amount of shrinkage per voxel to get you, hopefully, where you want to be in your estimates.
What the frac refers to is shrinkage fraction. So it's different than a standard approach to ridge regression, where you might grid search over log spaced lambda values. Fracridge is going to sample a more effective part of that space more evenly in order to optimize the accuracy of the estimates with respect to the ground truth.
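This intuition is easy to reproduce in simulation. The sketch below uses two correlated predictors, ground-truth betas of (4, 3), and 200 simulated experiments; all numbers are illustrative, and a single fixed shrinkage level stands in for the per-voxel tuning that fracridge actually does.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two predictors with r around 0.8 (the scenario from the slides).
n = 60
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)   # corr(x1, x2) ~ 0.8
X = np.column_stack([x1, x2])
truth = np.array([4.0, 3.0])

def ridge(X, y, lam):
    # Closed-form ridge solution (X'X + lam * I)^-1 X'y; lam = 0 is OLS.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

ols_est, rdg_est = [], []
for _ in range(200):                           # 200 noisy "experiments"
    y = X @ truth + 3.0 * rng.standard_normal(n)
    ols_est.append(ridge(X, y, 0.0))
    rdg_est.append(ridge(X, y, 8.0))           # a moderate shrinkage level
ols_est, rdg_est = np.array(ols_est), np.array(rdg_est)

# OLS errors in beta 1 and beta 2 are anti-correlated (the diagonal cloud),
# and the shrunken estimates sit closer to the ground truth on average.
err_corr = np.corrcoef(ols_est[:, 0] - 4.0, ols_est[:, 1] - 3.0)[0, 1]
ols_dist = np.linalg.norm(ols_est - truth, axis=1).mean()
rdg_dist = np.linalg.norm(rdg_est - truth, axis=1).mean()
print(round(err_corr, 2), round(ols_dist, 2), round(rdg_dist, 2))
```

In the real pipeline, fracridge parameterizes this search by the shrinkage fraction (roughly, the ratio of the regularized to the unregularized solution norm) rather than by a raw lambda, which samples the useful part of the space more evenly.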
So this is a lot. I can pause if anything was unclear. Yeah. Each dot is a separate experiment with independent noise, and the goal, knowing the ground truth, is that beta 1 should be 4 and beta 2 should be 3.
The dots are estimated using ordinary least squares-- critical to clarify that. And each experiment has random independent noise, so you just land somewhere. The point is that when your predictors are super correlated-- in this case, r equals 0.8-- you see that the cloud of black dots follows that diagonal. So there's systematic anti-correlation in the errors.
To translate that insight into what it would mean for the responses in your brain data, it just means you're going to have noisier, more spiky-looking signal estimates, where if your estimate is high for one trial, it's more likely to be lower for the next trial, and vice versa. So the comets, the trajectories through space, are showing where your betas go as you regularize more and more for each one of those experiments. The idea is that if you regularize just the right amount, you'll actually move all the dots closer to the green dot.
If you stop at the right point along the trajectory, you'll actually move closer to the theoretical ground truth.
AUDIENCE: But would it be biased?
JACOB PRINCE: Yeah. Yeah, so this is the shrinkage bias-- the fact that ridge regression is imposing some bias on the signal estimates in order to account for this issue. What you end up with-- plotted in the slice of brain over there-- is a map where you basically have to find an optimal amount of regularization to apply at each point in the brain. As I alluded to-- well, the question is how much data is required to optimally get these hyperparameters. Right?
Right. As I alluded to, these are cross-validated procedures to get the amount of regularization per voxel. The way the cross-validation happens under the hood is that it takes the trials for which you have repetitions and uses them to get an estimate of the generalizability of your signal estimates. So it takes the betas and basically asks: how can I set my hyperparameters so that my betas generalize maximally well?
So it's tied to the number of repetitions you have-- the quality of that estimate is tied to the number of repetitions in your data set. More is better, and NSD has quite a few. The second data set we used to validate the technique is BOLD5000, another large-scale event-related data set. The interesting thing about BOLD5000 is that it has far fewer repetitions than NSD, and we were wondering, when we applied GLMsingle to this new data set, whether everything would completely fall apart given fewer repetitions. In fact, that wasn't the case, and we saw similar results.
The takeaway is that more repetitions are better, but it's not necessarily required to have tons and tons of repetitions in the experiment in order to optimally estimate these hyperparameters. In fact, they seem to work with fairly few. We're getting close to the demo. [CHUCKLES]
Right. So this is the high-level overview.
GRETA: Elena had one--
JACOB PRINCE: Yes. Yes, yes.
GRETA: --which is would it be helpful to have a few images [INAUDIBLE] positions and very interesting [INAUDIBLE].
JACOB PRINCE: Yeah, good question. More helpful to have a few images with more repetitions versus kind of an even amount of--
GRETA: Spacing of--
JACOB PRINCE: Yeah. Probably. Well, I think more is better in all respects. More images with repetitions is great. More repetitions of those images is great, and it is a trade-off to some extent, I guess. But I would imagine-- I don't know, Kendrick, if you have thoughts on this. It's hard to assess whether it's more important to have diversity of images in your repetitions versus a whole bunch of repetitions of fewer conditions.
I don't really have a good intuition. I mean, more is better, but--
KENDRICK KAY: It's a general issue that comes up all the time in these types of experiments. But for the purposes of this technique, having many repetitions of a few images is not particularly helpful. So I would say distribute. If you have the time to distribute your reps, you might as well distribute them evenly amongst a set of images, or stimuli, or conditions, for that matter.
JACOB PRINCE: Thanks, Kendrick. All right. These have been very insightful questions, so thanks and please keep them coming. Let's move into the demo.
So we prepared a Colab notebook. If you'd like to access it, you can go to tinyurl.com/glmsingle-demo, and we'll all hopefully be able to run GLMsingle from start to finish in a matter of a few minutes. When you get to the notebook, please click File and then save a copy in Drive. This will allow you to make edits; if you'd like to change the code, plot new things, or run different combinations of parameters, work in your own saved copy.
And I will also pull up the demo. The high-level goal here is just to provide a basic walkthrough: get people oriented toward loading data and design matrices, show that you can pretty much call the whole pipeline in about a tweet's worth of code, look at some of the figure outputs-- the pipeline will give you some of those for free-- and start thinking about how to interpret them, and then maybe start thinking about how we might change hyperparameter settings to suit the particular needs of our data sets. I'm just going to walk through. People should interrupt and ask questions. Remote users, feel free to stop me and ask questions, too. And we can spend as much time on this as people would like.
The first step is to import packages, and we're going to download an example data set, which is a slice from the first session of the first NSD subject. So this should download pretty quickly. Next comes installing GLMsingle. So we first clone the repo, and the install should be pretty quick.
OK. Successfully built GLMsingle. Now that we've installed it, we can load our example data set. And we're going to isolate a very, very, very small portion of the brain-- 400 voxels from one slice, 20 by 20. And it's going to be from the first three runs of the session.
So just a small-scale example, and we can load our data and design matrix and store a couple of other important values, like how long the stimulus is on screen for and what the TR is, which are also inputs. And now we can initialize our hyperparameters. You could actually just run the tool without this step, and these would all be set by default. But I'm spelling them out just to be explicit.
To turn on HRF fitting, you'd set the library flag to 1. To turn on GLMdenoise, you'd set wantglmdenoise to 1, and same for fracridge. This is also the default behavior, but we want to save all of the intermediate beta versions that go into computing the final outputs, and keep them in memory, because we might want to plot them and visualize how they change from step to step. So that's what these ones refer to.
The first one is the on-off GLM that's involved in GLMdenoise. The second is the betas that come from the HRF optimization. The third is after GLMdenoise has been applied, and then the final output is the complete pipeline, including ridge regression.
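In the Python toolbox, those hyperparameter switches can be collected in a plain dict. This is a sketch: the key names follow the demo notebook, so verify them against your installed version before relying on them:

```python
# Hyperparameter dict mirroring the demo's defaults. Key names are taken
# from the GLMsingle Python demo notebook; treat them as assumptions and
# check your installed version's documentation.
opt = {
    'wantlibrary': 1,       # 1 = pick each voxel's HRF from the library
    'wantglmdenoise': 1,    # 1 = derive data-driven noise regressors
    'wantfracridge': 1,     # 1 = ridge-regularize betas per voxel
    # Save all four beta versions (on-off, HRF-fitted, +GLMdenoise,
    # +ridge regression) to disk and keep them in memory for plotting.
    'wantfileoutputs': [1, 1, 1, 1],
    'wantmemoryoutputs': [1, 1, 1, 1],
}

# With the toolbox installed, the whole pipeline is then roughly:
#   from glmsingle.glmsingle import GLM_single
#   results = GLM_single(opt).fit(data, design, stimdur, tr, outputdir='out')
print(len(opt))  # 5
```

The commented-out `fit` call is the "tweet's worth of code" mentioned above; `data` and `design` are lists of run-wise arrays, and `stimdur` and `tr` are the stimulus duration and repetition time in seconds.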
So you're all probably past this part already, but we can just draw some plots. On the left, you have your example slice of data. Looks like we captured some nice cortical curvature in this particular slice, possibly from the visual system. On the right is the other main input, which is the matrix defining the onset of each condition. It's really hard to see-- yeah, there are very tiny black dots; it's not dust on your screen. In fact, some of the columns do have repetitions. It's really hard to see, but in this input matrix repetitions are coded as being in the same column.
OK. So, data and design, and we can just print out some metadata: three runs in total, 300 TRs per run, the dimensionality of the input data, the stimulus on for three seconds, the xyz dimensionality, and the numeric precision. OK.
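As a concrete illustration of the design-matrix format just described-- one column per condition, one row per TR, repetitions sharing a column-- here is a toy example with entirely hypothetical onsets:

```python
import numpy as np

# Design matrix format: one row per TR, one column per condition, a 1 at
# each TR where that condition's stimulus comes on (onsets hypothetical).
n_trs, n_conditions = 300, 10
design = np.zeros((n_trs, n_conditions))

design[12, 3] = 1    # condition 3 shown at TR 12...
design[200, 3] = 1   # ...and again at TR 200: a repetition, same column
design[45, 7] = 1    # condition 7 shown once

# GLMsingle's cross-validation leans on conditions with more than one onset.
print(design.sum(axis=0)[3])  # 2.0
```

One such matrix is supplied per run; the onset rows are sparse, which is why the dots in the plotted matrix are so hard to see.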
So we now have all the ingredients we need to actually run the pipeline; I've just put in the schematic here for reference. So we can create this object that has our hyperparameters in it and then just call fit. No-- we forgot to run the chunk. We actually have to set our opts first. OK.
And now we can press Go. This should take about two minutes, so I can pause here in case anyone has had issues or lingering questions. It's a very good question: what if all of your voxels seem to have identified the optimal HRF to be at one end or the other of the library? To me, it would suggest that the neighborhood you're sampling in might not be the most appropriate neighborhood for the subject.
Yeah. It's hard for me to come up with a reason why. I think in that case you might want to expand the library with other shapes, or it might be a case where you want to ditch the technique altogether. You could always venture into the realm of finite impulse response modeling of HRFs and use a super flexible function that just fits whatever's there.
So if you think you have really funky-shaped HRFs that don't conform to the library at all, there are other approaches available. In fact, the FIR approach is, I think, possible deep under the hood in GLMsingle. And if it's not, we're hopefully going to build it in soon. It was a part of GLMdenoise, where you can flexibly model whatever the HRF is per voxel. It's much more computationally costly.
So for the sake of the computational cost, we want the library approach in this case. But if you're seeing very strange or unexpected results, it might be a sign that you want to use a more flexible procedure.
KENDRICK KAY: Yeah. Maybe I'll chime in on that given there are many questions about HRF stuff. Yeah. I agree with everything Jacob said.
The FIR approach is certainly an option. There's this fundamental trade-off: you need a lot of data to learn good time courses if you do FIR. And depending on your experiment-- if you have few trials, and depending on how you want to assign which trials go together as a fixed condition-- FIR modeling may actually produce poor results. It's essentially the bias-variance trade-off.
You can unbiasedly estimate your time courses, but if they look horribly noisy, they may not be useful for what you actually want to use this data set for. And that I think ties directly to the way the library approach is designed. It is in some sense highly biased because it's a strong prior that your HRF is one of the 20, and that was deliberate in the sense that data are limited, and we don't want to flexibly estimate any arbitrary time course because likely the data are somewhat noisy and will lead to inaccurate time course estimates if you were to just go down that route.
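For readers unfamiliar with FIR modeling, here is a minimal sketch of what it means in design-matrix terms: each condition gets one "stick" regressor per post-onset delay, so the fitted betas trace out the time course directly. Everything here-- run length, number of delays, onsets-- is illustrative:

```python
import numpy as np

n_trs, n_delays = 100, 8
onsets = [10, 40, 70]  # hypothetical onset TRs for one condition

# One column per post-onset delay: column d has a 1 at each onset + d TRs.
X_fir = np.zeros((n_trs, n_delays))
for t in onsets:
    for d in range(n_delays):
        if t + d < n_trs:
            X_fir[t + d, d] = 1

# The fitted beta for column d is the estimated response d TRs after
# onset, so the beta vector itself is the voxel's time course -- fully
# flexible, but with n_delays free parameters per condition it gets
# noisy fast when trials are few. That is the bias-variance trade-off
# the library-of-20-HRFs approach is designed to avoid.
print(int(X_fir.sum()))  # 24
```

Contrast this with the library approach, where each voxel gets a single amplitude parameter per condition and the time course shape is constrained to one of the 20 candidates.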
GRETA: Yeah. Thanks a lot.
JACOB PRINCE: Thanks, Kendrick. That's helpful. Yeah, one more question. Mhm.
So the question was how we picked the HRFs. I'm going to try to answer it, and, Kendrick, feel free to fill in any details. They took data from NSD subjects and basically estimated a manifold through HRF space, varying different parameters like the time to peak and the width, and then plotted it. Yep, there it is. [CHUCKLES]
Kendrick has taken control. Yes. Kendrick, do you want to--
KENDRICK KAY: Yeah, man. I'll just give you the high-level overview. So you can think of these as histograms. They basically are. It's just the histograms on a unit sphere. You're looking at the top of a sphere, but anyway. These come from different subjects, and these all share the same axes.
So if you were to pool these histograms together, that's basically what this is. Think of this as a histogram. Yeah, there's some spread, but we don't know if this is noise or real variability. But if we only have a limited number of HRFs in our library to spend, the idea is you would spend them traversing this path on the sphere. And that's what the 1 through 20 here are plotting.
This was manually drawn because this isn't actually a straight line. It's an arc on this-- well, it's a smooth line on the sphere. The goal is to try to get a library that works reasonably well for all the subjects, and you can maybe eyeball for yourself whether or not this is more or less fine for the different subjects. And certainly one route is to derive a subject-specific arc or manifold, which you could do, but it's just extra work.
JACOB PRINCE: Mhm.
KENDRICK KAY: Sorry. I didn't really even explain the histogram. It's a histogram of time course shapes, where the dimensions of your histogram, which is on the unit sphere, correspond to different time course vectors, ways in which time courses can vary from a fully flexible time course estimate. And you can see that almost all voxels lie on a very small part of this space.
I don't know if that was clear, but hopefully that provides some insight.
JACOB PRINCE: Mhm. I think there's a follow-up.
AUDIENCE: Yes. Every time I've done this correctly, [INAUDIBLE] a prior [INAUDIBLE] would be counted. And that would be counted on the--
JACOB PRINCE: Mhm.
AUDIENCE: --curve. And instead of [INAUDIBLE] prior, do we have a set prior [INAUDIBLE].
JACOB PRINCE: Mhm. Yeah, so the suggestion would be an alternate kind of library, where you have a 2D Gaussian fit to this colored heat map, so that you're not only sampling more widely across the space but also assigning prior probabilities to certain HRFs in the fitting procedure.
AUDIENCE: Right. So this is assuming that one [INAUDIBLE].
JACOB PRINCE: Mhm. Kendrick, can you speak to the motivation for drawing the one-dimensional trajectory, as opposed to fitting an ellipse?
KENDRICK KAY: Yeah. I could barely hear the question, but I think I get the sense of it, which is: are there more sophisticated ways to constrain or parameterize the choice of time course variation here? Yeah, certainly. I guess one of the design considerations here is computational time and efficiency. [CHUCKLES] The larger the data set, the more of a nightmare it is to pre-process tons and tons of sessions, and subjects, and so on.
So yeah. Fundamentally, maybe a speed-accuracy trade-off. We could spend more CPU time for slightly better results, but CPU time is expensive, especially if you pay for it. [CHUCKLES] But given the distribution shown in panel B, you could potentially fit something and treat this variability around the line as real, and that could be real, and that gets into the nature of why these vary, which gets into vasculature and fMRI-type stuff.
Yeah. Aside from computational time, there's also the usual bias-variance trade-off. Let's say you removed all constraints and just allowed any possible time course. Well, then you're basically back to FIR fitting, and you may suffer inaccuracy due to noise in the FIR estimates. So you have to choose some compromise between bias and variance, which is not to say there may not be more effective and even faster computational approaches to this problem than the very simple click-20-points-on-the-sphere approach. That would be an interesting thing to follow up on.
GRETA: Time for one more comment?
JACOB PRINCE: Yes. One more follow-up question. Yeah. The question is whether the set of HRFs defined using vision experiments would generalize to other tasks like language or working memory. Great open question, I would say. It would probably generalize to the extent that other literatures have observed similar HRFs being appropriate between modalities.
I don't really have a good answer other than to say that it's an exciting thing to test. And one of the next steps for this project, in the course of revisions, is going to be to try the approach on a subset of the Human Connectome Project development data set-- probably a task that involves reward or emotion processing. So it's basically as far away as you can get, both in the nature of the data we have from each subject and in the task-- very far from the territory we've traversed so far-- and I think it makes for a very strong test of generalization that should give us insight into that question once we visualize the map of optimal HRFs. We may indeed see unexpected, striking structure suggesting they don't generalize so well, or we may see that they work fine.
GRETA: All right.
JACOB PRINCE: Yeah. OK. Back to the demo. Oh, lord. Hopefully, people finished running. I alluded to these outputs before-- four different beta versions output from a default run of the tool. This just describes them, and we're going to focus on type D, the final output of the pipeline, which contains all the optimizations.
We get a whole bunch of different things. You have the mean volume that you passed in, for example, the HRF indices, the noise pool that was used for GLMdenoise, the betas themselves, an R-squared map showing variance explained, and the map of optimal ridge regression frac values that were identified. So we can just--
JACOB PRINCE: Yeah. Sorry, everyone. I got derailed by really good questions. OK. Sorry. We were just going through the printouts-- the four different kinds of betas you get from a default run-- and then a sampling of the outputs that the tool returns for free, the main one being the betas variable.
But we can just preliminarily draw some plots and see. This kind of figure inspection is very important, I would say, for any users-- as a first step to understand the nature of your beta estimates, but also how the intermediate steps are working, for the sake of sanity checking and diagnosing issues. At the top left, I just took the average of all the estimated GLM betas over all the trials that we passed in, and we do see some very responsive voxels along these curvy parts of cortex.
So we do seem to have some visually responsive cortex captured in our 20 by 20 square. Then the HRF indices-- we also see some structure here, where there seems to be a predominance of yellow and green indices chosen for those active voxels and more blue and red indices chosen outside the active zones. Our shrinkage fraction estimates are down here, and the same kind of structure is apparent, with systematically different values chosen for visually responsive parts of cortex. And then our variance-explained map at the bottom right shows that we do better modeling the data where there seems to be more activity, perhaps unsurprisingly.
We have these outputs, and there isn't much more in here, but I thought I'd show-- and people can jump in if they have any questions on the figures-- another case of running the pipeline, where this time you basically turn everything off. So you could also treat this tool as a way of running a generic GLM, the same way you would with any other fMRI analysis package. To do that, it's as simple as turning off the library, turning off GLMdenoise, and turning off fracridge, and then saying we want to output just the first set of betas.
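In option-dict form, that baseline run looks roughly like the following sketch. The key names and the position of the output flags are assumptions based on the demo notebook, so check them against your installed version:

```python
# Baseline "generic GLM": canonical HRF, no noise regressors, no ridge.
# Key names assumed from the GLMsingle Python demo; verify for your version.
opt_baseline = {
    'wantlibrary': 0,      # assume a canonical HRF instead of fitting one
    'wantglmdenoise': 0,   # no data-driven noise regressors
    'wantfracridge': 0,    # no ridge regularization
    # Keep only the plain task-GLM betas (assumed to be the second slot).
    'wantfileoutputs': [0, 1, 0, 0],
    'wantmemoryoutputs': [0, 1, 0, 0],
}
print(sum(opt_baseline['wantmemoryoutputs']))  # 1
```

With all three optimizations disabled, there is far less to estimate, which is why this run finishes in seconds rather than minutes.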
So we could just adjust the hyperparameters and run the thing again. And this time, rather than two minutes, it takes about two seconds. And we can just do things like visualize how the mean betas change. It's interesting.
Mean betas from the baseline-- the new GLM we just ran without any of the fancy optimizations-- are on the left, betas from our initial run of the pipeline are in the middle, and then we can plot a difference map. It's actually interesting to me that the changes are pretty subtle across this slice, and it's hard to diagnose or make inferences when you're only processing 400 voxels. But I think it's instructive to understand that even with all these steps included, the nature of the pipeline is such that it's not going to dramatically, wildly change your beta estimates, especially in parts of the brain that are responding to your experiment. At least in this slice, there are more subtle changes that may nonetheless still be very important in terms of optimizing the reliability of our estimates and benefiting downstream analyses.
Then I provided what we'd call a hyperparameter playground, where if you'd like to try ablating a particular part of the procedure, you can just uncomment some of these lines and run it yourself. There's no real point here other than orienting users toward how to set hyperparameters; for example, we could turn off HRF fitting and run it. It's finishing more quickly this time.
I'll pause. Oh, they've just finished. OK. This time, instead of visualizing the mean over all the betas, we can just choose. Let's just choose trial 3 and trial 152, and we can compare our little experimental output where we ablated the HRF fitting procedure to the main output from the default run.
I don't know. It's different; it's interesting. Cortex that's active stays active between the no-HRF-fitting and with-HRF-fitting versions, and there's no real inference to be derived here other than to say: this is how you run the thing, these are the kinds of outputs you get, and you can start asking and answering questions for yourself about how things might change depending on which techniques you include.
Yes. Question. Great question, asking whether the use of GLMdenoise means we're not including other kinds of typical motion regressors.
We do, actually. In addition to the regressors you get from GLMdenoise, you do include polynomial regressors. Kendrick, do we also include the kind of classic kinds of motion regressors, too? Is that in there?
KENDRICK KAY: No. Actually, this was a point I put in the chat earlier. I think in the presentation of the technique you made it sound like polynomials are not included, but they are. So they're included by default.
Then the data-driven regressors come in-- 1 or more, or, rather, 0 or more-- depending on how well they cross-validate, or how much they cause overall cross-validation to improve, if at all. So yes, the traditional mindset of dumping in motion parameter regressors and all that stuff is not the default implementation.
JACOB PRINCE: Thanks for clarifying that.
KENDRICK KAY: The theory is that a potentially more powerful and more flexible way to capture that type of variance is to learn it from the noise pool.
JACOB PRINCE: So the default-- the question is how many regressors are included by default in GLMdenoise. Yeah. The default setting, I believe, is to test up to 10 or 12. And that just defines how many you're going to sweep through in your cross-validation loop.
That's completely subject to user flexibility via hyperparameter changes, in two directions. One, you could decide you want to sample more or fewer than 12. You can also make an a priori decision to include a certain number. That would be a heuristic approach: saying, I'm not sure I want to include so many regressors; I'm just going to decide off the bat that the top PC or top two PCs from the noise pool would probably be useful, set the limit at one or two, and then it'll just do that by default.
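In the Python toolbox this cap is exposed as a hyperparameter; the key name `n_pcs` here is an assumption taken from the toolbox's documented options, so verify it against your installed version before using it:

```python
# Hypothetical option fragments illustrating the two directions just
# described ('n_pcs' is an assumed key name; check your toolbox version).
opt_wider_sweep = {'n_pcs': 15}  # cross-validate over more candidate PCs
opt_heuristic = {'n_pcs': 2}     # a priori cap: at most 2 noise-pool PCs
print(opt_wider_sweep['n_pcs'] - opt_heuristic['n_pcs'])  # 13
```

Either fragment would be merged into the full option dict passed to the pipeline.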
Yeah. Good question: stats on improvement. So I have supplemental slides with the figures from our paper, and I'd be happy to walk through some of those results. I think that would give a sense of the magnitude of improvement you get. As soon as we're done with the demo, I can switch back to that.
So yeah. That's pretty much all I had in the demo. Did anyone have lingering questions or any issues running it? Yeah. I think if you install the MATLAB implementation of GLMsingle from the GitHub repo, it comes with everything you need I believe.
AUDIENCE: All right.
JACOB PRINCE: Kendrick, is it true that we included fracridge in there?
KENDRICK KAY: I actually don't know. You might have to install this fracridge thing as one external extra thing.
JACOB PRINCE: Yeah. It's a good question, and it would be easy to find out. It's either all there for you, or there might be one or two things that need to be installed. So I'm going to jump to answer the question from the chat, just about the nature of the improvements we get.
There's no animation, so bear with me. This is basically the first main results figure from the manuscript. I think it will hopefully provide a sense for the nature of the improvements we observed in the two data sets we used to validate the technique, NSD and BOLD5000. So, panel A on the left.
Panel A, in the far left column, is showing flat maps, just looking at voxels in visual cortex. And this is a reliability metric-- specifically, the difference in reliability that we observe between the final output of GLMsingle and our baseline GLM, which is pretty much what I showed in the demo: everything turned off, assuming a canonical HRF, no extra noise regressors, and no ridge regression.
So that's beta 1, and beta 4 is the final output. Then we compute reliability through something akin to a split-half procedure: you isolate the repeated stimuli and correlate the response profiles over the repetitions. So it's basically a test-retest reliability metric, and the units are Pearson r.
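A toy version of that repetition-based reliability metric, using simulated betas-- the voxel count, condition count, and noise level are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_repeated_conds = 5, 40

# Simulated single-trial betas for two presentations of each repeated
# condition: a shared signal plus independent noise on each repetition.
signal = rng.standard_normal((n_vox, n_repeated_conds))
rep1 = signal + 0.5 * rng.standard_normal((n_vox, n_repeated_conds))
rep2 = signal + 0.5 * rng.standard_normal((n_vox, n_repeated_conds))

# Test-retest reliability per voxel: Pearson r between the two response
# profiles across the repeated conditions.
reliability = np.array(
    [np.corrcoef(rep1[v], rep2[v])[0, 1] for v in range(n_vox)]
)
print(reliability.shape)  # (5,)
```

Subtracting a baseline version's reliability map from the final version's, voxel by voxel, gives the kind of difference map shown in the flat maps.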
So we can just subtract beta 4 minus beta 1 and plot that. The scatter plots show that in NSD there's almost uniform improvement in reliability. In BOLD5000, interestingly, the effect is less apparent-- still positive, but the point clouds, you'll see, are a little noisier, likely because, as I mentioned, there are far fewer repetitions in BOLD5000, so we have noisier estimates of reliability to begin with.
Yes. So the question is, I think, about the nature of the distribution of improvements in the-- the spatial map of improvements in BOLD5000. Yeah. It's hard to say, especially because we're doing the analysis in volumetric space and then projecting that onto the cortical surface. So it's possible that you have gray matter and more relevant parts of the brain interspersed with white matter in the projection here. It's hard for me to interpret.
I would say yeah. It's an open question. Probably there is a mapping between the degree to which different voxels are visually responsive. Yeah. It's a very interesting question.
If we correlated our map of HRF indices with the improvements in reliability, would those be related? It's interesting. You might predict that if the optimal HRF deviates more from the generic canonical HRF, that would predict more mismodeling of the responses in the baseline case, and so greater improvement from the optimization. But it's not something I've actually plotted, and it would be another great analysis to do if we were to look deeper into the optimal HRF indices.
The right side of this is basically just summarizing. The approach here is to take all of our beta versions and compute a kind of democratic aggregate score, in order to then threshold the voxels by reliability and see how the relative quality of the different [INAUDIBLE] changes as voxels become more and more reliable.
AUDIENCE: Can I look at it?
JACOB PRINCE: Great. Thanks. Right. So these graphs are panel B, top left. Along the x-axis, we're looking at more and more reliable voxels, as defined by [INAUDIBLE]-- basically the average reliability of all the versions. This ensures that the same groups of voxels are analyzed at each reliability level.
And you see that across the board we achieve-- and the y-axis is basically change in reliability relative to the group average. So these are relative differences in data quality. It's a little hard to wrap our heads around, but the interpretation of the bar graph is that, per voxel at threshold level r equals 0.2, the final output of GLMsingle gains you approximately 0.05 Pearson r in split-half reliability over the average of all the versions, and close to 0.15 Pearson r over just using the generic approach. A bit confusing, but the trends are reasonably consistent between the two data sets in the aggregate.
JACOB PRINCE: Yep. The question is could you use this tool for a block design. The answer is yes. We haven't exhaustively tested every possible design, but on the GitHub repo you'll find two example scripts: one is event-related data, and one is block data.
We basically ran the same procedure for both of those examples-- actually, I think we did all four of these beta versions, the same ones we were analyzing here-- and looked at the reliability of the signal estimates: in the event-related case for the individual trial estimates, and in the block case for our estimates of block activity. We found that the trends were pretty much as expected, suggesting that the tool is compatible with block designs. Yeah, and we can talk more afterward if you'd like to discuss the specifics of your data set.
KENDRICK KAY: Yeah. Maybe I'll chime in there. I guess this is related to the Zoom chat question about how much it improves on different data sets. I mean, these are just two of many hundreds of possible data sets, and every data set's different: different trial distributions, different magnitudes of activation, different subjects, different scanning. So we want to be careful not to overgeneralize.
Now, regarding block designs: from a theoretical perspective, you can think about each of the components here-- the HRF, the data-driven nuisance regressors, and the ridge regression component. They each interact with block designs versus event-related designs in a specific way, and it's likely the case that the magnitude of improvement you might expect under a block design would be smaller, for example.
I think a lot of this is common lore in fMRI, but with block designs the precise shape of your time courses is less of a big deal than with event-related designs, where the entire time course is this very rapid up-down event and the time course shape might matter a lot more than over a very long 20- or 30-second block. Similarly, for the nuisance regressors, the extent to which they'll help depends on the extent to which the neural events of interest-- the blocks-- reside in the same temporal frequencies as the physiological and head motion noise that you're trying to learn from the data. That may differ from an event-related design.
Finally, ridge regression. The benefit of ridge regression comes when you've shot yourself in the foot with rapid, highly correlated single-trial regressors. If you go to a block design, two successive 30-second blocks are still going to be correlated-- non-zero, but probably less correlated than two rapid trials. I don't know if that was helpful, but there's some theory you can apply here.
JACOB PRINCE: Actually, that leads to a follow-up question, Kendrick, which came from an earlier practice run I did. Some people use the optseq procedure to try to optimize their event structure-- basically adding jitter and switching around the order of events to try to de-correlate their predictors. The question was: if you're actively making an attempt to de-correlate the predictors, is ridge regression even needed?
And I didn't quite know what to say.
KENDRICK KAY: Ooh. That's a good one. The short answer is it's complicated, because techniques like this optimization are optimizing for something very specific, with specific choices about how you're going to model the data and the dependent metric you're trying to optimize. And if you optimize for an analysis approach that doesn't include, say, ridge regression, and you intend to actually use ridge regression in your data analysis, then obviously you didn't optimize for what you're actually doing. So it gets complicated.
One more comment on that, without talking about this forever. Traditionally, you would shoot for an analysis with condition-specific estimation, not single-trial estimation. That's a very different dependent metric to optimize for, which is fine, because that's more conventional fMRI. But here it's a very different mindset, and so what you get under those approaches may not generalize well to this new approach.
JACOB PRINCE: Great. Thank you. I think I'm going to return and just run through the last few slides, and then we can take any last questions. Thank you all for your attention.
Just some brief FAQ. We covered most of these, but just to run back through them. Which study designs are compatible? As we've alluded to several times, both event-related and block designs are compatible, and also continuous designs, which are relevant for movie watching, certain linguistic studies, anything continuous. As long as you have discrete, repeated events and encode them as such in your design matrix, you can use GLMsingle for those kinds of tasks. Resting-state data is not supported.
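For concreteness, the design matrix GLMsingle expects per run is a sparse time-by-conditions matrix with a 1 at the TR of each event onset and 0 elsewhere; the HRF is handled inside the toolbox, so the matrix should not be convolved. The onset times, TR, and run length below are made-up values for illustration.

```python
import numpy as np

# Hypothetical event structure: 3 conditions, onset times in seconds.
tr = 2.0     # repetition time in seconds
n_trs = 50   # number of volumes in this run
onsets = {0: [4.0, 40.0],   # condition index -> onset times
          1: [12.0, 60.0],
          2: [24.0, 80.0]}

# Build the (time x conditions) design matrix: a single 1 marks the
# TR at which each event begins. Do not convolve with an HRF here.
design = np.zeros((n_trs, len(onsets)))
for cond, times in onsets.items():
    for t in times:
        design[int(round(t / tr)), cond] = 1
```

A repeated condition simply gets multiple 1s in its column, which is what enables the cross-validation across repetitions discussed above.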
For preparing fMRI time-series data, again, volumetric or surface space are fine, and we encourage minimal pre-processing before using the pipeline. This would mean motion correction and slice-time correction; there's no need for additional filtering procedures, because those kinds of things should be accounted for by GLMdenoise. Spatial smoothing and registration to a particular anatomical space are fine.
Again, there is this expectation of repetitions because of cross-validation, which we've covered. More repetitions are ideal but not strictly required. And where would we fit in the context of existing pipelines, like SPM, FSL, AFNI, BrainVoyager, what have you? In principle, this tool could replace the existing GLM components of those pipelines, but it can also complement other pre-processing routines if specific features of one of those pipelines are of interest.
Again, if we only want some of the components but not others, we can just use the hyperparameters to turn those on or off. And then there are resources available for users: again, the GitHub repo. The wiki on the repo has a nice extended discussion of some of the deeper statistical and theoretical questions associated with the technique. If you run into any bugs or issues, do not hesitate to raise an issue on the repo; we have hopefully been pretty responsive.
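Toggling components via hyperparameters might look like the sketch below. The option names follow the GLMsingle documentation (wantlibrary for the HRF library, wantglmdenoise for the data-driven noise regressors, wantfracridge for ridge regularization), but verify them against the current docs before relying on them; the commented fit call assumes you have real data, a design matrix, stimulus duration, and TR in hand.

```python
# Hypothetical options dict for the Python toolbox; 1 enables a
# component, 0 disables it. Check the GLMsingle docs for the
# authoritative option names and defaults.
opt = {
    'wantlibrary': 1,     # fit a voxel-wise HRF from the library
    'wantglmdenoise': 1,  # derive nuisance regressors via cross-validation
    'wantfracridge': 0,   # example: turn off ridge regularization
}

# Typical usage (requires the glmsingle package plus real inputs):
# from glmsingle.glmsingle import GLM_single
# results = GLM_single(opt).fit(design, data, stimdur, tr)
```

Disabling a component like this is how you would fall back to, say, an assumed canonical HRF or an unregularized GLM while keeping the rest of the pipeline intact.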
And then there are the example scripts, which I alluded to, available on the repo and in our preprint. I can take any final questions. Before I do, just to acknowledge Kendrick, Ian, Jan, John, and Mike again: amazing collaborators and mentors. And thanks for your diligent attention.
Yeah. Thank you.
Happy to take-- again, I know we covered a lot of things. So if there are any remaining questions from the online viewers or in person, we can stay and chat. Thank you to everyone for joining online. Thanks, Kendrick. I appreciate it.