I’ve been using the 3dLSS approach for single-trial estimation in a rapid event-related fMRI design and had a question regarding the treatment of motion outliers (high-motion TRs, flagged by framewise displacement thresholds).
Specifically:
When setting up the LSS model, should one:
Censor the motion outlier TRs, or
Leave all timepoints in the model, even those corresponding to excessive motion?
I noticed that when I censor motion outlier TRs, I sometimes get extremely large beta values in regions like the amygdala (e.g., >1000), and many betas in the 100+ range. Even without censoring, I still see beta values up to ~8.
For context: the mean intensity in each voxel (per run) was scaled to 100 (before 3dDeconvolve), so my understanding is that the regression estimates from the individual-level GLM analysis should be interpretable as percent signal change.
So, I don't understand why some beta values are high.
I’d greatly appreciate any advice on handling motion outliers in 3dLSS, and on whether these unusually large beta estimates are due to a modeling issue.
Most preprocessing steps (including head motion correction) and subsequent data analysis in fMRI are largely guided by heuristics rather than strict rules. Therefore, it is not surprising that results can sometimes be biased, including producing unreasonably large estimates, as seen in your case. What is the planned follow-up analysis after the individual-level modeling?
For follow-up, my plan is to use the single-trial betas from 3dLSS for predicting behavior, and also to compute beta-series connectivity (particularly with hippocampus and amygdala ROIs). The unusually large betas are concerning because they could distort both the behavioral prediction models and the connectivity estimates.
In the absence of a more effective alternative, one possible approach would be to remove outliers (e.g., those exceeding a 2% signal change) from the estimated trial-level coefficients before conducting your follow-up analysis. Do you think this could serve as a workable workaround for you?
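As a rough sketch of that workaround (assuming the trial-level betas have already been extracted into a 1-D array, e.g. from an ROI via 3dmaskave; the numbers here are made up for illustration):

```python
import numpy as np

# Hypothetical trial-level betas (percent signal change) from 3dLSS.
betas = np.array([0.4, -0.8, 1350.2, 1.1, -2.7, 0.2, 8.1])

# Keep only trials within +/- 2% signal change; drop the rest.
# (Alternatively, set them to NaN so trial indexing is preserved
# for behavioral alignment.)
thresh = 2.0
good = np.abs(betas) <= thresh
clean = betas[good]

print(good)   # which trials survive the threshold
print(clean)  # betas entering the follow-up analysis
```

Whether to drop the trials outright or mark them as missing depends on the follow-up analysis: beta-series correlation needs matched trial vectors across ROIs, so NaN-marking plus pairwise-complete correlation may be the safer choice.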
It might be good to know more about your actual model. What is the basis function? Are the event times TR-locked? Are they randomly distributed across the TR?
I used AFNI’s 3dDeconvolve with a canonical gamma HRF. The events are TR-locked and pseudo-randomized across runs with jittered ITIs (2/4/6 s). Below is the code I used to generate the design matrix for one of the conditions, which was then used in the 3dLSS script:
I could try removing outliers (those exceeding a 2% signal change), but I was concerned about how these unusually large estimates might also influence beta estimates for temporally adjacent trials.
Using IM with censoring can certainly lead to such noisy results. At least you aren't using TENTs. I'll assume TR=2, based on the jittered ITIs.
Those 2 GAM basis functions, when sampled on a 2s TR grid, start at zero, peak at about 1, and go back down to zero. The smallest number in the '2' curve is about 0.0013, while the smallest in the '6' curve is about 0.00067. With IM, each regressor will have only a single copy of this curve. With censoring, the relevant time points could be removed from the regression. So there could even be cases where a censored regressor is all zero, if all non-zero regressor time points were censored.
Note that a '6' regressor has 9 non-zero time points. Now suppose the first 8 of those 9 time points just happened to get censored. You would be left with a regressor that is all zero, except for the single time point. It would fit the data exactly (like a delta regressor). But then what sort of beta might be expected?
Let's just assume that the magnitude of a unit delta response would be 1.0 at some location, based on noise or whatever. Here the beta weight won't be 1.0, because the regressor has a (max and only non-zero) value of 0.00067. If the exact change were 1.0, the beta would be 1.0/0.00067 = 1492.54. And now there would be a beta weight over 1000. It can happen, as Gang noted, even if mostly because of unlucky censoring, or some other peculiarity.
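That arithmetic can be reproduced with a toy least-squares fit (the numbers are illustrative, matching the scenario above, not real data):

```python
import numpy as np

# Toy example: after censoring, a trial regressor is all zero except one
# time point, where the sampled HRF value is tiny (~0.00067, the smallest
# value of the '6' curve). Suppose the data happen to change by 1.0 there.
n = 20
x = np.zeros(n)
x[10] = 0.00067          # lone surviving regressor value
y = np.zeros(n)
y[10] = 1.0              # "signal" (really just noise) at that TR

# Ordinary least squares for a single regressor: beta = x.y / x.x
beta = x @ y / (x @ x)
print(beta)              # ~1492.5 -- a beta over 1000 from a unit change
```

The fitted time series matches the data exactly, so nothing in the residuals flags the estimate as suspect; only the beta magnitude gives it away.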
What to do about this unlucky-but-natural-occurrence-based-on-the-model is a serious question. Gang's point is that you might rather replace something you're pretty sure is garbage with zero, but it becomes a bit tricky to do in a reasonable way.
That makes sense. I’m planning to try Gang’s workaround of excluding trials with beta estimates exceeding ~2% signal change from the subsequent analyses.
At the same time, I was curious why some trials still show relatively large beta values (around 8) even when I don't apply censoring (i.e., keeping all TRs). Interestingly, those same trials end up with extremely high values (>1000) when censoring is applied. My guess is that this is still a motion artifact issue, but I wanted to check whether that interpretation seems reasonable.
For reasonably reliable estimates of betas, we put many event responses into a single regressor. But in the IM case each event goes into its own regressor, and it becomes clear how unreliable a single beta might be; the results are noisy. Censoring becomes problematic, but not censoring is also problematic. Motion might cause some spiking that is modeled by a single event response, resulting in a large (positive or negative) beta.
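A small simulation makes the pooling point concrete (toy HRF shape and noise level are assumptions, not fit to any data): the standard error of an OLS beta scales inversely with the norm of the regressor, so a regressor holding many event copies yields a much more stable estimate than an IM-style single-event regressor.

```python
import numpy as np

rng = np.random.default_rng(0)
hrf = np.array([0.0, 0.3, 1.0, 0.7, 0.3, 0.1])   # toy sampled HRF shape

def beta_sd(n_events, n_sims=2000, n_tr=300, noise_sd=1.0, true_beta=0.5):
    # Build one regressor holding n_events non-overlapping copies of the
    # HRF, then see how much the OLS beta wobbles across noise draws.
    x = np.zeros(n_tr)
    for k in range(n_events):
        x[k * 12 : k * 12 + len(hrf)] = hrf
    betas = []
    for _ in range(n_sims):
        y = true_beta * x + rng.normal(0, noise_sd, n_tr)
        betas.append(x @ y / (x @ x))
    return np.std(betas)

sd_one  = beta_sd(1)    # IM-style: a single event per regressor
sd_many = beta_sd(20)   # conventional: 20 events pooled into one regressor
print(sd_one, sd_many)  # the pooled beta is far more stable (~sqrt(20)x)
```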
Of course there are possibilities other than motion, but motion spiking will play a large role.