Dear AFNI experts,
After reading the documentation and message board about different
approaches for movement correction/mitigation (including concepts and
calculations) I still have some questions:
If I apply enorm censoring, would it be reasonable to also regress
the 6 motion parameters and their derivatives at the subject level? The
question arises because both options (enorm censoring and motion regression)
rely on the motion parameters, and using both of them in the same analysis
may be redundant.
Many studies have used 0.2-0.3 and 0.1-0.15 as framewise displacement
(FD) and DVARS thresholds, respectively. However, the literature using
enorm and srms censoring is scarce compared to FD and DVARS. Could you
recommend threshold values for enorm and srms that you know work
well with resting-state datasets?
Would it be appropriate to apply the two types of censoring in the
same study: i) enorm censoring (based on movement parameters) and ii)
srms censoring (based on intensity variations)? If the answer is yes,
should I use the two thresholds simultaneously to remove one specific
volume from further analyses? Are there AFNI functions to detect the
volumes to remove based on the two thresholds (enorm and srms)?
If I apply enorm censoring, srms censoring, or both types of
censoring in the same dataset, would it make sense to also add mean
enorm, mean srms, or both mean descriptors as nuisance variables at the
group level?
Many thanks in advance for your help.
Others might weigh in here, too, but I will try to reply to some of these points. Below, AP=afni_proc.py.
Firstly, you might have watched this already as part of checking documentation, but this video in the Alignment playlist of our AFNI Academy video series provides more description about motion estimation, enorm and things like censoring:
Secondly, another general point-- we typically do use 2 criteria for censoring volumes:
- where enorm is greater than some value (say, 0.2, for standard adult/human voxel sizes); because enorm is a measure of relative motion (i.e., the difference in alignment parameters between two consecutive time points), a large enorm value leads to censoring of both time points involved. Usage example in AP:
-regress_censor_motion 0.2 \
- where the outlier fraction is greater than some value (say, 0.05, corresponding to where >=5% of voxels in an EPI brain mask are outliers); this leads to just censoring a single volume. Usage example in AP:
-regress_censor_outliers 0.05 \
The censoring lists are combined before being applied (i.e., a volume could be censored due to either or both criteria).
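Conceptually, that combination is just an elementwise AND of the two binary censor time series (in AFNI censor files, 1 = keep, 0 = censor). A minimal sketch, with made-up censor lists for illustration:

```python
# Sketch: combining two binary censor lists (1 = keep, 0 = censor),
# the way afni_proc.py combines them before the regression.
# The example lists below are hypothetical.

def combine_censor(censor_a, censor_b):
    """Keep a volume only if BOTH criteria keep it (elementwise AND)."""
    return [a * b for a, b in zip(censor_a, censor_b)]

# Say the enorm criterion censors volume index 2, and the outlier
# criterion censors volume index 4:
enorm_censor   = [1, 1, 0, 1, 1, 1]
outlier_censor = [1, 1, 1, 1, 0, 1]

combined = combine_censor(enorm_censor, outlier_censor)
print(combined)  # -> [1, 1, 0, 1, 0, 1]: volumes 2 and 4 are censored
```

So a volume is removed if either criterion (or both) flags it.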
Thirdly, you can enter in your own 1D file that describes additional censoring you want to do. This is described in the AP help:
-regress_censor_extern CENSOR.1D : supply an external censor file
e.g. -regress_censor_extern censor_bad_trs.1D
This option is used to provide an initial censor file, if there
is some censoring that is desired beyond the automated motion and
outlier censoring.
Any additional censoring (motion or outliers) will be combined.
To your points:
For resting state FMRI, we typically include the 6 motion parameters and their derivatives in the subject level regression model, yes. For task-FMRI, we usually don’t include the derivatives-- we only include the 6 motion parameter series.
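For intuition, the derivative regressors are essentially the backward first differences of each motion parameter series, with the first time point of a run set to 0 (it has no previous volume). A minimal sketch for a single run:

```python
def motion_derivative(params):
    """Backward first difference of one motion parameter series.
    The first time point gets 0.0 (no previous volume), mirroring
    per-run derivative computation."""
    return [0.0] + [params[i] - params[i - 1] for i in range(1, len(params))]

# Hypothetical roll-rotation series (degrees) over 4 volumes:
roll = [0.0, 0.25, 0.75, 0.5]
print(motion_derivative(roll))  # -> [0.0, 0.25, 0.5, -0.25]
```

The same operation would be applied to each of the 6 parameter columns to build the 6 derivative regressors.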
These do provide different sorts of motion reduction (I hate the term motion correction…). The censoring aspect removes volumes that are considered essentially untrustworthy. The regression aspect uses the motion estimates to account for smaller motion effects.
The exact value of enorm to use probably depends a bit on your voxel size, population (are they kids who move around a lot?), and other considerations. I think the afni_proc.py help suggests starting with enorm values of about 0.2, and outlier fraction values of about 0.05. I would start there. The benefit of using enorm vs something like FD is that it is a Euclidean norm (square root of the sum of squared motion changes) rather than a sum of absolute values-- this is both more appropriate geometrically and provides better sensitivity to motion events.
I don’t know what srms censoring is… If it is based on “intensity variations”, would those kinds of things be caught by the outlier fraction censoring? As noted above, we do recommend using both enorm and outlier censoring. And you can add in your own, external censoring list.
I don’t have any experience with attempting to add these parameters in at the group level. Something that has been recommended for inclusion at the group level for resting state is the GCOR parameter estimate; please see here:
That might capture a different aspect, usefully, at group level?
Let me jump in
Regarding point 3), I think ‘Karelo’ refers to the definition of ‘srms’ in 3dTto1D where srms = scaled rms = dvars/mean, and it is suggested that “SRMS survives both a resampling and scaling of the data. Since it is unchanged with any data scaling (unlike DVARS), values are comparable across subjects and studies.” (taken from https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dTto1D.html)
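To illustrate that scale invariance with toy numbers (a 4-voxel "volume", purely for demonstration): DVARS is the RMS across voxels of the volume-to-volume signal change, and srms divides that by the mean signal, so multiplying the whole dataset by a constant leaves srms unchanged while DVARS scales with it.

```python
import math

def dvars(vol_curr, vol_prev):
    """RMS across voxels of the volume-to-volume signal change."""
    n = len(vol_curr)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(vol_curr, vol_prev)) / n)

def srms(vol_curr, vol_prev, global_mean):
    """Scaled RMS: dvars / mean, per the 3dTto1D description."""
    return dvars(vol_curr, vol_prev) / global_mean

# Toy 4-voxel "volumes" at two consecutive time points:
v_prev = [100.0, 102.0, 98.0, 100.0]
v_curr = [101.0, 101.0, 99.0, 100.0]
mean = sum(v_prev + v_curr) / 8

print(srms(v_curr, v_prev, mean))
# Rescaling the data by 10 gives the same srms (but 10x the DVARS):
print(srms([10 * x for x in v_curr], [10 * x for x in v_prev], 10 * mean))
```

That invariance is what makes srms thresholds comparable across subjects and studies, unlike raw DVARS.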
My suggestion is that you explore the range of the srms timecourse across the subjects in your dataset, and try to infer the most appropriate value to clean your data of very large global fluctuations. Note that global effects will probably be captured by -regress_censor_outliers or 3dToutcount too. Performing this exploration is also valid for enorm (or FD, or any metric). As Paul indicates, a threshold of 0.2 on enorm might be a good start, but it may be too strict for, say, clinical populations. Note that there are multiple definitions of FD in the literature (see Figure 9 in https://pubmed.ncbi.nlm.nih.gov/23499792/), and setting a fixed threshold is uninformative if the definition used is not indicated in the paper.
Hope this helps