Cluster Correction - reposting in hopes of an answer :-)


I have a few questions about the correct implementation of whole-brain cluster correction in AFNI. Given the recently identified bug in 3dClustSim and the subsequent development of fixes and new tools to improve the accuracy of cluster-wise correction, I’m in the process of revisiting several analyses and want to make sure I am on the right path. FYI - I am using an updated version of AFNI (May 11, 2016).

  1. One of my analyses uses 3dttest++ at the group level. For this analysis, I plan to try the non-parametric approach to cluster-size thresholding, as described in “AFNI and Clustering: False Positive Rates Redux” by Cox, Reynolds, and Taylor. Would this approach be preferred over the -acf solution?

  2. My other analysis uses 3dMVM at the group level. For this, I would like to use the new -acf approach implemented in 3dFWHMx and 3dClustSim, and want to make sure I am using this method correctly…

First, I’m using the following command to determine the ACF model parameters (a,b,c) for each subject’s individual-level errts time series file output from 3dDeconvolve:

3dFWHMx -detrend -ACF temp.1D -mask ./full_mask.${subj}+orig ./errts.${subj}+orig >> blur_errts.${subj}.1D

Note: The errts.${subj}+orig file in the command above is concatenated across 5 runs; however, in my full script I include additional code to ensure that the detrending and -ACF calculations are done separately for each of the five runs. This results in different ACF model parameters for each run. I subsequently average each parameter across all runs for each subject.
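As a sketch, the per-run estimation and extraction steps might look like the following. The run/file names are placeholders for my actual naming, and I am assuming the ACF parameters land on the final appended line of each 3dFWHMx output file (the demo file contents below are made-up numbers, just to show the extraction):

```shell
# Sketch only: estimate ACF parameters separately per run, then pull
# the (a, b, c) triplet out of each run's output.  Run and file names
# are hypothetical.
#
# for run in 01 02 03 04 05 ; do
#     3dFWHMx -detrend -ACF temp.r${run}.1D      \
#             -mask full_mask.${subj}+orig       \
#             errts.r${run}.${subj}+orig         \
#             >> blur.r${run}.${subj}.1D
# done
#
# With -ACF, the appended output should end with a line holding the
# ACF parameters (a b c combined-FWHM); grab the first three fields.
# Hypothetical example of one run's output file:
cat > blur.r01.demo.1D <<EOF
0.0 0.0 0.0
0.7831 3.2958 10.6848 42.1
EOF
tail -1 blur.r01.demo.1D | awk '{print $1, $2, $3}'
```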

Then, for cluster-wise correction at the group level, I calculate the average a, b, and c parameters across all subjects, which are input into the following:

3dClustSim -mask grpmask.nii -acf 0.7830668 3.295774 10.68484 -prefix Clust.WLgroup.1D

Is this correct, or should I be using something output from the group analysis to calculate the ACF parameters? (as I believe is done with the new -clustsim option in 3dttest++)
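For the cross-subject averaging step, this is a minimal sketch of what I am doing, assuming a hypothetical subj_acf.txt file holding one “a b c” row per subject (the values below are made-up, for illustration only):

```shell
# Sketch: average the per-subject ACF parameters, then hand the
# result to 3dClustSim.  subj_acf.txt is a hypothetical file with
# one "a b c" row per subject (made-up example values).
cat > subj_acf.txt <<EOF
0.75 3.10 10.20
0.80 3.40 11.10
0.79 3.30 10.70
EOF
params=$(awk '{a+=$1; b+=$2; c+=$3} \
              END {printf "%.4f %.4f %.4f", a/NR, b/NR, c/NR}' subj_acf.txt)
echo $params
# 3dClustSim -mask grpmask.nii -acf $params -prefix Clust.WLgroup.1D
```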

Also, because this is a connectivity analysis using manually-traced hippocampal regions, I wanted to keep the individual-level 3dDeconvolve analysis in native-space. I’ve then been transforming the stats+orig output from 3dDeconvolve into standard space before moving to the group analysis, but I realize that my ACF parameters are estimated on the native-space data. Is this an issue at all?

  3. My final, somewhat unrelated question concerns something I heard recently regarding the use of 3dClustSim for whole-brain correction. I have been using 3dClustSim for whole-brain correction for some time, but was recently told that in some circles this practice is not currently accepted. Rather, it “might” only be appropriate for cluster correction across smaller, a priori regions of interest. Unfortunately, I don’t have any further information from the source, and I have been unable to find any hint of this discussion online. I’m wondering if you have any insights about this?

Thank you in advance for your help!


  1. The 3dttest++ -clustsim approach is different from
    the 3dClustSim -ACF one. It is quite difficult to say
    that one is better in general; however, the former
    approach should be easier to defend.

  2. That looks good. You should also be able to see the
    ACF approach applied in an afni_proc.py script. To have
    the 3dClustSim step included, add the option
    “-regress_run_clustsim yes”.
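For reference, here is a hedged sketch of where that option sits in an afni_proc.py command. This is not a complete command; the subject ID, datasets, and other options are placeholders for your own:

```shell
# Sketch only -- not a complete afni_proc.py command.  Dataset names
# and the elided options are placeholders for your own processing.
#
# afni_proc.py -subj_id $subj            \
#     -dsets epi_run*.HEAD               \
#     ...                                \
#     -regress_est_blur_errts            \
#     -regress_run_clustsim yes
```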

Yes, there is a slight issue when going to standard
space. Personally, I do not see why you would stay in
orig space initially. The traced regions can be passed
along using -anat_follower_ROI options, for application
in standard space.
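For example (a sketch; the labels, grid choice, and dataset names are placeholders, assuming the traced masks exist as datasets aligned with the subject anatomy):

```shell
# Sketch: pass manually traced ROIs through afni_proc.py so they are
# carried along into standard space with the EPI results.  The usage
# is -anat_follower_ROI LABEL GRID DSET; labels and dataset names
# here are placeholders.
#
# afni_proc.py ...                                    \
#     -anat_follower_ROI hipL epi hippo_left+orig     \
#     -anat_follower_ROI hipR epi hippo_right+orig    \
#     ...
```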

If the errts stays in orig space, perhaps it should
be transformed before computing the parameters.
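One hedged option for that, assuming the anatomy already has a +tlrc transform, is to warp the errts dataset (and the mask, which needs the same treatment) with adwarp before running 3dFWHMx. Dataset names and grid spacing here are placeholders:

```shell
# Sketch: bring errts into standard space before estimating the ACF
# parameters.  The mask would need the same warp applied.  Names and
# the -dxyz grid spacing are placeholders.
#
# adwarp -apar anat_final.${subj}+tlrc     \
#        -dpar errts.${subj}+orig          \
#        -dxyz 2.5 -prefix errts.${subj}
#
# 3dFWHMx -detrend -ACF temp.1D            \
#         -mask full_mask.${subj}+tlrc errts.${subj}+tlrc
```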

  3. There is likely going to be a lot of inappropriate
    criticism of 3dClustSim coming up. But consider that
    even when using FWHM, it behaves similarly to RFT in
    SPM. And using ACF is much more conservative.

Sorry for being slow on this…

  • rick