Input to 3dttest++ -Clustsim

Hi,

I’m attempting to get a cluster size threshold for FWER correction.
What I’ve done in the past year or two is use 3dFWHMx with the -acf flag on the error timeseries to get a blur estimate for each subject, and then enter the mean blur estimates across subjects into 3dClustSim to get a table of cluster thresholds.
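
For reference, the commands I have been running look roughly like this (the dataset, mask, and output names are just placeholders, and the ACF numbers are made-up examples, not my real estimates):

    # per subject: estimate ACF parameters from the residual (error) time series;
    # the last line of output gives the a, b, c parameters plus an effective FWHM
    3dFWHMx -mask mask.subj01+tlrc -acf tmp.ACF.subj01.1D errts.subj01+tlrc

    # average a, b, c across subjects, then hand the means to 3dClustSim
    # to get the table of cluster-size thresholds
    3dClustSim -mask mask_group+tlrc -acf 0.7 3.5 12.0 -pthr 0.01 -athr 0.05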

From the documentation, I understand that the recommended method these days is to use 3dttest++ with the -Clustsim flag. However, I'm not quite sure what the correct input (or, more strictly, the input best suited to controlling FWER) would be for this method. Should I still be using the error timeseries (i.e., only looking at the variance that is not explained by our model), or some other input?

And whatever the correct input should be, am I correct in thinking that the subject files in question should be handed to 3dttest++ as sub-bricks of a single bucket dataset?

Thanks!

Hi Henry,

When using 3dttest++ -Clustsim (or -ETAC), it is good to supply your group-level mask, but there is nothing else to do. The input is exactly as without -Clustsim, presumably a bunch of beta weights.

3dttest++ -Clustsim (or -ETAC) uses permutation testing on the residuals from that t-test to compute cluster criteria, which are then output to the same sorts of tables that 3dClustSim would produce (-ETAC outputs binary significance maps instead).

For example, it would be enough to add -Clustsim (and -mask) to AFNI_data6/group_results/s6.ttest.covary, which you could try out with the bootcamp sample data.
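
For concreteness, such a call might look something like the following; the subject dataset names and sub-brick labels are placeholders for illustration, not the actual bootcamp files:

    # same input as a plain t-test (per-subject beta weights),
    # with -mask and -Clustsim added
    3dttest++ -prefix ttest.Vrel.CS                      \
              -mask mask_group+tlrc                      \
              -Clustsim                                  \
              -setA subj01.betas+tlrc'[Vrel#0_Coef]'     \
                    subj02.betas+tlrc'[Vrel#0_Coef]'     \
                    subj03.betas+tlrc'[Vrel#0_Coef]'

If I remember correctly, the resulting cluster tables are also attached to the header of the output dataset, so the interactive Clusterize GUI can use them directly.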

Does that seem reasonable?

  • rick

Thanks for the response!

So if I understand correctly, the idea is that rather than looking at the smoothness of the residuals from the full deconvolution model (or whatever method is being used), we're looking at the variance (across subjects, rather than across voxels?) in the individual betas we get from that model?

  1. Does that mean we need to find a separate threshold for each and every contrast we want to look at?
  2. If I understand correctly, this would mean that contrasts (or simple conditions) with more individual differences in activation level will necessitate larger clusters, because the betas will have larger residuals across subjects. Is that what we want to control for in cluster thresholding?

Hi Henry,

Sorry for being slow…

  1. No. The -Clustsim option is used to generate a table akin to that from 3dClustSim, and you would still be expected to use consistent uncorrected and corrected p-values across the tests.

What might vary across tests is the cluster size required to achieve a given corrected p-value.

  2. This clustering is based on permutation testing, so it isn't exactly variance that is being measured. Or maybe I am not quite sure what you mean by having more individual differences.
  • rick

Thanks, I think my questions weren’t quite clear:

1 - I didn't mean for different thresholds or alpha levels, but for different comparisons. For example, if I have a 2x2 design, I might want to look at the main effect of each factor separately, and at the interaction. Each one of these would be a different beta map, so would I need to run 3dttest++ on each main effect and on the interaction separately?

2 - As I understand it, the idea of cluster thresholding is to use spatial permutation in order to only accept clusters that are contiguous enough to be unlikely to appear given the spatial distribution of effects (or, in other words, how likely we are to obtain a cluster of a given size under the null hypothesis that all variance in the data is noise). Thus, the 3dClustSim -acf method just took a measure of the spatial variance and used that to compute spatial permutations with the right level of smoothness.
But if we are looking at the residuals of the 3dttest++ on beta maps, then the spatial variation in the residuals should depend on how much individual difference there is in the betas; that is, an effect which is similar across all subjects should have smaller residuals than an effect which shows more variation. Does this not mean that the permutation is mixing up cross-subject variance in activation (in the betas) with spatial variance in activation? These seem to be two separate sources of variance, and intuitively it's not clear to me why it makes sense to mix the cross-subject variance into the spatial filtering.

Thanks again!

  1. Yes. Well, to use one of the new -Clustsim or -ETAC methods, the only (current) way to do so is via 3dttest++, running one test at a time (see the sketch at the end of this reply). For ANOVAs, such as with 3dMVM, ACF clustering with 3dClustSim would be the way to go.

  2. The basic cluster thresholding uses random noise, altered (blurred) to match the given ACF parameters (no permutations). Then one just counts the large clusters to get their probabilities.

Alternatively, permutation testing permutes (typically?) subjects per group or possibly negates subjects for a single group test. Note that this is not spatial permutation, but “temporal” (cross subject) permutation.

If I understand it correctly, the 3dttest++ -Clustsim permutation method is the same, except it permutes the t-test residuals. Either way, both spatial and individual variance play a part.

Simply permuting subjects might be the clearest method, basically asking whether the current group partitioning gives significant results when compared with random group partitionings.
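
Regarding running one test at a time: for a 2x2 design, that might look roughly like the sketch below, with a separate 3dttest++ run per effect (the per-subject contrast dataset and sub-brick names are made up for illustration):

    # main effect of factor A, using each subject's A1-A2 contrast beta
    3dttest++ -prefix ttest.mainA.CS -mask mask_group+tlrc -Clustsim \
              -setA subj01.cont+tlrc'[A1-A2#0_Coef]'                 \
                    subj02.cont+tlrc'[A1-A2#0_Coef]'                 \
                    subj03.cont+tlrc'[A1-A2#0_Coef]'

    # ... and similarly for the main effect of B and for the A-by-B interaction,
    # each run producing its own cluster table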

  • rick