We recently ran an fMRI study and used AFNI for the analyses. It is a memory study in which we explore the effects of drawing relative to writing. Scanning was done at retrieval during an old/new recognition test. We evaluated contrasts between conditions in two ways:

Analysis version 1: First, we ran the simple effect contrast (brain activity when responding to Draw items vs. Write items). This resulted in a minimum cluster size threshold of 30 voxels, which allowed 2 clusters to survive thresholding.

Analysis version 2: Then, we ran a contrast in which brain activity during false alarm responses on the recognition test ('old' responses to New items) was subtracted from each term (Draw - New vs. Write - New). Note, however, that this was a within-subject study with intermixed trials at retrieval, so the New value is constant within a participant, meaning the identical term is used twice in the model. Effectively, we are comparing two difference scores that share a common baseline level of performance (i.e., the New term). This resulted in a minimum cluster size threshold of 9 voxels, which allowed 5 clusters to survive thresholding (including the two clusters from analysis version 1).

We argued that the latter analysis reduces variance in the model, so when we determine minimum cluster size thresholds we get smaller required cluster sizes, allowing more clusters to survive thresholding.

However, a reviewer raised the point that counting the New term twice in the model may underestimate the variance, and went on to suggest that the improved model fit in the latter analysis is likely spurious. They therefore suggested we stick with only the simple effects (analysis version 1) and drop analysis version 2, but they did not provide any references for this claim. In my mind, analysis version 2 is like running a t-test on difference scores, which should be valid.
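To spell out the algebra behind the difference-score view (a sketch, using the condition labels above): because the same per-subject New estimate is subtracted from both terms, it cancels in the contrast of the two difference scores,

```latex
(\mathrm{Draw} - \mathrm{New}) - (\mathrm{Write} - \mathrm{New}) = \mathrm{Draw} - \mathrm{Write}
```

so the contrast itself is unchanged; any difference between the two analyses would have to come from how the variance is estimated, not from the effect being tested.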

Does anyone have thoughts on this debate, or papers to read relevant to this discussion?

To clarify, are the two versions of your analysis distinguished by using the same model at the individual level with 3dDeconvolve, but extracting those two different effect estimates for population-level analysis? Additionally, it would be helpful if you could share your 3dDeconvolve script.

If I'm understanding your question properly, the answer is yes. We used afni_proc.py, so here is the code for that, and below is the 3dANOVA command in which we create the difference scores described in 'analysis version 2' using the '-adiff' option:

Your ANOVA formulation might be overly complex. If your research hypothesis centers around the contrast between Draw items and Write items, consider the following three options:

One-Sample t-Test:

Directly perform a one-sample t-test using 3dttest++ with draw_write from the individual level.

Paired t-Test (Option 1):

Conduct a paired t-test using 3dttest++ with draw_tar and write_tar from the individual level.

Paired t-Test (Option 2):

Alternatively, perform a paired t-test using 3dttest++ with draw_new and write_new from the individual level.
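A sketch of what those three calls might look like (the dataset names here are hypothetical placeholders; substitute your own per-subject datasets and sub-brick selectors):

```shell
# Option 1: one-sample t-test on each subject's Draw-Write contrast
3dttest++ -prefix tt_draw_write \
          -setA subj*_draw_write+tlrc

# Option 2: paired t-test on the two target conditions
# (-paired matches subjects by their order in -setA and -setB)
3dttest++ -prefix tt_tar -paired \
          -setA subj*_draw_tar+tlrc \
          -setB subj*_write_tar+tlrc

# Option 3: paired t-test on the baseline-subtracted conditions
3dttest++ -prefix tt_new -paired \
          -setA subj*_draw_new+tlrc \
          -setB subj*_write_new+tlrc
```

These commands require AFNI and your actual datasets to run, so they are offered only as a template.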

All three approaches should yield virtually identical results, as they are ontologically and algebraically equivalent.
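The equivalence is easy to verify numerically. A minimal sketch with made-up per-subject beta values (not from the actual study): because the New baseline is subtracted from both conditions within each subject, it cancels in the paired differences, and the paired t-statistic is unchanged.

```python
from math import sqrt

# Hypothetical per-subject beta estimates (illustrative only, n = 6)
draw  = [2.1, 1.8, 2.5, 2.0, 1.6, 2.3]
write = [1.7, 1.5, 2.2, 1.9, 1.4, 1.8]
new   = [1.0, 0.9, 1.3, 1.1, 0.8, 1.2]   # shared false-alarm baseline

def paired_t(a, b):
    """Paired t-statistic: mean of the differences over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    m = sum(d) / n
    var = sum((x - m) ** 2 for x in d) / (n - 1)
    return m / sqrt(var / n)

# Simple Draw vs. Write contrast
t_simple = paired_t(draw, write)

# Baseline-subtracted contrast: (Draw - New) vs. (Write - New)
t_baseline = paired_t([a - c for a, c in zip(draw, new)],
                      [b - c for b, c in zip(write, new)])

print(t_simple, t_baseline)  # the two t-statistics agree to rounding error
```

The per-subject differences are identical in both formulations, so the variance of the contrast is the same either way; the shared baseline adds nothing to it.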

Thanks for the advice, Gang! If I were to stick with the ANOVA as it stands now, is there a difference between Draw - Write and (Draw - New) vs. (Write - New), as the reviewer suggested? They suggested the latter case would lead to systematic underestimation of variance because I'm counting the same New term twice.

The focal effect should be the driving force behind your choice of model, not the other way around. In your case, a well-designed ANOVA model might also be appropriate and could potentially yield comparable results. However, without directly examining your data, it's difficult to say definitively why the two contrasts diverge based on your current model.

Gang Chen

The National Institute of Mental Health (NIMH) is part of the National Institutes of Health (NIH), a component of the U.S. Department of Health and Human Services.