I have a short question about a small volume correction analysis that I am running with AFNI. Is it correct practice to use a p-value of 0.05 for a voxel-wise analysis run within a small ROI? Or should I use an uncorrected p-value of 0.001, as one typically would for a cluster analysis?
Is it correct practice to use a p-value of 0.05 for a voxel-wise analysis run within a small ROI?
It is preferable to adopt a modeling approach that is less sensitive to the amount of data. You may consider the highlight-but-not-hide reporting methodology.
Thanks a lot for this, Gang.
I have some additional (related) questions that I cannot easily address.
I would just like to run an SVC to further explore and characterize my results, using small preregistered ROIs that I found to be significantly involved in the task, both when running t-tests on the averaged beta weights and when looking at the whole-brain results. To do this, I was applying 3dttest++ and 3dClustSim to each of the small ROIs (roughly the commands sketched at the end of this post) and looking at the clusters that survive an initial uncorrected threshold of .001 or .005, but I did not do anything with the FWHM, which turns out to be very relevant for voxel-wise analyses in small regions. I have, however, some doubts about what I have done so far:
How can I incorporate the FWHM into my analysis? More specifically, in the second-level analysis, should I use the ACF obtained from the group mean (3dttest++) or the one estimated at the individual level (from 3dDeconvolve)? (I am very new to AFNI, and I am still learning how to script.) I am using 3dDeconvolve and 3dREMLfit for my first-level analyses.
Is it correct to use 3dClustSim in a small volume (e.g., the VTA, which has 40 voxels)? I would think that a peak-level analysis (like SVC in SPM) would be more appropriate here. If so, where in the output do I need to look? I guess I should look at the peak of the cluster and check it against the alpha value that comes from the ACF-based simulation?
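For reference, this is roughly what I have been running per ROI (dataset and mask names are placeholders):

    # group t-test restricted to one small preregistered ROI
    3dttest++ -setA sub*.beta+tlrc \
              -mask ROI_VTA+tlrc   \
              -prefix TTest_VTA

    # cluster-size table for the same mask (note: no -acf/-fwhm option,
    # i.e., no smoothness model, which is what I am unsure about)
    3dClustSim -mask ROI_VTA+tlrc -pthr 0.001 0.005 -athr 0.05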
How can I incorporate the FWHM into my analysis? More specifically, in the second-level analysis, should I use the ACF obtained from the group mean (3dttest++) or the one estimated at the individual level (from 3dDeconvolve)?
There is no definitive answer to this. It seems that people have been empirically estimating ACFs from either individual-level or group-level data.
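As a minimal sketch of that workflow (dataset names are placeholders), one common approach is to estimate the ACF parameters from each subject's first-level residuals with 3dFWHMx, average the parameters across subjects, and pass the averages to 3dClustSim:

    # estimate ACF parameters (a, b, c) from each subject's residuals
    # (the errts output of 3dREMLfit); NULL skips writing the ACF file
    for subj in sub01 sub02 sub03; do
        3dFWHMx -acf NULL -mask group_mask+tlrc \
                -input errts.${subj}_REML+tlrc >> acf_all.txt
    done

    # average a, b, c across subjects, then hand the averages to
    # 3dClustSim; the three values below are purely illustrative
    3dClustSim -acf 0.55 3.0 12.0 -mask group_mask+tlrc \
               -pthr 0.001 0.005 -athr 0.05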
Is it correct to use 3dClustSim in a small volume (e.g., the VTA, which has 40 voxels)? I would think that a peak-level analysis (like SVC in SPM) would be more appropriate here.
The correctness of SVC is likely in the eye of the beholder. To be honest, I personally consider the SVC approach controversial and would rather not use it. Multiplicity is a complicated issue, and not just in neuroimaging: even though some common methods have been proposed and adopted in the field, controversies remain. As discussed in the paper I mentioned earlier, if the analysis is performed at the whole-brain level, I would present as much information as possible at the whole-brain level. You can highlight and discuss the regions you would like to focus on, but it benefits the field and promotes reproducibility not to limit the reported results to a smaller subset of the data.
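As a minimal sketch of that approach (dataset and mask names are placeholders), a whole-brain group test with the cluster simulation built into 3dttest++ might look something like the following; the preregistered ROIs can then be highlighted and discussed within the resulting whole-brain map:

    # whole-brain group t-test; -Clustsim generates null datasets by
    # randomizing/permuting the residuals and runs 3dClustSim on them
    3dttest++ -setA sub*.beta+tlrc     \
              -mask group_mask+tlrc    \
              -prefix TTest_wholebrain \
              -Clustsim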