I am kind of new to neuro-imaging and AFNI, so please pardon the many basic questions.
I ran the proc script with both FDR and 3dClustSim and got the results attached to stats_REML. I am now looking at the stats_REML data in the GUI.
My questions:
Is the p-value under the threshold slider bar corrected or uncorrected?
If I didn’t run the Monte Carlo simulation with 3dClustSim, that would be the uncorrected p-value. Since I ran it and the results were attached to the stats_REML data, that should be the corrected p-value. Am I right?
If it is the uncorrected p, and I want to use the FWE-corrected p (height thresholding) before applying the cluster-size threshold (extent thresholding), what should I do? Where can I get the FWE-corrected p-value?
The other way around: if it is the corrected p, and I want to use the uncorrected p before applying the cluster-size threshold, where can I get the uncorrected p-value?
Related to questions 2) and 3), which way is recommended? Is it better to go with the uncorrected or corrected p at the voxel level?
The last column in the Cluster table is for the Alpha value. Is this Alpha the one calculated when I run 3dClustSim, or is it something different?
Also, I understand that we have to find clusters with Alpha <= .05. In my results (after using Clusterize), I get Alpha > .10 for all clusters in the list, even when p was set to .05. What should one report in this case?
In the cluster results for one run, I see this line: “Min FDR q in threshold = 0.62: True detections are rare or weak”. For another run, I got “Min FDR q in threshold = 0.60: True detections are rare or weak”. People typically use a cutoff q-value of .05. Does this mean FDR is too conservative and I should not use it to look at the change between the 2 runs? Or could I just set the FDR q cutoff to 0.62 for both runs and compare the activation anyway?
I very much appreciate your time clarifying these points.
Duong
The p-value under the threshold slider is uncorrected,
while the q-value is FDR corrected.
3dClustSim correction gives p-values that apply to the
clusters, not to individual voxels. If those FWE results
are attached, you should see corrected p-values for the
clusters in the Clusterize->Rpt (report) interface.
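As an aside, if a stats dataset ever ends up without its
FDR curves (so no q-values show under the slider), they
can be added back with 3drefit. A minimal sketch, using a
placeholder dataset name:

   3drefit -addFDR stats.subj_REML+tlrc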
First apply your uncorrected p-value with the threshold
slider bar, then use the Clusterize report interface to
limit the results to clusters of some appropriate size (or
do not bother to limit them below 20, and just verify the
corrected p-values in the Rpt interface).
From the Rpt interface there is a “3dclust” button that
will output a corresponding command in the terminal window
(i.e. a command to reproduce your results without using
the afni GUI). For this, be sure to apply an appropriate
minimum cluster volume.
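For reference, the command that button writes out looks
roughly like the example below; the sub-brick indices,
threshold and cluster size are only placeholders for
whatever your settings happen to be:

   3dclust -1Dformat -nosum -1dindex 1 -1tindex 2       \
           -2thresh -3.31 3.31 -dxyz=1 1.01 40          \
           stats.subj_REML+tlrc.HEAD

Here -1dindex/-1tindex select the data and threshold
sub-bricks, -2thresh applies a two-sided threshold to the
t-values, and with -dxyz=1 the trailing "1.01 40" means
faces-must-touch clustering with a minimum of 40 voxels.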
n/a
The voxel-level threshold is always the uncorrected
p-value; the corrected p-values apply to the clusters.
The top row of the cluster table should show alpha
values. The last column would refer to AlphaSim, which
is antiquated.
If Clusterize reports alpha values > 0.1, then none are
surviving. The p set at 0.05 is uncorrected (and is
probably too liberal; more common and accepted is an
uncorrected p of 0.001, or at least down to 0.005).
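If you want to re-check the cluster requirement at a
stricter voxel-wise p, 3dClustSim can be re-run with your
smoothness estimates (the ACF parameters come from running
3dFWHMx -acf on the residuals; the numbers and mask name
below are just placeholders):

   3dClustSim -mask mask_group+tlrc -acf 0.7 2.7 12.0   \
              -pthr 0.01 0.005 0.002 0.001              \
              -athr 0.10 0.05 -prefix ClustSim.ACF

The output tables then list, for each per-voxel p in
-pthr, the minimum cluster size needed to reach the
corrected alpha levels in -athr.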
Those FDR warnings imply that there are too few voxels
at low p-values to even reach q = 0.05, at least for
those individual tests.
Thank you Rick for your comments/suggestions.
I have a few more questions.
You said the voxel-level threshold is always the uncorrected p-value, and the corrected p-values apply to the clusters. Does this apply to both FWE (p) and FDR (q)? I am wondering: if I use FDR (the q under the slider bar) instead of FWE, do I need to set a limit for cluster size afterwards?
Maybe I am missing something important here?
My understanding is that, if one has already applied an appropriate correction for multiple comparisons (either FWE or FDR) at the voxel level, one is justified in interpreting anything that shows up as significant as “significant”, with no need for further corrections. Am I right?
I have read the paper “Cluster-extent based thresholding in fMRI analyses: Pitfalls and recommendations” by Choong-Wan Woo et al. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4214144/), in which they state that cluster-extent based thresholding provides low spatial specificity; researchers can only infer that there is signal somewhere within a significant cluster and cannot make inferences about the statistical significance of specific locations within the cluster.
With that in mind: if I want to investigate the evolution of brain activation within some ROI associated with some task over time, is it valid to measure and look at the change in the number of voxels of some active cluster within that ROI (with cluster-extent correction)?
On the one hand, I think that even if I see a bigger cluster (say 500 voxels) at time t2 compared with 400 voxels at a previous time point t1 (same location), I still cannot conclude that there is an effect of (say) ‘learning’ in that ROI. This is because, as suggested in the paper above, nobody can be sure whether the additional 100 voxels are actually active. On the other hand, I feel that it would be valid if I could observe a consistent trend over multiple time points, e.g. 400 voxels at t1, 500 at t2, 700 at t3, 1000 at t4, etc.
What would you suggest? (I hope the question is clear)
FDR does not use clustering. Simply setting the slider
to q=0.05 provides FDR-corrected results. The q-value
corresponds to a different p-value for each volume of data.
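To see that mapping outside the GUI, there is the fdrval
program, which reads the FDR curves stored in the dataset
header. If I have the usage right (check fdrval -help),
something like the following prints the q corresponding
to a given threshold on a given sub-brick (the index and
value here are placeholders):

   fdrval stats.run1_REML+tlrc 2 3.2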
If you want to measure the evolution of voxels within some
ROI, then typical clustering across the volume does not seem
appropriate. Using a localizer would make more sense.
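For the mechanics of counting suprathreshold voxels
inside an ROI mask (however the thresholding itself is
justified), one rough sketch, with placeholder dataset
names, sub-brick index and threshold:

   # t-stat assumed in sub-brick #2, ROI mask in ROI+tlrc
   3dcalc -a stats.t1_REML+tlrc'[2]' -b ROI+tlrc        \
          -expr 'step(b)*step(abs(a)-3.31)' -prefix act.t1
   3dBrickStat -count -non-zero act.t1+tlrc

That would be repeated per time point, but whether the
comparison is meaningful is the localizer/ROI question
above.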
rick