In the current version of AFNI, the errts files coming out of 3dDeconvolve keep the censored TRs as zero-filled volumes. Will this affect the smoothness estimates from 3dFWHMx? As far as I know, you can’t use the censor file in 3dFWHMx.
That is right, the all-zero volumes should not be included
when estimating the blur. Consider how it is handled in an
afni_proc.py script, like AFNI_data6/FT_analysis/s15.proc.FT.uber.
That uses 1d_tool.py to get the time points per run which are
not censored, before running 3dFWHMx, e.g.:
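(Roughly like this sketch; the exact file and mask names come from the proc script and may differ:)

    # estimate the blur from only the uncensored TRs of the errts series
    foreach run ( $runs )
        # get the list of TRs in this run that were NOT censored
        set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                              -show_trs_run $run`
        if ( $trs == "" ) continue
        # estimate the ACF parameters from just those TRs
        3dFWHMx -detrend -mask full_mask.$subj+tlrc \
                -ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D \
                errts.$subj+tlrc"[$trs]" >> blur.errts.1D
    end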
To avoid bothering with handling the censored TRs myself, I will in the future just use
-regress_est_blur_errts
in the pre-processing. This seems to give me the ACF for my errts files (and errts_REML if ANATICOR is used) automatically.
However, the afni_proc.py script uses full_mask.$subj+tlrc as the 3dFWHMx mask. Why not the mask_epi_anat.$subj mask? That is the mask we combine across all subjects and use in the group analysis via:
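e.g. a 3dmask_tool call along these lines (the -frac value and output name here are just illustrative):

    # combine the per-subject masks into a group overlap mask
    3dmask_tool -input mask_epi_anat.*+tlrc.HEAD -frac 0.7 \
                -prefix group_mask_epi_anat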
I can take this opportunity to ask what kind of masks we should use. We have, e.g., used a GM mask when running 3dClustSim to reduce the number of voxels to include.
This is how we have done it (sketched in commands below):
3dFWHMx: used the individual mask_epi_anat masks (can we use a GM mask for smoothness too?)
3dClustSim: a GM mask re-sampled to the errts/stats grid OR the group-combined mask_epi_anat (see above)
3dttest++/3dMVM/etc.: the group-combined mask_epi_anat (see above)
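A rough sketch of that pipeline, with placeholder dataset names, and with a, b, c standing for the group-averaged ACF parameters from 3dFWHMx:

    # 1) per-subject smoothness (ACF) within the subject's mask
    #    (restrict to uncensored TRs via "[$trs]", as in the loop above)
    3dFWHMx -detrend -mask mask_epi_anat.$subj+tlrc -acf NULL \
            errts.$subj+tlrc"[$trs]"
    # 2) cluster-size thresholds from the group-averaged ACF (a, b, c),
    #    within the combined group mask
    3dClustSim -mask group_mask_epi_anat+tlrc -acf $a $b $c
    # 3) group test within the same mask (con_*.nii.gz = one contrast
    #    volume per subject; placeholder names)
    3dttest++ -setA con_*.nii.gz -mask group_mask_epi_anat+tlrc \
              -prefix ttest_result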
Is this all right?
PS: Bonus scenario:
If we do a small-volume-corrected / ROI-based analysis (e.g. just looking at the thalamus), using a thalamus mask both for our statistical test and in 3dClustSim, do we use only the ROI when we assess the ACF parameters too? I.e., the smoothness in the ROI only, or should we use the ACF from the whole brain?
Sure, I think adding ‘-mask_epi_anat yes’ will have the
mask_epi_anat dataset used for 3dClustSim and such.
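Combined with the blur estimation discussed above, the relevant
afni_proc.py fragment might look like (other options omitted):

    # hypothetical fragment; only the two relevant options are shown
    afni_proc.py ...                      \
        -mask_epi_anat yes                \
        -regress_est_blur_errts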
I would not use a GM mask for smoothness estimates. Well,
Bob would know better, but I would be leery of that.
3dttest++/etc should use the same group mask as 3dClustSim.
I am not sure how stable the smoothness estimates would be
for a small ROI. Give it a try and compare with the whole
brain. In any case, it should be good enough to use the
whole brain estimates.
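E.g., compare something along these lines (the mask and errts names
are placeholders):

    # ACF from the ROI vs the whole brain
    # (exclude censored TRs via the "[$trs]" selector, as before)
    3dFWHMx -detrend -mask thalamus_mask+tlrc -acf NULL \
            errts.$subj+tlrc"[$trs]"
    3dFWHMx -detrend -mask mask_epi_anat.$subj+tlrc -acf NULL \
            errts.$subj+tlrc"[$trs]"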
But that makes all the results go away
(Half joking).
Btw, SPM by default uses FWE, which tends to provide pretty good results, and it’s an OK approach according to the Eklund paper, avoiding the cluster issues.
AFNI’s “viewer default” is FDR, which “never” (through the q-value) shows significance. So we have to use 3dFWHMx + 3dClustSim, or 3dttest++ with -Clustsim/-ETAC (which is always super conservative).
Why doesn’t AFNI at least offer FWE like SPM? What’s the harm, since it’s OK according to Eklund and some people seem to be successful with it?
Thanks!
We are an AFNI lab and we really like “the AFNI way”; it makes us feel in control when we can avoid toolboxes that simply spit things out for you (results that we think look a bit toooo good).
The reason for me asking so bluntly is that it’s frustrating when we keep seeing reasonable findings disappear when applying corrections, while for similar tasks, other groups at the same university, on the same MRI scanner, using the same sequences, have SPM simply spit out nice results and say they are FWE corrected (hence avoiding all the cluster business). And I really wonder why you do not use FWE, so I can tell users at the lab when they wonder about it. It’s not an accusation, sorry about the tone!
What you said:
“If we actually believed what you suggest, we probably would be offering different methods”.
was the answer I was looking for. Since my knowledge of stats is not what it could be, I just wanted to make sure we are not missing out on something.
I’d rather be correct than have questionable “sweet-looking maps” from a toolbox.
Did not mean to sound confrontational, thanks a bunch! I know this topic is a hot one…
Your tone wasn’t bad. It is more that I could hardly
respond without having to grumble about software and
publications.
AFNI does use FWE correction, just using Monte Carlo
simulations, rather than random field theory (requiring
data to have a blur of several voxels), as SPM does.
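For example, 3dttest++ can do the simulations itself via its
-Clustsim option (a minimal sketch; dataset names are placeholders):

    # FWE-corrected cluster thresholds from simulations on the residuals
    3dttest++ -setA con_*.nii.gz -mask group_mask_epi_anat+tlrc \
              -prefix tt_result -Clustsim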
If there is a large discrepancy between analyses with the
same data, that would be concerning. It would take time
but would be educational for both groups if you were to
duplicate one study between them (same data).
There are places for accidental cheating though, such as
using pairs of 1-sided tests in place of single 2-sided
tests, without adjusting p-values. Comparing analyses is
probably the only good way to understand a discrepancy.
Yes, I want to analyze their data and run 3dFWHMx + 3dClustSim (this method seems to be a bit less conservative than 3dttest++ -Clustsim or -ETAC) and see if we get the same results.
I suspect that pairs of 1-sided t-tests have been used.
Is that when, e.g., you have two conditions (e.g. IAPS images and shapes) and two groups (controls and patients) and you would run:
1 t-test of IAPS vs shapes for controls
1 t-test of IAPS vs shapes for patients
followed by a t-test on the coefficients coming from these two t-tests, and thus missing the variance between the groups? Or what do you refer to?
Also, how would you adjust the p-values?
Thanks! I will get back to you if we run this test!
I was commenting on a simpler and more common case, where
positives are reported in one image, and negatives in another.
As 2 tests, both the p-values and alpha values should be halved
(e.g. a 2-sided test at p = 0.01, alpha = 0.05 becomes a pair of
1-sided tests at p = 0.005, alpha = 0.025 each).
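In 3dClustSim terms, the adjustment might look like this sketch
(a, b, c are the ACF parameters from 3dFWHMx; mask name is a placeholder):

    # thresholds for a single 2-sided test:
    3dClustSim -mask group_mask+tlrc -acf $a $b $c -pthr 0.01  -athr 0.05
    # equivalent pair of 1-sided tests: halve both values
    3dClustSim -mask group_mask+tlrc -acf $a $b $c -pthr 0.005 -athr 0.025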