The reason we don’t do spatial smoothing is that we want to define fROIs on GM more precisely for later ROI-based analysis in subject space. But when I skip the blur block in afni_proc.py, I run into some problems:
1. Why do the Coef values become so big? Some voxels reach 10-20, and the AFNI autoRange can be 100, while the typical autoRange with spatial smoothing is less than 10.
2. Is it still necessary to run blur estimation? Why is the estimated FWHM bigger than the voxel size (2.78 > 2, using the -regress_est_blur_errts option)?
3. Does it influence REML, or is REML still needed? The difference in activation between REML and 3dDeconvolve seems bigger than it is with spatial smoothing.
1. Why do the Coef values become so big? Some voxels reach 10-20, and the AFNI autoRange can be 100, while the typical autoRange with spatial smoothing is less than 10.
→ Where is it so big? In CSF, outside the brain, or at least in non-GM? And is your dataset rs-fMRI or task-based (and perhaps event-related)? autoRange is set by the largest value in the volume, which could easily be quite large due to noise, particularly in non-brain material; it might not be an issue at all in brain/GM. In short, MRI data are very noisy. Averaging over ROIs will also lend a smoothing aspect that reduces individual very large values.
Additionally, I would guess that the uncertainty in voxels with such large coefficients would be very big, so they are likely not “significant” anyway.
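As a toy illustration of that last point (purely hypothetical numbers, nothing to do with your dataset): autoRange is driven by the single most extreme voxel, while an ROI average pulls a handful of noise-driven extremes back toward the bulk of the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "ROI": 100 voxel coefficients, mostly around 1-2, plus a few
# noise-driven extreme values (hypothetical, not real data)
coefs = rng.normal(1.5, 0.5, size=100)
coefs[:3] = [15.0, -12.0, 20.0]  # a few very noisy voxels

print(coefs.max())   # single-voxel extreme -> this is what sets autoRange
print(coefs.mean())  # ROI average stays close to the bulk of the values
```

So a scary-looking autoRange and a perfectly reasonable ROI-level summary can coexist.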
Re:
2. Is it still necessary to run blur estimation? Why is the estimated FWHM bigger than the voxel size (2.78 > 2, using the -regress_est_blur_errts option)?
→ It depends on what you want to do: note that there is inherent smoothness in FMRI data even without additional, user-set blurring. No voxel is totally isolated (in signal processing, the concept of a point spread function describes how a point source gets spread out over several detectors; note also that FMRI is acquired in k-space, so each acquired “data point” is actually spread out over a whole slice).
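One way to see that preprocessing alone adds smoothness (a toy 1-D sketch, not your actual pipeline): white noise has essentially zero correlation between neighboring samples, but a half-sample linear-interpolation shift, of the kind slice-timing correction and volume registration perform when they resample the data, immediately correlates adjacent samples, so a smoothness estimator will report an FWHM above one voxel even with no blur block.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

# lag-1 correlation of raw white noise: essentially zero
r_raw = np.corrcoef(x[:-1], x[1:])[0, 1]

# shift by half a sample with linear interpolation, analogous to what
# resampling during tshift/volreg does to the data
x_shift = 0.5 * (x[:-1] + x[1:])
r_shift = np.corrcoef(x_shift[:-1], x_shift[1:])[0, 1]

print(round(r_raw, 3), round(r_shift, 3))  # ~0.0 vs ~0.5
```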
3. Does it influence REML, or is REML still needed? The difference in activation between REML and 3dDeconvolve seems bigger than it is with spatial smoothing.
→ Perhaps someone else can weigh in on this, but I think you would still run REML. REML is for temporal autocorrelation (i.e., along/within the time series itself), which is a different thing from spatial smoothing (although I would bet spatial smoothing affects it a bit by blurring away spikes, etc.; however, that is not so important to your question here). Anyway, I don’t know how you plan to use the t-stats/uncertainty info from REML at the ROI-averaging level, but my guess is that it is more accurate to use it.
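To make the “inflated t-values without prewhitening” point concrete, here is a small simulation (pure numpy, with a hypothetical block design; this is not AFNI’s actual REML machinery, which fits an ARMA(1,1) model per voxel): with AR(1) noise and a slow block regressor, the nominal OLS standard error underestimates the true sampling variability of the beta, so OLS t-stats run high.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical slow block design: alternating 20-TR off/on blocks
x = np.tile(np.r_[np.zeros(20), np.ones(20)], 5)
X = np.column_stack([np.ones(n), x])

def ar1_noise(rho, n, rng):
    """AR(1) noise: e[t] = rho * e[t-1] + white noise."""
    w = rng.standard_normal(n)
    e = np.empty(n)
    e[0] = w[0]
    for t in range(1, n):
        e[t] = rho * e[t - 1] + w[t]
    return e

rho = 0.5
betas, nominal_se = [], []
for _ in range(1000):
    y = 1.0 * x + ar1_noise(rho, n, rng)
    bhat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ bhat
    sigma2 = resid @ resid / (n - X.shape[1])  # OLS residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # covariance assuming iid noise
    betas.append(bhat[1])
    nominal_se.append(np.sqrt(cov[1, 1]))

claimed = np.mean(nominal_se)  # what OLS reports as the beta's SE
actual = np.std(betas)         # how much the beta really varies
print(f"nominal OLS SE: {claimed:.3f}   empirical SD: {actual:.3f}")
```

The empirical SD comes out clearly larger than the nominal SE, which is exactly the gap a prewhitening fit like 3dREMLfit is meant to close.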
Sorry for the unclear questions. This is block-design task-fMRI data, and the voxels I mentioned are in the fROIs. Thanks for your explanation that noise makes autoRange large. But I still don’t know why the Coef values are so big (>10) in these fROIs. The t-value (one stimulus vs. another stimulus) is also high, but I think that means more significant? I have attached a screenshot: 3dDeconvolve results in the upper panel, REML results in the lower panel. PS: I used the despike, tshift, align, volreg, mask, scale, and regress blocks in afni_proc.py, and both the demean and deriv motion regressors were included, though this subject’s head motion is very small (while the Coef values for a subject with more head motion are normal: ~2 and < 5, without spatial smoothing, with a large autoRange).
I use these fROIs to: (1) extract values in GM and then do some correlation and group comparisons; (2) define ROIs for DTI and tractography. Some papers (e.g., Grill-Spector 2017, 2018) emphasize not doing spatial smoothing for similar analyses. I am also not sure whether to use the fROIs clustered from the 3dDeconvolve or the REML activation map; those from the latter look a little smaller. I would be very grateful for your advice.
Would you please show us the -regress_* options in the afni_proc.py command, or maybe the entire command?
The difference in the t-stats is expected. OLSQ (3dDeconvolve) leads to inflated t-values (which most people don’t care about, since the betas are taken to group analysis).
subject ID : subj.orig.noblur
AFNI version : AFNI_18.3.02
AFNI package : linux_ubuntu_16_64
TRs removed (per run) : 2
num stim classes provided : 4
final anatomy dset : anat_final.subj.orig.noblur+orig.HEAD
final stats dset : stats.subj.orig.noblur_REML+orig.HEAD
final voxel resolution : 2.000000 2.000000 2.000000
motion limit : 0.4
num TRs above mot limit : 0
average motion (per TR) : 0.0730955
average censored motion : 0.0730955
max motion displacement : 0.567598
max censored displacement : 0.567598
outlier limit : 0.1
average outlier frac (TR) : 0.000343978
num TRs above out limit : 0
num runs found : 2
num TRs per run : 279 279
num TRs per run (applied) : 279 279
num TRs per run (censored): 0 0
fraction censored per run : 0 0
TRs total (uncensored) : 558
TRs total : 558
degrees of freedom used : 34
degrees of freedom left : 524
TRs censored : 0
censor fraction : 0.000000
num regs of interest : 4
num TRs per stim (orig) : 128 128 128 128
num TRs censored per stim : 0 0 0 0
fraction TRs censored : 0.000 0.000 0.000 0.000
ave mot per sresp (orig) : 0.076635 0.070728 0.074085 0.068345
ave mot per sresp (cens) : 0.076635 0.070728 0.074085 0.068345
TSNR average : 42.6838
global correlation (GCOR) : 0.00363595
anat/EPI mask Dice coef : 0.908329
maximum F-stat (masked) : 157.677
blur estimates (ACF) : 0.985472 1.25728 9.03276
blur estimates (FWHM) : 0 0 0
Besides,
The difference in the t-stats is expected. OLSQ (3dDeconvolve) leads to inflated t-values (which most people don’t care about, since the betas are taken to group analysis).
In some papers (e.g., Grill-Spector 2017), the mean t-value is extracted from an fROI to represent the “selectivity” of that fROI. In that case, should I use the REML results?
Thanks,
2086