large significant cluster

Hello AFNI experts,

When I viewed the cluster results after running 3dRegAna, there was one large significant cluster of 9190 voxels covering both the frontal and parietal lobes. The image for each subject was smoothed at 8 mm. How can I get AFNI to report the results in reasonably restricted areas rather than as one big chunk? Thanks!

Best,
Veda

Veda,

It’s hard to say why your results lack specificity, since you didn’t provide much contextual information. In addition to the statistic values, you may want to take a close look at the effect estimates in those regions:

http://biorxiv.org/content/early/2016/07/22/064212

Hi Gang,

Thanks for the response.
I checked the results at the single-subject level and found the same thing (a large significant chunk). The table below was extracted from the results of a simple t-test (condition A vs. baseline) for one subject at a p-value of 0.05.

AFNI interactive cluster table

3dclust -1Dformat -nosum -1dindex 10 -1tindex 11 -2thresh -1.967 1.967 -inmask -dxyz=1 1.01 20 /Volumes

#Coordinate order = RAI
#Voxels CM x CM y CM z Peak x Peak y Peak z
#------ ------ ------ ------ ------ ------ ------
6082 +29.9 +18.9 +13.6 +49.5 -19.5 -0.5 (a big chunk again)
1502 -22.0 +76.3 -10.2 -34.5 +70.5 -15.5

I don’t think the problem is specific to a particular statistical analysis, because I found the same thing in single-subject analysis with a t-test and in group analysis with either a t-test or regression.

I have included my first-level analysis command for each subject below. Please let me know if there is any other information I can offer. Thanks!

afni_proc.py \
    -dsets XXX \
    -blocks tshift align tlrc volreg blur mask regress \
    -copy_anat XXX \
    -anat_has_skull no \
    -tcat_remove_first_trs XXX \
    -volreg_align_e2a \
    -volreg_tlrc_warp -tlrc_NL_warp \
    -tshift_opts_ts -tpattern alt+z2 \
    -blur_size 8 -volreg_align_to first -volreg_warp_dxyz 3 \
    -align_opts_aea -giant_move \
    -regress_stim_times XXX.txt \
    -regress_stim_labels Condition1 Condition2 \
    -regress_est_blur_epits \
    -regress_est_blur_errts \
    -regress_reml_exec \
    -regress_local_times \
    -regress_basis 'dmBLOCK(0)' \
    -regress_stim_types AM1 \
    -regress_censor_outliers 0.1 \
    -regress_censor_motion 0.3 \
    -regress_opts_3dD \
        -stim_times_subtract 12 \
        -num_glt XXX \
        -gltsym XXX \
    -bash -execute

Veda

Veda, what I was trying to say in my previous response is that scientific investigation is not just about statistical significance. You may also want to take a close look at the effect estimates and see whether the results make sense from that perspective. That’s why I included the following discussion: http://biorxiv.org/content/early/2016/07/22/064212

Hi Gang,

With all due respect to your previous suggestion, my main concern is why AFNI treated the frontal and parietal lobes as one big cluster without respecting the anatomical boundary. I thought the problem might be more about smoothing than about effect size. Would you agree?

Hi Veda,

Based on the analysis, focal regions are presumably
clustering together. There is nothing strange about
that, though it is useful to see how they break up at
stricter thresholds.
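Re-thresholding can reuse the same 3dclust call shown earlier in the thread; here is a sketch, where the dataset name is a placeholder and |t| > 3.29 corresponds to roughly two-sided p < 0.001 at large degrees of freedom:

```shell
# Same clustering as before, but with a stricter two-sided threshold
# (placeholder dataset name):
3dclust -1Dformat -nosum -1dindex 10 -1tindex 11 \
        -2thresh -3.29 3.29 -inmask -dxyz=1 1.01 20 stats.subj+tlrc
```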

A nice way to report how clusters overlap with regions
is using ‘whereami -omask’. That will report how each
cluster in the omask dataset overlaps with various
regions for an atlas (or a list of atlases). That is a
useful way to describe clusters in general.
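A minimal sketch of that workflow, assuming the clusters come from the 3dclust command quoted earlier in the thread (the dataset and mask names are placeholders):

```shell
# Save the clusters as an integer-labeled mask dataset:
3dclust -1Dformat -nosum -savemask Clust_mask \
        -1dindex 10 -1tindex 11 -2thresh -1.967 1.967 \
        -inmask -dxyz=1 1.01 20 stats.subj+tlrc

# Report how each cluster in the mask overlaps with atlas regions:
whereami -omask Clust_mask+tlrc
```

The same mask can also be loaded as an overlay in the AFNI GUI to inspect each labeled cluster visually.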

There are also programs like 3dExtrema and 3dmaxima
that can find local extrema separated by at least
some given distance.
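For instance, a rough sketch with 3dExtrema, reusing the sub-brick index and threshold from the 3dclust call above (the dataset name is a placeholder):

```shell
# Find local maxima at least 20 mm apart in the t-stat sub-brick,
# ignoring voxels below the threshold (placeholder dataset name):
3dExtrema -maxima -volume -closure \
          -sep_dist 20 -data_thr 1.967 \
          'stats.subj+tlrc[11]'
```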

Though to back up a bit, please feel free to provide
more details about exactly what you would like to
report.

  • rick

> Why did AFNI treat the frontal and parietal lobes as one big cluster
> without respecting the anatomical boundary? I thought the problem might
> be more about smoothing than about effect size.

There are many potential reasons why regions fail to separate. The group analysis program can only work with whatever the user feeds it, and cannot achieve anything beyond that. In other words, it cannot detect preprocessing issues such as over-smoothing, poor or suboptimal alignment, or non-BOLD signal such as head motion.
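One quick check along those lines is the smoothness actually present in the data. Since the afni_proc.py command above includes -regress_est_blur_errts, the estimates are already computed by the proc script, but 3dFWHMx can also be run directly; a sketch, with placeholder file names:

```shell
# Estimate the effective smoothness of the residual time series;
# values much larger than the applied 8 mm blur would suggest extra
# smoothness introduced elsewhere (placeholder file names):
3dFWHMx -detrend -mask full_mask.subj+tlrc -input errts.subj+tlrc
```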