I’m new to AFNI, so please forgive me if there’s an obvious error on my part. I am trying to do a 2-sample t-test with 3dttest++ and want to use ETAC for cluster correction. I have two groups, depressed (MDD) and control (HC) subjects, so this group analysis compares the mean difference MDD - HC.
When I look at the 3dttest++ output file, specifically the BRIK that displays the mean difference between groups (MDD - HC), there are both positive and negative differences across the brain. See the image below (apologies for its large size):
Specifically, in the selected voxel, the MDD - HC difference is positive. However, this same voxel is included as significant in the 1-sided t-test 1neg mask that ETAC gave:
I checked the 3dttest++ outputs from the smoothed data to see if this is an artifact of smoothing, but this particular voxel seems to have positive differences at all of the smoothing levels I gave to ETAC (0, 4, 6, 8).
Okay, I ran it using those binaries, and the positive and negative results still split correctly. Note that an empty result is not necessarily bad; there might simply be nothing surviving for one of the tests. So I cannot tell what might be problematic with your results.
Hmm thanks for checking! I’ll go through the processing steps in case I did anything wrong beforehand.
As I stated, there are 2 groups (MDD and HC). There are two fMRI sessions per subject. In each session, there are 2 runs of the task.
Here are the processing steps I do per subject per session:
[ol]
[li] Run fmriprep
[/li][li] Preprocess: smooth, remove the first 5 TRs, z-score each voxel over time (a sketch of this step follows the list)
[/li][li] Run AFNI 3dDeconvolve across both runs of the task using the command:
[/li] cmd = ("3dDeconvolve -polort A "
"-input "
“{0}/{1}/{2}/{1}_{2}task-faces_rec-uncorrected_run-01_bold_space-MNI152NLin2009cAsym_preproc.nii.gz "
"{0}/{1}/{2}/{1}{2}task-faces_rec-uncorrected_run-02_bold_space-MNI152NLin2009cAsym_preproc.nii.gz "
"-mask {8} "
"-num_glt 2 "
"-local_times -num_stimts 4 "
" {3} {4} {5} {6} "
"-gltsym ‘SYM: +happy -neutral’ -glt_label 1 ‘happyvsneut’ "
"-gltsym ‘SYM: +fearful -neutral’ -glt_label 2 ‘fearvsneut’ "
"-ortvec {7} "
"-fout -tout -x1D {9}/{1}/{2}/{1}{2}task-faces_glm.X.xmat.1D " # this is the actual design matrix
"-xjpeg {9}/{1}/{2}/{1}{2}task-faces_glm.X.jpg " # this is the design matrix
"-fitts {9}/{1}/{2}/{1}{2}task-faces_glm.fitts " # model prediction - betas * signal
"-errts {9}/{1}/{2}/{1}{2}task-faces.errts " # residuals
"-bucket {9}/{1}/{2}/{1}{2}_task-faces_glm.stats”.format(run_path,
bids_id, ses_id, neutral_reg, object_reg, happy_reg, fearful_reg,full_save_path,whole_brain_mask,analyses_out ))
[/ol]
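For reference, step 2 amounts to something like the following AFNI commands (a rough sketch with made-up file names, and a 6 mm kernel chosen only as an example; the actual tools and parameters may differ):
# smooth the preprocessed run
3dmerge -1blur_fwhm 6 -doall -prefix blurred.nii.gz preproc_bold.nii.gz
# remove the first 5 TRs
3dTcat -prefix trimmed.nii.gz 'blurred.nii.gz[5..$]'
# z-score each voxel over time: (signal - mean) / stdev
3dTstat -mean  -prefix mean.nii.gz  trimmed.nii.gz
3dTstat -stdev -prefix stdev.nii.gz trimmed.nii.gz
3dcalc -a trimmed.nii.gz -b mean.nii.gz -c stdev.nii.gz -expr '(a-b)/c' -prefix zscored.nii.gz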
And then I have the following steps for group level analysis:
[ol]
[li] Subtract session 3 - session 1 betas for each subject using 3dcalc
[/li] cmd = ("3dcalc -a {0} ".format(subject_stats_2) +
"-b {0} ".format(subject_stats_1) +
“-expr ‘a-b’ -prefix {0}”.format(output))
[li] Calculate covariates from framewise displacement (I think I did this right, but just to be sure: the first column lists the subject label that’s used later in 3dttest++, and the order of the subject rows in the covariate file is not the same as the order of the subjects in the 3dttest++ command; an example layout follows this list)
[/li][li] Run 3dttest++ on the between-session differences across groups, with the following command:
[/li]command = ("3dttest++ -setA {0} ".format(MDD_subj_str) +
"-setB {0} ".format(HC_subj_str) +
"-prefix {0}/ses-03_minus_ses-01/ses-03_minus_ses-01_stats_{1}ACC_dlPFC_mask.ttest.nii.gz ".format(second_level,BRIK_KEY[BRIK]) +
"-AminusB "
"-mask {0} ".format(dlPFC_mask) +
"-covariates {0} ".format(covar_file) +
"-prefix_clustsim ses-03_minus_ses-01_stats{0}_ACC_dlPFC_mask ".format(BRIK_KEY[BRIK]) +
"-ETAC -ETAC_blur 0 4 6 8 " +
“-ETAC_opt NN=2:sid=1:hpow=0:name=test1:pthr=0.01/0.001/10:fpr=5”
)
[/ol]
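Just to illustrate the covariate file layout (the subject labels and values here are made up), it looks something like this:
subj     meanFD
sub-001  0.12
sub-007  0.08
sub-013  0.21
sub-020  0.15
The first row names the covariate, and each later row starts with the subject label used in -setA/-setB.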
I checked the results from running 3dttest++ and ETAC on individual sessions instead of on the subtracted data across sessions, and I still see the inconsistency between the 3dttest++ results and the ETAC one-sided masks. So I’m not sure what I’m doing wrong here.
Since I am a bit at a loss here, let me suggest trying two things for comparison.
[ol]
[li] Remove the extra ETAC options, and simply add -ETAC to the normal 3dttest++ command (a schematic follows below).
[/li][li] Try this with our data: download AFNI_data6.tgz and add -ETAC to an existing 3dttest++ script.
[/li][/ol]
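For the first suggestion, the command reduces to something like this (schematic only, with placeholder dataset names):
# ETAC with all default options (placeholder inputs)
3dttest++ -setA MDD_diff_*.nii.gz -setB HC_diff_*.nii.gz -AminusB \
          -mask dlPFC_mask.nii.gz -prefix simple_etac -ETAC
And for the second suggestion: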
afni_open -aw AFNI_data6.tgz
tar xfz AFNI_data6.tgz
cd AFNI_data6/group_results
# then edit script s6.ttest.covary, adding -ETAC to the 3dttest++ command, and run:
tcsh -x s6.ttest.covary
That would not take too long, and it would give you something to compare with.
While testing whether there are any inconsistencies between 3dttest++ and ETAC, I’m running into weird issues involving the negative outputs of the t-test. Maybe the issue is just how I’m using AFNI. First, I converted the stats dataset to NIFTI, keeping only the first BRIK.
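The conversion was along these lines (the input dataset name is an assumption on my part, matching the -prefix of the 3dttest++ command quoted later):
# convert sub-brick 0 (the mean difference) to NIFTI
3dAFNItoNIFTI -prefix stats_group_ACM.nii.gz 'stat.6.covary_ACM+tlrc[0]'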
Next, when I run the command
3dcalc -a stat.6.covary_ACM.test1.ETACmask.global.1neg.9perc.nii.gz -b stats_group_ACM.nii.gz -expr 'a*b' -prefix negmask_times_stats.nii.gz
to find the positive and negative z-scored voxels that are included in the negative mask from ETAC, I don’t get any nonzero voxels. However, I know this is incorrect: there are many voxels included in the ETAC 1neg mask that have negative group differences in the stats NIFTI file, so the multiplication should yield negative numbers for those voxels. Is this an issue with trying to combine AFNI data types with NIFTI?
Like I mentioned, I’m very new to AFNI, so I apologize if this is a basic error involving using niftis interchangeably with AFNI files.
I just realized that the weird multiplication issues were due to the ordering of -a and -b in 3dcalc.
If the mask is given first as -a and it is stored as bytes, then 3dcalc converts the output to that datum, so the negative values were being truncated to zero to fit into bytes. (It printed a warning that I should have paid attention to.)
With the stats dataset given first instead, the output datum follows it, and the multiplication outputs are correct.
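For anyone hitting the same thing, either of these avoids the truncation (same file names as above):
# put the float-valued stats dataset first, so the output datum follows its type
3dcalc -a stats_group_ACM.nii.gz -b stat.6.covary_ACM.test1.ETACmask.global.1neg.9perc.nii.gz \
       -expr 'a*b' -prefix negmask_times_stats.nii.gz
# or force floating-point output explicitly, regardless of operand order
3dcalc -a stat.6.covary_ACM.test1.ETACmask.global.1neg.9perc.nii.gz -b stats_group_ACM.nii.gz \
       -expr 'a*b' -datum float -prefix negmask_times_stats_float.nii.gz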
So now I can confirm that the ETAC 2-sided results are consistent with the 3dttest++ results.
However, I had to change hpow back to the default (2) in order to get significant results; I have hpow=0 in my code, so that may be causing an issue. I will also go back through my code and check whether operand ordering or datatype conversion explains the earlier confusion.
Gotcha, thanks! I’m still trying to understand what difference between my data and your data could cause this inconsistency.
I think the only differences between the data sets are:
[ul]
[li] I input blurring options
[/li][li] I run unpaired t-tests
[/li][li] All of my data inputs are nifti files in MNI space
[/li][/ul]
I’m having trouble looking into the effect of blurring, because when I don’t blur my data there are no longer any significant voxels, and when I try to blur your data I get an error.
Specifically, with 3 blur options in your data I get the error:
++ 3dXClustSim: AFNI version=AFNI_19.2.09 (Aug 5 2019) [64-bit]
++ Authored by: Lamont Cranston
++ Loading -insdat datasets
*+ WARNING: only 1612 input volumes, less than minimum of 10000
++ Single FPR goal: 9.0%
++ p-value thresholds: 0.0100 0.0090 0.0080 0.0070 0.0060 0.0050 0.0040 0.0030 0.0020 0.0010
++ min cluster size : 5 voxels
** FATAL ERROR: number of input volumes=1612 not evenly divisible by ncase=3
and with 2 blur options:
++ 3dXClustSim: AFNI version=AFNI_19.2.09 (Aug 5 2019) [64-bit]
++ Authored by: Lamont Cranston
++ Loading -insdat datasets
*+ WARNING: only 1091 input volumes, less than minimum of 10000
++ Single FPR goal: 9.0%
++ p-value thresholds: 0.0100 0.0090 0.0080 0.0070 0.0060 0.0050 0.0040 0.0030 0.0020 0.0010
++ min cluster size : 5 voxels
** FATAL ERROR: number of input volumes=1091 not evenly divisible by ncase=2
I’m confused about which “volumes” this number refers to and how to fix the error. I assume it means spatial volumes, but I used the mask in your directory, which appeared to be a whole-brain mask. Any advice you could give would be greatly appreciated! Again, I apologize if this is a basic usage issue.
Ah, I see. When I check the disk usage, it shows the drive is 81% full, so not entirely full. I’m going to try submitting the job through the scheduler instead of running it on the command line, to see if that helps with memory allocation.
Update: You were right; it was a memory problem running 3dttest++ on the command line. I submitted the job via the SGE scheduler and eventually got it to complete after increasing the memory limit. However, now there are no more significant voxels.
The exact command that I used was:
3dttest++ -prefix stat.6.covary_ACM -AminusB \
    -setA Vrel \
        FP 'OLSQ.FP.betas+tlrc.HEAD[Vrel#0_Coef]' FR 'OLSQ.FR.betas+tlrc.HEAD[Vrel#0_Coef]' \
        FT 'OLSQ.FT.betas+tlrc.HEAD[Vrel#0_Coef]' FV 'OLSQ.FV.betas+tlrc.HEAD[Vrel#0_Coef]' \
        FX 'OLSQ.FX.betas+tlrc.HEAD[Vrel#0_Coef]' GF 'OLSQ.GF.betas+tlrc.HEAD[Vrel#0_Coef]' \
        GG 'OLSQ.GG.betas+tlrc.HEAD[Vrel#0_Coef]' GI 'OLSQ.GI.betas+tlrc.HEAD[Vrel#0_Coef]' \
        GK 'OLSQ.GK.betas+tlrc.HEAD[Vrel#0_Coef]' GM 'OLSQ.GM.betas+tlrc.HEAD[Vrel#0_Coef]' \
    -setB Arel \
        FP 'OLSQ.FP.betas+tlrc.HEAD[Arel#0_Coef]' FR 'OLSQ.FR.betas+tlrc.HEAD[Arel#0_Coef]' \
        FT 'OLSQ.FT.betas+tlrc.HEAD[Arel#0_Coef]' FV 'OLSQ.FV.betas+tlrc.HEAD[Arel#0_Coef]' \
        FX 'OLSQ.FX.betas+tlrc.HEAD[Arel#0_Coef]' GF 'OLSQ.GF.betas+tlrc.HEAD[Arel#0_Coef]' \
        GG 'OLSQ.GG.betas+tlrc.HEAD[Arel#0_Coef]' GI 'OLSQ.GI.betas+tlrc.HEAD[Arel#0_Coef]' \
        GK 'OLSQ.GK.betas+tlrc.HEAD[Arel#0_Coef]' GM 'OLSQ.GM.betas+tlrc.HEAD[Arel#0_Coef]' \
    -paired -covariates covary.toe.gap.txt \
    -ETAC -ETAC_blur 0 4 \
    -ETAC_opt sid=1:hpow=0:name=test1:pthr=0.01/0.001/10:fpr=9 \
    -mask mask+tlrc
The purpose of this was to test whether the issue can be replicated on your data with blurring. I will try hpow=2 to check for significance, but if no voxels are significant, it is very difficult to debug when the exact same settings can’t be tested on both datasets.