Standardized EPI does not match with anatomical mask from same template

Hello,

We obviously already miss the AFNI bootcamp!
What I am trying to do: run 3dttest++ with an anatomical mask of the bilateral parietal lobes.
The error I get:


++ 3dttest++: AFNI version=AFNI_19.0.17 (Feb 22 2019) [64-bit]
++ Authored by: Zhark++
++ option -setA :: processing as LONG form (label label dset label dset ...)
++ have 20 volumes corresponding to option '-setA'
++ 189973 voxels in -mask dataset
** FATAL ERROR: -mask doesn't match datasets number of voxels
** Program compile date = Feb 22 2019

I checked my stats.subject.HEAD files with 3dinfo, and this is the output:


++ 3dinfo: AFNI version=AFNI_19.0.17 (Feb 22 2019) [64-bit]

Dataset File:    stats.sub-011_REML+tlrc
Identifier Code: AFN_xXX_dIM3W6rsmxmGT1m_kQ  Creation Date: Mon Mar  4 13:54:55 2019
Template Space:  HaskinsPeds
Dataset Type:    Func-Bucket (-fbuc)
Byte Order:      LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode:    BRIK
Storage Space:   283,948,280 (284 million) bytes
Geometry String: "MATRIX(-1.5,0,0,88.5,0,-1.5,0,125.5,0,0,1.5,-68.5):119,145,121"
Data Axes Tilt:  Plumb
Data Axes Orientation:
  first  (x) = Left-to-Right
  second (y) = Posterior-to-Anterior
  third  (z) = Inferior-to-Superior   [-orient LPI]
R-to-L extent:   -88.500 [R] -to-    88.500 [L] -step-     1.500 mm [119 voxels]
A-to-P extent:   -90.500 [A] -to-   125.500 [P] -step-     1.500 mm [145 voxels]
I-to-S extent:   -68.500 [I] -to-   111.500 [S] -step-     1.500 mm [121 voxels]

All my subjects are preprocessed to the Haskins_NL_template (the preprocessing steps are below).

I then checked my mask:


++ 3dinfo: AFNI version=AFNI_19.0.17 (Feb 22 2019) [64-bit]

Dataset File:    anat_bilateral_parietal_mask+tlrc
Identifier Code: AFN_H9_F7DvqxfWaxI-hhtAu8Q  Creation Date: Tue Apr  2 14:44:25 2019
Template Space:  HaskinsPeds
Dataset Type:    Echo Planar (-epan)
Byte Order:      LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode:    BRIK
Storage Space:   7,102,004 (7.1 million) bytes
Geometry String: "MATRIX(-1,0,0,89,0,-1,0,126,0,0,1,-69):179,218,182"
Data Axes Tilt:  Plumb
Data Axes Orientation:
  first  (x) = Left-to-Right
  second (y) = Posterior-to-Anterior
  third  (z) = Inferior-to-Superior   [-orient LPI]
R-to-L extent:   -89.000 [R] -to-    89.000 [L] -step-     1.000 mm [179 voxels]
A-to-P extent:   -91.000 [A] -to-   126.000 [P] -step-     1.000 mm [218 voxels]
I-to-S extent:   -69.000 [I] -to-   112.000 [S] -step-     1.000 mm [182 voxels]

The mask has the same dimensions as the Haskins template I made it from, but my subjects have different dimensions. I don’t understand why my stats.subj files have 1.5 mm voxels and not 1 mm like the template.
What am I missing here?

Here are the preprocessing steps from the stats.subject file, in case I made a mistake at some point.


HISTORY -----
[nens.lab@C07T20JBG1J2.local: Mon Mar  4 13:54:55 2019] Matrix source: ; 3dDeconvolve -input pb04.sub-011.r01.scale+tlrc.HEAD 
pb04.sub-011.r02.scale+tlrc.HEAD -censor censor_sub-011_combined_2.1D -ortvec mot_demean.r01.1D mot_demean_r01 
-ortvec mot_demean.r02.1D mot_demean_r02 -polort 2 -float -num_stimts 4 -stim_times 1 stimuli/Easy.tsv GAM 
-stim_label 1 LargeD -stim_times 2 stimuli/Medium.tsv GAM -stim_label 2 MediumD -stim_times 3 stimuli/Hard.tsv GAM 
-stim_label 3 SmallD -stim_times 4 stimuli/control_n.tsv GAM -stim_label 4 Ctrl_n -bout -jobs 4 -gltsym 'SYM: SmallD -LargeD' -glt_label 1 Small-Large 
-gltsym 'SYM: SmallD -MediumD' -glt_label 2 Small-Medium -gltsym 'SYM: MediumD -LargeD' -glt_label 3 Medium-Large 
-gltsym 'SYM: SmallD MediumD LargeD -3*Ctrl_n' -glt_label 4 All-control -gltsym 'SYM: LargeD -Ctrl_n' -glt_label 5 Large-Control 
-gltsym 'SYM: SmallD -Ctrl_n' -glt_label 6 Small-Control -gltsym 'SYM: MediumD -Ctrl_n' -glt_label 7 Medium-Control -fout -tout -x1D X.xmat.1D 
-xjpeg X.jpg -x1D_uncensored X.nocensor.xmat.1D -fitts fitts.sub-011 -errts errts.sub-011 -bucket stats.sub-011
[nens.lab@C07T20JBG1J2.local: Mon Mar  4 13:54:55 2019] {AFNI_19.0.17:macos_10.12_local} 3dREMLfit -matrix X.xmat.1D 
-input 'pb04.sub-011.r01.scale+tlrc.HEAD pb04.sub-011.r02.scale+tlrc.HEAD' -fout -tout -Rbuck stats.sub-011_REML -Rvar stats.sub-011_REMLvar 
-Rfitts fitts.sub-011_REML -Rerrts errts.sub-011_REML -verb
[nens.lab@C07T20JBG1J2.local: Mon Mar  4 15:12:30 2019] {AFNI_19.0.17:macos_10.12_local} 3drefit -atrstring AFNI_CLUSTSIM_NN1_1sided file:files_ClustSim/ClustSim.ACF.NN1_1sided.niml 
-atrstring AFNI_CLUSTSIM_MASK file:files_ClustSim/ClustSim.ACF.mask 
-atrstring AFNI_CLUSTSIM_NN2_1sided file:files_ClustSim/ClustSim.ACF.NN2_1sided.niml 
-atrstring AFNI_CLUSTSIM_NN3_1sided file:files_ClustSim/ClustSim.ACF.NN3_1sided.niml 
-atrstring AFNI_CLUSTSIM_NN1_2sided file:files_ClustSim/ClustSim.ACF.NN1_2sided.niml 
-atrstring AFNI_CLUSTSIM_NN2_2sided file:files_ClustSim/ClustSim.ACF.NN2_2sided.niml 
-atrstring AFNI_CLUSTSIM_NN3_2sided file:files_ClustSim/ClustSim.ACF.NN3_2sided.niml 
-atrstring AFNI_CLUSTSIM_NN1_bisided file:files_ClustSim/ClustSim.ACF.NN1_bisided.niml 
-atrstring AFNI_CLUSTSIM_NN2_bisided file:files_ClustSim/ClustSim.ACF.NN2_bisided.niml 
-atrstring AFNI_CLUSTSIM_NN3_bisided file:files_ClustSim/ClustSim.ACF.NN3_bisided.niml stats.sub-011+tlrc 
stats.sub-011_REML+tlrc


Finally, and unrelated: regarding the eBIDS, you should add a rating on quality, movement, and whether participants were taken to the Rockbottom. :slight_smile:

Oh, I have used 3dresample to change its grid, and now the mask has the same dimensions as my data.
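
For reference, the resampling step was something like this (a sketch; the output prefix is just the name I gave it, and the stats dataset is used as the grid master):

# put the 1 mm anatomical mask onto the 1.5 mm grid of the EPI-resolution stats;
# NN interpolation keeps the mask values binary
3dresample                                        \
    -master stats.sub-011_REML+tlrc               \
    -rmode  NN                                    \
    -input  anat_bilateral_parietal_mask+tlrc     \
    -prefix anat_bilateral_parietal_mask_resam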
I was able to run the 3dttest++… But is this correct?

Hi, Ilaria-

So, while the reference template is used as a base for registration of the anatomical/EPI data, we generally tend not to recommend upsampling the EPI data all the way to that voxel size. It doesn’t create more information, and it creates muuuuuch larger data sets (here, the 1 mm template grid has 179×218×182 ≈ 7.1 million voxels per volume vs. 119×145×121 ≈ 2.1 million on the 1.5 mm grid, roughly 3.4× the data). So, typically the final EPI/errts/epits/stats dsets do have a different voxel size/grid from the reference template.

While resampling the anatomical could be one way to make a mask, there are also masks that are created during processing that are probably even more appropriate, because they take into account the extent of your actual EPI data. For example, do you have a dset: mask_epi_anat*HEAD in your output? That might be the best option to use for each individual subject.
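
For instance, a quick way to verify that such a mask sits on the same grid as the stats results might look like the following (the dataset names here just follow the outputs shown above; adjust to whatever your results directory actually contains):

# prints 1 if the two dsets share the same grid, 0 otherwise
3dinfo -same_grid mask_epi_anat.sub-011+tlrc stats.sub-011_REML+tlrc

# and the voxel sizes + matrix dimensions side by side
3dinfo -ad3 -n4 -prefix mask_epi_anat.sub-011+tlrc stats.sub-011_REML+tlrc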

Perhaps even better for your group analysis with 3dttest++ would be to use gen_group_command as in the s.nimh_group_level_02_mema.tcsh script provided here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/main_det_2018_TaylorEtal.html
in the section “Voxelwise modeling” (it’s a 3dMEMA example, but you can do the same kind of command for 3dttest++), followed by making a group-appropriate mask as in “Make group mask”, just below that.
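
A rough sketch of that approach, assuming a typical afni_proc.py results layout and the “Small-Large” GLT from the history above (paths, prefixes, and the exact sub-brick label are assumptions to adjust for your own data):

# first check the exact sub-brick label of the GLT beta, e.g.:
#   3dinfo -label stats.sub-011_REML+tlrc
# then build a 3dttest++ command over all subjects
gen_group_command.py                                       \
    -command      3dttest++                                \
    -write_script cmd.ttest.Small-Large.tcsh               \
    -prefix       ttest.Small-Large                        \
    -dsets        sub-*.results/stats.sub-*_REML+tlrc.HEAD \
    -subs_betas   'Small-Large#0_Coef'

# group-level mask: keep voxels where at least 70% of subjects have EPI+anat coverage
3dmask_tool                                                \
    -input  sub-*.results/mask_epi_anat.*+tlrc.HEAD        \
    -frac   0.7                                            \
    -prefix mask_group.7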

Note that you should also check out some of the comments in the related bioRxiv draft associated with that script, as there are some very relevant sections about masking/accounting for zeros in the extents of dsets here:
https://www.biorxiv.org/content/10.1101/308643v1

-pt

Hi
Thank you for clarifying my doubts about volume dimensions. Do I understand correctly that what you are referring to is a full-brain mask? I want an anatomical localizer of only a specific subregion of the brain, in this case the bilateral parietal lobes. My understanding is that only the atlas has the information regarding the regions, or did I miss something else? So the only solution would be to resample the voxels in the anatomical localizer? Sorry if my initial question was ill-formed.

Also, thanks for the link, because I found some answers to my next question about which file to use to find the correct blur estimates and how to retrieve them!

Ilaria

Ah, OK, if you have sub-regions/localizers that you are interested in, then sure, resampling makes sense.

You might also consider multiplying that localizer mask by the full-brain mask suggested before, to make sure that you have EPI information overlapping the localizer.
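
For example, something like the following (just a sketch; the input names assume the resampled parietal mask and a group mask along the lines sketched earlier in the thread):

# keep only localizer voxels that also fall inside the group EPI/anat mask
3dcalc                                              \
    -a      anat_bilateral_parietal_mask_resam+tlrc \
    -b      mask_group.7+tlrc                       \
    -expr   'step(a)*step(b)'                       \
    -prefix parietal_mask_grp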

–pt