problem at volreg step using afni_proc.py

Hi AFNI experts
I’m using afni_proc.py to do the preprocessing steps for my fMRI data. However, at the volreg step, a strange result like the image in the attachment was produced. Only 2 subjects have this problem, and one of the two was solved by switching to another T1 file exported by dcm2nii (it exports 3 T1 files: *, o* and co*). I found that the problem first appears in rm.epi.nomask.r$run, which is exported by 3dAllineate at the volreg step (code below).
# apply catenated xform: volreg/epi2anat/tlrc
3dAllineate -base 20190704_160705t1mpragesagisoTI1000waternos005a1001_ns+tlrc \
            -input pb01.$subj.r$run.tshift+orig                               \
            -1Dmatrix_apply mat.r$run.warp.aff12.1D                           \
            -mast_dxyz 3                                                      \
            -prefix rm.epi.nomask.r$run

The afni_proc.py script is also provided below, in case there is something wrong with it.

#!/usr/bin/env tcsh

# created by uber_subject.py: version 1.2 (April 5, 2018)
# creation date: Sun Nov 29 23:44:09 2020

# foreach subj_num (31 36 40 42 43 46 47 48 52 53 54 56 58 59 61 62 63 64 70 73 74 77 80 81)
foreach subj_num (48)

    # set subject and group identifiers
    set subj  = S$subj_num
    set gname = YNM

    # set data directories
    set top_dir  = /data2/public_space/LZhang/audiovisual_orig_1st/data/fmri_newGLM
    set anat_dir = $top_dir/raw_data/${subj}
    set epi_dir  = $top_dir/raw_data/${subj}
    set stim_dir = $top_dir/time_file/${subj}

    mkdir -p ${top_dir}/subject_results/group.${gname}/subj.${subj}/
    cd ${top_dir}/subject_results/group.${gname}/subj.${subj}/

    # run afni_proc.py to create a single subject processing script
    afni_proc.py -subj_id $subj                                          \
        -script proc.$subj -scr_overwrite                                \
        -blocks tshift align tlrc volreg blur mask scale regress         \
        -copy_anat $anat_dir/20190704_160705t1mpragesagisoTI1000waternos005a1001.nii.gz \
        -dsets                                                           \
            $epi_dir/A1.nii.gz  $epi_dir/A2.nii.gz                       \
            $epi_dir/A3.nii.gz  $epi_dir/A4.nii.gz                       \
            $epi_dir/AV1.nii.gz $epi_dir/AV2.nii.gz                      \
            $epi_dir/AV3.nii.gz $epi_dir/AV4.nii.gz                      \
        -tcat_remove_first_trs 8                                         \
        -tlrc_base MNI_avg152T1+tlrc                                     \
        -align_opts_aea -giant_move                                      \
        -volreg_align_to MIN_OUTLIER                                     \
        -volreg_align_e2a                                                \
        -volreg_tlrc_warp                                                \
        -blur_size 6.0                                                   \
        -regress_stim_times                                              \
            $stim_dir/ba-8_A.txt $stim_dir/ba-8_AV.txt                   \
            $stim_dir/ba0_A.txt  $stim_dir/ba0_AV.txt                    \
            $stim_dir/ba8_A.txt  $stim_dir/ba8_AV.txt                    \
            $stim_dir/da-8_A.txt $stim_dir/da-8_AV.txt                   \
            $stim_dir/da0_A.txt  $stim_dir/da0_AV.txt                    \
            $stim_dir/da8_A.txt  $stim_dir/da8_AV.txt                    \
            $stim_dir/pa-8_A.txt $stim_dir/pa-8_AV.txt                   \
            $stim_dir/pa0_A.txt  $stim_dir/pa0_AV.txt                    \
            $stim_dir/pa8_A.txt  $stim_dir/pa8_AV.txt                    \
            $stim_dir/ta-8_A.txt $stim_dir/ta-8_AV.txt                   \
            $stim_dir/ta0_A.txt  $stim_dir/ta0_AV.txt                    \
            $stim_dir/ta8_A.txt  $stim_dir/ta8_AV.txt                    \
        -regress_stim_labels                                             \
            ba-8_A ba-8_AV ba0_A ba0_AV ba8_A ba8_AV da-8_A da-8_AV      \
            da0_A da0_AV da8_A da8_AV pa-8_A pa-8_AV pa0_A pa0_AV        \
            pa8_A pa8_AV ta-8_A ta-8_AV ta0_A ta0_AV ta8_A ta8_AV        \
        -regress_basis 'GAM'                                             \
        -regress_censor_motion 0.3                                       \
        -regress_motion_per_run                                          \
        -regress_opts_3dD                                                \
            -jobs 50                                                     \
            -num_glt 12                                                  \
            -gltsym 'SYM: 0.25*ba-8_A +0.25*da-8_A +0.25*pa-8_A +0.25*ta-8_A'     \
            -glt_label 1 VI_-8                                           \
            -gltsym 'SYM: 0.25*ba0_A +0.25*da0_A +0.25*pa0_A +0.25*ta0_A'         \
            -glt_label 2 VI_0                                            \
            -gltsym 'SYM: 0.25*ba8_A +0.25*da8_A +0.25*pa8_A +0.25*ta8_A'         \
            -glt_label 3 VI_8                                            \
            -gltsym 'SYM: 0.25*ba-8_AV +0.25*da-8_AV +0.25*pa-8_AV +0.25*ta-8_AV' \
            -glt_label 4 VV_-8                                           \
            -gltsym 'SYM: 0.25*ba0_AV +0.25*da0_AV +0.25*pa0_AV +0.25*ta0_AV'     \
            -glt_label 5 VV_0                                            \
            -gltsym 'SYM: 0.25*ba8_AV +0.25*da8_AV +0.25*pa8_AV +0.25*ta8_AV'     \
            -glt_label 6 VV_8                                            \
            -gltsym 'SYM: 0.0833*ba0_A +0.0833*ba-8_A +0.0833*ba8_A +0.0833*da0_A +0.0833*da-8_A +0.0833*da8_A +0.0833*pa0_A +0.0833*pa-8_A +0.0833*pa8_A +0.0833*ta0_A +0.0833*ta-8_A +0.0833*ta8_A' \
            -glt_label 7 VI                                              \
            -gltsym 'SYM: 0.0833*ba0_AV +0.0833*ba-8_AV +0.0833*ba8_AV +0.0833*da0_AV +0.0833*da-8_AV +0.0833*da8_AV +0.0833*pa0_A +0.0833*pa-8_AV +0.0833*pa8_AV +0.0833*ta0_AV +0.0833*ta-8_AV +0.0833*ta8_AV' \
            -glt_label 8 VV                                              \
            -gltsym 'SYM: 0.125*ba-8_A +0.125*da-8_A +0.125*pa-8_A +0.125*ta-8_A +0.125*ba-8_AV +0.125*da-8_AV +0.125*pa-8_AV +0.125*ta-8_AV' \
            -glt_label 9 SNR-8                                           \
            -gltsym 'SYM: 0.125*ba8_A +0.125*da8_A +0.125*pa8_A +0.125*ta8_A +0.125*ba8_AV +0.125*da8_AV +0.125*pa8_AV +0.125*ta8_AV' \
            -glt_label 10 SNR0                                           \
            -gltsym 'SYM: 0.125*ba0_A +0.125*da0_A +0.125*pa0_A +0.125*ta0_A +0.125*ba0_AV +0.125*da0_AV +0.125*pa0_AV +0.125*ta0_AV' \
            -glt_label 11 SNR8                                           \
            -gltsym 'SYM: 0.0833*ba0_AV +0.0833*ba-8_AV +0.0833*ba8_AV +0.0833*da0_AV +0.0833*da-8_AV +0.0833*da8_AV +0.0833*pa0_A +0.0833*pa-8_AV +0.0833*pa8_AV +0.0833*ta0_AV +0.0833*ta-8_AV +0.0833*ta8_AV -0.0833*ba0_A -0.0833*ba-8_A -0.0833*ba8_A -0.0833*da0_A -0.0833*da-8_A +0.0833*da8_A -0.0833*pa0_A -0.0833*pa-8_A -0.0833*pa8_A -0.0833*ta0_A -0.0833*ta-8_A -0.0833*ta8_A' \
            -glt_label 12 VV-VI                                          \
        -regress_make_ideal_sum sum_ideal.1D                             \
        -regress_est_blur_epits                                          \
        -regress_est_blur_errts

    tcsh -xef proc.$subj |& tee output.proc.$subj

end

Could anyone help me? Thanks in advance!

regards,
Lei Zhang

Hi, Zhang-Lei-

I’m a bit curious what your version number of AFNI is? I wonder if it is a bit old? You can check by running “afni -ver”.

Because of the order of your blocks, the volreg results are shown in the final space (and this is basically what we recommend in most cases, so that is the correct thing to do). But I suspect that the EPI->anatomical alignment might not be good, so the final placement of the EPIs is off. (It is possible the anatomical->template alignment is bad, as well-- that should be checked.)

Do you see a directory in your afni_proc.py results directory called QC*/? That contains an HTML file you can open (QC*/index.html) to view a lot of quality control output automatically created by afni_proc.py. That would show you the individual alignments to check. A lot of its features are described here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/tutorials/apqc_html/main_toc.html
… and we recommend that you have Python+Matplotlib installed on your computer; then you can use the "-html_review_style pythonic" option for an even nicer style of QC HTML.

There is also a script for single subject (ss) review, called @ss_review_driver, that helps guide you through viewing things in the GUI, esp. alignment. (Many of these features have been integrated into the QC HTML, as well.)

Does that show any alignment issues between EPI->anatomical or anatomical->template?

–pt

Hi pt
Thanks for your reply. My AFNI version is AFNI_19.3.17. No QC*/ directory is found in the results directory.
I used @ss_review_driver to check my results. In the "check alignment between anat and EPI" part, I found that the alignment is actually good, but both the anat and the EPI look strange. Sorry, I don't know how to check the anatomical->template alignment, because I'm new to this and it doesn't seem to be provided by @ss_review_driver.
Here is the information in out.ss_review.S48.txt that may help.

subject ID : S48
AFNI version : AFNI_19.3.17
AFNI package : linux_centos_7_64
TR : 0.64
TRs removed (per run) : 8
num stim classes provided : 24
final anatomy dset : anat_final.S48+tlrc.HEAD
final stats dset : stats.S48+tlrc.HEAD
final voxel resolution : 3.000000 3.000000 3.000000

motion limit : 0.3
num TRs above mot limit : 25
average motion (per TR) : 0.0651944
average censored motion : 0.060212
max motion displacement : 14.8969
max censored displacement : 14.7876
outlier limit : 0.1
average outlier frac (TR) : 0.00173028
num TRs above out limit : 16

num runs found : 8
num TRs per run : 573 573 573 573 573 573 573 573
num TRs per run (applied) : 573 566 564 560 573 573 565 573
num TRs per run (censored): 0 7 9 13 0 0 8 0
fraction censored per run : 0 0.0122164 0.0157068 0.0226876 0 0 0.0139616 0
TRs total (uncensored) : 4584
TRs total : 4547
degrees of freedom used : 104
degrees of freedom left : 4443

TRs censored : 37
censor fraction : 0.008072
num regs of interest : 24
num TRs per stim (orig) : 350 345 344 349 345 347 348 344 345 346 347 346 347 349 347 348 340 346 347 348 349 346 348 352
num TRs censored per stim : 1 0 0 0 7 2 9 0 4 4 3 0 9 0 5 0 0 2 3 2 0 4 0 0
fraction TRs censored : 0.003 0.000 0.000 0.000 0.020 0.006 0.026 0.000 0.012 0.012 0.009 0.000 0.026 0.000 0.014 0.000 0.000 0.006 0.009 0.006 0.000 0.012 0.000 0.000
ave mot per sresp (orig) : 0.059940 0.064692 0.055341 0.063820 0.068829 0.061730 0.082562 0.061919 0.080397 0.073060 0.061793 0.059867 0.067746 0.065851 0.064782 0.064474 0.056292 0.064761 0.053936 0.067525 0.058309 0.066955 0.053046 0.060134
ave mot per sresp (cens) : 0.059718 0.064692 0.055341 0.063820 0.059485 0.060849 0.062403 0.061919 0.056704 0.069689 0.059314 0.059867 0.060930 0.065851 0.059633 0.064474 0.056292 0.063895 0.052220 0.065337 0.058309 0.064298 0.053046 0.060134

TSNR average : 129.525
global correlation (GCOR) : 0.110747
anat/EPI mask Dice coef : 0.88903
anat/templ mask Dice coef : 0.797534
maximum F-stat (masked) : 18.7281
blur estimates (ACF) : 0.780446 4.49885 11.1442
blur estimates (FWHM) : 0 0 0

Lei Zhang

Hi, Lei Zhang-

The anatomical in standard space should be called: anat_final.*+tlrc. You could overlay this on your template dset and check alignment. How does that look?
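If helpful, one minimal way to eyeball that (a sketch; the dataset names assume subject S48 and the MNI_avg152T1 base used earlier in this thread) is to load both datasets into the AFNI GUI and flip between them as underlay/overlay:

```shell
# open the template and the standard-space anat in the same AFNI session;
# then set one as Underlay and the other as Overlay to compare structures
afni MNI_avg152T1+tlrc anat_final.S48+tlrc.HEAD
```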

Hmm, yes, that AFNI is quite old, over a year and a half. I would have expected to see the QC_*/ directory there even so, but if you update your AFNI:


@update.afni.binaries -d

… then that should give you that newer functionality. There are other newer features, too; for example, the GUI now has a message about whether your data contains obliquity information (that is what I noticed was missing from your screen shots); and probably many other features.

On a separate note, we do generally advise AFNI users to sign up for our Digest email, which lists updates/new programs/fixes/Bootcamps and other announcements:
https://afni.nimh.nih.gov/afni/community/board/read.php?1,154890,154890
Also, we do have a set of online “AFNI Bootcamp” classes, for both general theory and hands-on specifics of FMRI processing:
https://www.youtube.com/c/afnibootcamp
and a slightly older Bootcamp:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/educational/bcamp_2018_05_mit.html

–pt

Hi pt
The alignment between anat_final and the MNI152 template I used is terrible, as the attached image shows.
And thanks very much for your recommendation!

Lei Zhang

Howdy-

Oh my, that did go horribly wrong…

I am noticing your reference volume is the MNI average; that is pretty blurry, without a lot of detail to align well with nonlinear alignment. We typically recommend using another MNI volume with more detail, such as the MNI 2009c volume, and esp. using the MNI*SSW*.nii.gz volume (for performing nonlinear alignment with @SSwarper prior to running afni_proc.py, and then providing the alignment results as inputs to afni_proc.py). See the attached image-- the top is the MNI SSW volume, and the bottom is the MNI average. You can see the difference in level of detail…

We can discuss this more, but I’ll point out some useful background for these considerations:

  1. Some description of templates (and atlases), including how to pick them:
    [AFNI Academy] Atlases and Templates Intro - YouTube
  2. Also, the alignment series would probably be really useful to watch:
    [AFNI Academy] Alignment (part 1/4): Background - YouTube
    There are maaany considerations that come into alignment.

–pt

Hi pt

Oh, yes. I chose MNI_avg152 as the reference because it is the only MNI template option in uber_subject.py.
Now I have chosen MNI152_T1_2009c+tlrc as the tlrc_base, and it works well!
Thank you for your advice!

Lei Zhang

Hi, Lei Zhang–

Ahh, I see. Well, we don’t really recommend that people set up their afni_proc.py commands with uber_subject.py anymore, because it doesn’t have all the functionality of the full afni_proc.py itself. We have lots of starter examples in the afni_proc.py help itself:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/programs/afni_proc.py_sphx.html
… and in the AFNI Bootcamp data, such as in AFNI_data6/FT_analysis/s* and in AFNI_demos/AFNI_pamenc/afni_scripts/*

And, were you able to update your AFNI so you get the QC HTML created?

–pt

Hi pt
Thanks very much for your recommendation.
After I fixed a few problems with Python, the QC HTML finally appeared! It's a very nice interface for checking quality!
Sorry, I have 2 new questions. Now I use MNI152_2009_template_SSW.nii.gz as the tlrc_base. I found that the output matrix size is 64*76*64, which is different from the output using MNI_avg152 as the base (61*73*61), because of the different sizes of the 2 templates. And I found that most atlases in MNI space have the same matrix size as the MNI_avg152 output. Can I simply use 3dresample, with the atlas or the MNI_avg152 output as the master, to make the output the same size as the atlas, so that I can extract time series in a brain region?
The other question: I now use the options that Example 6b suggests (https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/programs/afni_proc.py_sphx.html#example-6b-a-modern-task-example-with-preferable-options) to do the first-level analysis. stats.*+tlrc is not output anymore; only stats_REML.*+tlrc is output. So, is it right that I just use the output in stats_REML.*+tlrc to do the group-level analysis in 3dMVM?

Zhang Lei

Hi, Zhang Lei-

Glad the QC is useful (again, I recommend using “-html_review_style pythonic” for the nicest version, which you might be using now…).

You can see the matrix/grid dimensions of a dataset with:


3dinfo -n4 -prefix MNI_avg152T1+tlrc.HEAD MNI*SSW*nii*
 91  109   91   1   MNI_avg152T1
193  229  193   5   MNI152_2009_template_SSW.nii.gz

The avg152 dataset is on the classic 2x2x2 mm**3 MNI grid (91x109x91), while the SSW template has 1x1x1 mm**3 voxels, so indeed they have different resolutions and slightly different fields of view (FOVs).

What atlas are you interested in, in particular?

Note that the matrix size of the output EPI dataset will likely be different from the above, because it isn't resampled to such a fine grid; the default output EPI voxel size will be about the same as the input's (maybe rounded slightly finer, but only very slightly). You can control the final output voxel size with this afni_proc.py option:


-volreg_warp_dxyz 2.5

(for example, if you wanted 2.5 mm isotropic final voxels).
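As a rough cross-check of these grid sizes (this is my own nearest-integer approximation of FOV/dxyz, not AFNI's actual resampling code; it assumes an avg152 FOV of about 182x218x182 mm at 2 mm, and an SSW FOV of about 193x229x193 mm at 1 mm):

```shell
#!/bin/bash
# Approximate the output matrix size as round(FOV_mm / dxyz) per axis.
# This is a hypothetical back-of-the-envelope model, not AFNI's exact code.
rdim () {
  # nearest-integer rounding of $1/$2 for positive integers
  echo $(( (2*$1 + $2) / (2*$2) ))
}

# MNI_avg152T1 (91x109x91 at 2 mm -> FOV ~182x218x182 mm), output at 3 mm:
echo "$(rdim 182 3) $(rdim 218 3) $(rdim 182 3)"

# MNI152_2009 SSW (193x229x193 at 1 mm -> FOV ~193x229x193 mm), at 3 mm:
echo "$(rdim 193 3) $(rdim 229 3) $(rdim 193 3)"
```

Running this with bash prints `61 73 61` and `64 76 64`, matching the two EPI grids discussed in this thread.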

So, in general, you will always have to resample the atlas. There are better and worse ways of doing even this; Daniel Glen is the local expert on advising about this aspect. You can use "3dresample -rmode NN -input ATLAS_NAME -master FINAL_EPI_DSET", for example, and see how that looks as a starter.
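For instance, here is a minimal sketch of that resample-then-extract workflow (the dataset names and the ROI label value are hypothetical placeholders; 3dresample, 3dcalc, and 3dmaskave are the standard AFNI tools involved):

```shell
# 1) resample the atlas onto the final EPI grid, using NN interpolation
#    so the integer ROI labels are preserved
3dresample -rmode NN                         \
           -master errts.$subj+tlrc          \
           -input  my_atlas+tlrc             \
           -prefix my_atlas_epigrid

# 2) make a binary mask of one ROI (e.g., the region with label value 5)
3dcalc -a my_atlas_epigrid+tlrc -expr 'equals(a,5)' -prefix roi5_mask

# 3) dump the average time series within that ROI, one value per TR
3dmaskave -quiet -mask roi5_mask+tlrc errts.$subj+tlrc > roi5_ts.1D
```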

But note also that there aaaare different MNI spaces, actually, where the brains match up approximately but not exactly-- so make sure to overlay your chosen atlas on the MNI reference base you use, to make sure they line up well.

Re. using "-regress_reml_exec": the stats files there will be called stats*REML*, and typically contain both "coef" and "tstat" sub-bricks. Here is an example of a stats*REML* header (part of the "3dinfo stats*REML*.HEAD" output) for an afni_proc.py-produced stats dset with that flag used:


Number of values stored at each pixel = 5
  -- At sub-brick #0 'Full_Fstat' datum type is float:            0 to       38.9125
     statcode = fift;  statpar = 2 227
  -- At sub-brick #1 'CONTROL#0_Coef' datum type is float:     -48.0433 to       47.5369
  -- At sub-brick #2 'CONTROL#0_Tstat' datum type is float:     -4.94164 to       5.79199
     statcode = fitt;  statpar = 227
  -- At sub-brick #3 'TASK#0_Coef' datum type is float:     -65.5111 to        37.002
  -- At sub-brick #4 'TASK#0_Tstat' datum type is float:     -5.80596 to       8.67473
     statcode = fitt;  statpar = 227

NB: this dset is part of the AFNI Bootcamp demo data, here: AFNI_demos/AFNI_pamenc/AFNI_02_pamenc/sub-10517/sub-10517.results
And the scripts for processing it are here: AFNI_demos/AFNI_pamenc/afni_scripts/, in particular this contains the afni_proc.py (AP) cmd: c.ss.3.AP.pamenc.

I think the idea is that you have asked for the 3dREMLfit version of the modeling, so just that stats file is the output of interest. It should have all the normal/usable properties of a stats file, just produced via 3dREMLfit (so, using a generalized least squares approach instead of OLS, I think).

–pt

Hi PT
Thanks for reminding me that the atlas should always be resampled!

For example, the 3mm versions of AAL90, Power264, and Dosenbach160 are all 61*73*61, which is the same as the output size using MNI_avg152 as the base, not the 64*76*64 from MNI152_2009_template_SSW (almost all atlases I see are the same size as the MNI_avg152 output). For example, I have now downloaded meta-analysis results from neurosynth.com, which are also 61*73*61, and I need to extract time series from the subject's preprocessed data (64*76*64) in the regions activated in the meta-analysis results. So I can just resample the neurosynth.com results using the preprocessed data as the master; then the neurosynth.com image also becomes 64*76*64, and I can extract the time series using the activated cluster in the meta-analysis results. And the same process would be right for extracting time series from STG using an atlas like AAL90. Is that right?

The second question is about the stats output. I understand that stats* is from 3dDeconvolve and stats*REML* is from 3dREMLfit. I find that stats*REML* is recommended when 3dMEMA is used for the group analysis. If I use 3dMVM for the group analysis, is either one appropriate as the input -- stats* from 3dDeconvolve or stats*REML* from 3dREMLfit? Is that right?

Lei Zhang