afni proc alignment questions

Hello everyone,

I used afni_proc.py to align fMRI images (motion-corrected with SLOMOCO) to T1 using the default alignment setting (big_move). I checked the fMRI images as an overlay, and they were not aligned properly. I have 39 sets of images, and 3 of them were still not aligned after I tried the other two alignment options (giant_move and ginormous_move).

I am wondering if there are other alignment methods that you would recommend.

Thanks!

Howdy-

Could you please post the afni_proc.py command that you ran? That would help us know the details of what has been done so far.

–pt

Hello,

Absolutely. Here is the afni_proc.py command that I ran.


#!/usr/bin/env tcsh
# created by uber_subject.py: version 1.2 (April 5, 2018)
# creation date: Wed Aug 21 16:52:36 2019

# set subject and group identifiers
set subj  = VGT28
set time = pre
# set data directories
set top_dir = /media/cunninghamlab/DATA/MRI/CCFES_VGT/RawMRI
set anat_dir  = $top_dir/T1/${subj}/${subj}${time}_T1
set epi_dir   = $top_dir/fMRI/${subj}

# run afni_proc.py to create a single subject processing script.
afni_proc.py -subj_id $subj                                           \
        -script proc.$subj$time -scr_overwrite                             \
        -blocks align blur mask scale regress      \
        -copy_anat $anat_dir/${subj}.${time}.T1+orig                          \
        -dsets                                                        \
            $epi_dir/${subj}${time}_fMRI1/${subj}${time}.SLOMOCO.fMRI1/${subj}.${time}.fMRI1.slicemocoxy_afni.slomoco+orig.HEAD \
            $epi_dir/${subj}${time}_fMRI2/${subj}${time}.SLOMOCO.fMRI2/${subj}.${time}.fMRI2.slicemocoxy_afni.slomoco+orig.HEAD \
        -tcat_remove_first_trs 4                                      \
        -align_opts_aea -ginormous_move                                   \
        -blur_size 5.0                                                \
        -regress_stim_times                                           \
            $top_dir/fMRI/BH2Run.txt                                       \
            $top_dir/fMRI/LH2Run.txt                                       \
            $top_dir/fMRI/RH2Run.txt                                       \
        -regress_stim_labels                                          \
            BH2Run LH2Run RH2Run                                      \
        -regress_basis 'BLOCK(36,1)'                                \
        -regress_opts_3dD                                             \
            -gltsym 'SYM: BH2Run -LH2Run' -glt_label 1 BH2Run-LH2Run  \
            -gltsym 'SYM: BH2Run -RH2Run' -glt_label 2 BH2Run-RH2Run  \
            -gltsym 'SYM: RH2Run -LH2Run' -glt_label 3 RH2Run-LH2Run  \
            -gltsym 'SYM: LH2Run -RH2Run' -glt_label 4 LH2Run-RH2Run  \
        -regress_make_ideal_sum sum_ideal.1D                          \
        -regress_est_blur_epits                                       \
        -regress_est_blur_errts

Thank you!

Thanks for posting that.

Just to check, what is your current AFNI version number (=output of “afni -ver”)?

I think you should replace your “-align_opts_aea …” line with the following:


-align_opts_aea -cost lpc+ZZ -giant_move -check_flip      \

These respectively select the lpc+ZZ cost function (which tends to be the most stable/general for EPI-T1w alignment), allow a somewhat larger search through the fit space, and check for potential left-right flipping between the EPI and anatomical volumes. Why are we on the watch for that last one? Because it happens:
Glen DR, Taylor PA, Buchsbaum BR, Cox RW, Reynolds RC (2020). Beware (Surprisingly Common) Left-Right Flips in Your MRI Data: An Efficient and Robust Method to Check MRI Dataset Consistency Using AFNI. Front. Neuroinformatics 14. doi.org/10.3389/fninf.2020.00018
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7263312/
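If it would help to experiment on one of your three problem subjects without rerunning the whole afni_proc.py pipeline, you could also try align_epi_anat.py directly with those same options. A sketch, where the anatomical and EPI dataset names are placeholders for your own:

```shell
# test EPI-T1w alignment directly on one subject
# (dataset names here are placeholders)
align_epi_anat.py -anat anat+orig -epi epi+orig -epi_base 0  \
                  -cost lpc+ZZ -giant_move -check_flip
```

That lets you iterate on cost function and move options quickly, then carry the winning combination back into the afni_proc.py command.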

Also, if you have Python+matplotlib on your system (you can verify this with “afni_system_check.py -check_all”), then I would add “-html_review_style pythonic”, so you get a fun+informative HTML report created to QC your processing. There is one created by default anyway, but the “pythonic” one looks much nicer.
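In the afni_proc.py command itself, that would just be one more option line (a sketch; the trailing backslash continues the command, as in your script):

```shell
    -html_review_style pythonic                               \
```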

I would also add this line to get a better mask estimated (but not applied; it is just there if you want one later):


-mask_epi_anat yes                                        \

And I am not familiar with SLOMOCO, but basically, is the assumption that all motion effects are removed by it? The brain is moved slice-by-slice in each volume, and then are the motion parameters regressed out somehow in that process? And is censoring done then? Normally we would use the motion estimates to censor bad volumes (along with looking for large fractions of outliers in a volume). I am not sure how best to do that in this case, nor how to account exactly for the additional degrees of freedom lost to any earlier regressions (typically, it should all be done at once).
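For reference, in a more standard pipeline (one with a volreg block, where afni_proc.py itself estimates the motion), the censoring I mentioned would be requested with options along these lines. The threshold values here are just common illustrative choices, not a recommendation for your SLOMOCO case:

```shell
    -regress_censor_motion 0.3                                \
    -regress_censor_outliers 0.05                             \
```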

–pt