T1 and individual-level statistical maps do not align well

Hi AFNI experts,

We preprocessed task fMRI images and fit the GLM at the individual level. When we check the statistical maps for each condition at the individual level (p < .001), some of them do not align with the anatomical images very well. For example, the attached files are from the visual condition, in which children passively view characters presented on the screen. A large proportion of “activated” voxels lie outside the brain. We also tried the -giant_move option, but it did not help much. Do you have any ideas on how to fix this?


[size=medium]The afni_proc.py command we used:[/size]

afni_proc.py -subj_id s0 -script proc.av -blocks despike tshift align tlrc  \
    volreg blur mask scale regress -copy_anat                               \
    /Volumes/ana_SG/audvis/afni/ana/${subj}/anat/${subj}.anat+orig              \
    -anat_has_skull yes -dsets                                              \
    ana/${subj}/func/${subj}.run1+orig.HEAD         \
    ana/${subj}/func/${subj}.run2+orig.HEAD         \
    ana/${subj}/func/${subj}.run3+orig.HEAD         \
    ana/${subj}/func/${subj}.run4+orig.HEAD         \
    -tcat_remove_first_trs 0 -align_opts_aea -cost lpc+ZZ -big_move         \
    -tlrc_base MNI152_T1_2009c+tlrc -tlrc_NL_warp -volreg_align_to          \
    MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp -blur_size 8            \
    -regress_stim_times ****.1D \
    -regress_stim_labels phon_A phon_V phon_AVC phon_AVI char_A char_V      \
    char_AVC char_AVI -regress_basis 'BLOCK(20.8,1)' -regress_censor_motion \
    2 -regress_censor_outliers 0.10 -regress_motion_per_run                 \
    -regress_make_ideal_sum sum_ideal.1D -regress_opts_3dD -num_glt 4       \
    -gltsym 'SYM: phon_AVC -phon_AVI' -glt_label 1 phon_con -gltsym 'SYM:   \
    char_AVC -char_AVI' -glt_label 2 char_con -gltsym 'SYM: phon_AVC        \
    -phon_A -phon_V' -glt_label 3 phon_supadd -gltsym 'SYM: char_AVC        \
    -char_A -char_V' -glt_label 4 char_supadd -jobs 2                       \
    -regress_est_blur_epits -regress_est_blur_errts -regress_run_clustsim no


Hi, Xin-

There should be an output QC HTML in your afni_proc.py results directory; it will be the “simpler” one, and in the future you might want to add “-html_review_style pythonic” to your afni_proc.py command.

In any case, based on your subject ID, there should be a directory called “QC_s0” in the results directory; can you open the “index.html” file there, either by navigating to it and clicking on it, or perhaps just using a browser from the command line? For example, on Linux with Firefox installed, you could type:

firefox QC_s0/index.html

from the results directory.

More details about this are provided here:
… but the top 2 sections are the ve2a (volumetric EPI to anatomical) and va2t (volumetric anatomical to template) QC blocks. If you could post those images here, we could assess each of those alignment steps individually.

If you would like to re-run the QC with the nicer Pythonic form (for motion and other plots), assuming you have Python and its matplotlib module installed, you could do the following:

# get a script to redo the QC part that happens at the end of afni_proc.py, 
# which will run it in Pythonic form
wget https://raw.githubusercontent.com/afni/afni/master/src/ptaylor/supplement/redo_apqc.tcsh

# execute the script: with no arguments, it runs in the current directory;
# or you can pass a list of one or more results directories from
# afni_proc.py output, and it will run in each
tcsh redo_apqc.tcsh

This will move any existing APQC (afni_proc.py QC directory) to a backup old_QC_*, and then create a new QC_${subj} again in the same spot, where ${subj} is the subject ID.
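For example, to run it across a few subjects at once (the directory names s0.results and s1.results here are hypothetical examples; substitute your own afni_proc.py output directories), a minimal sketch would be:

```shell
# loop over results directories and re-run the APQC in each one;
# the directory names below are hypothetical examples
for dir in s0.results s1.results; do
    # skip anything that is not actually a directory
    [ -d "$dir" ] || { echo "missing: $dir" >&2; continue; }
    tcsh redo_apqc.tcsh "$dir"
done
```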


Hi Taylor,

Thanks for the thorough explanation and the Pythonic tips; I did redo the QC, by the way :slight_smile:

Here are the QC images for ve2a and va2t.


Hi, Xin-

Thanks for sharing the ve2a and va2t images—in both cases, the alignment looks good to me, so I expect the overall alignment (which is just the concatenation of those) should be good.

To check the modeling a bit, could you also please share the vstat image, so we can see what your F-stat patterns from the modeling look like? We should see places where the F-stat is large, which shows where the regression model explains the data well.


Hi Taylor,

Thanks for the detailed explanation! Please see the attached F map. The whole experiment contains 4 conditions: auditory (A), visual (V), auditory-visual consistent (AVC), and auditory-visual inconsistent (AVI).

I just realized that a lot of TRs (10 out of 24) were censored for the visual condition; could it just be a data quality problem?
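As a quick overall check, the censoring can be counted directly from the combined censor file that afni_proc.py writes, in which each row is 1 for a kept TR and 0 for a censored one. The file name below follows the usual afni_proc.py naming for this subject ID, but is an assumption for this particular setup:

```shell
# count censored TRs (rows equal to 0) in the afni_proc.py censor file;
# the file name censor_s0_combined_2.1D is assumed from the usual naming
awk '$1 == 0 {n++} END {print n+0, "TRs censored"}' censor_s0_combined_2.1D
```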