3dAllineate Fatal Error - @Align_Centers?

Hello AFNI beginner here,

I have been trying to use uber_subject.py to create an afni_proc.py script to analyze resting state fMRI data of an ADNI subject and received the following error:


** FATAL ERROR: 3dAllineate fails :: base image has 0 nonzero voxels (< 100)

And here is the 3dinfo output for each dataset:


3dinfo ADNI_127_S_1427_MR_Accelerated_Sagittal_IR-FSPGR__br_raw_20170906164807531_128_S606160_I901027.nii
++ 3dinfo: AFNI version=AFNI_19.1.23 (Jun 24 2019) [64-bit]

Dataset File:    ADNI_127_S_1427_MR_Accelerated_Sagittal_IR-FSPGR__br_raw_20170906164807531_128_S606160_I901027.nii
Identifier Code: NII_cdwlaxZkOuwSssfsDHMmew  Creation Date: Thu Jul 11 16:18:07 2019
Template Space:  ORIG
Dataset Type:    Anat Bucket (-abuc)
Byte Order:      MSB_FIRST {assumed} [this CPU native = LSB_FIRST]
Storage Mode:    NIFTI
Storage Space:   25,690,112 (26 million) bytes
Geometry String: "MATRIX(-1.2,0,0,-145.401,0,-1.0547,0,-138.233,0,0,1.0547,388.256):196,256,256"
Data Axes Tilt:  Plumb
Data Axes Orientation:
  first  (x) = Left-to-Right
  second (y) = Posterior-to-Anterior
  third  (z) = Inferior-to-Superior   [-orient LPI]
R-to-L extent:  -379.401 [R] -to-  -145.401 [R] -step-     1.200 mm [196 voxels]
A-to-P extent:  -407.182 [A] -to-  -138.233 [A] -step-     1.055 mm [256 voxels]
I-to-S extent:   388.256 [S] -to-   657.205 [S] -step-     1.055 mm [256 voxels]
Number of values stored at each pixel = 1
  -- At sub-brick #0 '?' datum type is short


3dinfo run1+orig.HEAD
++ 3dinfo: AFNI version=AFNI_19.1.23 (Jun 24 2019) [64-bit]

Dataset File:    run1+orig
Identifier Code: AFN_RbRdH72rQGTX16RaKrwRQA  Creation Date: Thu Jul 11 16:02:38 2019
Template Space:  ORIG
Dataset Type:    Echo Planar (-epan)
Byte Order:      LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode:    BRIK
Storage Space:   78,643,200 (79 million) bytes
Geometry String: "MATRIX(-3.4375,0,0,-170.88,0,-3.4375,0,-186.568,0,0,3.4,-86.1097):64,64,48"
Data Axes Tilt:  Plumb
Data Axes Orientation:
  first  (x) = Left-to-Right
  second (y) = Posterior-to-Anterior
  third  (z) = Inferior-to-Superior   [-orient LPI]
R-to-L extent:  -387.443 [R] -to-  -170.880 [R] -step-     3.438 mm [ 64 voxels]
A-to-P extent:  -403.130 [A] -to-  -186.568 [A] -step-     3.438 mm [ 64 voxels]
I-to-S extent:   -86.110 [I] -to-    73.690 [S] -step-     3.400 mm [ 48 voxels]
Number of time steps = 200  Time step = 2.50000s  Origin = 0.00000s  Number time-offset slices = 48  Thickness = 0.000
  -- At sub-brick #0 'ADNI_127_S_1427_[0]' datum type is short:            0 to          5220
  -- At sub-brick #1 'ADNI_127_S_1427_[0]' datum type is short:            0 to          5081
  -- At sub-brick #2 'ADNI_127_S_1427_[0]' datum type is short:            0 to          4997
** For info on all 200 sub-bricks, use '3dinfo -verb' **

I’ve read a previous post that had a very similar problem, but I’m having difficulty understanding the thought process. For my data, it looks like both files are off-center in the X and Y directions (both entirely Right and Anterior), but would that not matter, since they are both in roughly the same position relative to each other? And in this reply (https://afni.nimh.nih.gov/afni/community/board/read.php?1,155134,155142#msg-155142), I don’t understand where I would “run @Align_Centers.” Do I add that line of code before @auto_tlrc in the afni_proc.py script, like this?


# ================================== tlrc ==================================
@Align_Centers -base MNI_avg152T1+tlrc -dset ADNI_127_S_1427_MR_Accelerated_Sagittal_IR-FSPGR__br_raw_20170906164807531_128_S606160_I901027.nii -child run1+orig.HEAD
@auto_tlrc -base MNI_avg152T1+tlrc -input anat_ns+orig -no_ss \

Thank you!

Howdy-

  1. The 3dinfo output is useful for diagnosis here, indeed. Looking at the “extents” of each dataset (that is, the interval in space along the x-, y-, and z-axes that the volume occupies) shows some interesting info:

# extents/FOV of the anatomical (units=mm):
R-to-L extent:  -379.401 [R] -to-  -145.401 [R]
A-to-P extent:  -407.182 [A] -to-  -138.233 [A] 
I-to-S extent:   388.256 [S] -to-   657.205 [S]

# extents/FOV of the EPI (units=mm):
R-to-L extent:  -387.443 [R] -to-  -170.880 [R] 
A-to-P extent:  -403.130 [A] -to-  -186.568 [A] 
I-to-S extent:   -86.110 [I] -to-    73.690 [S] 

Probably the EPI and anatomical volumes overlap along the R-L and A-P axes, but look at the I-S ones: the anatomical lives between “388.256 [S] -to- 657.205 [S]”, while the EPI lives faaaaaar away, between “-86.110 [I] -to- 73.690 [S]”.

… and any reference anatomical template will be centered around (0,0,0), so it would probably overlap with the EPI only along the I-S axis (which might not be so useful anyway, because it is the anatomical that gets aligned to the template).
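
(You can check this for yourself by printing the template’s extents the same way:)

# a standard-space template's extents should straddle (0,0,0) on each axis
3dinfo -extent MNI_avg152T1+tlrc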

Alignment works best when datasets start off as similar as possible: overlapping (so no big translational offset) and spatially oriented the same way (so no big relative rotation). Making the alignment programs check all possible starting points can lead to them getting stuck in false minima, which would be bad. In this case, your data sets start soooo far apart from each other (hundreds of mm!) that alignment will have trouble. (I’m surprised that a public data set would have such “far out” origins, actually, but I guess it is what it is…)
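
A quick way to put a number on that offset (just a suggestion) is to print each dset’s center of mass and compare:

# print each dset's center of mass (x y z, in mm); differences of
# hundreds of mm mean a very large starting offset for alignment
3dCM run1+orig
3dCM ADNI_127_S_1427_MR_Accelerated_Sagittal_IR-FSPGR__br_raw_20170906164807531_128_S606160_I901027.nii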

Anyways, as noted in the other post, there are some options for telling afni_proc.py to widen its alignment search (e.g., passing “-giant_move” to align_epi_anat.py via “-align_opts_aea”), but probably the best thing to do would be to run the following on its own, before calling afni_proc.py:


@Align_Centers -base MNI_avg152T1+tlrc -dset run1+orig.HEAD

# I am just assuming this input dset is a zipped NIFTI; otherwise, correct the name
@Align_Centers -base MNI_avg152T1+tlrc -dset ADNI_127_S_1427_MR_Accelerated.nii.gz

to create new versions of the EPI and anat dsets that are re-centered around the template, each output dset having a “_shft” suffix stuck onto its name. You can check their new extents and compare to the old ones with:


3dinfo -extent -prefix run1*HEAD ADNI_127_S_1427_MR_Accelerated*nii*
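
As another quick check (just a suggestion; the “_shft” output name below is hypothetical, so match it to whatever @Align_Centers actually wrote out), you could count overlapping voxels directly with 3dABoverlap:

# report how the automasks of the two dsets overlap; after
# re-centering, the overlap counts should no longer be ~0
3dABoverlap MNI_avg152T1+tlrc ADNI_127_S_1427_MR_Accelerated_shft.nii.gz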

  2. While starting from uber_subject.py is OK, it doesn’t have all the optional flexibility of afni_proc.py loaded into it; we usually recommend starting with an “afni_proc.py -help” example and tweaking from there. Additionally, we have put some code examples online, in particular about using a pre-afni_proc.py step to perform nonlinear alignment to standard space (and skullstripping of the anatomical, at no extra charge!) with @SSwarper; the way to do that, and to use the output with afni_proc.py, is described in @SSwarper’s help. Or, perhaps even better, there is a pair of scripts for doing this, from a paper on bioRxiv (which is worth reading, about processing choices that can/should be made and why). Those scripts are available here:
    https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/main_det_2018_TaylorEtal.html
    in particular these two scripts:

for alignment/skullstripping with @SSwarper:

s.nimh_subject_level_01_qwarp.tcsh

for afni_proc.py:

s.nimh_subject_level_02_ap.tcsh
… though I notice that the alignment script uses an older syntax for @SSwarper, for which you could substitute the newer one:


# replace the older program call from "s.nimh_subject_level_01_qwarp.tcsh":
#   @SSwarper sub-${ss}_T1w.nii.gz sub-${ss}
# with:
@SSwarper          \
        -input  sub-${ss}_T1w.nii.gz        \
        -base   MNI152_2009_template_SSW.nii.gz       \
        -subid  sub-${ss}

… and the reading to go along with those scripts is:
https://www.biorxiv.org/content/10.1101/308643v1.article-info
Note that the @Align_Centers comment at the top still holds; you could just use “MNI152_2009_template_SSW.nii.gz” as the base dset there.
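
For completeness, here is roughly how the @SSwarper outputs then get handed to afni_proc.py; this is a sketch based on the example in @SSwarper’s help (the anatSS/anatQQ names are its default outputs, so double-check them against what it actually wrote for your subject):

# pass the skullstripped anat and the precomputed nonlinear warp to
# afni_proc.py, so it does not redo skullstripping or warping
afni_proc.py                                                       \
    ...                                                            \
    -copy_anat            anatSS.${subj}.nii                       \
    -anat_has_skull       no                                       \
    -tlrc_base            MNI152_2009_template_SSW.nii.gz          \
    -tlrc_NL_warp                                                  \
    -tlrc_NL_warped_dsets anatQQ.${subj}.nii                       \
                          anatQQ.${subj}.aff12.1D                  \
                          anatQQ.${subj}_WARP.nii                  \
    ...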

  3. Feel free to post your afni_proc.py command here for any further comments/suggestions, too.

That became a long reply, but hope that is useful.

–pt

Thank you so much for the simple explanation!

For now, I have edited my afni_proc.py command based on Example 9b (resting-state analysis with ANATICOR), as follows:


afni_proc.py -subj_id $subj                                        \
        -script proc.$subj -scr_overwrite                          \
        -blocks despike tshift align tlrc volreg blur mask regress \
        -copy_anat $top_dir/anat_shft.nii                          \
        -dsets $top_dir/run1_shft+orig.HEAD                        \
        -tcat_remove_first_trs 3                                   \
        -align_opts_aea -giant_move                                \
        -tlrc_base MNI_avg152T1+tlrc                               \
        -tlrc_opts_at -OK_maxite                                   \
        -volreg_align_to MIN_OUTLIER                               \
        -volreg_align_e2a                                          \
        -volreg_tlrc_warp                                          \
        -blur_size 4.0                                             \
        -mask_epi_anat yes                                         \
        -regress_anaticor                                          \
        -regress_censor_motion 0.2                                 \
        -regress_bandpass 0.01 0.1                                 \
        -regress_apply_mot_types demean deriv                      \
        -regress_motion_per_run                                    \
        -regress_est_blur_errts                                    \
        -regress_run_clustsim no

However, I am now facing another problem: an error from 3dDeconvolve.


++ 3dDeconvolve extending num_stimts from 0 to 118 due to -ortvec
++ 3dDeconvolve: AFNI version=AFNI_19.1.23 (Jun 24 2019) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ STAT automask has 74515 voxels (out of 271633 = 27.4%)
++ Skipping check for initial transients
++ Input polort=4; Longest run=492.5 s; Recommended minimum polort=4 ++ OK ++
++ Number of time points: 197 (before censor) ; 42 (after)
 + Number of parameters:  123 [123 baseline ; 0 signal]
** ERROR:  *** Censoring has made regression impossible :( ***
** FATAL ERROR: 3dDeconvolve dies: Insufficient data (42) for estimating 123 parameters

It appears that too many time points are being censored (looking at the motion censor 1D file, there does seem to be quite a bit of motion in the fMRI data, so I suppose that makes sense), but I am not sure why there are so many parameters for resting-state fMRI. Where would I go from here?

Hi-

You asked about the number of parameters being used; I am copy+pasting from a different MB thread here in reply (and slightly adding to it). The issue is counting degrees of freedom (DFs) as you build your model:
When you process a data set, you start with 1 DF for every time point you have (so, for N time points, you have N DFs). Then:

- Every censored time point removes 1 DF. (Motion events with a large Euclidean norm (“enorm”) of the motion parameters lead to censoring of 2 time points, so 2 DFs; events censored due to a large outlier fraction lead to censoring of 1 time point, so 1 DF.)
- Every single frequency that is bandpassed away removes 2 DFs, one each for its sine and cosine components (and the frequencies are discrete, because we have finite time series). For a TR = 2 s time series, the Nyquist frequency is f_N = 1/(2*2) = 0.25 Hz, so bandpassing to 0.01-0.1 Hz would lose roughly 60% of your DFs from that alone (and the rate of loss is higher for shorter TRs).
- Every regressor added to the model (e.g., a polynomial or motion regressor) removes 1 DF.

All this is to say: gather your data and choose your processing wisely, for the sake of your final statistics!
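
If you want to see that bandpassing cost concretely for your own run, you can generate the bandpass regressors the same way afni_proc.py does internally (it uses 1dBport) and count the columns; a small sketch, using your 197 retained time points and TR = 2.5 s:

# make one regressor per sine/cosine component for every frequency
# OUTSIDE the 0.01-0.1 Hz band; these get projected out of the data
1dBport -nodata 197 2.5 -band 0.01 0.1 -invert -nozero > bandpass_rall.1D

# each column is one regressor, i.e., one DF spent on bandpassing
1d_tool.py -infile bandpass_rall.1D -show_rows_cols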

In your specific case:


++ Number of time points: 197 (before censor) ; 42 (after)

… leads me to think there is a lot of motion present! 155 of your time points are being censored by this motion criterion. (And the parameter count makes sense for resting state: the 123 baseline parameters are the 118 “-ortvec” regressors, i.e., the bandpass plus motion regressors, plus 5 polort baseline terms.) If you load your EPI dset in AFNI as an underlay and step through the volumes across time, does the dset look very motiony?
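
(To tally the censoring without eyeballing the 1D file, you can count directly; the censor filename here is my guess at afni_proc.py’s default naming, so adjust it to match your results directory:)

# total number of censored TRs in the motion censor time series
# (1 = keep, 0 = censored)
1d_tool.py -infile motion_${subj}_censor.1D -show_censor_count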

Note that your bandpassing is also using up a huge number of DFs. However, in this case motion is really the big culprit, and I think you should visually check whether the dset in question is even usable.

–pt

As you suspected, the EPI data evidently had a significant amount of motion. I suppose I shouldn’t take data from ADNI for granted, and should make sure to double-check everything. I’ll keep your advice in mind for future processing.

Thank you again for the help!

Yes, I would definitely check all public data as carefully as if I had just acquired it.

–pt