Seeking alignment improvement

Hi there AFNI wizards!

I’ve recently had some nice success preprocessing some T2-weighted EPI functional datasets through afni_proc.py. The alignment method that worked best was “-align_opts_aea -cost lpa -big_move”, though I think there is still room for improvement. I’ve also tried lpc+zz and nmi (both with and without -big_move), but those cost functions didn’t work as well as lpa with -big_move.
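
For reference, the other variants I tried just swapped the cost passed through -align_opts_aea, e.g.:

 -align_opts_aea -cost lpc+zz -big_move
 -align_opts_aea -cost nmi -big_move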

I include here some of the QC images that I think are most revealing, all from the same subject: three are from one scan and three from the other. The “meditation” scan seems to come out of processing better than the “rest” scan, and I’m not sure why. I know that the two datasets have slightly different TRs, and I wonder whether that could cause this difference in preprocessing success.
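
For reference, the TRs can be checked directly on the raw files, e.g. for the rest run used in the command below:

 3dinfo -tr $work/sub_$1/sub-"$1"_task-rest_run-01_bold.nii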

Is there any guidance you can provide about whether the results specific to the “rest” condition can be improved?

The afni_proc.py command for these datasets looks like this:

 afni_proc.py                                                                 \
    -subj_id                   sub_"$1"_rest1                                \
    -out_dir                   $directory_processed/fMRI/rest1               \
    -dsets                     $work/sub_$1/sub-"$1"_task-rest_run-01_bold.nii \
    -blocks                    despike tshift align tlrc volreg mask blur    \
                               regress                                       \
    -copy_anat                 $directory_sswarper/anatSS.sub_$1.nii         \
    -anat_has_skull            no                                            \
    -tcat_remove_first_trs     4                                             \
    -align_unifize_epi         local                                         \
    -align_opts_aea            -cost lpa                                     \
                               -big_move                                     \
    -volreg_align_e2a                                                        \
    -volreg_align_to           MIN_OUTLIER                                   \
    -volreg_tlrc_warp                                                        \
    -tlrc_base                 MNI152_2009_template_SSW.nii.gz               \
    -tlrc_NL_warp                                                            \
    -tlrc_NL_warped_dsets      $directory_sswarper/anatQQ.sub_$1.nii         \
                               $directory_sswarper/anatQQ.sub_$1.aff12.1D    \
                               $directory_sswarper/anatQQ.sub_$1_WARP.nii    \
    -volreg_post_vr_allin      yes                                           \
    -volreg_pvra_base_index    MIN_OUTLIER                                   \
    -mask_segment_anat         yes                                           \
    -mask_segment_erode        yes                                           \
    -regress_bandpass          0.01 0.25                                     \
    -regress_censor_first_trs  4                                             \
    -regress_anaticor                                                        \
    -regress_ROI               WMe CSFe                                      \
    -regress_apply_mot_types   demean deriv                                  \
    -regress_motion_per_run                                                  \
    -regress_censor_motion     0.3                                           \
    -regress_censor_outliers   0.1                                           \
    -blur_size                 3.0                                           \
    -regress_est_blur_epits                                                  \
    -regress_est_blur_errts                                                  \
    -html_review_style         pythonic                                      \
    -execute

Indeed, I’m a little surprised that lpa was the preferred cost function, since it is normally used when two dsets have similar tissue contrast, but if it works, it works.

I agree that the “meditation” dataset looks like it has better EPI-anatomical alignment. The “rest” scan seems to have more signal loss, especially in the cerebellum, and that might be one reason for the difference. It is possible that not automasking the EPI would therefore be useful in that rest case, via:
-align_epi_strip_method None
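
In the afni_proc.py call above, that would be one added line in the alignment section, with everything else unchanged (a sketch of just the neighboring lines):

    -align_unifize_epi         local                                         \
    -align_epi_strip_method    None                                          \
    -align_opts_aea            -cost lpa                                     \
                               -big_move                                     \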

It is difficult to make out the structures in the EPI. Normally, alignment is driven by the sulcal and gyral patterns within the data, and those are a bit difficult to see in some places. I am also not sure how much more improvement I would expect in the “meditation” alignment. There are some spots where the EPI coverage looks a bit thin, but that is probably due to signal attenuation from sinuses and dropout; one often sees this in the VMPFC and subcortical regions, unless very particular care is taken during acquisition.

–pt

It’s hard to say if this alignment can be improved, but it looks reasonable as it is. As Paul said, the EPI is missing the structural contrast that the lpc/lpc+ZZ methods rely on; CSF doesn’t appear very distinctly in the EPI data, which makes it hard to tell visually whether the alignment is good. That lack of contrast is sometimes caused by high flip angles. If redoing the acquisition is possible, lowering the flip angle may help recover that structural information in the EPI data. If not, also take a look at the pre-steady-state volumes for better contrast: the first or second volumes of the “dummy” scan can sometimes provide the contrast needed for better alignment. In afni_proc.py, you can add an option like this to specify a pre-steady-state volume:

-align_epi_ext_dset subj10/epi_r01+orig'[0]'
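
Adapted to your file naming, that might look like the following (a sketch; the '[0]' selector picks the first, pre-steady-state volume of the raw run, which still contains those volumes since -tcat_remove_first_trs only trims the copy made for processing):

 -align_epi_ext_dset $work/sub_$1/sub-"$1"_task-rest_run-01_bold.nii'[0]'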

For your current processing, the lpa, lpa+ZZ and nmi cost functions are good choices. If the data start out very far apart, the -giant_move option expands the search more than -big_move does.
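
In the command above, that would just mean swapping the option passed through -align_opts_aea, e.g. (relevant lines only):

    -align_opts_aea            -cost lpa                                     \
                               -giant_move                                   \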

I'm sorry for the late reply, but I want to thank you both for your attention and assistance!

I followed both of your suggestions, and they seem to have made subtle improvements, though I presume the remaining issue is simply inherent to the quality of the data.

Thanks again!