Using align_epi_anat.py, I was unable to achieve good alignment, so I first did a manual alignment to make align_epi_anat.py's job easier. Instead, my brains, which were aligned in original space, became unaligned in Talairach space.
Here is the align_epi_anat.py command I use:
align_epi_anat.py -anat2epi -anat ${subj}_mpra+orig \
    -save_skullstrip -suffix _al_junk \
    -epi ${subj}_run${run}+orig -epi_base 78 \
    -epi_strip 3dAutomask \
    -volreg off -tshift off
After Talairaching the anatomy and doing volume registration on the functional data, the volreg, epi2anat, and tlrc transformations get catenated, using the following:
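(The catenation command itself did not come through in this post; a typical version of that step, sketched here with afni_proc.py-style matrix file names as an assumption rather than the poster's actual command, looks like:

cat_matvec -ONELINE \
    ${subj}_mpra+tlrc::WARP_DATA -I \
    ${subj}_mpra_al_junk_mat.aff12.1D -I \
    mat.r${run}.vr.aff12.1D > mat.r${run}.warp.aff12.1D

3dAllineate -base ${subj}_mpra+tlrc \
    -input ${subj}_run${run}+orig \
    -1Dmatrix_apply mat.r${run}.warp.aff12.1D \
    -prefix ${subj}_run${run}_vr_tlrc

Here cat_matvec combines the inverted anat-to-EPI matrix, the volreg matrices, and the tlrc warp into one transform, which 3dAllineate then applies in a single resampling step.)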
Each of those alignment steps can/should be checked individually; each alignment is calculated individually, and that is the time to look at output/base comparisons as overlay/underlay. Looking only at the final result leaves too much of a question of "which alignment step let the team down" -- probably it is just one, but even if it is more than one, this can only be checked effectively step-by-step.
And in cases like this I generally think: let's use afni_proc.py. Are you? The code snippets look reminiscent of it. Then you don't have to worry about doing the concatenations yourself and possibly having some small badness/typo creep in along the way. Related to #1, afni_proc.py's automatic HTML QC will output images of EPI-to-anat alignment and anat-to-template alignment. For free. So you have no extra work. And this would answer the question of where the problem might be.
In modern times, I have only seen the need for "nudging" a dataset for alignment in the case of a single-slice-to-whole-brain alignment. I don't think that is your case here? First thing to check: is there a big difference (either rotationally or center-of-mass-wise) between any pair of dsets you are aligning? That makes alignment trickier, yes, but we still have options to help deal with these things -- in align_epi_anat.py, one would use "-ginormous_move" to overcome some of those challenges; likewise, using "lpc+ZZ" as a cost function for EPI-T1w anatomical alignment is good, to have extra stability piled onto lpc's excellent abilities. In afni_proc.py, one would use:

-align_opts_aea -cost lpc+ZZ -ginormous_move
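(In context, a minimal afni_proc.py command using those options might look like the following sketch; the block list and dataset names here are assumptions based on the snippets above, not a prescription:

afni_proc.py \
    -subj_id ${subj} \
    -copy_anat ${subj}_mpra+orig \
    -dsets ${subj}_run*+orig.HEAD \
    -blocks tshift align tlrc volreg blur mask scale regress \
    -align_opts_aea -cost lpc+ZZ -ginormous_move \
    -volreg_align_e2a \
    -volreg_tlrc_warp

The "-volreg_align_e2a" and "-volreg_tlrc_warp" options tell afni_proc.py to catenate the volreg, epi2anat, and tlrc transformations itself, applying them in one step.)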
You can load the EPI and T1w volumes as overlay and underlay in the AFNI GUI, for example, to see how much they overlap to start.
Similarly, for aligning the anatomical to template space, "@SSwarper" is the best tool, which combines skullstripping (the "SS" part of the name) with nonlinear alignment. This is run before afni_proc.py, and then its results are handed to that program to use efficiently+wisely. This program also has a "-giant_move" option, if the datasets start off far apart.
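(A minimal @SSwarper sketch, assuming the poster's dataset naming and the standard MNI SSW template shipped with AFNI:

@SSwarper \
    -input ${subj}_mpra+orig \
    -base MNI152_2009_template_SSW.nii.gz \
    -subid ${subj}

Its skull-stripped anatomical and nonlinear warp outputs can then be passed along to afni_proc.py, so the template alignment is computed once and reused.)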
All of the above is predicated on having dsets without other large confounding features (these aren't infant brains with different/low contrast; these aren't odd partial fields of view; these aren't veeeery low contrast/quality; the anatomical is T1w, so lpc(+ZZ) is an appropriate cost function for that step; etc.). AFNI can handle other situations, but then some other considerations would apply.
My version of AFNI is too old to include -align_opts_aea as an option.
There is still an offending data set, and I was wondering if I could share it with you, along with my script.
This is a script that my data analyst developed about 5 years ago (she has since moved on). I think it was developed with afni_proc.py, but I have since split it up into multiple scripts, so that I can check the results of the individual steps.
These are brains from adolescents and young adults, who move a lot.
Glad some of those opts sorted things out. I shudder to think how old that version of AFNI is… I would encourage you to update it!
I would also suggest making/using an afni_proc.py command itself, because it would be more easily maintainable+tweakable over time. At the bottom of The Script That Hath Been Passed Down Over Generations, is there the afni_proc.py command that generated it (or that generated its initial form)? We might be able to help retro-fit that process, or start from scratch to make one that has the same features.
Also, a major reason both to update your AFNI version and to use afni_proc.py to set up your processing pipeline is that afni_proc.py produces several helpful, automatic QC tools, including a table of important values; a driver script to open dsets for you to review; and a full HTML review of several aspects of your data: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/tutorials/apqc_html/main_toc.html
With this functionality, it is possible to very efficiently (and, importantly, systematically!) review the various aspects of processing.
Happy to check out the problematic dsets, sure.
–pt