I'm usually an FSL user and have primarily used FLIRT/FNIRT for registration, but I am struggling with some infant brain data from dHCP and thought I'd see whether I could get better registration from an infant template space to individual functional space using AFNI tools. I have been experimenting with align_epi_anat.py and getting very poor results. I've tried several variations of the command, generally specifying: use of cmass, use of ginormous_move/giant_move, and no skull stripping. Despite trying multiple variations, every attempt that calls the cmass or ginormous_move options still reports that cmass is off. I found a few other posts about this, but haven't found any variation in syntax that resolves the issue. Any help/suggestions are greatly appreciated!
OK, sure. There are a couple of things to check when starting here, namely the properties of DSET_EPI and DSET_ANAT. Depending on their age, infants' anatomical scans look different from adult datasets. When aligning datasets, it's important to know whether they have similar or differing tissue contrasts, so you can choose an appropriate cost function: if the contrasts are similar, we might specify -cost lpa+ZZ, but if they differ we would use -cost lpc+ZZ. The initial overlap also matters. These points are discussed in the AFNI Academy videos on alignment.
Firstly, can you please show what the image (or images, if there is any obliquity in either the EPI or anatomical) looks like from this command, which will show the initial overlap, displaying the EPI underlaying the anatomical:
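For example, something like this with AFNI's @djunct_overlap_check (a sketch: DSET_EPI and DSET_ANAT are placeholders for your actual file names):

    @djunct_overlap_check        \
        -ulay   DSET_EPI         \
        -olay   DSET_ANAT        \
        -prefix img_overlap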
Then, can you show what this image looks like, displaying the EPI (only the axial montage is shown here, because we turn off coronal and sagittal outputs for simplicity; we also use subbrick selection to show just the [0]th volume of the EPI---that is, the first one):
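For example, with AFNI's @chauffeur_afni (again, DSET_EPI is a placeholder; by default the program also writes coronal and sagittal images, and here just the axial montage would be kept):

    @chauffeur_afni                  \
        -ulay       DSET_EPI'[0]'    \
        -prefix     img_epi          \
        -set_xhairs OFF              \
        -montx 6 -monty 1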
Thanks for showing those images; that helps for understanding the starting point. Many of my comments here come from previous experience with other datasets and intuitions about where to start; some of it is guesswork with a couple of lead ideas. Sorry that I don't have a definitive suggestion out of the box. Much of this can also be handled within afni_proc.py, which is where we would recommend moving this EPI-anatomical alignment if you intend to include it in processing (there are many benefits to including this alignment within the full pipeline: avoiding unnecessary additional smoothing, automatically choosing a good reference EPI volume, and more).
Indeed, the overlap isn't great, so adding an option to have an initial center of mass alignment seems key. I'm not sure that there is a large relative rotation, so maybe I would just add -cmass cmass, without big_move, giant_move or ginormous_move, to start.
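A starting call might look something like this (a sketch: DSET_ANAT and DSET_EPI are placeholders, the cost function here is just an example per the contrast discussion above, and the skull-strip options reflect that the anatomical is already stripped):

    align_epi_anat.py                \
        -anat           DSET_ANAT    \
        -epi            DSET_EPI     \
        -epi_base       0            \
        -anat_has_skull no           \
        -epi_strip      None         \
        -cost           lpc+ZZ       \
        -cmass          cmass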
The anatomical (DSET_ANAT) has good tissue contrast, particularly in the cortex: the GM is brightest, then WM, then ventricle/CSF. It is also already skull-stripped.
The FMRI dset (DSET_EPI) does not have strong contrast. That is, it is hard to tell GM, WM and CSF apart visually, which will be a challenge. It also looks like it has brightness inhomogeneity issues---large regions of the brain have different baseline brightness, separate from tissue structures. This is a separate challenge. I think that WM might be a bit brighter than GM. It is hard for me to see the ventricles---those might be bright?
Some thoughts from the above:
Given the EPI contrast, I think this means that lpc+ZZ might be the best cost function to start with. However, sometimes in cases where there is low tissue contrast, the nmi cost function does better.
We can try to do "local unifizing" on the EPI to reduce the influence of brightness inhomogeneity, trying to bump up the 'edge features' within the data without being overwhelmed by the underlying brightness pattern. To do that (this is what we typically do for human datasets in afni_proc.py), we use AFNI's 3dLocalUnifize:
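For example (a minimal sketch; the input/output names are placeholders):

    3dLocalUnifize               \
        -input  DSET_EPI'[0]'    \
        -prefix DSET_EPI_unif

The output dataset (DSET_EPI_unif here) would then replace the original EPI in the align_epi_anat.py call.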
Thank you so much! The alignment is way, way better; it now just looks like an issue of scaling. Are there particular parameters you would recommend for this? (Orange overlay is the structural.)
I would try sticking with the NMI cost function, and add in -giant_move so that a large space of rotations is traversed (and two passes are used during alignment).
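That might look something like this (again a sketch with placeholder names, feeding in the locally unifized EPI from above):

    align_epi_anat.py                  \
        -anat           DSET_ANAT      \
        -epi            DSET_EPI_unif  \
        -epi_base       0              \
        -anat_has_skull no             \
        -epi_strip      None           \
        -cost           nmi            \
        -giant_move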
For infants, the tissue contrast in the anatomical dataset can be reversed from what we would see in older subjects' T1 images. The nmi cost function may be the best for this. You can also try -rigid_body. Also, the -perc option can be bumped up if the EPI data has high values bunched together near the top of its intensity range that provide the image contrast. If you have a pre-steady-state EPI volume, you may be able to eke out some structural contrast from that instead of a later volume.
Thank you all for your suggestions! Despite trying several of them, the results still don't quite line up. To see if we could improve the alignment, I used the nudge tool in FSLeyes to manually line up the functional data with the orientation of the structural (structural in red/orange, nudged functional in gray).
I then re-ran align_epi_anat.py (with NMI, without giant_move, with and without -cmass), but unfortunately the alignment was worse than it looked prior to running the script (shifted significantly); see the next image.
Any idea why this would happen? Hoping that if we can resolve this issue it will improve things overall. Thanks again!
Once nudged (BTW, the afni GUI also has a nudge plugin), the datasets shouldn't need the giant_move or big_move option. Did you try the -rigid_body option?
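A sketch of that call (placeholder names again; relative to the earlier command, -giant_move is dropped and -rigid_body added):

    align_epi_anat.py                    \
        -anat           DSET_ANAT        \
        -epi            DSET_EPI_nudged  \
        -epi_base       0                \
        -anat_has_skull no               \
        -epi_strip      None             \
        -cost           nmi              \
        -rigid_body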
If you like, I could try to take a look at it. Send the two datasets to me via OneDrive, box.com or Globus.
Thanks for the quick reply! Unfortunately getting rid of giant/big_move and trying the -rigid_body option still led to the introduction of a shift despite the initial nudging making the scans look somewhat well aligned.
infant_structural_template (the anatomical I'm trying to get into individual functional space-- based on an ROI atlas I created using dHCP subjects)
sub-CC00087AN14_ses-31800_task-rest_desc-preproc_bold_ALFF (original functional data file)
rotated_sub-CC00087AN14_ses-31800_task-rest_desc-preproc_bold_copy (nudged functional data file)
Ideally, if I can get a good transform to the original functional data, it will save a lot of time when I try to apply the same process to the rest of my subjects, but if I can get it working with the nudged data too, that would be awesome. I appreciate any help/advice!
I think some of the initial strategy you worked out with Paul was pretty good, but it didn't look like that in FSLeyes because of differences in the way that AFNI and FSL deal with displaying oblique data. To view them similarly, you can remove the obliquity from a copy of all the datasets with 3drefit -deoblique ...
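For example, for one dataset (a sketch; dataset names are placeholders, and we work on a copy so the original header stays intact):

    # copy the dataset, then purge the obliquity from the copy's header
    3dcopy  DSET_EPI.nii.gz  DSET_EPI_deob.nii.gz
    3drefit -deoblique       DSET_EPI_deob.nii.gz

The same two steps would be repeated for the anatomical.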
The EPI dataset has very poor tissue contrast, so it's hard to evaluate for a fine alignment. The cost functions that work with this (nmi, ls) won't distinguish between gray, white and CSF. Also, there's a significant chunk of brain that's not visible in the EPI dataset in the occipital cortex. That's not due to alignment because it's in the original dataset and might be caused by a bad coil.