With the advent of @SSwarper, along with the zero-padding and centering tricks you've kindly provided, the quality of alignment between our BOLD and anatomical data has improved dramatically. The approaches are also robust: for the 1500-subject, 15-site consortium we collaborate with, data from 13 of the 15 sites went from an average Dice coefficient of ~0.78 to ~0.92. Really nice. That's not to mention even greater improvements in the Dice coefficients for the warped anatomical-to-template alignment.
We wonder, then, whether you might have suggestions for improving alignment at our two straggler sites? Perhaps there is something instructive in these outliers that could help refine the techniques further. If you can provide an upload link, we'll be happy to share a representative subject's data from each site, along with the afni_proc.py script we're currently running.
Good to hear about your success with these methods. To be sure, your overall procedure and the Dice coefficient rely on both the nonlinear alignment to a template and the alignment of the EPI data to the anatomical dataset, so both will have an effect. I will PM you with upload instructions, or you can send me the data through some other means. Please provide your current procedure for context.
Hi Paul,
These two subjects have some special circumstances.
First, both datasets have unusually large intensity non-uniformity, with very high enhancement in the cortex. That meant the solution usually involved raising the percentile threshold (the -perc option) to 98 so that most of the cortex is kept.
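Just as a sketch of how one might look at that (the percentile range here is only illustrative, using the automasked EPI from the commands further down), 3dBrickStat can report the upper tail of the intensity distribution, which shows how much of the bright cortex a given percentile threshold would keep:
3dBrickStat -non-zero -percentile 90 2 100 epi_am.nii.gz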
There is also very little structural contrast (that is, very little difference among white matter, gray matter, and CSF), and the voxels are larger than we typically see nowadays. The default cost functions, lpc and lpc+ZZ, rely on that contrast for alignment. Instead, the nmi and lpa+ZZ cost functions were generally more reliable for these subjects.
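If it helps, one way to survey the different cost functionals before committing to one (just a suggestion, not something I ran here) is 3dAllineate's -allcostX option, which prints every available cost value for the unwarped inputs and then quits:
3dAllineate -base T1_ns.nii -source epi_am.nii.gz -allcostX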
Subject 1 had the additional complication of being extremely oblique, so much so that the starting cardinal grids of the EPI and anatomical barely overlapped. Without a -giant_move or -ginormous_move option, the alignment moves onto a grid that will not contain most of the brain, and the obliquity complicates the intermediate datasets. The solution is to set the -master_anat grid to that of the EPI but with the voxel resolution of the anatomical dataset, or to use the grid of the deobliqued anatomical dataset.
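As a rough sketch of that second route (the _deob and _al_deob names are just placeholders), you could deoblique a copy of the anatomical with 3dWarp and pass it as the output grid:
3dWarp -deoblique -prefix T1_ns_deob.nii T1_ns.nii
align_epi_anat.py -anat T1_ns.nii -epi epi_am.nii.gz -epi_base 0 -perc 98 -cost nmi -master_anat T1_ns_deob.nii -suffix _al_deob -epi_strip None -anat_has_skull no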
I will note that both subjects started off with good alignment, as acquired. For the oblique acquisition of subject 1, really only a 3dWarp command was needed to verify that. With the low structural contrast, it is difficult to tell where the edges are and whether anything actually improves. In this case, the -ginormous_move option can make things worse for data that are already aligned: it moves the datasets apart and then has to search through hazy data just to get back to the starting point. The -rigid_body option limits the transformation to rotation and translation, excluding scaling and shearing.
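For reference, checking that starting alignment is quick (T1_ns_ob.nii is a placeholder name): 3dinfo reports the obliquity angle, and 3dWarp can put the anatomical onto the EPI's oblique frame for a side-by-side look in the viewer:
3dinfo -obliquity epi_am.nii.gz T1_ns.nii
3dWarp -card2oblique epi_am.nii.gz -prefix T1_ns_ob.nii T1_ns.nii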
Although I didn't use it here, I am glad to see that you used the -check_flip option. It is a good idea to use that in multi-site, multi-scanner studies.
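For anyone following along, a sketch of how that can look (the _al_cf suffix is just a placeholder): the option can be given directly to align_epi_anat.py, or passed through afni_proc.py with -align_opts_aea.
align_epi_anat.py -anat T1_ns.nii -epi epi_am.nii.gz -epi_base 0 -perc 98 -cost nmi -suffix _al_cf -check_flip -epi_strip None -anat_has_skull no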
Here are various formulations of align_epi_anat.py that worked well for these subjects.
Subj 1 (Oblique - 27 degrees)
3dAutomask -apply_prefix epi_am.nii.gz epi_short.nii'[5]'
align_epi_anat.py -anat T1_ns.nii -epi epi_am.nii.gz -epi_base 0 -perc 98 -cost lpa+ZZ -suffix _al10 -overwrite -epi_strip None -anat_has_skull no
align_epi_anat.py -anat T1_ns.nii -epi epi_am.nii.gz -epi_base 0 -perc 98 -cost nmi -suffix _al12 -overwrite -epi_strip None -anat_has_skull no -rigid_body
I would also recommend that you avoid storing the EPI datasets as float and instead convert them to short (16-bit) integers.
3dcalc -a epi.nii -expr 'a' -datum short -prefix epi_short.nii
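A quick way to confirm the storage type afterward is something like:
3dinfo -datum epi_short.nii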
Also, I am not clear on where your Dice coefficient comes from.
Thanks so much for diving into our alignment problem. Much appreciated! We’ll unleash your fixes on the data and will report back with the results early in the new year. The Dice coefficient we’ve been using is the one that gets reported in the summary text file generated by afni_proc.
Just wanted to add that the use of the Dice coefficient from afni_proc.py's output is a little surprising. That value shows the overlap of the EPI and anat masks in template space (with the tlrc block), so it depends on how well those masks are generated. We think of it as a guide to whether alignment has succeeded at all: some minimum overlap is good, but a higher value does not necessarily mean better alignment. Otherwise, we would use it as the alignment cost function. It's still best to take a look at the data.
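If you want the raw numbers behind that kind of overlap value, one option (the mask names here are hypothetical stand-ins for the afni_proc.py results) is 3dABoverlap, which prints the voxel counts of each mask and of their intersection; the Dice coefficient is then 2*#(A int B)/(#A + #B):
3dABoverlap mask_anat+tlrc mask_epi_anat+tlrc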
At last circling back to this conversation and trying out your proposed fixes. I'm actually going to run all of them on the full set of difficult data and see which one wins. In this context, three questions / requests for clarification emerge:
First, I'm assuming that T1_ns.nii means the skull-stripped T1?
Second, what does epi_am_un.nii.gz refer to?
Third, and the only substantive question on this list: what is the least biased way for us to decide which approach wins this competition? Would it be the Dice coefficient of the native-space anatomy and the aligned EPI for each approach?
Look at the results to see what is really best. If the Dice coefficient or any other simple overlap measure were good enough, it would be the cost function instead. Check the edges, flip the underlay and overlay, fade in and out, and so on.
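One way to make that edge check easier, if you haven't tried it (a sketch only, with the suffix as a placeholder), is align_epi_anat.py's -AddEdge option, which builds edge-enhanced copies of the anatomical and aligned EPI that you can flip through as underlay/overlay pairs in the AFNI GUI:
align_epi_anat.py -anat T1_ns.nii -epi epi_am.nii.gz -epi_base 0 -perc 98 -cost nmi -suffix _al_edge -AddEdge -epi_strip None -anat_has_skull no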