Are there any tricks for getting @animal_warper to work better with T2w images? I tried setting the cost to lpc+ZZ, which improved things, but it's still not great. The white matter aligns reasonably well in some spots, but the gray matter is pretty far off (brain size is overestimated). Maybe the scans I received are simply too poor for this to work, but I feel it has worked better in the past on worse-looking T1w scans. Any leads?
Hi Chris,
If the template and the subject intensities are completely or partially reversed (CSF dark in one and bright in the other), then lpc, lpc+ZZ, or nmi are usually good cost choices. If the sizes are very different, the -supersize option might help too. I am also considering adding a unifize option that would call 3dUnifize or 3dLocalUnifize, which could help for some datasets. You can also invert the dataset to make it more T1-like with 3dAutomask and 3dcalc - see below. You already know these things, but for others' benefit, there are other potential problems to check, such as strange starting coordinates or an overly large FOV.
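For concreteness, here is a rough sketch of those first suggestions. The dataset and template names are just placeholders, and I am writing -cost and -supersize as direct @animal_warper options, as in your run; adjust to your own setup:

# optional: even out the T2w intensities first (3dUnifize's -T2 mode, or 3dLocalUnifize)
3dUnifize -T2 -input anat.nii.gz -prefix anat_un.nii.gz

# then warp to the template with a T2-appropriate cost function
@animal_warper -input anat_un.nii.gz -base template.nii.gz \
               -outdir aw_results -cost lpc+ZZ -supersize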
# make a T2/CT/EPI dataset more T1-like: automask, then invert the intensities
3dAutomask -apply_prefix anat_amd.nii.gz -dilate 3 anat.nii.gz
set max = `3dBrickStat -max anat_amd.nii.gz`
3dcalc -a anat_amd.nii.gz -expr "step(a)*(${max}-a)" -prefix anat_amd_rev.nii.gz
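Then you could try the inverted copy as the @animal_warper input; with the contrast now more T1-like relative to a T1w template, a standard cost may behave better (placeholder names again, just a sketch):

# feed the inverted, more T1-like copy into the warp
@animal_warper -input anat_amd_rev.nii.gz -base template.nii.gz -outdir aw_rev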