I’m looking to create a study-specific brain template for a case study we ran. The two participants are identical twins, so we figured there would be less distortion of the signal if we warp to an average of their two brains rather than to the standard MNI template. I know you can make custom templates with @toMNI_Awarp and @toMNI_Qwarpar, but from my understanding you need a dataset with at least 9 files to use @toMNI_Qwarpar. If anyone has any insight on how to do this, any help would be greatly appreciated!
This could be done with the following steps:
[li] Align the twins’ brain images approximately with 3dAllineate, so that they are “pretty close”.
[/li][li] Use 3dQwarp -plusminus to warp the two previously aligned brain images towards each other – to “meet in the middle”.[/li]
If you want them to be close to MNI space, then in step 1 you should align them to the MNI template instead of to each other. The goal of step 1 is to get them close enough so that step 2 can work well.
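The two steps above could be sketched roughly as follows. This is only an illustration, not a tested recipe: the dataset names (twin1.nii, twin2.nii) and the template filename are placeholders, and cost-function/parameter choices are left at defaults that you would want to tune.

```shell
# Step 1: affinely align each twin to the MNI template, so both
# start out "pretty close" in the same space (filenames are
# hypothetical; substitute your own template and datasets).
3dAllineate -base MNI152_2009_template.nii.gz -source twin1.nii \
            -prefix twin1_al.nii -1Dmatrix_save twin1_al.aff12.1D
3dAllineate -base MNI152_2009_template.nii.gz -source twin2.nii \
            -prefix twin2_al.nii -1Dmatrix_save twin2_al.aff12.1D

# Step 2: nonlinearly warp the two aligned volumes toward each
# other so they "meet in the middle"; -plusminus produces the
# half-way (_PLUS and _MINUS) warped datasets.
3dQwarp -plusminus -base twin1_al.nii -source twin2_al.nii \
        -prefix twins_mid.nii
```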
Adding on to Bob’s good advice, you can use 3dAllineate to compute the “in-between” space, too. cat_matvec can compute this from the affine transformation with the -S (square root) option. We have a script here that does something similar for a single subject across two sessions:
That in-between volume is meant to remove bias from either of the source sessions. This part only concerns the affine part of the alignment. For the nonlinear part, as Bob noted, you will need to use 3dQwarp.
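A minimal sketch of the half-way affine idea, assuming you have saved a twin1-to-twin2 affine matrix from 3dAllineate (the matrix and dataset names here are hypothetical):

```shell
# The -S postfix asks cat_matvec for the matrix square root, i.e.
# the "half-way" transformation between the two spaces.
cat_matvec -ONELINE twin1_to_twin2.aff12.1D -S > twin1_to_mid.aff12.1D

# Apply the half-way matrix to move twin1 into the unbiased
# in-between space.
3dAllineate -1Dmatrix_apply twin1_to_mid.aff12.1D \
            -source twin1.nii -master twin2.nii -prefix twin1_mid.nii
```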
Another important consideration, if you are computing a mean “template”, is to make the intensity values of the datasets similar, e.g. with 3dUnifize. Most of this kind of thing is done in make_template_dask.py (a successor to the scripts you mentioned), but that is not distributed in our standard AFNI distribution yet, and it is probably overkill for this similar but somewhat simpler situation.
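For the simple two-brain case, the intensity normalization and averaging might look like this (a sketch with hypothetical filenames, assuming the two datasets are already aligned on the same grid):

```shell
# Normalize the intensity distributions before averaging.
3dUnifize -input twin1_mid.nii -prefix twin1_un.nii
3dUnifize -input twin2_mid.nii -prefix twin2_un.nii

# Average the two intensity-normalized, aligned volumes to form
# the study-specific template.
3dMean -prefix twins_template.nii twin1_un.nii twin2_un.nii
```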
If the brain sizes are very similar, you might consider using a rigid_body or rigid_equiv version of the affine transformation. These are both options for align_epi_anat.py, which includes calls to 3dAllineate. The rigid_body option limits the alignment by using 3dAllineate’s “shift_rotate” warp. The rigid_equiv option also produces a transformation that does not scale or shear the dataset, but it is derived from the full affine transformation (computed with cat_matvec -P, or via align_epi_anat.py). With data from different subjects, it is likely the rigid_equiv version would be preferable.
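The two flavors could be sketched like this (hypothetical filenames; shown directly with 3dAllineate and cat_matvec rather than through align_epi_anat.py):

```shell
# Option 1: restrict 3dAllineate itself to a rigid-body
# (shift + rotate) alignment.
3dAllineate -base twin1.nii -source twin2.nii -warp shift_rotate \
            -prefix twin2_rb.nii -1Dmatrix_save twin2_rb.aff12.1D

# Option 2: compute the full affine, then extract its rigid
# "equivalent" (no scaling or shearing) with cat_matvec's -P
# (polar decomposition) postfix.
3dAllineate -base twin1.nii -source twin2.nii \
            -prefix twin2_aff.nii -1Dmatrix_save twin2_aff.aff12.1D
cat_matvec -ONELINE twin2_aff.aff12.1D -P > twin2_rigid_equiv.aff12.1D
```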
Twins, even “identical” twins, aren’t really identical, so it’s hard to say if their brains would be a very close match. Monochorionic twins, for example, can have very different anatomical characteristics.
I will add to Daniel’s customarily excellent advice: whatever you do, look at the two brain images after alignment. You can flip between them easily in the AFNI GUI by setting one as the Underlay and the other as the Overlay, turning ‘See Overlay’ off, and then using the ‘u’ keypress (with the mouse cursor focused in the image viewer) to toggle between them. This is a good way to see where the two brain volumes line up well and where they do not. Only you can decide whether the differences are important for your study, of course.
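If you prefer to script the setup, something along these lines might work using AFNI’s driver commands (see README.driver for the command list; the dataset names are hypothetical, and this is an untested sketch):

```shell
# Open AFNI with one twin as underlay and the other as overlay,
# overlay visibility off, so the 'u' keypress flips between them.
afni -com 'SWITCH_UNDERLAY twin1_un.nii' \
     -com 'SWITCH_OVERLAY twin2_un.nii'  \
     -com 'SET_FUNC_VISIBLE -'           \
     -com 'OPEN_WINDOW A.axialimage'
```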