independent volreg on brain and phantom in same acquisition

Greetings, venerable AFNIstas–

Short but potentially challenging question: we are acquiring a series of dynamic T1 images. Each dynamic contains the head of a living human and a gadolinium phantom that we are using for drift correction. The two move somewhat independently over the ~25-minute dynamic T1 sequence, so we need to estimate volume registration separately for the head and the phantom across acquisitions and apply both to the same images. I have some bad ideas about how to do this and would like to ask you for good ones. Thoughts?

All the best,

Paul

Not completely sure of what is going on, but a few scattered ideas:

1. Cluster these images first (3dClusterize/3dclust/3dmerge/3dkmeans) if the gadolinium is separable from the head, run motion correction on each cluster separately (3dvolreg/align_epi_anat.py/3dWarpDrive/3dAllineate), and then join them back together at the end (3dcalc), roughly as sketched after this list.

2. Since these are T1 images, you might try a rigid or rigid_equiv alignment to a template (@auto_tlrc/align_epi_anat.py).

3. Skullstrip each volume first (@SSwarper/3dSkullStrip, or 3dAutomask) to separate the brain from the gadolinium.
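To make the first idea a little more concrete, here is a rough sketch of one way it could go, assuming the phantom is bright enough to pull out with a simple intensity threshold. The dataset name dyn_t1, the threshold of 2000, and the choice of base volume are all placeholders you would want to adjust, and 3dAutomask on a T1 will only give an approximate head mask:

  # head mask from the first volume (approximate, no phantom)
  3dAutomask -prefix head_mask dyn_t1+orig'[0]'

  # phantom mask by intensity threshold, excluding the head mask
  # (2000 is a made-up value; pick it from your own histogram)
  3dcalc -a dyn_t1+orig'[0]' -b head_mask+orig \
         -expr 'step(a-2000)*not(b)' -prefix phantom_mask

  # split the full time series into the two structures
  3dcalc -a dyn_t1+orig -b head_mask+orig    -expr 'a*b' -prefix dyn_head
  3dcalc -a dyn_t1+orig -b phantom_mask+orig -expr 'a*b' -prefix dyn_phantom

  # rigid-body register each piece separately to volume 0
  3dvolreg -base 0 -prefix dyn_head_vr    -1Dfile head_motion.1D    dyn_head+orig
  3dvolreg -base 0 -prefix dyn_phantom_vr -1Dfile phantom_motion.1D dyn_phantom+orig

  # join the registered pieces back together
  3dcalc -a dyn_head_vr+orig -b dyn_phantom_vr+orig -expr 'a+b' -prefix dyn_combined

Applying masks taken from volume 0 to the whole series is a simplification; if either object moves a lot, you may want to dilate the masks a bit first (3dmask_tool) so nothing gets clipped before registration.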

These are good scattered ideas, Daniel. Thank you. I’m going to get serious about these data this week and will report back…

Attaching a 3D rendering of the subject with the phantom for your viewing displeasure.

PhantomZombie.jpg

Thanks for the cool image. The outer rings are very bright, so the phantom should be easy to separate from the head. A 3dcalc threshold might be enough if its intensities are higher than the brain's. Otherwise, I would guess some variation on those ideas would work.
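If it comes to that, a sketch of the 3dcalc route, with a made-up threshold of 2000 and dataset name dyn_t1 that you would swap for your own (read the threshold off your intensity histogram):

  # keep only voxels brighter than the threshold (the phantom rings)
  3dcalc -a dyn_t1+orig -expr 'a*step(a-2000)'      -prefix phantom_only

  # keep everything at or below the threshold (the head)
  3dcalc -a dyn_t1+orig -expr 'a*not(step(a-2000))' -prefix head_only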