This is probably not exactly an AFNI question, but I hope someone here might give pointers anyway.
I want to extract the brain (or cortex) shape from a macaque structural T1 scan as faithfully to the actual shape as possible.
My go-to method is to non-linearly align the NMT to the subject's brain, then apply the transform to the NMT segmentation (or brain mask) volume and go from there.
Is this an optimal method? Or are there better ways to do it, maybe in a direct way, by reliably detecting boundaries between tissue types? With AFNI or with other tools, as long as they work in macaques.
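For reference, the warp-and-apply approach I described can be sketched with standard AFNI tools roughly like this (the command names are real AFNI programs, but the file names are placeholders, and I've skipped the affine pre-alignment you'd normally do first):

```shell
# Nonlinearly warp the skull-stripped NMT template to the subject's T1
# (assumes the two are already roughly affinely aligned).
3dQwarp -base sub01_T1.nii.gz -source NMT_SS.nii.gz \
        -prefix NMT_in_sub01

# Apply the resulting warp to the NMT brain mask, resampling onto the
# subject grid with nearest-neighbor interpolation to keep it binary.
3dNwarpApply -nwarp  NMT_in_sub01_WARP.nii.gz  \
             -source NMT_brainmask.nii.gz      \
             -master sub01_T1.nii.gz           \
             -ainterp NN                       \
             -prefix brainmask_in_sub01
```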
That's usually good enough for most purposes. As you know, we typically use
@animal_warper to do this. That program computes the alignment to the standard-space template, and it also uses the inverse transformation to bring the template, along with additional masks and atlas regions, into the subject's native space. In some cases, like surgical interventions, this may not be perfect, but it's typically pretty good. You can make the alignment more aggressive with options that lower the nonlinear warp's penalty factor, letting the warp bend more, or remove the penalty factor completely.
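A typical invocation looks something like the sketch below (file names and abbreviations are placeholders; check `@animal_warper -help` for the full option list, including the penalty-related options mentioned above):

```shell
# Align a subject T1 to the NMT template and carry atlas followers
# back into native space (input/base/atlas file names are hypothetical).
@animal_warper                              \
    -input           sub01_T1.nii.gz        \
    -input_abbrev    sub01                  \
    -base            NMT_v2.0_sym_SS.nii.gz \
    -base_abbrev     NMT                    \
    -atlas_followers CHARM.nii.gz           \
    -atlas_abbrevs   CHARM                  \
    -outdir          aw_results
```

The output directory then contains the warped template, brain mask, and atlas regions in native space, plus QC images.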
AFNI includes a couple of segmentation programs, 3dSeg and 3dkmeans, that might make separating the brain easier in more difficult cases.
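For example, a basic 3dSeg run on a skull-stripped anatomical might look like this (the input file name is a placeholder):

```shell
# Three-class tissue segmentation on a skull-stripped T1;
# -mask AUTO lets 3dSeg estimate the brain mask itself.
3dSeg -anat sub01_brain.nii.gz -mask AUTO \
      -classes 'CSF ; GM ; WM'            \
      -prefix Segsy
```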
Ting Xu has a deep learning method for segmenting the macaque brain that you might try too.
For whatever reason I've been using a short tcsh script rather than @animal_warper, so I'll compare the performance of the two, and I'll check out the deep learning version as well.
Thanks a lot,
I've used both, and while Ting's U-net is faster, @animal_warper is more consistent and precise. And while most approaches are strongly dependent on the quality of the individual scan, @animal_warper is really forgiving in this respect.
Thank you, Chris.
This is not my main project, so the U-net is still on my list of things to try; I am curious how it will do. But my script seems to give acceptable results anyway.