I feel like I’m missing something obvious, but here’s my situation: I have warped the ONPRC18 DTI template of the rhesus macaque to NMT using the RheMAP-based transforms I previously calculated with ANTs (antsApplyTransforms). Now I want to incorporate this into our ‘single-subject pipeline’, which uses @animal_warper to align NMT/CHARM/SARM to the individual, so that I also get a connectivity template in each monkey’s native space.
This is where things start to get confusing. antsApplyTransforms cannot deal with 3dAllineate’s 1D affine files, and running 3dAllineate on the (5D) tensor volume does not seem able to save the result (ERROR (nifti_image_write_hdr_img2)). I wouldn’t be surprised if there is an AFNI function I’m blissfully unaware of that will transform the 5D tensor file, or perhaps I should think about decomposing it into sub-bricks, applying the transform, and re-composing to 5D.
Could also be that I’m saying really uninformed stuff, as I’m not at all used to working with DTI data. I just want to apply the transforms from @animal_warper to a 5D NIfTI file that defines tensors.
OK, the ‘3dAllineate not being able to save’ part was my mistake. I can actually use 3dAllineate on the 5D tensor file, but the output has a different datatype than the source (float32 instead of float64), and the result is no longer seen as a tensor volume but as scalar. Maybe this is where things go wrong?
When I compare headers, these are the differences (first the source tensor file, then the 3dAllineate output):

name         offset  nvals  values
intent_code  68      1      1005
datatype     70      1      64
bitpix       72      1      64

name         offset  nvals  values
intent_code  68      1      0
datatype     70      1      16
bitpix       72      1      32

(So the source is a float64 volume with the symmetric-matrix intent code, while the output is float32 with no intent code at all.)
The dimensionality is the same, the transformation has been applied, but the datatype has changed. Can I prevent 3dAllineate from doing that?
Using nifti_tool to edit the header seems to solve the issue. I will now close this dialogue with myself on the message board and hope it will be useful for someone in the future (:P)
nifti_tool -mod_hdr -mod_field intent_code 1005 -infiles IN.nii -prefix OUT.nii
This may be a slightly messier issue than you are picturing. I am not confident about warping tensor files, since the warps would not modify the rotations in the tensors themselves, and it seems like that would be important. The warp would merely move the tensors, not re-evaluate them.
Ignoring that, note that most AFNI programs do not even handle 64-bit float data. NIFTI images will be converted to 32-bit floats upon input (there is likely terminal text about that). In this case, 3dAllineate will downgrade to 32-bits, apply the warp, and write the output.
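The float64-to-float32 downcast is lossy in the usual floating-point sense. A quick standard-library check (pure Python, no AFNI needed) shows a value changing when squeezed through 32 bits:

```python
import struct

# Round-trip a double through a 32-bit float, the same kind of
# downcast AFNI applies to 64-bit input.
x = 0.1  # not exactly representable in either precision
as32 = struct.unpack('f', struct.pack('f', x))[0]

print(x == as32)      # False: precision was lost
print(abs(x - as32))  # on the order of 1e-9
```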
*** Do not use nifti_tool -mod_hdr to convert to 64-bit floats. ***
This conversion would just claim that the output was 64-bit floats, but it would not be; the data is still actually 32 bits on disk, and the file size would no longer match what the header specifies.
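The size mismatch is easy to see with arithmetic alone. As an illustration (dimensions made up for the example), a header claiming float64 implies twice the byte count that a float32 file actually contains:

```python
# Illustrative only: data-block size implied by a NIfTI header.
# Dimensions are made up for the example.
nvox = 64 * 64 * 32 * 6     # voxels x tensor components

bytes_float32 = nvox * 4    # bitpix 32 -> 4 bytes/value (what is on disk)
bytes_float64 = nvox * 8    # bitpix 64 -> 8 bytes/value (what the edited header claims)

print(bytes_float32)  # 3145728
print(bytes_float64)  # 6291456
# A reader trusting the edited header would try to read twice as many
# bytes as the file contains.
```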
To actually convert back to 64-bits, use “nifti_tool -convert2dtype”, as in something like:
nifti_tool -copy_image -infiles warped.32bit.nii.gz -convert2dtype NIFTI_TYPE_FLOAT64 -prefix warped.64bit.nii.gz
Does that seem reasonable?
Thanks, you are right on all counts. The spatial transformation works, but the tensors need to be reoriented as well. With ANTs one can use ReorientTensorImage, but this needs the transform that was applied to the volume, and it doesn’t seem to work with AFNI’s 1D transform files. Maybe I should move this whole routine to ANTs and stop trying to make the AFNI transforms work in ANTs.
During a hackathon a while ago, Bob Cox was working with the ANTs folks on making these transformations play together better, but I don’t know how far that got. For a simple affine, it shouldn’t be too complicated. But certainly, don’t make your life more difficult than it needs to be.
We generally don’t rotate DTI tensor values, only the positions, AFAIK. As Rick points out, it’s problematic, and it probably won’t be really right for a nonlinear alignment problem, which is how people would generally do an alignment nowadays.
We do take care of the somewhat related problem of transforming gradient vectors (1D lists of xyz vectors) for data that has to be rotated for motion correction or for “axialization” to a template. One could compute the eigenvectors from the tensor, rotate each of those, and then recompute the tensor from the transformed eigenvectors. That seems like some trouble, though, and the use case is a bit difficult to see. Do you need to transform the tensor, or maybe something else, like the primary eigenvector or some other aspect of the DTI atlas? That atlas has FA, primary eigenvectors, and a number of other common DTI maps that could be transformed.
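For what it’s worth, the eigenvector route sketched above is equivalent to the standard tensor reorientation D' = R D R^T, where R is the rotation part of the transform. A toy, plain-Python illustration (the matrices are made up for the example):

```python
# Toy illustration of tensor reorientation: D' = R * D * R^T.
# Rotating the frame rotates the tensor's principal direction with it.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Diffusion tensor with its principal axis along x (eigenvalues 3, 1, 1).
D = [[3.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

# 90-degree rotation about z.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c,  -s,  0.0],
     [s,   c,  0.0],
     [0.0, 0.0, 1.0]]

D_rot = matmul(matmul(R, D), transpose(R))
# Principal axis has moved from x to y: D_rot[1][1] is now ~3.
print(round(D_rot[1][1], 6))  # 3.0
```

Merely resampling the component volumes (as a plain warp does) would move the tensors without this rotation, which is exactly the problem noted above.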
In case you do want to pursue something for this, consider programs like Vecwarp for applying affine transformations to lists of xyz coordinates.
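What applying an affine to a list of xyz coordinates boils down to can be sketched in a few lines; the 3x4 matrix here is made up, and this shows only the arithmetic, not Vecwarp’s actual I/O conventions:

```python
# Hedged sketch: applying a 3x4 affine (linear part + offset) to xyz
# points, as a Vecwarp-style tool would. The matrix values are made up.
M = [[1.0, 0.0, 0.0,  5.0],   # each row: m_i1 m_i2 m_i3, then offset
     [0.0, 1.0, 0.0, -2.0],
     [0.0, 0.0, 1.0,  0.0]]

def apply_affine(M, p):
    x, y, z = p
    return tuple(M[i][0]*x + M[i][1]*y + M[i][2]*z + M[i][3] for i in range(3))

points = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0)]
print([apply_affine(M, p) for p in points])
# [(5.0, -2.0, 0.0), (15.0, 8.0, 10.0)]
```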
I don’t know much about the AFNI-ANTs transformation conversion, but combining different definitions of affine warps has been described here:
If you try it, let me know how it works out.
Nonlinear transformations, surprisingly, should be easier to apply, because most software packages represent them as a simple dx,dy,dz at each voxel. Of course, the displacement convention may differ between packages, and there can be issues with the gridding of input and output, so one has to approach that with care too.
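The mechanics of such a displacement field are simple; a toy sketch (the field values are made up, and note that the add-vs-subtract convention really does differ between packages):

```python
# Hedged sketch: a nonlinear warp stored as a per-voxel displacement
# (dx, dy, dz). Some packages store the displacement to ADD to a
# coordinate, others its inverse; this only shows the mechanics.
disp = {  # voxel -> (dx, dy, dz); a tiny made-up field
    (1, 1, 1): (0.5, -0.25, 0.0),
    (2, 1, 1): (0.4, -0.20, 0.0),
}

def warp_point(p, disp):
    d = disp.get(p, (0.0, 0.0, 0.0))  # zero displacement off the field
    return tuple(pi + di for pi, di in zip(p, d))

print(warp_point((1, 1, 1), disp))  # (1.5, 0.75, 1.0)
```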