Hi there! This is low priority, so feel free to skip for time. I'm curious: if I had a very low-res, low-contrast time series, is there an accuracy limit on 3dvolreg's ability to estimate its motion? Specifically, I have ABCD-style T1 and T2 anatomicals that had navigator scans interspersed for prospective motion correction, and these nav images are also output with the data. If I sew these low-res, low-contrast nav images together into a time series (see attached) and estimate its motion with 3dvolreg, I'm not sure how veridical the estimates would be (see also attached). It would be cool to correlate estimated head motion with QC metrics of the resulting structurals, but with such low-quality images (8 mm isotropic), I'm not sure I'm setting myself up for an unstable analysis.
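For concreteness, what I have in mind is just a sketch like the following (dataset names are hypothetical, and the choice of -base is arbitrary):

    # sew the individual navigator volumes into one time series
    3dTcat -prefix navs_all nav_001+orig nav_002+orig nav_003+orig

    # rigid-body registration to the first volume; save the 6 motion
    # parameters (roll, pitch, yaw, dS, dL, dP) and the max displacement
    3dvolreg -base 0                     \
             -1Dfile nav_motion.1D      \
             -maxdisp1D nav_maxdisp.1D  \
             -prefix navs_volreg        \
             navs_all+orig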
Finally, I see this in 3dvolreg's help:
** roll = shaking head 'no' left-right
** pitch = nodding head 'yes' up-down
** yaw = wobbling head sideways (ear toward shoulder)
But shouldn't it be:
** roll = wobbling head sideways (ear toward shoulder)
** pitch = nodding head 'yes' up-down
** yaw = shaking head 'no' left-right
I don't think there is an a priori rule for how low (in resolution) 3dvolreg can be expected to go. It will depend on contrast, distortion, noise, etc. I think what you are suggesting is interesting, and you should just try it and visualize the results to see how things look. When I was in Cape Town, one of the other postdocs worked on making a volumetric navigator to apply during acquisition, for realtime motion estimation and correction; that worked by acquiring a quick navigator at 8 mm resolution after each (DWI) volume and quickly aligning it to the previous one. So, it is doable.
In general, one should always view alignment as being accurate to some length scale; it is never absolute. That message will just be brought home particularly clearly with this data. Voxel size and data smoothness both contribute to setting an expected limit on average alignment accuracy.
There is also "apparent motion", which has been described really nicely by Jo Etzel here and here. Life is complicated with FMRI, even for simple alignment!
Re. the definition of roll/pitch/yaw: hmm, that is interesting. I guess I'm not surprised people might have different systems/viewpoints/expectations on this. Looking around the AFNI code, at least, the cited convention seems to be consistently maintained. This is a more definite/concrete description within 3dvolreg's help:
roll = rotation about the I-S axis
pitch = rotation about the R-L axis
yaw = rotation about the A-P axis
and that is consistent, too, with the AFNI "volume rendering" plugin definitions.
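For concreteness (glossing over sign conventions), a pure roll by angle $\theta$, i.e., a rotation about the I-S axis, leaves the I-S coordinate fixed and mixes the other two; in (R-L, A-P, I-S) coordinates it is the familiar matrix

$$
R_{\mathrm{roll}}(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta  & 0 \\
0          & 0           & 1
\end{pmatrix},
$$

with pitch and yaw being the analogous rotations about the R-L and A-P axes, respectively.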
So, in the end, I guess there are two kinds of people in the world:
1. Those who view subjects lying in a scanner as "whole-body airplanes", as if their outward-extended arms were wings, so that the plane's tail-to-nose axis that defines rolling maps to the toe-to-head human body axis (AKA the I-S axis in standard human acquisitions).
2. Those who view subjects as "head-only airplanes", so that the airplane nose maps to the human nose, and then the A-P axis defines rolling.
To add a bit to Paul's comments: in some cases, motion correction or other alignment (like EPI-to-anatomical) may not be worth applying at all, as with anesthetized, head-posted, or otherwise essentially static subjects. One could still compute the correction but not apply it, and then use the parameters as regressors.
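A minimal sketch of that compute-but-don't-apply idea (dataset names are hypothetical; I believe 3dvolreg accepts -prefix NULL to skip writing the registered dataset, but a throwaway prefix works just as well):

    # estimate the motion parameters without keeping the resampled data
    3dvolreg -base 0 -prefix NULL -1Dfile mot.1D epi+orig

    # pass the 6 parameters as nuisance regressors in the model
    3dDeconvolve -input epi+orig         \
                 -polort 2               \
                 -ortvec mot.1D motion   \
                 -num_stimts 0           \
                 -errts epi_errts        \
                 -bucket epi_bucket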
The difference between yaw and roll in the two conventions is confusing, but we rarely need to use them separately. I think of the head-only way as Superman flying with his head propped up. Again, it's not critical in the end; what matters is knowing which axis the motion is about.
Thank you, gentlemen! This is very helpful and insightful. Keep in mind the time series isn't BOLD-weighted (TE = 3 ms at 3T), just localizers across time. So although I love Jo's work and have lately also been thinking about apparent motion and censoring (in a different dataset), I'm not even sure that phenomenon would show up in these data. But I'll give my idea a shot and see if anything comes of it. The point would be to look for specific properties of structural artifacts (ringing, or various kinds of ghosting in various directions) and try to understand how these are influenced by motion occurring during the structural scan itself. It's a slow TR, too (2.5 s), so this very likely won't reveal anything; one really needs an external camera system with a high frame rate, for example. Anyway, I really appreciate the chat and hope it didn't steal too much of your time. As for coordinate frames, I will try to think like a whole-body airplane from now on!
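In case it's useful to anyone who finds this later: for the correlation step, my plan is to first collapse the six parameters into a single per-TR motion magnitude (the "enorm" that afni_proc.py uses for censoring) before relating it to the QC metrics; file names here are hypothetical:

    # Euclidean norm of the temporal derivative of the 6 motion parameters
    1d_tool.py -infile nav_motion.1D          \
               -derivative                    \
               -collapse_cols euclidean_norm  \
               -write nav_enorm.1D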