incorrect tlrc correction

Hello AFNI users,

I have an image that is correctly aligned in native space, but when it is warped to standard space it becomes skewed and unusable. However, this only affects certain subjects, and I do not understand why some subjects are affected and others are not. I am using the uber_subject.py GUI to create the preprocessing scripts.

Here is the script line where the images seem to become incorrectly aligned:

@auto_tlrc -base MNI_avg152T1+tlrc -input warped_ns+orig -no_ss -init_xform AUTO_CENTER

Is there a way to correct the alignment at the tlrc level?

I have attached a screenshot of the skewed image.

Thank you for any help with this.

jef

Hi, Jef-

OK, we can look into this. It may be that the initial overlap is not very good, and/or that the tissue contrast or level of detail is mismatched.

A couple background questions:
A) Has your input dataset already been warped once? I’m guessing this from the filename. I’m a little curious why there might be multiple warps/alignments. That might be fine, but each regridding can blur the data a bit, so it is often preferable to concatenate the transformations and regrid only once.
B) Is your input a T1w anatomical? It is a little hard to tell from the image; there isn’t a lot of structure that is discernible, and I can’t tell whether that is due to the warp or to original dset properties.
C) What is the purpose of this alignment? Is it part of EPI/FMRI processing? That might affect the choices to make, and perhaps the recommendation of a better tool to perform this step.
D) Is there any reason not to use full nonlinear alignment to template space? That is usually much preferred to @auto_tlrc, which performs only affine alignment, so the level of detail in the matching will necessarily and generally be lower. We might recommend using @SSwarper, for example, to perform nonlinear alignment to the template and skullstripping of the anatomical simultaneously.
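For reference, a minimal @SSwarper call tends to look something like the following sketch; the input filename and subject ID here are placeholders, and the MNI “SSW” template is distributed with AFNI:

```shell
# Hypothetical example: nonlinear warp to MNI space + skullstripping,
# done together. Outputs include anatSS.* (skullstripped anat) and
# anatQQ.* (warped anat + transforms), which afni_proc.py can use.
@SSwarper                                       \
    -input  anat_subj01.nii.gz                  \
    -base   MNI152_2009_template_SSW.nii.gz     \
    -subid  subj01
```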

–pt

Thank you for your reply.

I am inputting a T1 anatomical image and one run of EPI data. I am preprocessing the data without any timing vectors at this point. However, during this, the data are reported as being oblique. So, prior to entering the T1 into the uber_subject.py script, I perform a 3dWarp -deoblique correction and then enter the resulting 3dWarp.HEAD into the script. For the EPI data, I perform a 3dTshift, then run a 3dWarp -deoblique correction on the 3dTshift.HEAD file, and then enter the resulting 3dWarp.HEAD file into the script.
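In command form, those pre-steps are roughly the following (output prefixes here are placeholders I chose for illustration):

```shell
# Deoblique the T1 anatomical before it goes into uber_subject.py
3dWarp -deoblique -prefix anat_deob anat+orig

# For the EPI: slice-timing correction first, then deoblique the result
3dTshift           -prefix epi_tshift  epi+orig
3dWarp -deoblique  -prefix epi_deob    epi_tshift+orig
```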

Both the tcat and the tshift images, which are in original view, are correct. However, the volreg image, which is in Talairach view, is severely skewed, as per the attached image. And the skewing error occurs in only about 30% of the subjects: at this point, I have looked at 49 subjects, and about 18 are skewed while the others are correct. Same task and same scanner protocol.

I noticed yesterday that the rmepivolreg image (original view) is good and the rmepinomask image (Talairach view) is skewed. I looked at these images as the data were being generated, just trying to see where in the script the error skewing the data occurs.

So that is the background; I think it helps clear up some of your questions.

Regarding the question about nonlinear alignment to template space, I am not sure what that refers to. My experience with AFNI is the Andy’s Brain Book tutorial and now this data. So I apologize for my lack of experience, and I will greatly appreciate any help you can offer.

Thank you,

jef

Hi, Jef-

OK, so my reading of this is that you are interested in building a pipeline for your EPI and anatomical T1w dataset, and that you want to use a standard space/reference template for the final location of the EPI data. Sounds great.

The obliquity “issue”: so, for historical reasons, AFNI will (at present) vociferously warn about obliquity. This really is not that big a thing, and we were just discussing rolling back the warnings. The main issue that comes up is with visualizing data: how an oblique dataset appears to overlap with another dset of differing obliquity (or without any obliquity at all) will not accurately reflect how the two dsets actually overlay in reality. But that is OK, and by the time the data are processed, the obliquity will generally have been applied and will no longer be present as a potential difference. Yay.
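As a side note, a quick way to check obliquity on any dataset is with 3dinfo (the dataset name here is a placeholder):

```shell
# Report whether a dataset is oblique: prints 1 if oblique, 0 if not
3dinfo -is_oblique  dset+orig

# Report the obliquity angle itself, in degrees
3dinfo -obliquity   dset+orig
```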

I’ll note that there are a series of videos on various MRI processing topics here:
https://www.youtube.com/c/afnibootcamp
… which might help. Here is one about various alignment considerations, for example, because alignment comes up in many parts of standard FMRI processing:
https://www.youtube.com/watch?v=PaZinetFKGY&list=PL_CD549H9kgqJ1GDXAs1BWkgEimAHZeNX

Re. aligning to standard space: basically, when performing alignment, you want to know how much stretching/squashing you expect to be necessary to get your datasets well enough aligned. You also need to know the properties of your data, like contrast, scale of detail, etc., to know how much stretching/squashing your data can be expected to respond well to. The above alignment videos cover a lot of that. A good general rule is that when aligning anatomical data between two different subjects (and subject-to-template is a case of this), nonlinear alignment will be needed for reasonably accurate matching of features. There are just too many differences between brains to get good alignment with lower-order (e.g., rigid body or linear affine) alignment, at least with human data.

For setting up a pipeline, using afni_proc.py (AP) would be recommended over the older uber_subject.py. It has more options (yay!), is more up to date, and is easier to ask questions about on the Message Board here (and we help people set these up a lot). If only your EPIs are oblique (and not the anatomicals), then you likely won’t need to do any special steps for those, and they can just be input to afni_proc.py “as is”. We have a lot of processing examples as “starter” commands to get the ball rolling (for various task/resting/naturalistic scans, whether you want to do surface analysis or volumetric, single-echo or multi-echo, ROI-based or voxelwise, etc.), which you can tweak and expand to meet your processing needs in detail. See the EXAMPLES list here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/programs/alpha/afni_proc.py_sphx.html#ahelp-afni-proc-py
We also have explicit processing scripts that were used for various papers in the Codex (Code Examples) here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/fmri/main_toc.html

Because I am sure you have infinite reading time, another useful resource about setting up processing choices, and why we recommend some things for FMRI processing (with specific afni_proc.py options), is:
Taylor PA, Chen G, Glen DR, Rajendra JK, Reynolds RC, Cox RW (2018). FMRI processing with AFNI: Some comments and corrections on ‘Exploring the Impact of Analysis Software on Task fMRI Results’. bioRxiv 308643; doi:10.1101/308643
https://www.biorxiv.org/content/10.1101/308643v1.abstract

And then of course what processing would be complete without QC considerations? This article details quality control tools available within AFNI and specifically with afni_proc.py, as well as how to browse and what to look for:
Reynolds RC, Taylor PA, Glen DR (2023). Quality control practices in FMRI analysis: Philosophy, methods and examples using AFNI. Front. Neurosci. 16:1073800. doi: 10.3389/fnins.2022.1073800
https://www.frontiersin.org/articles/10.3389/fnins.2022.1073800/full/

So, if I might be so bold, my recommendation would be to start with a basic AP example for setting up processing, even treating your data as resting state. We have a particularly simple starter script for quick QC; it won’t do nonlinear alignment, but it can give you a quick pass at EPI-anatomical alignment and help you quickly look at other features in your data. You can try the following; just put your subject ID, anatomical dset, and one or more EPIs in the corresponding option slots:


ap_run_simple_rest.tcsh            \
    -run_proc                      \
    -subjid  SUBJ_ID               \
    -nt_rm   2                     \
    -anat    DSET_ANAT             \
    -epi     DSET_EPI1 DSET_EPI2 ...

You can see the AP command it builds in one of the output files. From there, you can evaluate how things look in the APQC HTML that is created, and then we can start building up the command to do whatever other options you want, and also add in running @SSwarper for the nonlinear alignment part (AP takes SSwarper output as an input directly, see this AP example).
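For reference, the fragment for passing @SSwarper results into an afni_proc.py command tends to look something like this sketch; the subject ID and filenames are placeholders that follow @SSwarper’s default output naming, and the “...” stands for the rest of your AP options:

```shell
# Hypothetical afni_proc.py fragment: reuse precomputed @SSwarper output
# (skullstripped anat + affine and nonlinear warps to the template)
afni_proc.py                                                  \
    ...                                                       \
    -copy_anat            anatSS.subj01.nii                   \
    -anat_has_skull       no                                  \
    -tlrc_base            MNI152_2009_template_SSW.nii.gz     \
    -tlrc_NL_warp                                             \
    -tlrc_NL_warped_dsets anatQQ.subj01.nii                   \
                          anatQQ.subj01.aff12.1D              \
                          anatQQ.subj01_WARP.nii              \
    ...
```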

How does that sound (assuming you have been brave enough to read to the end of this veeeery long reply; my apologies)?

–pt