Hi AFNI experts,
I did the following preprocessing on the T1 image:
3dWarp -oblique2card -prefix T1_de T1+orig
# the deobliqued anat is only an intermediate, to be skull-stripped;
# the deobliquing transformation itself is not applied to the EPI data
align_epi_anat.py -anat2epi -anat T1_de+orig \
-save_skullstrip -suffix _al_junk \
-epi epi+orig -epi_base 0 \
-epi_strip 3dAutomask \
-volreg off -tshift off
@auto_tlrc -base TT_N27+tlrc -input T1_de_ns+orig -no_ss
After these steps, I hand-drew a huge number of ROIs on the processed T1 (T1_de_ns+tlrc) for each subject, which took me a very long time.
Now I am asked to do all the EPI analyses on the original T1 (T1+orig) images. I cannot redraw the ROIs, because that would be an enormous amount of work.
Do you know how I can transform all my ROIs to match the original T1 images?
Thank you in advance!
Yu
There are a couple of points about your existing processing. First, applying deobliquing as a separate step causes smoothing from the extra interpolation. You can instead let align_epi_anat.py handle the concatenation of the transformations, including the obliquity, by simply giving it the original anat as input. Second, I think you are using the wrong output from the align_epi_anat.py step, though there may be additional processing not shown here: the skull-stripped output you pass to the following step is not the output aligned to the EPI dataset, so it may not be what you want. Lastly, consider auto_warp.py to nonlinearly align your datasets to the template.
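To answer the ROI question directly: @auto_tlrc stores the affine it computed in the header of the +tlrc anat, so you can pull that matrix out and apply it in reverse to each drawn ROI. A rough sketch, using your dataset names (ROI+tlrc is a placeholder for one of your hand-drawn ROI datasets; do verify the stored direction of WARP_DATA on your own data before trusting the result):

```shell
# take the affine from the header attribute that @auto_tlrc wrote;
# WARP_DATA is conventionally stored in the tlrc->orig direction,
# hence no -I (inversion) here
cat_matvec -ONELINE T1_de_ns+tlrc::WARP_DATA > mat.tlrc_to_orig.aff12.1D

# resample the ROI onto the deobliqued anat grid; nearest-neighbour
# interpolation keeps the integer ROI labels intact
3dAllineate -1Dmatrix_apply mat.tlrc_to_orig.aff12.1D \
            -master T1_de+orig -final NN \
            -input ROI+tlrc -prefix ROI_deob

# re-apply the obliquity of the original T1, again nearest-neighbour
3dWarp -card2oblique T1+orig -NN -prefix ROI_in_T1 ROI_deob+orig
```

The older 3dfractionize route (with -warp pointing at the +tlrc anat and -preserve to keep labels) should give a similar result for the first step.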
Hi Daniel,
Thanks a lot for your answers!
The transformations from EPI to T1 and from T1 to tlrc space were computed first in my processing; those transformations were then catenated and applied later. That is why the skull-stripped output was used here.
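Roughly, the catenation looked like this (the matrix file name is what align_epi_anat.py produces for my prefix and suffix; the ordering and the -I inversions follow the afni_proc.py convention, so please treat this as a sketch rather than my exact script):

```shell
# orig->tlrc comes from inverting the WARP_DATA affine that @auto_tlrc
# stored in the +tlrc header; the anat->epi matrix written by
# align_epi_anat.py is also inverted, giving epi->anat
cat_matvec -ONELINE T1_de_ns+tlrc::WARP_DATA -I \
           T1_de_al_junk_mat.aff12.1D -I > mat.epi_to_tlrc.aff12.1D

# apply the single combined transform to the EPI, so the data are
# interpolated only once
3dAllineate -1Dmatrix_apply mat.epi_to_tlrc.aff12.1D \
            -master T1_de_ns+tlrc -input epi+orig -prefix epi_tlrc
```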
Yu
The National Institute of Mental Health (NIMH) is part of the National Institutes of Health (NIH), a component of the U.S. Department of Health and Human Services.