(Mis)alignment

Hello there!

I recently used afni_proc.py to pre-process my rs-fMRI data. When I visualised the volreg output and overlaid it onto the final anatomical image, I noticed that there is a bit of misalignment: namely, the EPI image seems to be “pulled up” (see image).

I tried various align cost functions, the -epi2anat option, and both skull-stripped and un-stripped images, but it always seems to end up in this situation.

Since the two images are almost perfectly aligned in the raw data, I was wondering whether there is a way to somehow omit the align step (even though I’m aware that the transformation matrix from it is required for the registration at volreg)?

Any help would be greatly appreciated!

Greetings from (a rather unnaturally sunny) London!

Misho

Hi, Misho-

Sorry to hear that you are having to spend so much time with alignment, rather than being out in the (fleeting) London sun…

It might be useful to see your afni_proc.py command, for starters: could you please copy+paste it here?

The default cost function for EPI-anatomical alignment is lpc (or lpc+ZZ, a modified version for when things are potentially wonky). This usually works well, assuming that the EPI and anatomical start out with good overlap and that the EPI has appropriate tissue contrast. To evaluate alignment, one checks that sulcal and gyral features line up, that ventricles overlap, etc.; in short, that both interior and boundary brain regions match well.
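(As an aside: if you want to experiment with cost functions quickly, without re-running the whole afni_proc.py pipeline each time, you can test alignment directly with align_epi_anat.py. This is just a sketch, with placeholder dataset names; I believe there is also a -multi_cost option for trying several cost functionals in one go:

align_epi_anat.py \
    -anat anat+orig \
    -epi epi_base_vol+orig \
    -epi_base 0 \
    -epi2anat \
    -cost lpc+ZZ \
    -suffix _test_lpczz

That writes out an aligned copy of the EPI that you can overlay and judge by eye.)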

In the image above, the tissue contrast does look a bit low. Can you post the images from the afni_proc.py-generated QC HTML? Namely, the ones from the “vorig” block and the “ve2a” block? Those will help with diagnosis, by showing the raw EPI volume used for alignment, as well as the EPI underlaying the edges of the aligned anatomical.

thanks,
pt

Hi!

Whoops, I did mean to copy-paste the command… It’s:


afni_proc.py \
    -blocks despike tshift align tlrc volreg mask combine blur scale regress \
    -copy_anat t1.nii \
    -anat_has_skull yes \
    -dsets_me_echo f_e1*.nii \
    -dsets_me_echo f_e2*.nii \
    -dsets_me_echo f_e3*.nii \
    -echo_times 12.0 28.0 44.0 \
    -reg_echo 2 \
    -tcat_remove_first_trs 10 \
    -align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
    -tlrc_base MNI152_T1_2009c+tlrc \
    -tlrc_NL_warp \
    -volreg_align_to MIN_OUTLIER \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -mask_epi_anat yes \
    -mask_segment_anat yes \
    -mask_segment_erode yes \
    -combine_method tedana \
    -combine_tedana_path /software/system/tedana/20200226/bin/tedana \
    -blur_size 6 \
    -blur_in_mask yes \
    -regress_bandpass 0.01 0.08 \
    -regress_ROI WMe CSF \
    -regress_motion_per_run \
    -regress_censor_motion 3.0 \
    -regress_censor_outliers 0.05 \
    -regress_apply_mot_types demean deriv \
    -regress_est_blur_epits \
    -html_review_style pythonic

And I also make a few changes to the proc script, namely:

  1. 3dTshift: remove '-tzero 0' and add '-TR 2.5s -tpattern seqminus' instead;
  2. combine:
    a) remove the following:

    tedana_wrapper.py -input pb02.$subj.r$run.e*.tshift+orig.BRIK \
        -TE $echo_times \
        -mask mask_epi_anat.$subj+orig \
        -results_dir tedana_r$run \
        -ted_label r$run \
        -tedana_prog /software/system/tedana/20200226/bin/tedana \
        -prefix tedprep

    and add this instead:

    tedana -d pb03.SUBJ.r01.e01.volreg+tlrc.BRIK \
        pb03.SUBJ.r01.e02.volreg+tlrc.BRIK \
        pb03.SUBJ.r01.e03.volreg+tlrc.BRIK \
        -e 12.0 28.0 44.0 --mask mask_epi_anat.SUBJ+tlrc.BRIK --debug

    b) change '3dcopy tedana_r$run/TED.r$run/dn_ts_OC.nii pb03.$subj.r$run.combine' to '3dcopy dn_ts_OC.nii pb04.$subj.r$run.combine+orig'

By the way, I’ve also tried with and without -giant_move, but nothing really changes.

I’ve attached the output for ‘ve2a’. For some reason, I don’t have the vorig block, but I’ve attached the raw T1 and EPI images (raw.png): the panels show the raw EPI, the anat overlaid on the EPI, and then 3 panels where the EPI is overlaid on the anat (the EPI is the colourful one in panels 2-5).

Cheers!
Misho

Hi, Misho-

OK, I think your version of AFNI is fairly old; what does “afni -ver” show? I guess this because your ve2a block has the anatomical as underlay with edges of the EPI as overlay (that has been reversed in the APQC presentation for a while), and there is no vorig block there.

You can update AFNI with:


@update.afni.binaries -d
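If you just want to check what you currently have first, you can run:

afni -ver
afni_system_check.py -check_all

(the latter gives a fuller summary of the whole setup).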

Thanks for sending the raw EPI images, though. A couple of things to note:

  • The tissue contrast looks odd: typically, in an EPI, I am used to seeing bright ventricles, fairly bright GM, and WM as the darkest tissue. See the attached image (from the Bootcamp example EPI in AFNI_data6/FT_analysis, actually: the APQC image of vorig). In your case, the ventricles are dark, but the GM is relatively bright. What kind of EPI contrast is this? This will probably change which cost function is appropriate: lpc is for when the source and base dsets have opposite contrast; here, it isn’t really a full inverse tissue contrast, but it also isn’t the same contrast as the T1w anatomical (because the GM/WM contrast is reversed). I wonder about trying “nmi”? (See the sketch after this list.)
  • There is a fair amount of distortion in the EPI: see the frontal regions, and the medial regions just superior to the sinuses. That will also affect alignment. The frontal warping looks like B0 inhomogeneity distortion, which will limit what the affine registration can do; but I don’t think there is enough detail to try anything nonlinear (there typically isn’t with EPI dsets).
  • I don’t think -giant_move will do anything here; as you say, the initial overlap is pretty good. I think the biggest issue is finding a cost function that deals well with this EPI tissue contrast.
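For example (just sketching against your posted command, not tested here), switching the cost function would mean changing only the alignment options line, to something like:

    -align_opts_aea -cost nmi -check_flip \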

Re. changing the proc script: that can be OK if you really have to, but it often isn’t ideal, esp. for being able to re-run or tweak re-runs easily. It might be worth checking with Rick whether there are ways to accomplish your goals with afni_proc.py options directly (see the sketch below). Esp. since you appear to have an older AFNI version, perhaps some updates since then will help.
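For instance, I believe the 3dTshift edit could be passed through afni_proc.py directly, rather than by hand-editing the proc script, with something like (a sketch, not tested here):

    -tshift_opts_ts -TR 2.5s -tpattern seqminus \

and there may be analogous -combine_opts_* options for customizing the tedana call; Rick would know the details.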

–pt

And actually, in looking at the QC image sent, the alignment is still pretty good, all things considered.

–pt

Hi again!

Thank you so much for the feedback!

The version I was using was indeed old. I must say the latest one is quite cool! I really like the QC output and the overlaying function in the GUI!
The ‘nmi’ cost function did wonders, cheers! I also switched to the second echo as the reference echo, as it has much more signal (even though the contrast is a tiny bit poorer).

I’ve pre-processed our entire dataset and the few scans I’ve checked so far are looking really nice!
There are a few things that are puzzling me though (sorry for getting a bit off-topic):
  1. I’m curious as to why the QC HTML file doesn’t get generated for some of the sessions (see part of the output below).
  2. Looking at the degrees of freedom part, it got me wondering: what is an ‘acceptable’ post-(?!)pre-processing DoF value?
  3. Given the substantial signal loss that is plaguing our data, I wanted to ask what would be a good way of creating a mask that includes only voxels that have signal (see the sketch after this list for roughly what I have in mind). I’m not sure how to approach this, as the extent of signal loss differs across participant sessions.
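(For point 3, roughly the kind of thing I have in mind, sketched here with made-up dataset names, would be intersecting per-session automasks so that only voxels with signal in every session survive:

3dAutomask -prefix mask_ses01 epi_ses01+tlrc
3dAutomask -prefix mask_ses02 epi_ses02+tlrc
3dmask_tool -input mask_ses*+tlrc.HEAD -inter -prefix mask_signal_inter

but I’d be curious whether there is a better approach.)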

Thank you!!!
Misho


apqc_make_html.py -qc_dir QC_SUBJ
/nan/ceph/network/system/el7/AFNI/AFNI_21.1.07/afnipy/lib_afni1D.py:1302: SyntaxWarning: 'str' object is not callable; perhaps you missed a comma?
  print('** uncensor from vec: nt = %d, but nocen len = %d'
Traceback (most recent call last):
  File "/software/system/AFNI/AFNI_21.1.07/apqc_make_html.py", line 193, in <module>
    AAII = lah.read_descrip_json(img_json)
  File "/nan/ceph/network/system/el7/AFNI/AFNI_21.1.07/afnipy/lib_apqc_html.py", line 1746, in read_descrip_json
    ainfo.set_all_from_dict(ddd)    # set everything in this obj that we can
  File "/nan/ceph/network/system/el7/AFNI/AFNI_21.1.07/afnipy/lib_apqc_html.py", line 642, in set_all_from_dict
    self.add_subtext(DICT)
  File "/nan/ceph/network/system/el7/AFNI/AFNI_21.1.07/afnipy/lib_apqc_html.py", line 625, in add_subtext
    xx = parse_apqc_text_fields(DICT['subtext'])
  File "/nan/ceph/network/system/el7/AFNI/AFNI_21.1.07/afnipy/lib_apqc_html.py", line 517, in parse_apqc_text_fields
    sspbar = make_inline_pbar_str(ss)
  File "/nan/ceph/network/system/el7/AFNI/AFNI_21.1.07/afnipy/lib_apqc_html.py", line 546, in make_inline_pbar_str
    with open(fname_json, 'r') as fff:
FileNotFoundError: [Errno 2] No such file or directory: 'media/qc_08_mot_grayplot.pbar.json'

… and to note for future generations:

From further investigation offline, it seems this error was caused by a server issue. Various subjects were submitted on an HPC system, and on probably just a small number of servers some subjects’ processing failed for some reason. But upon re-running the APQC HTML generation, things worked for those subjects (presumably they ended up on a different server, or that system had been fixed in the meantime).

–pt