Poor alignment from EPI to anat for macaque rsfMRI data

Dear all,
I am a new AFNI learner. Recently I have been trying to preprocess macaque rsfMRI data; however, I have run into some issues.
For the resting-state fMRI data, the voxel resolution is 2.1875 x 3.1 x 2.1875 mm and the TR is 2000 ms. For the T1 data, the voxel resolution is 0.2734 x 0.5 x 0.2734 mm and the TR is 11.4 ms.

I referred to AFNI's tutorial and used the following code to preprocess the data:

set dsets_NL_warp = ( result_sub-1001/sub-1001_anat_warp2std_nsu.nii.gz \
                    result_sub-1001/sub-1001_anat_composite_linear_to_template.1D \
                    result_sub-1001/sub-1001_anat_shft_WARP.nii.gz                )  

afni_proc.py                                                                \
    -subj_id                  sub-1001                                      \
    -blocks            tshift align tlrc volreg blur mask scale regress     \
    -dsets                    sub-1001/func/sub-1001_task-rest_bold_corr.nii.gz     \
    -copy_anat                result_sub-1001/sub-1001_anat_nsu.nii.gz      \
    -anat_has_skull           no                                            \
    -anat_uniform_method      none                                          \
    -radial_correlate_blocks  tcat volreg                                   \
    -radial_correlate_opts    -sphere_rad 14                                \
    -tcat_remove_first_trs    2                                             \
    -volreg_align_to          MIN_OUTLIER                                   \
    -volreg_align_e2a                                                       \
    -volreg_tlrc_warp                                                       \
    -volreg_warp_dxyz         1                                             \
    -volreg_compute_tsnr      yes                                           \
    -align_opts_aea           -cost lpc+ZZ    -giant_move                   \
                              -check_flip                                   \
    -align_unifize_epi        local                                         \
    -tlrc_base                ../NMT_v2.1_sym/NMT_v2.1_sym_05mm/NMT_v2.1_sym_05mm_SS.nii.gz     \
    -tlrc_NL_warp                                                           \
    -tlrc_NL_warped_dsets     ${dsets_NL_warp}                              \
    -blur_size                2                                             \
    -mask_segment_anat        yes                                           \
    -mask_segment_erode       yes                                           \
    -mask_import              Tvent template_ventricles_1mm+tlrc            \
    -mask_intersect           Svent CSFe Tvent                              \
    -regress_ROI              WMe Svent                                     \
    -regress_ROI_per_run      WMe Svent                                     \
    -regress_motion_per_run                                                 \
    -regress_apply_mot_types  demean deriv                                  \
    -regress_censor_motion    0.2                                           \
    -regress_censor_outliers  0.05                                          \
    -regress_polort           2                                             \
    -regress_bandpass         0.01 0.1                                      \
    -regress_est_blur_errts                                                 \
    -regress_est_blur_epits                                                 \
    -regress_run_clustsim     no                                            \
    -html_review_style        pythonic 

Unfortunately, the QC results showed that the alignment between anat and EPI is poor.


So I have some questions about my preprocessing:

  1. How can I obtain a fine alignment between the anat and EPI, and which parameters should I adjust?
  2. Is the code used for preprocessing appropriate? Sorry, I'm just getting started learning AFNI.
  3. How can I estimate FD?

Best,
Ruilin

Hi, Ruilin-

The image you are showing from the APQC HTML is of the initial EPI-anatomical overlap. The EPI is overlaid on the anatomical. The way to judge alignment is to look at what should be the next set of images down in the HTML: the "ve2a" (volumetric EPI-to-anatomical alignment). The edges of the anatomical will be displayed over the EPI.

Now, the initial EPI-anatomical overlap is important to check if later alignment doesn't work. Alignment is typically helped by having better initial overlap of EPI and anatomical (i.e., no big relative shift or rotation). This is discussed in the AFNI Academy playlist on Alignment. The image you provide shows good initial alignment, so that is good.

Another important consideration is the contrast that each volume has: having reasonable tissue contrast helps alignment (again, see the Academy video playlist above). That would be shown more clearly in the "vorig" part of the APQC HTML, which would be at the top of the page. I wonder how much contrast your EPI has from looking at this image, but it can be a bit tricky to tell with the overlay colorbar. So, if you could show:

  • the ve2a image
  • the vorig image of the EPI volume

... that would be helpful.

So, to your questions:

  1. Please first check the most appropriate alignment-checking image, ve2a, then we will have a better idea of whether there is a problem. Seeing the image of the raw EPI data will also help.
  2. That will depend on Q1 and the asked-for images.
  3. Rather than FD, which is the L1-norm of the motion parameters, we prefer using Enorm, the L2-norm of the motion parameters, because it is more sensitive to individual large motions. It's described in this AFNI Academy video, and on slides 40-44 of the AFNI handout afni14_alignment.pdf. The Enorm is shown in the "mot" part of the QC.
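For intuition, the two motion summaries can be sketched in a few lines of Python. This is a toy illustration, not AFNI's code; note that a proper FD also scales the rotations into mm using an assumed head radius (often 50 mm for humans, smaller for macaques), which this sketch skips:

```python
import numpy as np

# toy motion parameters: 100 time points x 6 columns
# (3 rotations, 3 translations), like afni_proc.py's dfile_rall.1D
rng = np.random.default_rng(1)
mot = np.cumsum(rng.normal(0, 0.02, size=(100, 6)), axis=0)

# frame-to-frame differences (first row is 0 by construction)
d = np.diff(mot, axis=0, prepend=mot[:1])

enorm = np.sqrt((d ** 2).sum(axis=1))   # L2 norm: AFNI's Enorm
fd    = np.abs(d).sum(axis=1)           # L1 norm: (unscaled) FD
```

In AFNI itself you don't need to compute this by hand: the Enorm time series is generated during afni_proc.py processing, and it is what gets thresholded by -regress_censor_motion.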

--pt

Hi, ptaylor

Thank you so much for the reply. Following your suggestion, I have provided the ve2a image and the vorig image of the EPI volume below:


Best,
Ruilin

The voxel size is relatively large for macaque data, possibly because of the very large FOV that covers the whole head and not just the brain. That makes for poorer tissue contrast in the EPI, which the default cost function in alignment relies on. Try changing the cost function in your afni_proc.py command with a line like this, to specify the nmi cost function and avoid large scaling:

 -align_opts_aea          -cost "nmi"  \
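For intuition on why a mutual-information-based cost like nmi can cope with weak contrast: MI measures how predictable one image's intensities are from the other's, without assuming any particular intensity relationship between the two. Below is a rough, hypothetical sketch of that quantity in Python (for illustration only; AFNI's actual cost functions are implemented inside 3dAllineate):

```python
import numpy as np

def mutual_info(x, y, bins=32):
    """Mutual information (in nats) between two images' intensity values,
    estimated from their joint histogram. Illustrative only."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Two well-aligned images share intensity structure and give high MI; scrambling one of them destroys that relationship and drives MI toward zero, which is what an MI-based cost exploits during alignment.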

Hi, dglen
Thanks for your suggestion. I have changed the cost function to 'nmi' in my afni_proc.py; however, the alignment still looks poor. :sob:

set dsets_NL_warp = (/data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/data_13_aw/sub-1002/sub-1002_anat_warp2std_nsu.nii.gz \
                    /data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/data_13_aw/sub-1002/sub-1002_anat_composite_linear_to_template.1D \
                    /data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/data_13_aw/sub-1002/sub-1002_anat_shft_WARP.nii.gz                )  

afni_proc.py                                                                \
    -subj_id                  sub-1002                                      \
    -blocks            tshift align tlrc volreg blur mask scale regress     \
    -dsets                    /data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/data_01_basic/sub-1002/func/sub-1002_task-rest_bold.nii.gz     \
    -copy_anat                /data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/data_13_aw/sub-1002/sub-1002_anat_nsu.nii.gz      \
    -anat_has_skull           no                                            \
    -anat_uniform_method      none                                          \
    -radial_correlate_blocks  tcat volreg                                   \
    -radial_correlate_opts    -sphere_rad 14                                \
    -tcat_remove_first_trs    2                                             \
    -volreg_align_to          MIN_OUTLIER                                   \
    -volreg_align_e2a                                                       \
    -volreg_tlrc_warp                                                       \
    -volreg_warp_dxyz         1                                             \
    -volreg_compute_tsnr      yes                                           \
    -align_opts_aea           -cost nmi                                     \
                              -check_flip     -feature_size 0.5             \
    -align_unifize_epi        local                                         \
    -tlrc_base                /data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/NMT_v2.1_sym/NMT_v2.1_sym_05mm/NMT_v2.1_sym_05mm_SS.nii.gz     \
    -tlrc_NL_warp                                                           \
    -tlrc_NL_warped_dsets     ${dsets_NL_warp}                              \
    -blur_size                2                                             \
    -mask_segment_anat        yes                                           \
    -mask_segment_erode       yes                                           \
    -mask_import              Tvent /data/home/bnu006/UW-Madison_Rhesus_MRI/preprocess/template_ventricles_1mm+tlrc            \
    -mask_intersect           Svent CSFe Tvent                              \
    -regress_ROI              WMe Svent                                     \
    -regress_ROI_per_run      WMe Svent                                     \
    -regress_motion_per_run                                                 \
    -regress_apply_mot_types  demean deriv                                  \
    -regress_censor_motion    0.2                                           \
    -regress_censor_outliers  0.05                                          \
    -regress_polort           2                                             \
    -regress_bandpass         0.01 0.1                                      \
    -regress_est_blur_errts                                                 \
    -regress_est_blur_epits                                                 \
    -regress_run_clustsim     no                                            \
    -html_review_style        pythonic          

Best,
Ruilin

Hi, Ruilin-

Actually, I would say that alignment looks quite good for this data. The major features, like tissue boundaries, look like they match up between the underlay EPI and where the anatomical edges are. The EPI has some dropout and distortion, which the anatomical doesn't have, hence there are some minor differences around the edges. Consider how the original EPI dataset has pretty small tissue contrast as well (I think that might be why "nmi" cost is more suitable here). In summary, I think this is good for this data.

I will note that typically for animal datasets, we do not use this option:

-align_unifize_epi        local

We do use it for human datasets, to help deal with EPI brightness inhomogeneity and low tissue contrast. However, nonhuman datasets tend to have more non-brain material of non-negligible brightness in the EPI FOV, so we avoid this option there. Perhaps because your EPI here has very little bright non-brain material, it has actually been OK to use.

But again, given the imperfections of EPI, I think this is pretty good alignment overall. The affine alignment parameters are determined by a global fitting, and that is good overall here. EPI distortion and signal dropout are local features, and the fitting can't re-introduce lost signal, for example.

--pt

Hi, ptaylor
Thank you very much for your constructive suggestions. For these macaque rs-fMRI data, I want to calculate the FC between ROIs. However, I have a few questions I'd like to ask.

  1. Could I carry out FC analysis between ROIs, given that the data has some dropout and distortion?

  2. In the preprocessing steps, I resampled the rsfMRI data to 1x1x1 mm, regressed out white matter and ventricle signals and head motion parameters, then bandpass filtered (0.01-0.1 Hz) and detrended (-regress_polort 2), but did not smooth. Are these steps reasonable, and do I need to make further adjustments?

  3. I learned from the tutorial that I should do FC analysis with the final errts dset, right?


Thanks again for your kind help and guidance.
Best regards,
Ruilin

Hi, Ruilin-

Some unordered replies:

3- Re. the final outputs to use: yes, the errts* dataset is the major time series output of interest for the later analyses (correlations, etc.). Indeed, it will look smooth (since blurring was applied) and uniformly gray (because the errts have mean=0 by definition; scaling was applied, so brightness inhomogeneity of the fluctuations is reduced; and BOLD signal variations are typically relatively small inside the brain anyway).

0- Re. an unasked question, but about the comment of next calculating FC (that is, the correlation) of time series within ROIs: if you are going to do ROI based analysis, you do not want to include blurring in your processing. That smears signal across ROI boundaries, basically artificially increasing correlation values among neighboring ROIs. There will effectively be blurring within each ROI when you calculate correlation, because the time series get averaged. So, you would then want to remove the "blur" block and its "-blur_size .." option.
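The blur-inflates-neighboring-correlations point can be demonstrated on toy data (a synthetic sketch, not real FMRI): two adjacent "ROIs" carry orthogonal signals, so their mean time series are uncorrelated, until a spatial blur mixes signal across the boundary:

```python
import numpy as np

T, n_vox = 200, 10
t = np.arange(T)
sig_a = np.sin(2 * np.pi * t / T)   # "ROI A" signal
sig_b = np.cos(2 * np.pi * t / T)   # "ROI B" signal, orthogonal to sig_a
# 20 voxels in a row: the first 10 belong to ROI A, the last 10 to ROI B
data = np.vstack([np.tile(sig_a, (n_vox, 1)),
                  np.tile(sig_b, (n_vox, 1))])

def roi_corr(d):
    """Correlation between the two ROIs' mean time series."""
    return np.corrcoef(d[:n_vox].mean(axis=0), d[n_vox:].mean(axis=0))[0, 1]

r_noblur = roi_corr(data)   # ~0: the signals are orthogonal

# crude 1D spatial "blur": average each voxel with 2 neighbors per side
k = 2
blurred = np.array([data[max(0, i - k):i + k + 1].mean(axis=0)
                    for i in range(data.shape[0])])
r_blur = roi_corr(blurred)  # boundary mixing inflates the correlation
```

Even this crude blur pushes the between-ROI correlation from essentially zero to above 0.1, purely from mixing at the shared boundary; within-ROI averaging, by contrast, happens anyway when the time series are averaged for the correlation.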

2- What to regress out is very much an open question. There are some factors to consider though.

  • To regress out WM and CSF, one wants to be really sure of having good tissue maps for each. So, you want very good tissue segmentation or mapping. You don't want to mistakenly regress out signal from gray matter voxels, because that will remove your signal of interest. In fact, many times you would erode your WM and CSF maps, to reduce the chance that they are near your GM tissue. You are eroding mask segments here; are you quite confident that the tissue maps are well drawn?
  • Bandpassing is common in resting state FMRI, largely for historical reasons. However, there are huge statistical costs to be paid for it (many degrees of freedom are lost from the signal), and there is still useful information above 0.1 Hz. You don't seem to have a huge fraction of time points lost due to motion, so you might still be relatively OK, but please check out this Resting State Processing Note for more discussion of the considerations. For example, your current model starts with 454 degrees of freedom but ends up with only 150, largely because of including bandpassing.
  • Detrending: the general guideline for choosing the polort degree is given in the option's help, based on how long the time series lasts:
     -regress_polort DEGREE  : specify the polynomial degree of baseline
    
             e.g. -regress_polort 2
             default: 1 + floor(run_length / 150.0)
    
    You have 454 time points and TR=2s, so that is about 900 s total. According to that formula, you would want to use polort = 7. Now, a subtle point discussed in that help is that high polort degrees do become unstable. So, you might be better off sticking with polort=2 but regressing out the very lowest frequencies (another way of accounting for drift, and one that is more stable for long time series). This is mentioned in the help like:
       * It is also possible to use a high-pass filter to model baseline
        drift (using sinusoids).  Since sinusoids do not model quadratic
        drift well, one could consider using both, as in:
              -regress_polort 2         \
              -regress_bandpass 0.01 1
    
    Somewhat interestingly, this can intersect with bandpassing to LFFs, as in resting state. So, if you do decide to implement LFF bandpassing to 0.01-0.1 Hz, you will be covered on this and can leave -regress_bandpass 0.01 0.1 and polort 2; but if you remove the LFF bandpassing, you might still want to do this high-pass filtering, which will only remove a few degrees of freedom.
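To make the arithmetic above concrete, here is the default-degree formula quoted from the help, applied to this dataset's run length:

```python
import math

def default_polort(run_length_sec):
    """afni_proc.py's default degree: 1 + floor(run_length / 150.0)."""
    return 1 + math.floor(run_length_sec / 150.0)

# 454 time points at TR = 2 s -> 908 s of data
print(default_polort(454 * 2.0))   # -> 7
```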

5- You might just want to change this:

-radial_correlate_blocks  tcat volreg

to

-radial_correlate_blocks  tcat volreg regress

... to have an additional QC image of the radial correlation for the final processed data.

6- Re. upsampling: I am generally not a fan of large upsampling of FMRI data. You can't create extra information or details, and while the final images might look smoother, there is a large computational time/expense cost to working with larger datasets. Also, you probably can't really trust very fine details in results from much larger voxel acquisitions. Given your input voxel size, I might make the final resolution only slightly finer than the minimum input voxel dimension, like 2 mm isotropic.

7- That consideration (voxel sizes) also applies to what ROIs you can/should use. The CHARM/SARM/ARM and D99 atlases all have very fine ROIs at some levels---you probably should restrict yourself to ones where a good number of 2 mm isotropic voxels would fit. For CHARM, that likely means not using the finest hierarchical level, for example.

8- Are your data suitable for that kind of analysis? I would say so. Again, the alignment looks pretty good overall, esp. for macaque data. EPI is always distorted to some extent, and there is always dropout. You can check the TSNR in the regions you want to study and make sure it is OK. If there is low or inconsistent TSNR in spots, then stay away from those.

Hope that is useful.

--pt

Two more notes:

  • 3dNetCorr can be used for generating ROI-based correlation matrices (and also whole brain correlation maps of each average time series, at the same time).

  • You could add one or more atlases that are already in the final output space (but not on the final EPI grid) to your afni_proc.py command, so that they will get transformed to the final EPI grid (preserving integer values and ROI labels). For example, if you wanted to do this with the CHARM dataset, then add:

-ROI_import CHARM  CHARM_in_NMT_v2.1_sym_05mm.nii.gz \

You can add multiple ROI dsets this way, with multiple instances of "-ROI_import LABEL DSET_NAME".
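A side note on the -fish_z option mentioned above: the Fisher z-transform is just the inverse hyperbolic tangent of r, which makes correlation estimates more nearly normally distributed and hence easier to compare or average across subjects. A quick illustration:

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient r in (-1, 1)."""
    return math.atanh(r)

# small r is nearly unchanged; large r is stretched out
print(round(fisher_z(0.1), 3))   # -> 0.1
print(round(fisher_z(0.9), 3))   # -> 1.472
```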

--pt

Hi, ptaylor
Thanks very much for your reply. Your suggestions are really helpful. In the following steps, I used 3dNetCorr to generate ROI-based correlation matrices. However, I also have some questions to ask you.

  1. Referring to 11.2.2. Rest FMRI processing: the MACAQUE_DEMO_REST — AFNI, SUMA and FATCAT: v24.1.11 (nih.gov), I used the following code to calculate FC between ROIs with the CHARM atlas (which is already in the final output space via "-ROI_import" in the afni_proc.py command). However, the results seem strange (e.g., the frontal cortex has low FC with the parietal cortex):
    3dNetCorr -echo_edu                                \
        -overwrite                                     \
        -allow_roi_zeros -push_thru_many_zeros         \
        -fish_z                                        \
        -inset    errts.sub-1001-1.nii.gz              \
        -in_rois  ROI_import_CHARM_resam.nii.gz        \
        -prefix   FC_CHARM

    FC_CHARM_000_CC

When I added "-mask full_mask.sub-1001-1.nii.gz" (the mask is from the output of afni_proc.py: full_mask.sub*), the result changed significantly.

I know that my data has some dropout and distortion, so a couple of the finer ROIs end up with many zeros. If I want to conduct gradient or network analysis after calculating FC between ROIs, should I provide a mask? If so, could the mask from the afni_proc.py output be used?



  2. I used the fat_mat_tableize.py command to convert the *.netcc file to a *.txt file. Unfortunately, it didn't seem to work (I could not see the *.log or *.txt files). So how can I convert the *.netcc file output by 3dNetCorr to a *.mat, *.txt, or *.csv file, as I want to do further analysis in Matlab or R?

I really appreciate your guidance. Thanks again.
Best regards,
Ruilin

If I'm not mistaken, this is actually the question that has been asked here, in a new thread?

--pt