Anaticor issue with multi-echo data

AFNI wizards,

I was fiddling around with anaticor on multi-echo data, and I ran into an issue at the 3dREMLfit stage. Error here:


** ERROR: -dsort dataset './Local_WMe_rall+tlrc.HEAD' doesn't match input dataset 'pb06.501_con_gam_anaticor.r01.scale+tlrc.HEAD pb06.501_con_gam_anaticor.r02.scale+tlrc.HEAD'
 +       nt[dsort]=672  nt[input] = 224
 +       nx[dsort]=55  nx[input] = 55
 +       ny[dsort]=65  ny[input] = 65
 +       nz[dsort]=55  nz[input] = 55
** FATAL ERROR: Can't continue after -dsort mismatch error

The original data consists of two runs of 112 timepoints each, with 3 echoes per run. It looks to me like Local_WMe_rall was created from all 2 runs * 3 echoes (2 * 3 * 112 = 672 timepoints), rather than from the optimally combined data (2 * 112 = 224 timepoints).
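A quick sanity check with 3dinfo (run from the results directory) bears the arithmetic out:


# confirm the time-point counts behind the -dsort mismatch
3dinfo -nt Local_WMe_rall+tlrc.HEAD                          # -> 672
3dinfo -nt pb06.501_con_gam_anaticor.r01.scale+tlrc.HEAD \
           pb06.501_con_gam_anaticor.r02.scale+tlrc.HEAD     # -> 112 each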

afni_proc code here:


# ==========================================================================
# script generated by the command:
#
# afni_proc.py -subj_id 501_act_gam_anaticor -blocks despike tshift align     \
#     tlrc volreg mask combine blur scale regress -copy_anat                  \
#     ./../derivatives/anatomical.nii.gz -tcat_remove_first_trs 3             \
#     -tshift_opts_ts -tpattern                                               \
#     @/mnt/data1/dose_response/data/slice_times.txt -dsets_me_echo           \
#     sub-501_ses-main_task-act_run-1_echo-1_bold.nii.gz                      \
#     sub-501_ses-main_task-act_run-2_echo-1_bold.nii.gz -dsets_me_echo       \
#     sub-501_ses-main_task-act_run-1_echo-2_bold.nii.gz                      \
#     sub-501_ses-main_task-act_run-2_echo-2_bold.nii.gz -dsets_me_echo       \
#     sub-501_ses-main_task-act_run-1_echo-3_bold.nii.gz                      \
#     sub-501_ses-main_task-act_run-2_echo-3_bold.nii.gz -echo_times 11 28 45 \
#     -volreg_base_dset sub-501_ses-main_task-act_run-1_echo-1_sbref.nii.gz   \
#     -mask_epi_anat yes -combine_method OC -align_opts_aea -cost lpc+ZZ      \
#     -giant_move -tlrc_base /home/dowdlelt/abin/MNI152_T1_2009c+tlrc         \
#     -tlrc_NL_warp -volreg_tlrc_warp -regress_motion_per_run                 \
#     -regress_stim_times                                                     \
#     /mnt/data1/dose_response/model_params/afni/tms_onsets.txt               \
#     -regress_anaticor_fast -regress_stim_labels tms -regress_basis GAM      \
#     -regress_apply_mot_types demean deriv -regress_est_blur_errts           \
#     -regress_reml_exec -blur_size 10 -bash -scr_overwrite -execute

I think I could hunt through the generated script and correct the portion that extracts the ANATICOR regressor (Thanks, Bootcamp!) - specifically, this section here:


# --------------------------------------------------
# fast ANATICOR: generate local WMe time series averages
# create catenated volreg dataset
3dTcat -prefix rm.all_runs.volreg pb03.$subj.r*.volreg+tlrc.HEAD

# mask white matter before blurring
3dcalc -a rm.all_runs.volreg+tlrc -b mask_WMe_resam+tlrc                      \
       -expr "a*bool(b)" -datum float -prefix rm.all_runs.volreg.mask

# generate ANATICOR voxelwise regressors via blur
3dmerge -1blur_fwhm 30 -doall -prefix Local_WMe_rall                          \
    rm.all_runs.volreg.mask+tlrc

but I wanted to make sure I wasn’t making some obvious mistake in attempting this. Do you notice any incorrect input? If I run without anaticor, everything works perfectly (including the REML portion). Thank you for your help.

Hello,

Multi-echo data should not be processed as multiple runs,
at least in the regression step. Generally, the echoes are
combined before that point.

Consider afni_proc.py options such as:
-dsets_me_run
-echo_times
-combine_method
-mask_epi_anat yes

A more complete command can be seen in Example 12b
from the help.

  • rick

Hey Rick,

Thanks for responding - I did use all of those options and did combine the data (-combine_method OC). If I run without the -regress_anaticor_fast flag, I get combined datasets, stats, etc. with no problem.

To be certain it wasn’t a multiple-run issue, I just ran it again with only one run, and the same error is returned - the anaticor regressor is being calculated from the uncombined data, leaving it with 3x the number of timepoints.

Based on the anaticor section of the generated script, it appears that afni_proc’s rm.all_runs dataset is being generated incorrectly by catenating all of the volreg output - which is still uncombined, per-echo data:


# fast ANATICOR: generate local WMe time series averages
# create catenated volreg dataset
3dTcat -prefix rm.all_runs.volreg pb03.$subj.r*.volreg+tlrc.HEAD

When a combine block is included, shouldn’t afni_proc generate something like the following instead?


# fast ANATICOR: generate local WMe time series averages
# create catenated volreg dataset
3dTcat -prefix rm.all_runs.volreg pb04.$subj.r*.combine+tlrc.HEAD

Though perhaps rm.all_runs.volreg+tlrc should then be renamed rm.all_runs.combine+tlrc, for accuracy.
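In the meantime, that one-line change can be patched into the generated script without hand-editing. A sketch, assuming GNU sed, that combine is the pb04 block (as above), and the proc script name implied by -subj_id:


# swap the ANATICOR input from the per-echo volreg output
# to the optimally combined pb04 datasets
sed -i 's|pb03\.\$subj\.r\*\.volreg|pb04.$subj.r*.combine|' \
    proc.501_act_gam_anaticor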

Oh, I did not see all of the options to the right in
that comment. I will take a look at it, thanks!

  • rick

Unfortunately, I still have not been able to get to this.
Wiktor has also asked about using -mask_import for
similar things (and that was planned to be handled
early on in the first place).

This will get done soon. Sorry for the delay.

  • rick

Hey Rick,

Thanks for the update. To add a little more information: -regress_anaticor does work; it seems that only the fast version (-regress_anaticor_fast) has this issue. Of course, -regress_anaticor is a lot (lot lot) slower.
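That speed difference makes sense: the fast variant approximates the local WM average with a Gaussian blur via 3dmerge, as in the script above, while the regular variant computes a true per-voxel mean over a flat sphere. Roughly along these lines, if I am reading the generated script right (45 mm being, I believe, the default radius):


# regular ANATICOR: per-voxel flat-sphere mean over the
# WM-masked time series (far costlier than a Gaussian blur)
3dLocalstat -stat mean -nbhd 'SPHERE(45)' \
            -prefix Local_WMe_rall rm.all_runs.volreg.mask+tlrc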

Thanks, I will keep that in mind (or at least wonder
“what the heck was I trying to remember about this?”,
and come back and check this thread then :).

  • rick

Okay, I should have gotten to this earlier. Looking at a
test case, it looks like I must have actually dealt with this
long ago, though I never really ran into it as a problem
(I had not yet tried anaticor).

Have you tried more recent versions?

I should still update the comments and file names in that
section though.

  • rick

I have tested and confirmed that -regress_anaticor_fast is now working with multi-echo data. Thanks for checking on all of this, and for the regular update pace.

That is great, thanks!