Is my multi-echo resting-state fMRI preprocessing pipeline correct? (AFNI + tedana)

Hello AFNI experts!

I am using AFNI for multi-echo resting-state fMRI preprocessing and have summarized the key steps of my pipeline below. Could you please advise: (1) Is there anything in my workflow (e.g., the order of steps or the parameters) that might be inappropriate? (2) Is it best practice to estimate motion from the first echo (e01) with AFNI's volreg, apply those parameters to all echoes before tedana, and later use them for nuisance regression on the tedana-denoised BOLD, as my pipeline does? I've noticed that some people re-estimate motion parameters on the tedana-denoised combined BOLD after tedana and then regress those out, which does not make sense to me. Looking forward to your comments, thank you so much! :grinning:

Stage 1 — Anatomical preprocessing (@SSwarper) 
• Input: subject T1-weighted image. 
• Steps: skull stripping, bias-field correction, nonlinear registration to MNI152_2009_template_SSW.nii.gz. 
• Key outputs: 
– anatSS (skull-stripped T1 in native space) 
– anatQQ (T1 in MNI space) 
– native↔MNI nonlinear warp fields (used later for normalization) 
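
For reference, a minimal @SSwarper invocation matching the stage described above might look like the following sketch (the input file name, subject ID, and output directory are placeholders, not the actual paths used):

```shell
# Hypothetical file names/IDs; adjust to your own data layout.
@SSwarper                                        \
    -input  sub-01_T1w.nii.gz                    \
    -base   MNI152_2009_template_SSW.nii.gz      \
    -subid  sub-01                               \
    -odir   ssw_results
```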

 

Stage 2 — AFNI preprocessing for multi-echo EPI 
• Command: afni_proc.py with blocks (in order): blip tshift align volreg mask combine regress; discard first 5 TRs; AP/PA SE-EPI for blip; alignment with -cost lpc+ZZ -giant_move -check_flip. 
• Motion correction: ran volreg aligned to MIN_OUTLIER with EPI→anat alignment (-volreg_align_to MIN_OUTLIER -volreg_align_e2a). The volreg-resampled echoes (pb03.*.e01/e02/e03) and the AFNI full mask were used as inputs to tedana. 
• Combine (QC-only, not used downstream): kept a mean combine so AFNI treats data as multi-echo (not multi-run); did not use AFNI’s mean-combined output for analysis. 
• Regress (QC-only, not used downstream): included the regress block only to produce AFNI’s HTML review/derived files. Did not use AFNI’s regressed time series or stats downstream. 
• Tissue masks: WM/CSF segmented on the native T1 (anatSS) for later nuisance regression. 

 

Stage 2b — Multi-echo ICA denoising (tedana) 
• Inputs: the three volreg outputs (pb03.*.e01/e02/e03), concatenated with 3dZcat; tedana run with --fittype curvefit and --tedpca mdl. 
• Output: tedana_desc-denoised_bold.nii.gz (carried forward for normalization). 

 

Stage 3 — Post-processing in standard space 
• Normalization: warped the tedana-denoised time series to MNI using the epi→anat affine plus the @SSwarper nonlinear warp (3dNwarpApply). 
• Detrending: performed in 3dTproject with -polort 2. 
• Nuisance regression (3dTproject in MNI): 
– Motion: AFNI volreg estimates (demeaned + first derivatives) from Stage 2. 
– Physio: mean WM and mean CSF signals extracted from the tedana-denoised BOLD in MNI space, using WM/CSF masks segmented on the native T1 (anatSS) and NN-warped to MNI (3dmaskave to compute means). 
– Global signal: not regressed (matching the brain variability metrics–focused literature I followed). 
• Censoring & temporal filtering: applied AFNI censor file (FD threshold 0.3 mm) and band-pass 0.01–0.1 Hz. 
• Smoothing: 3dBlurInMask in MNI space, FWHM = 4 mm. 
• Final output: regressed, smoothed NIfTI in MNI space + AFNI QC HTML. 

Howdy-

Thanks for the question and all these details. Would you be able to put the full AP command itself, to accompany this?

On a nomenclature note: within AP at least, I don't know that I would separate out "post-processing" like that. Those steps just look like part of full single-subject processing.

--pt

Hi pt!

Thank you for your reply. :grinning:

  1. Sure — I’ve included below the afni_proc.py command I used in Stage 2, along with the key commands from Stage 2b and Stage 3.
  2. In my pipeline, I separated Stage 3 as “post-processing” because these steps — warp to standard space, nuisance regression, temporal filtering, and smoothing — are applied specifically after tedana’s multi-echo ICA denoising, following tedana's recommendations. This separation is intentional, to accommodate multi-echo processing recommendations (e.g., performing spatial normalization after tedana echo combination) and to keep the tedana stage independent.

Looking forward to your comments and please LMK if you need any other info, thank you so much!

**Stage 2 — AFNI preprocessing for multi-echo EPI** 
afni_proc.py \
        -subj_id "${UNIQUE_SUBID}" \
        -out_dir "${WORK_DIR}/${UNIQUE_SUBID}.results" \
        -script "${PROC_SCRIPT_PATH}" \
        -dsets_me_run "${EPI_FILES_ARRAY[@]}" \
        -blocks blip tshift align volreg mask combine regress \
        -echo_times 16.0 37.79 59.58 \
        -combine_method mean \
        -copy_anat "${ANAT_SS_FILE}" -anat_has_skull no \
        -blip_forward_dset "${BLIP_FWD_FILE}" \
        -blip_reverse_dset "${BLIP_REV_FILE}" \
        -tcat_remove_first_trs 5 \
        -align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
        -volreg_align_to MIN_OUTLIER -volreg_align_e2a \
        -mask_segment_anat yes -mask_segment_erode yes \
        -regress_censor_motion 0.3 -regress_bandpass 0.01 0.1 \
        -regress_apply_mot_types demean deriv \
        -regress_est_blur_epits -regress_est_blur_errts \
        -html_review_style pythonic
**Stage 2b — Multi-echo ICA denoising (tedana)** 
# ========================================================================
# ===           PART 2: INDEPENDENT TEDANA RUN (with Checkpoint)       ===
# ========================================================================
if [[ -f "$TEDANA_FINAL_NIFTI" ]]; then
    echo "[${SUB}] ✅ PART 2: Tedana denoised file already exists. Skipping."
else
    echo "[${SUB}] 🚀 PART 2: Starting independent tedana processing..."
    ECHO1_VOLREG=$(find "$AFNI_RESULTS_DIR" -name "pb03.*.e01.volreg+orig.HEAD" | head -1)
    ECHO2_VOLREG=$(find "$AFNI_RESULTS_DIR" -name "pb03.*.e02.volreg+orig.HEAD" | head -1)
    ECHO3_VOLREG=$(find "$AFNI_RESULTS_DIR" -name "pb03.*.e03.volreg+orig.HEAD" | head -1)
    MASK_FILE_AFNI=$(find "$AFNI_RESULTS_DIR" -name "full_mask.*+orig.HEAD" | head -1)
    
    if [[ -z "$ECHO1_VOLREG" || -z "$ECHO2_VOLREG" || -z "$ECHO3_VOLREG" || -z "$MASK_FILE_AFNI" ]]; then
        echo "❌ FATAL: Could not find all prerequisite files for tedana in ${AFNI_RESULTS_DIR}"; exit 1;
    fi

    TEDANA_INPUT_NIFTI="${WORK_DIR}/_for_tedana_cat.nii.gz"
    echo "[${SUB}]  Concatenating echos into ${TEDANA_INPUT_NIFTI}"
    3dZcat -prefix "${TEDANA_INPUT_NIFTI}" -overwrite \
        "${ECHO1_VOLREG%.HEAD}" "${ECHO2_VOLREG%.HEAD}" "${ECHO3_VOLREG%.HEAD}"

    MASK_FILE_NIFTI="${WORK_DIR}/_for_tedana_mask.nii.gz"
    echo "[${SUB}]  Converting mask to NIfTI format: ${MASK_FILE_NIFTI}"
    3dcopy -overwrite "${MASK_FILE_AFNI%.HEAD}" "${MASK_FILE_NIFTI}"

    TEDANA_OUT_DIR="${WORK_DIR}/tedana_output"
    echo "[${SUB}] 🧠 Launching tedana. Output will be in ${TEDANA_OUT_DIR}"
    tedana -d "${TEDANA_INPUT_NIFTI}" -e 16.0 37.79 59.58 --mask "${MASK_FILE_NIFTI}" \
        --out-dir "${TEDANA_OUT_DIR}" --prefix "tedana" \
        --fittype curvefit --tedpca mdl --overwrite

    TEDANA_RESULT_FILE="${TEDANA_OUT_DIR}/tedana_desc-denoised_bold.nii.gz"
    if [[ ! -f "$TEDANA_RESULT_FILE" ]]; then
        echo "❌ FATAL: Tedana did not produce the expected output file: ${TEDANA_RESULT_FILE}"; exit 1;
    fi
    cp "${TEDANA_RESULT_FILE}" "${TEDANA_FINAL_NIFTI}"
    echo "[${SUB}] ✅ PART 2: Tedana completed successfully."
fi
**Stage 3 — Post-processing in standard space** 
TEDANA_WARPED_OUTPUT="${SUBJ_OUTPUT}/dn_ts_OC_tlrc"
FINAL_REGRESSED_OUTPUT="${SUBJ_OUTPUT}/errts.${SUB}_integrated.tlrc"
SMOOTHED_OUTPUT="${SUBJ_OUTPUT}/errts.${SUB}_integrated.final.tlrc"

# Step 1: Warp to template space
if [[ -f "${TEDANA_WARPED_OUTPUT}+tlrc.HEAD" ]]; then
  echo "[${SUB}]   ✅ Step 1: Warping already completed. Skipping."
else
  echo "[${SUB}]   - Step 1: Warping tedana output to standard space..."
  3dNwarpApply \
    -nwarp "${NwarpChain}" \
    -source "${TEDANA_SOURCE}" \
    -master "${MNI_TEMPLATE}" \
    -prefix "${TEDANA_WARPED_OUTPUT}" \
    -overwrite
  [[ -f "${TEDANA_WARPED_OUTPUT}+tlrc.HEAD" ]] || { echo "❌ FATAL: Warping failed"; exit 1; }
  echo "[${SUB}]     ✅ Warping completed"
fi

# Step 2: Nuisance regression in template space
if [[ -f "${FINAL_REGRESSED_OUTPUT}+tlrc.HEAD" ]]; then
  echo "[${SUB}]   ✅ Step 2: Regression already completed. Skipping."
else
  echo "[${SUB}]   - Step 2: Regressing noise in standard space..."
  WM_TS_FILE="${SUBJ_OUTPUT}/WM_timeseries.1D"
  CSF_TS_FILE="${SUBJ_OUTPUT}/CSF_timeseries.1D"
  STD_WM_MASK="${SUBJ_OUTPUT}/mask_WMe_tlrc"
  STD_CSF_MASK="${SUBJ_OUTPUT}/mask_CSFe_tlrc"

  if [[ -n "$NATIVE_WM_MASK" ]]; then
    3dNwarpApply -nwarp "${NL_WARP_FILE}" -interp NN \
      -source "${NATIVE_WM_MASK%.HEAD}" -master "${TEDANA_WARPED_OUTPUT}+tlrc" \
      -prefix "${STD_WM_MASK}" -overwrite
    3dmaskave -quiet -mask "${STD_WM_MASK}+tlrc" "${TEDANA_WARPED_OUTPUT}+tlrc" > "${WM_TS_FILE}"
    echo "[${SUB}]     ✅ WM signal extracted"
  else
    echo "0" > "${WM_TS_FILE}"; echo "[${SUB}]     ⚠️ WM mask not found, using fallback"
  fi

  if [[ -n "$NATIVE_CSF_MASK" ]]; then
    3dNwarpApply -nwarp "${NL_WARP_FILE}" -interp NN \
      -source "${NATIVE_CSF_MASK%.HEAD}" -master "${TEDANA_WARPED_OUTPUT}+tlrc" \
      -prefix "${STD_CSF_MASK}" -overwrite
    3dmaskave -quiet -mask "${STD_CSF_MASK}+tlrc" "${TEDANA_WARPED_OUTPUT}+tlrc" > "${CSF_TS_FILE}"
    echo "[${SUB}]     ✅ CSF signal extracted"
  else
    echo "0" > "${CSF_TS_FILE}"; echo "[${SUB}]     ⚠️ CSF mask not found, using fallback"
  fi

  echo "[${SUB}]     🔄 Running 3dTproject..."
  3dTproject \
    -prefix "${FINAL_REGRESSED_OUTPUT}" \
    -input "${TEDANA_WARPED_OUTPUT}+tlrc" \
    -ort "${MOTION_DEMEAN}" -ort "${MOTION_DERIV}" \
    -ort "${WM_TS_FILE}" -ort "${CSF_TS_FILE}" \
    -censor "${CENSOR_FILE}" -cenmode NTRP \
    -polort 2 -bandpass 0.01 0.1 -verb -overwrite
  [[ -f "${FINAL_REGRESSED_OUTPUT}+tlrc.HEAD" ]] || { echo "❌ FATAL: Regression failed"; exit 1; }
  echo "[${SUB}]     ✅ Regression completed"
fi

# Step 3: Spatial smoothing
if [[ -f "${SMOOTHED_OUTPUT}+tlrc.HEAD" ]]; then
  echo "[${SUB}]   ✅ Step 3: Smoothing already completed. Skipping."
else
  echo "[${SUB}]   - Step 3: Smoothing in standard space..."
  MASK_FILE_TLRC="${SUBJ_OUTPUT}/full_mask_tlrc"
  if [[ -n "$NATIVE_FULL_MASK" ]]; then
    3dNwarpApply -nwarp "${NL_WARP_FILE}" -interp NN \
      -source "${NATIVE_FULL_MASK%.HEAD}" -master "${MNI_TEMPLATE}" \
      -prefix "${MASK_FILE_TLRC}" -overwrite
    echo "[${SUB}]     ✅ Warped native mask to standard space"
  else
    echo "[${SUB}]     ⚠️ Native mask not found, creating from template"
    3dcalc -a "${MNI_TEMPLATE}" -expr 'step(a)' -prefix "${MASK_FILE_TLRC}" -overwrite
  fi

  echo "[${SUB}]     🔄 Applying spatial smoothing (3dBlurInMask, FWHM=4)..."
  3dBlurInMask -input "${FINAL_REGRESSED_OUTPUT}+tlrc" -FWHM 4 \
    -mask "${MASK_FILE_TLRC}+tlrc" -prefix "${SMOOTHED_OUTPUT}" -overwrite
  [[ -f "${SMOOTHED_OUTPUT}+tlrc.HEAD" ]] || { echo "❌ FATAL: Smoothing failed"; exit 1; }
  echo "[${SUB}]     ✅ Smoothing completed"
fi

Howdy-

Thanks for sharing those code blocks and link (for the latter, note that this is the longer term/stable weblink for tedana).

I don't see anything in the online docs about why tedana would need to be separated out from afni_proc.py like that. I don't see anything about original/native/template space mentioned. I am curious if @handwerkerd , a primary developer of tedana, has comments on the need to split things out.

And you can call tedana as the combination method within afni_proc.py directly. We have had that from the early days of the different MEICA implementations, both the original Kundu et al. one and the much more modern DuPre et al. one in tedana, and we have worked with the developers of tedana when integrating these procedures. So I think you shouldn't need to go through the extra work of splitting things out like this. It would probably introduce non-optimal features such as multiple regridding/smoothing passes, multiple regression steps, and the potential for other issues.

We have several examples in the afni_proc.py help file about using tedana's MEICA within afni_proc.py (using the m_tedana combine method and other variants), and we have described further details of this and other processing choices in this recent article about afni_proc.py usage.

Also in that article, there are some cautions to not include 0.01-0.1 Hz bandpassing during resting state processing. If you search for "LFF" in that draft, you will find the discussion, as well as references to further articles like:

        Shirer WR, Jiang H, Price CM, Ng B, Greicius MD
        Optimization of rs-fMRI pre-processing for enhanced signal-noise
            separation, test-retest reliability, and group discrimination
        Neuroimage. 2015 Aug 15;117:67-79.

        Gohel SR, Biswal BB
        Functional integration between brain regions at rest occurs in
            multiple-frequency bands
        Brain connectivity. 2015 Feb 1;5(1):23-34.

        Caballero-Gaudes C, Reynolds RC
        Methods for cleaning the BOLD fMRI signal
        Neuroimage. 2017 Jul 1;154:128-49.

The afni_proc.py paper above describes some other helpful options in different scenarios. One thing we added a couple years ago is the ability to brightness-unifize a copy of the EPI dataset for EPI-anatomical alignment; this often helps the quality of alignment, esp. when the EPI has notable brightness inhomogeneities. I have only ever seen it help human dataset alignment, and at worst have no effect. Therefore, you might consider adding:

-align_unifize_epi         local

--pt

Hi pt!

Thank you very much for your detailed reply and guidance. :grinning: I’d like to follow up with a few questions and clarifications:

  1. From reading the tedana documentation, my understanding is that the echoes fed into tedana should be in native space. For example, the fMRIPrep workflow that incorporates tedana mentions extracting echo-wise data after slice-timing, motion, and distortion correction in native space, feeding these into tedana, and then applying normalization to the denoised output. Is my understanding correct that tedana expects inputs in native space?

  2. Regarding whether to separate tedana from afni_proc.py:

  • I am pretty new to AFNI as well as multi-echo data processing. I realize now that separating might introduce non-optimal features like multiple regridding, smoothing, or regression passes, as you mentioned. Could you explain why these extra features arise when tedana is run outside of afni_proc.py?
  • In my current design, I was planning the sequence as: blip → tshift → align → volreg → mask → tedana → normalization → detrending → nuisance regression → censoring/filtering → smoothing. If I integrate tedana into afni_proc.py as the combine block, how should the blocks and their order be set up to achieve this structure?
  • More importantly, in my earlier attempts I ran into errors when calling tedana through afni_proc.py: (1) my system’s Python 3 environment tried to run an older Python 2 version of tedana, producing syntax errors; and (2) the newer tedana (v25+) uses BIDS-style file names (…desc-denoised_bold.nii.gz), which did not match the old naming expected by afni_proc.py. These conflicts led to multiple failures, so I ended up decoupling the steps and running afni_proc.py first and then tedana separately; I also assumed the decoupling would not introduce any of the non-optimal features you mentioned.
  3. My main goal is to compute brain variability metrics (SD, MSSD) from multi-echo resting-state fMRI. Most papers I’ve reviewed use a 0.01–0.1 Hz filter in preprocessing when they calculated resting fMRI SD/MSSD (as in the following papers). Do you think that with multi-echo data I should handle filtering differently, in light of the cautions you mentioned?
https://pubmed.ncbi.nlm.nih.gov/35811000/
https://academic.oup.com/scan/article/18/1/nsad044/7271139
2024- BOLD signal variability as potential new biomarker of functional neurological disorders
  4. Finally, about your suggestion to use -align_unifize_epi local: should I just add this line right above or below my existing -align_opts_aea -cost lpc+ZZ -giant_move -check_flip options in the script?
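
(For context on item 3, the two metrics I mean are: SD = the temporal standard deviation per voxel, and MSSD = the mean of squared successive differences. A toy check of the definitions on a made-up 4-point series, just to be explicit about the formulas:

```shell
# Made-up time series, used only to illustrate the SD/MSSD formulas.
echo "1 2 3 4" | awk '{
    n = NF
    for (i = 1; i <= n; i++) sum += $i
    mean = sum / n
    for (i = 1; i <= n; i++) ss += ($i - mean)^2
    sd = sqrt(ss / (n - 1))                       # sample standard deviation
    for (i = 2; i <= n; i++) mssd += ($i - $(i-1))^2
    mssd /= (n - 1)                               # mean squared successive diff
    printf "SD=%.4f MSSD=%.4f\n", sd, mssd
}'
# → SD=1.2910 MSSD=1.0000
```

In practice I would compute these per voxel on the final denoised dataset.)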

Thank you again for your time and advice—it’s been very helpful for me as I set up this pipeline!

Best,
Jie

Hi, Jie-

Thanks for sharing that example link. While that example is performed in native space, it isn't clear to me that native-space processing is necessary there. I don't see a discussion/description of that.

For regridding and blurring from separately applied transformations,

  • that is probably most thoroughly discussed in this AFNI Academy video on Alignment (the first in a playlist that then covers more specific topics).
  • Within the paper I linked to before, it is discussed in the section/paragraph starting:

    2.1.1 Processing convenience and rigor:
    Many underlying steps are managed within the afni_proc.py program itself when building the processing script, in ways to optimize mathematical benefits. ...

  • ... and the paper also cites this article about that and other processing features within AFNI/afni_proc.py:
    Jo HJ, Gotts SJ, Reynolds RC, Bandettini PA, Martin A, Cox RW, Saad ZS (2013). Effective preprocessing procedures virtually eliminate distance-dependent motion artifacts in resting state FMRI. Journal of Applied Mathematics: art.no. 935154.

For applying everything within afni_proc.py, I will point to one of the AP examples from the aforementioned paper, which is in this GitHub repo. That uses "OC" as the combine method, but you could just change that to "m_tedana". I will paste the adjusted command here---leaving out a couple of features there, like the ROI_import (though you might find those interesting, esp. for QC, which is described more in the paper)---and note that it even includes blip up/down correction, which you are using:

afni_proc.py                                                                  \
    -subj_id                     ${subj}                                      \
    -dsets_me_run                ${dsets_epi_me}                              \
    -echo_times                  ${me_times}                                  \
    -copy_anat                   ${anat_cp}                                   \
    -anat_has_skull              no                                           \
    -anat_follower               anat_w_skull anat ${anat_skull}              \
    -blocks                      tshift align tlrc volreg mask combine        \
                                 blur scale regress                           \
    -radial_correlate_blocks     tcat volreg regress                          \
    -tcat_remove_first_trs       ${nt_rm}                                     \
    -blip_forward_dset           "${epi_forward}"                             \
    -blip_reverse_dset           "${epi_reverse}"                             \
    -align_unifize_epi           local                                        \
    -align_opts_aea              -cost lpc+ZZ -giant_move -check_flip         \
    -tlrc_base                   ${template}                                  \
    -tlrc_NL_warp                                                             \
    -tlrc_NL_warped_dsets        ${dsets_NL_warp}                             \
    -volreg_align_to             MIN_OUTLIER                                  \
    -volreg_align_e2a                                                         \
    -volreg_tlrc_warp                                                         \
    -volreg_warp_dxyz            ${final_dxyz}                                \
    -volreg_compute_tsnr         yes                                          \
    -mask_epi_anat               yes                                          \
    -combine_method              m_tedana                                     \
    -blur_size                   ${blur_size}                                 \
    -regress_motion_per_run                                                   \
    -regress_censor_motion       ${cen_motion}                                \
    -regress_censor_outliers     ${cen_outliers}                              \
    -regress_apply_mot_types     demean deriv                                 \
    -regress_est_blur_epits                                                   \
    -regress_est_blur_errts                                                   \
    -html_review_style           pythonic

A couple notes:

  • in that command, we ran sswarper2 prior to AP, to do both skullstripping of the anatomical dset and estimating nonlinear alignment to a template space (MNI, in that example). That is what we typically recommend, to save time+energy when reprocessing is necessary. The warp results of sswarper2 are provided to AP there, via the -tlrc_NL_warped_dsets ${dsets_NL_warp} opt in particular (see the GitHub for the description of ${dsets_NL_warp}), and the skullstripped anatomical results are provided by:
    -anat_has_skull              no                              \
    -anat_follower               anat_w_skull anat ${anat_skull} \
    
  • you might be interested in the other m_tedana_* variations, please see the AP help about them.
  • the "blip" block isn't necessary within your list of blocks---it is an implicit block, that AP will add in the correct spot when you use -blip_* .. opts (see Table 1 in the above-linked paper).

For errors about Python 2---what were you using for your combine method? The old/original Kundu et al. code for "tedana" as a combine method uses that. We would strongly recommend using the more modern DuPre et al one, "m_tedana" (because they were known as the "MEICA group"), and that uses Python 3.*. That might have been the issue?

For whether or not to do LFF (something like 0.01-0.1 Hz) bandpassing---it is so expensive in terms of degrees of freedom (DFs). If DFs are appropriately counted, using that processing step leaves very little room for participant motion, before one has mathematical issues with the processing. I understand that many papers use LFF bandpassing, but I guess most do because others do... There is still meaningful information above 0.1 Hz. Once you do LFF bandpassing in processing, you also can't calculate fALFF. There are many challenges with it---please see the above-cited articles for more about it. The issue is not multi-echo vs single-echo processing---in general, it is a process that removes a lot of info from data, and when properly accounted for in DF counting, might be leaving just noise behind. So, I would say the default choice should be not to bandpass unless you have a really good reason to need it, rather than default to include it.
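
To make the DF cost concrete with a back-of-the-envelope count (the run length and TR here are made-up illustrative values, not from this thread): a 0.01-0.1 Hz bandpass keeps only the sine/cosine pairs whose frequencies fall inside that band, so for N time points at a given TR:

```shell
# Assumed values for illustration only: 300 retained TRs, TR = 2.0 s.
awk -v nt=300 -v tr=2.0 'BEGIN {
    kept = 0
    for (k = 1; k <= nt / 2; k++) {
        f = k / (nt * tr)                 # frequency of the k-th Fourier pair
        if (f >= 0.01 && f <= 0.1) kept++
    }
    printf "DFs kept by bandpass: %d of %d\n", 2 * kept, nt
}'
# → DFs kept by bandpass: 110 of 300
```

And that is before any motion, WM/CSF, or censoring regressors are counted against what remains, which is the heart of the caution above.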

For the -align_unifize_epi .. opt: the order of options in an AP command doesn't matter. We group them typically by the block they belong to (i.e., all the -regress_* .. opts together, all the -volreg_* .. ones together), but that is primarily for the human-readability aspect. So, I would say you could put it anywhere in the cluster of -align_* opts.

--pt

Hi Jie,

Just adding a little to the useful things Paul has said...

There is an advantage to staying in orig space when the subjects have very little motion relative to the grid size. But if any combination of volreg parameters gets up to even 1/3 or 1/2 of a voxel, all bets are off. At that point there is full interpolation of the original data, and it might as well be in standard space. And then there would not be the multiple resample operations, blurring the data more than is necessary.

In your stream of processing (blip → tshift → align → volreg → mask → tedana → normalization → detrending → nuisance regression → censoring/filtering → smoothing), besides the spatial normalization processing, the order is a bit different from what we might suggest. To detrend, then perform nuisance regression, then censor and apply other filtering, all of the nuisance regression would need to be applied to the later filter regressors, and the censoring should be taken into account the entire way. So we put everything into a single model, to carry out the regression appropriately and to avoid regressing later regressors. Smoothing before or after nuisance regression should not matter, though we tend to include a scaling operation, which does make a small difference in the order.

Regarding python versions, note that you can run whichever tedana program you would like to, and it is simply up to you to have an appropriate python environment for your tedana version. That is separate from afni_proc.py. As Paul mentions, the current tedana (which uses python3) is preferred, using one of the m_tedana* afni_proc.py -combine_method cases.

Regarding BIDS, there is absolutely no problem with using BIDS naming in afni_proc.py inputs, there never has been. Many of the more recent examples use BIDS input.

You might consider afni_proc.py -show_example "demo 1c" -verb 2, or consider example "publish 3i". The "1c" example is a little closer to using m_tedana in that it adds a couple of interpolation options:

-tshift_interp             -wsinc9               \
-volreg_warp_final_interp  wsinc5                \

-rick