Conceptual @SSwarper inquiry

Hi all, I have two questions about this preprocessing command. My team has split the process into two separate scripts:
Part 1

# labels
set subj           = $1

set template       = MNI152_2009_template_SSW.nii.gz 

# upper directories
set topdir         = /data/projects/STUDIES/IMPACT/fMRI
set dir_bids       = ${topdir}/bids
set dir_ssw        = ${topdir}/derivatives/ssw
set dir_log        = ${topdir}/derivatives/logs

# subject directories
set sdir_basic     = ${dir_bids}/sub-${subj}
set sdir_epi       = ${sdir_basic}/func
set sdir_anat      = ${sdir_basic}/anat
set sdir_ssw       = ${dir_ssw}/sub-${subj}

# --------------------------------------------------------------------------
# data and control variables
# --------------------------------------------------------------------------

# dataset inputs
set dset_anat_00  = ${sdir_anat}/sub-${subj}_T1w.nii.gz

# control variables

# check available N_threads and report what is being used
# + consider using up to 16 threads (alignment programs are parallelized)
# + N_threads may be set elsewhere; to set here, uncomment the following line:
### setenv OMP_NUM_THREADS 16

set nthr_avail = `afni_system_check.py -disp_num_cpu`
set nthr_using = `afni_check_omp`

echo "++ INFO: Using ${nthr_using} of available ${nthr_avail} threads"

# ---------------------------------------------------------------------------
# run programs
# ---------------------------------------------------------------------------

time @SSwarper                                                        \
    -base    "${template}"                                            \
    -subid   "${subj}"                                                \
    -input   "${dset_anat_00}"                                        \
    -cost_nl_final lpa                                                \
    -odir    "${sdir_ssw}"

echo "++ FINISHED SSW"

Part 2

# specify script to execute
set cmd           = IMPACT_Setup_02_ssw

# labels
set all_subj      = ( s4347 )

# LOG:
# Round 1: s606 s333 s807
# Round 2: s1287 s4326 s926 s1476 s1324 s169 s35 s1350 s701 s601 s191 s578 s820 s523 s692 s4272
# Round 3: s4345 s1253 s1143 s1323 s4127 s1000 s1745 s1595 s4348 s1630 s418 s4347 s745

# upper directories
set dir_scr       = $PWD
set topdir        = /data/projects/STUDIES/IMPACT/fMRI
set dir_log       = ${topdir}/derivatives/logs

# running
set cdir_log      = ${dir_log}/logs_${cmd}
set scr_cmd       = ${dir_scr}/${cmd}.tcsh

# --------------------------------------------------------------------------
foreach subj ( ${all_subj} )

# make directory for storing text files to log the processing
\mkdir -p ${cdir_log}

# --------------------------------------------------------------------------

set log = ${cdir_log}/log_${cmd}_sub-${subj}.txt

# run command script (verbosely and stop at any failure); log terminal text.

tcsh -xf ${scr_cmd} ${subj} |& tee ${log}

end

Q1: Does this command require this nested design in order to run multiple participants? Is there another reason why it should be nested?

When looking for answers to this question on the AFNI help pages, I saw that there is a newer version of @SSwarper, named sswarper2. There do not seem to be many differences between the two scripts. Would anyone recommend switching to sswarper2? Why or why not? (Context: we are launching a new study and are revisiting the rationale for each step of our preprocessing pipeline.)

Thanks so much!


I think your team is following the model of how we typically process groups of subjects, such as in many of the scripts/repositories of code we make public in the Codex section of the documentation (e.g., the GitHub repo linked from this paper).

Basically, the "part 1" script is what we usually name do_*.tcsh, and the "part 2" script is what we usually name run_*.tcsh.

  • The part 1/do_*.tcsh script does the processing for a single subject.
  • The part 2/run_*.tcsh script loops over the group of subjects, calling the do_*.tcsh script many times, so that the whole group gets processed.

The analysis doesn't necessarily have to be done this way, but we find it quite useful to separate these roles. The run_*.tcsh script is nearly the same for any processing step, and if you are using a computing cluster you might add in swarm-processing functionality, etc.
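For example, on a cluster that provides a swarm utility (e.g., NIH's Biowulf), the serial foreach loop can instead generate a swarm command file with one do_*.tcsh call per line, and submit that as a batch. A small sketch (the script name and log naming below mirror the "part 2" script; the subject list and submission flags are placeholders for your own setup):

```shell
#!/bin/sh
# Sketch: build a swarm command file with one do_*.tcsh call per subject.
# Script/log names mirror the "part 2" script above; adjust for your setup.
cmd=IMPACT_Setup_02_ssw
swarm_file=swarm_${cmd}.txt

rm -f "${swarm_file}"
for subj in s4345 s1253 s1143; do
    # each line of the swarm file becomes one cluster job
    echo "tcsh -xf ${cmd}.tcsh ${subj} |& tee log_${cmd}_sub-${subj}.txt" \
        >> "${swarm_file}"
done

# then submit on the cluster, e.g.:
#   swarm -f ${swarm_file} -g 8 -t 16
cat "${swarm_file}"
```

This keeps the same division of labor: the do_*.tcsh script is unchanged, and only the looping/submission layer differs between a laptop and a cluster.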

Re. sswarper2: Yes, indeed, this is an update that is meant to be as close to a like-for-like replacement for @SSwarper as possible. The basic option usage is identical, and the basic outputs of interest are also identical. That makes it easy to swap into scripting, both for running the command itself and for using its results in later processing.
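Since the basic options carry over, the swap in the "part 1" script is essentially a one-word change. A tiny illustration (the commands are only echoed here rather than executed, since AFNI is not assumed to be installed, and the values are shortened placeholders):

```shell
#!/bin/sh
# Illustration: @SSwarper and sswarper2 take the same basic options,
# so switching programs is a one-word edit in the script.
# Values below are placeholders; commands are echoed, not run.
subj=s4347
opts="-base MNI152_2009_template_SSW.nii.gz -subid ${subj} -input sub-${subj}_T1w.nii.gz -odir ssw/sub-${subj}"

echo "@SSwarper ${opts}"
echo "sswarper2 ${opts}"
```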

The results will typically be quite similar, but sswarper2 should be slightly better, and a bit more consistent in a few cases. We describe it in one of our recent posters from OHBM, which you can find here. (... and this reminds me that we should post about those in the Announcements, which I will do now.)
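One small addition you might consider for either version: after a group run, check that each subject's output directory contains the standard files that later processing will want. A sketch, assuming the conventional anatSS/anatQQ output names (verify the extensions, .nii vs .nii.gz, against your own results):

```shell
#!/bin/sh
# Sketch: verify each sub-* directory under the SSW derivatives tree
# has the standard skull-stripping/warping outputs; report any missing.
# File names assume the usual @SSwarper defaults; adjust if yours differ.
check_ssw_outputs() {
    local dir_ssw="$1" missing=0 sdir subj f
    for sdir in "${dir_ssw}"/sub-*/; do
        subj=$(basename "${sdir}")
        subj=${subj#sub-}
        for f in anatSS.${subj}.nii         \
                 anatQQ.${subj}.nii         \
                 anatQQ.${subj}.aff12.1D    \
                 anatQQ.${subj}_WARP.nii; do
            if [ ! -e "${sdir}${f}" ]; then
                echo "** missing: ${sdir}${f}"
                missing=1
            fi
        done
    done
    return ${missing}
}

# usage:
#   check_ssw_outputs /data/projects/STUDIES/IMPACT/fMRI/derivatives/ssw
```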