resting state with afni_proc.py plus @SSwarper

I was playing around with resting state data, using the new-to-me combination of @SSwarper with anaticor (akin to example 9B, but with B-O-B's @SSwarper suggestions).


(a) is this a reasonable combination of @SSwarper results with afni_proc.py?

(b) any suggestions for speeding this up?

I was testing on my quad-core MacBook Pro before rolling it out to our SLURM-scheduled RedHat 7 computer cluster.

But it's taking forever.

Tnx and Happy Spring to All!


set subjects = (1234 5678)
set topdir = /Users/ddickstein/GROUP
set task = REST
set group = PEEPS
set subjdata = $topdir/RawImagingData/$group
set results = $topdir/IndvlLvlAnalyses_Cnsr1mm

# Go to the results tree and work from there

cd $results

# If it does not yet exist, create the group directory within the results directory

# ! = not; -d = directory

if ( ! -d $group ) mkdir $group

# Go to the newly created, or already existing, group directory

cd $group

# If it does not yet exist, create the task directory within the group directory

# ! = not; -d = directory

if ( ! -d $task ) mkdir $task

# Go to the newly created, or already existing, task directory

cd $task

# For each of the subjects specified in the subjects list...

foreach subj ( $subjects )

    # Delete the subject's existing results folder
    if ( -d $subj ) then
        # Print the following message to the screen
        echo "-- deleting old results for $subj"
        # Force recursive removal
        rm -fr $subj
    endif

    # Make a new subject's results folder, and go into that folder to work
    mkdir $subj
    cd $subj

    set subj_dir = $subjdata/$subj

    set btemplate = MNI152_2009_template_SSW.nii.gz
    set tpath = `@FindAfniDsetPath ${btemplate}`
    if ( "$tpath" == "" ) exit 1

    afni_proc.py -subj_id ${subj}                                    \
        -dsets $subj_dir/afni/${subj}_REST.nii.gz                    \
        -blocks tshift align tlrc volreg blur mask scale regress     \
        -copy_anat $subj_dir/anat/anatSS.${subj}.nii                 \
        -anat_has_skull no                                           \
        -tcat_remove_first_trs 4                                     \
        -align_opts_aea -ginormous_move -deoblique on -cost lpc+ZZ   \
        -volreg_align_to MIN_OUTLIER                                 \
        -volreg_tlrc_warp -tlrc_base $tpath/$btemplate               \
        -mask_epi_anat yes                                           \
        -regress_censor_motion 0.2                                   \
        -regress_censor_outliers 0.05                                \
        -regress_bandpass 0.01 0.1                                   \
        -regress_apply_mot_types demean deriv                        \
        -regress_run_clustsim no                                     \
        -regress_est_blur_errts

    cd ..
end
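As a side note on the directory setup above: in sh syntax (the script itself is tcsh), `mkdir -p` collapses the `if ( ! -d ... ) mkdir` checks into a single call that creates the whole chain and does nothing if it already exists. The paths below are stand-ins for the script's variables:

```shell
# mkdir -p creates all missing parent directories and is a no-op if the
# chain already exists -- it replaces the repeated if ( ! -d ... ) tests.
results=./demo_results    # stand-in for $results
group=PEEPS               # stand-in for $group
task=REST                 # stand-in for $task
mkdir -p "$results/$group/$task"
[ -d "$results/$group/$task" ] && echo "directory ready"
```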


Yes, @SSwarper can be run with any FMRI data (task, rest, naturalistic…).

To speed up (or, well, leverage as much speediness as possible), note that:

  1. Modern versions use the "lite" 3dQwarp processing, so make sure you are using an up-to-date AFNI.

  2. It's parallelized with OpenMP, so deeeeeefinitely make use of that: set OMP_NUM_THREADS on your system to use many threads. I often use 12 or 20.

When running on a cluster and using SLURM to do swarm/batch processing, we typically include this at the top of our tcsh scripts (could be applied to any shell script in other syntax):

# set thread count if we are running SLURM
if ( $?SLURM_CPUS_PER_TASK ) then
    setenv OMP_NUM_THREADS $SLURM_CPUS_PER_TASK
endif

That way, the knowledge of how many processors have been allocated to your task is merrily passed along to the program. You should see a message in the terminal when the warping starts about how many processors it is using, so you should be able to tell whether this effort has been successful.
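Since the same idea "could be applied to any shell script in other syntax," here is a sketch of the equivalent in plain sh/bash (the fallback of 4 threads is just an assumed default for illustration, not an AFNI setting):

```shell
# Use SLURM's per-task CPU allocation for OpenMP if it is set;
# otherwise fall back to an assumed default of 4 threads.
if [ -n "${SLURM_CPUS_PER_TASK:-}" ]; then
    export OMP_NUM_THREADS="$SLURM_CPUS_PER_TASK"
else
    export OMP_NUM_THREADS=4
fi
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```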


Tnx for speedy helpful reply!

To confirm: I first have to run the @SSwarper command, which itself takes a while.

And then run the script I sent for the afni_proc.py part (which will also take a while, but should be faster on a cluster running multiple threads?).

(vs. my MacBook test of 1 subject x 12 hours, at sub-brick 134 in 3dLocalstat).


Yes, @SSwarper is a precursor step to afni_proc.py (that's a good thing -- if you have to update your processing and re-run afni_proc.py, you don't have to redo the somewhat slow alignment). For this step, you reeeeeeeally want to use several cores/threads. This is where you really want OMP_NUM_THREADS set to be >1, if at all possible, and I provided script stuff for using SLURM explicitly in my previous message.

Running afni_proc.py comes next, using the results of @SSwarper. Having OMP_NUM_THREADS set to >1 (if possible) would also benefit your processing here. Note that applying the warps and regridding lots of EPI volumes can/will still take a while. Quality processing takes time!

For processing a group of subjects, you really would be better served by a more powerful desktop or a cluster. 12 hours seems a bit extreme, but total processing time depends on a lot of things: size of volumes, number of EPI volumes, whether disks are SSD, amount of memory per processor, phase of the Moon, etc.


Re-reading your reply from April, and the suggestion to use 3dQwarp rather than @SSwarper. Having read the help for that program, and postings on the AFNI board, two questions: (1) how do we know that my university's SLURM server is running the correct, updated version of 3dQwarp, as it looks like there are frequent changes to that program? (2) do you have an example of typical 3dQwarp usage that would replace/be akin to @SSwarper in the following, so that the results of the 3dQwarp command could be used for resting state example 9B?

@SSwarper -input path_2_my_subjects_anat_scan -base MNI152_2009_template_SSW.nii.gz -subid 1234

Tnx!


I don't see where I was advocating for using 3dQwarp rather than @SSwarper itself. I described the properties of 3dQwarp because it is used as part of @SSwarper, not because I thought it should be used instead. @SSwarper will both skull-strip and align to standard space -- two things that we have found useful to do together. So, I would use @SSwarper in general, as a precursor step to afni_proc.py.

Your questions:
#1- what is the output of:

afni -ver

? The "-lite" functionality became the default in 3dQwarp on Jan 8, 2019, so a version after that would be running "lighter". But again, you want a bigger desktop to use when processing a group.
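Since AFNI build dates come in a sortable form, a quick shell check against that Jan 8, 2019 cutoff could look like the following sketch (build_date here is a stand-in value; substitute the date your own `afni -ver` reports):

```shell
# Compare a build date (YYYY-MM-DD) against the date the "lite" 3dQwarp
# behavior became the default. Stripping the hyphens turns each date
# into a number (e.g. 20200315), so plain integer comparison works.
build_date="2020-03-15"   # stand-in value; take yours from 'afni -ver'
cutoff="2019-01-08"
bd=$(echo "$build_date" | tr -d -)
co=$(echo "$cutoff" | tr -d -)
if [ "$bd" -ge "$co" ]; then
    echo "lite 3dQwarp should be the default"
else
    echo "older than the lite-3dQwarp default -- consider updating AFNI"
fi
```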

#2- if you didn't want to use @SSwarper (but I haven't seen a reason why you would skip it?), then you would just use this option in afni_proc.py (while also having 'tlrc' as one of your blocks):

-tlrc_NL_warp


Again, to be clear, I am not recommending this, unless there is something I have forgotten from the earlier thread?