uber_subject.py and dmBLOCK


I am relatively new to fMRI analysis and AFNI, and have been spending a good chunk of time working through the AFNI bootcamp workshops online and the “Class Handouts”. I have a question regarding the uber_subject.py script I was hoping to get some help with.

For my task, I have my stimulus (jittered duration: 2500 ms, 4000 ms, or 5500 ms) followed by my feedback screen (1500 ms). I am interested in modeling both the stimulus and the feedback screen. I read that dmBLOCK may be a good option when stimuli have different durations; however, I do not see it as an option under 'init basis funcs' or 'init file types' in the uber_subject.py GUI. Would it make sense to type "dmBLOCK" as the 'init basis funcs' and "stim_AM1" as the 'init file types'? Is this correct given that my stimulus has varied durations, but my feedback has a fixed duration?

Thank you in advance for your assistance.


Hi Tamara,

That is right. You can specify the 'basis' function and
stim file 'type' separately, by editing the table entries.

So use 'basis' dmBLOCK and 'type' AM1 for the stimulus
class, and 'basis' BLOCK(1.5) and 'type' times for the
feedback condition.

You do not need to bother with the ‘init basis funcs’ or
‘init file types’ boxes.

Check the resulting afni_proc.py command that is
displayed, to be sure that -regress_basis_multi has the
correct basis functions, and -regress_stim_types has
AM1 and times.
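For reference, the relevant pairing in the generated command might look something like the following (a sketch only; the 'stim' and 'feedback' labels are hypothetical placeholders, and the '...' stands for the rest of your options):

```tcsh
# sketch: dmBLOCK pairs with AM1 timing, while the fixed-duration
# feedback class pairs with BLOCK(1.5) and plain 'times' timing
afni_proc.py ...                                \
    -regress_stim_labels stim feedback          \
    -regress_basis_multi 'dmBLOCK' 'BLOCK(1.5)' \
    -regress_stim_types AM1 times               \
    ...
```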

  • rick

Thank you for your help!

I ran uber_subject.py with the settings you suggested and received the following error, which I am not sure I understand:

** FATAL ERROR: '-stim_times 1' file 'stimuli/NonRevP_learn_corr_fed.txt' has 1 auxiliary values per time point [nopt=14]

I am not sure what is meant by my regressors having auxiliary values. I should mention that my timing files are formatted as stimulus onset (seconds):stimulus duration. Is this because I included the duration in my regressors?

I read in another discussion feed that “Timing with auxiliary values should be given via either -stim_times_AM1 or -stim_times_AM2, and likely the latter.” With this in mind, should I change the stim types from “times” to “AM1”?

I have also copied and pasted the resulting uber_subject.py script below.

Thank you again,

execute via :

tcsh -xef proc.s_9444A |& tee output.proc.s_9444A

=========================== auto block: setup ============================

script setup

take note of the AFNI version

afni -ver

check that the current AFNI version is recent enough

afni_history -check_date 23 Mar 2018
if ( $status ) then
echo "** this script requires newer AFNI binaries (than 23 Mar 2018)"
echo "   (consider: @update.afni.binaries -defaults)"

the user may specify a single subject to run with

if ( $#argv > 0 ) then
set subj = $argv[1]
set subj = s_9444A

assign output directory name

set output_dir = $subj.results

verify that the results directory does not yet exist

if ( -d $output_dir ) then
echo output dir "$subj.results" already exists

set list of runs

set runs = (`count -digits 2 1 4`)

create results and stimuli directories

mkdir $output_dir
mkdir $output_dir/stimuli

copy stim files into stimulus directory

cp /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_learn_corr_fed.txt

copy anatomy to results dir

3dcopy /home/bmiadmin/Tamara/PRR_Analysis/9444_test/9444.anat+orig

============================ auto block: tcat ============================

apply 3dTcat to copy input dsets to results dir,

while removing the first 0 TRs

3dTcat -prefix $output_dir/pb00.$subj.r01.tcat
3dTcat -prefix $output_dir/pb00.$subj.r02.tcat
3dTcat -prefix $output_dir/pb00.$subj.r03.tcat
3dTcat -prefix $output_dir/pb00.$subj.r04.tcat

and make note of repetitions (TRs) per run

set tr_counts = ( 153 153 153 153 )


enter the results directory (can begin processing data)

cd $output_dir

========================== auto block: outcount ==========================

data check: compute outlier fraction for each volume

touch out.pre_ss_warn.txt
foreach run ( $runs )
3dToutcount -automask -fraction -polort 3 -legendre \
            pb00.$subj.r$run.tcat+orig > outcount.r$run.1D

# outliers at TR 0 might suggest pre-steady state TRs
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
    echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
        >> out.pre_ss_warn.txt


catenate outlier counts into a single time series

cat outcount.r*.1D > outcount_rall.1D

get run number and TR index for minimum outlier volume

set minindex = `3dTstat -argmin -prefix - outcount_rall.1D\'`
set ovals = ( `1d_tool.py -set_run_lengths $tr_counts \
               -index_to_run_tr $minindex` )

save run and TR indices for extraction of vr_base_min_outlier

set minoutrun = $ovals[1]
set minouttr = $ovals[2]
echo "min outlier: run $minoutrun, TR $minouttr" | tee out.min_outlier.txt

================================= tshift =================================

time shift data so all slice timing is the same

foreach run ( $runs )
3dTshift -tzero 0 -quintic -prefix pb01.$subj.r$run.tshift


extract volreg registration base

3dbucket -prefix vr_base_min_outlier

================================= align ==================================

for e2a: compute anat alignment transformation to EPI registration base

(new anat will be intermediate, stripped, 9444.anat_ns+orig)

align_epi_anat.py -anat2epi -anat 9444.anat+orig \
                  -save_skullstrip -suffix _al_junk \
                  -epi vr_base_min_outlier+orig -epi_base 0 \
                  -epi_strip 3dAutomask \
                  -volreg off -tshift off

================================== tlrc ==================================

warp anatomy to standard space

@auto_tlrc -base TT_N27+tlrc -input 9444.anat_ns+orig -no_ss

store forward transformation matrix in a text file

cat_matvec 9444.anat_ns+tlrc::WARP_DATA -I > warp.anat.Xat.1D

================================= volreg =================================

align each dset to base volume, align to anat, warp to tlrc space

verify that we have a +tlrc warp dataset

if ( ! -f 9444.anat_ns+tlrc.HEAD ) then
echo "** missing +tlrc warp dataset: 9444.anat_ns+tlrc.HEAD"

register and warp

foreach run ( $runs )
# register each volume to the base image
3dvolreg -verbose -zpad 1 -base vr_base_min_outlier+orig \
         -1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run \
         -1Dmatrix_save mat.r$run.vr.aff12.1D

# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb01.$subj.r$run.tshift+orig -expr 1   \
       -prefix rm.epi.all1

# catenate volreg/epi2anat/tlrc xforms
cat_matvec -ONELINE                                         \
           9444.anat_ns+tlrc::WARP_DATA -I                  \
           9444.anat_al_junk_mat.aff12.1D -I                \
           mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D

# apply catenated xform: volreg/epi2anat/tlrc
3dAllineate -base 9444.anat_ns+tlrc                         \
            -input pb01.$subj.r$run.tshift+orig             \
            -1Dmatrix_apply mat.r$run.warp.aff12.1D         \
            -mast_dxyz 2                                    \
            -prefix rm.epi.nomask.r$run

# warp the all-1 dataset for extents masking 
3dAllineate -base 9444.anat_ns+tlrc                         \
            -input rm.epi.all1+orig                         \
            -1Dmatrix_apply mat.r$run.warp.aff12.1D         \
            -mast_dxyz 2 -final NN -quiet                   \
            -prefix rm.epi.1.r$run

# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc


make a single file of registration params

cat dfile.r*.1D > dfile_rall.1D


create the extents mask: mask_epi_extents+tlrc

(this is a mask of voxels that have valid data at every TR)

3dMean -datum short -prefix rm.epi.mean rm.epi.min.r*.HEAD
3dcalc -a rm.epi.mean+tlrc -expr 'step(a-0.999)' -prefix mask_epi_extents

and apply the extents mask to the EPI data

(delete any time series with missing data)

foreach run ( $runs )
3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
       -expr 'a*b' -prefix pb02.$subj.r$run.volreg

warp the volreg base EPI dataset to make a final version

cat_matvec -ONELINE \
           9444.anat_ns+tlrc::WARP_DATA -I \
           9444.anat_al_junk_mat.aff12.1D -I > mat.basewarp.aff12.1D

3dAllineate -base 9444.anat_ns+tlrc \
            -input vr_base_min_outlier+orig \
            -1Dmatrix_apply mat.basewarp.aff12.1D \
            -mast_dxyz 2 \
            -prefix final_epi_vr_base_min_outlier

create an anat_final dataset, aligned with stats

3dcopy 9444.anat_ns+tlrc anat_final.$subj

record final registration costs

3dAllineate -base final_epi_vr_base_min_outlier+tlrc -allcostX \
            -input anat_final.$subj+tlrc |& tee out.allcostX.txt


warp anat follower datasets (affine)

3dAllineate -source 9444.anat+orig \
            -master anat_final.$subj+tlrc \
            -final wsinc5 -1Dmatrix_apply warp.anat.Xat.1D \
            -prefix anat_w_skull_warped

================================== blur ==================================

blur each volume of each run

foreach run ( $runs )
3dmerge -1blur_fwhm 4.0 -doall -prefix pb03.$subj.r$run.blur

================================== mask ==================================

create ‘full_mask’ dataset (union mask)

foreach run ( $runs )
3dAutomask -dilate 1 -prefix rm.mask_r$run pb03.$subj.r$run.blur+tlrc

create union of inputs, output type is byte

3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj

---- create subject anatomy mask, mask_anat.$subj+tlrc ----

(resampled from tlrc anat)

3dresample -master full_mask.$subj+tlrc -input 9444.anat_ns+tlrc \
           -prefix rm.resam.anat

convert to binary anat mask; fill gaps and holes

3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
            -prefix mask_anat.$subj

compute tighter EPI mask by intersecting with anat mask

3dmask_tool -input full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            -inter -prefix mask_epi_anat.$subj

compute overlaps between anat and EPI masks

3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            |& tee out.mask_ae_overlap.txt

note Dice coefficient of masks, as well

3ddot -dodice full_mask.$subj+tlrc mask_anat.$subj+tlrc \
      |& tee out.mask_ae_dice.txt

---- create group anatomy mask, mask_group+tlrc ----

(resampled from tlrc base anat, TT_N27+tlrc)

3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
           -input /home/bmiadmin/abin/TT_N27+tlrc

convert to binary group mask; fill gaps and holes

3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
            -prefix mask_group

================================= scale ==================================

scale each voxel time series to have a mean of 100

(be sure no negatives creep in)

(subject to a range of [0,200])

foreach run ( $runs )
3dTstat -prefix rm.mean_r$run pb03.$subj.r$run.blur+tlrc
3dcalc -a pb03.$subj.r$run.blur+tlrc -b rm.mean_r$run+tlrc \
       -c mask_epi_extents+tlrc \
       -expr 'c * min(200, a/b*100)*step(a)*step(b)' \
       -prefix pb04.$subj.r$run.scale

================================ regress =================================

compute de-meaned motion parameters (for use in regression)

1d_tool.py -infile dfile_rall.1D -set_nruns 4 \
           -demean -write motion_demean.1D

compute motion parameter derivatives (for use in regression)

1d_tool.py -infile dfile_rall.1D -set_nruns 4 \
           -derivative -demean -write motion_deriv.1D

convert motion parameters for per-run regression

1d_tool.py -infile motion_demean.1D -set_nruns 4 \
           -split_into_pad_runs mot_demean

1d_tool.py -infile motion_deriv.1D -set_nruns 4 \
           -split_into_pad_runs mot_deriv

create censor file motion_${subj}_censor.1D, for censoring motion

1d_tool.py -infile dfile_rall.1D -set_nruns 4 \
           -show_censor_count -censor_prev_TR \
           -censor_motion 0.3 motion_${subj}

note TRs that were not censored

set ktrs = `1d_tool.py -infile motion_${subj}_censor.1D \
            -show_trs_uncensored encoded`


run the regression analysis

3dDeconvolve -input pb04.$subj.r*.scale+tlrc.HEAD \
    -censor motion_${subj}_censor.1D \
    -polort 3 \
    -num_stimts 64 \
    -stim_times 1 stimuli/NonRevP_learn_corr_fed.txt 'BLOCK(1.4)' \
    -stim_label 1 NonRevP_learn_corr_fed \
    -stim_times_AM1 2 stimuli/NonRevP_learn_corr.txt 'dmBLOCK' \
    -stim_label 2 NonRevP_learn_corr \
    -stim_times 3 stimuli/NonRevP_learn_incor_fed.txt 'BLOCK(1.4)' \
    -stim_label 3 NonRevP_learn_incor_fed \
    -stim_times_AM1 4 stimuli/NonRevP_learn_incor.txt 'dmBLOCK' \
    -stim_label 4 NonRevP_learn_incor \
    -stim_times 5 stimuli/NonRevP_perf_corr_fed.txt 'BLOCK(1.4)' \
    -stim_label 5 NonRevP_perf_corr_fed \
    -stim_times_AM1 6 stimuli/NonRevP_perf_corr.txt 'dmBLOCK' \
    -stim_label 6 NonRevP_perf_corr \
    -stim_times 7 stimuli/NonRevP_perf_incor_fed.txt 'BLOCK(1.4)' \
    -stim_label 7 NonRevP_perf_incor_fed \
    -stim_times_AM1 8 stimuli/NonRevP_perf_incor.txt 'dmBLOCK' \
    -stim_label 8 NonRevP_perf_incor \
    -stim_times 9 stimuli/RevP_acq_corr_fed.txt 'BLOCK(1.4)' \
    -stim_label 9 RevP_acq_corr_fed \
    -stim_times_AM1 10 stimuli/RevP_acq_corr.txt 'dmBLOCK' \
    -stim_label 10 RevP_acq_corr \
    -stim_times 11 stimuli/RevP_acq_incor_fed.txt 'BLOCK(1.4)' \
    -stim_label 11 RevP_acq_incor_fed \
    -stim_times_AM1 12 stimuli/RevP_acq_incor.txt 'dmBLOCK' \
    -stim_label 12 RevP_acq_incor \
    -stim_times 13 stimuli/RevP_rev_corr_fed.txt 'BLOCK(1.4)' \
    -stim_label 13 RevP_rev_corr_fed \
    -stim_times_AM1 14 stimuli/RevP_rev_corr.txt 'dmBLOCK' \
    -stim_label 14 RevP_rev_corr \
    -stim_times 15 stimuli/RevP_rev_incor_fed.txt 'BLOCK(1.4)' \
    -stim_label 15 RevP_rev_incor_fed \
    -stim_times_AM1 16 stimuli/RevP_rev_incor.txt 'dmBLOCK' \
    -stim_label 16 RevP_rev_incor \
    -stim_file 17 mot_demean.r01.1D'[0]' -stim_base 17 -stim_label 17 roll_01 \
    -stim_file 18 mot_demean.r01.1D'[1]' -stim_base 18 -stim_label 18 pitch_01 \
    -stim_file 19 mot_demean.r01.1D'[2]' -stim_base 19 -stim_label 19 yaw_01 \
    -stim_file 20 mot_demean.r01.1D'[3]' -stim_base 20 -stim_label 20 dS_01 \
    -stim_file 21 mot_demean.r01.1D'[4]' -stim_base 21 -stim_label 21 dL_01 \
    -stim_file 22 mot_demean.r01.1D'[5]' -stim_base 22 -stim_label 22 dP_01 \
    -stim_file 23 mot_demean.r02.1D'[0]' -stim_base 23 -stim_label 23 roll_02 \
    -stim_file 24 mot_demean.r02.1D'[1]' -stim_base 24 -stim_label 24 pitch_02 \
    -stim_file 25 mot_demean.r02.1D'[2]' -stim_base 25 -stim_label 25 yaw_02 \
    -stim_file 26 mot_demean.r02.1D'[3]' -stim_base 26 -stim_label 26 dS_02 \
    -stim_file 27 mot_demean.r02.1D'[4]' -stim_base 27 -stim_label 27 dL_02 \
    -stim_file 28 mot_demean.r02.1D'[5]' -stim_base 28 -stim_label 28 dP_02 \
    -stim_file 29 mot_demean.r03.1D'[0]' -stim_base 29 -stim_label 29 roll_03 \
    -stim_file 30 mot_demean.r03.1D'[1]' -stim_base 30 -stim_label 30 pitch_03 \
    -stim_file 31 mot_demean.r03.1D'[2]' -stim_base 31 -stim_label 31 yaw_03 \
    -stim_file 32 mot_demean.r03.1D'[3]' -stim_base 32 -stim_label 32 dS_03 \
    -stim_file 33 mot_demean.r03.1D'[4]' -stim_base 33 -stim_label 33 dL_03 \
    -stim_file 34 mot_demean.r03.1D'[5]' -stim_base 34 -stim_label 34 dP_03 \
    -stim_file 35 mot_demean.r04.1D'[0]' -stim_base 35 -stim_label 35 roll_04 \
    -stim_file 36 mot_demean.r04.1D'[1]' -stim_base 36 -stim_label 36 pitch_04 \
    -stim_file 37 mot_demean.r04.1D'[2]' -stim_base 37 -stim_label 37 yaw_04 \
    -stim_file 38 mot_demean.r04.1D'[3]' -stim_base 38 -stim_label 38 dS_04 \
    -stim_file 39 mot_demean.r04.1D'[4]' -stim_base 39 -stim_label 39 dL_04 \
    -stim_file 40 mot_demean.r04.1D'[5]' -stim_base 40 -stim_label 40 dP_04 \
    -stim_file 41 mot_deriv.r01.1D'[0]' -stim_base 41 -stim_label 41 roll_05 \
    -stim_file 42 mot_deriv.r01.1D'[1]' -stim_base 42 -stim_label 42 pitch_05 \
    -stim_file 43 mot_deriv.r01.1D'[2]' -stim_base 43 -stim_label 43 yaw_05 \
    -stim_file 44 mot_deriv.r01.1D'[3]' -stim_base 44 -stim_label 44 dS_05 \
    -stim_file 45 mot_deriv.r01.1D'[4]' -stim_base 45 -stim_label 45 dL_05 \
    -stim_file 46 mot_deriv.r01.1D'[5]' -stim_base 46 -stim_label 46 dP_05 \
    -stim_file 47 mot_deriv.r02.1D'[0]' -stim_base 47 -stim_label 47 roll_06 \
    -stim_file 48 mot_deriv.r02.1D'[1]' -stim_base 48 -stim_label 48 pitch_06 \
    -stim_file 49 mot_deriv.r02.1D'[2]' -stim_base 49 -stim_label 49 yaw_06 \
    -stim_file 50 mot_deriv.r02.1D'[3]' -stim_base 50 -stim_label 50 dS_06 \
    -stim_file 51 mot_deriv.r02.1D'[4]' -stim_base 51 -stim_label 51 dL_06 \
    -stim_file 52 mot_deriv.r02.1D'[5]' -stim_base 52 -stim_label 52 dP_06 \
    -stim_file 53 mot_deriv.r03.1D'[0]' -stim_base 53 -stim_label 53 roll_07 \
    -stim_file 54 mot_deriv.r03.1D'[1]' -stim_base 54 -stim_label 54 pitch_07 \
    -stim_file 55 mot_deriv.r03.1D'[2]' -stim_base 55 -stim_label 55 yaw_07 \
    -stim_file 56 mot_deriv.r03.1D'[3]' -stim_base 56 -stim_label 56 dS_07 \
    -stim_file 57 mot_deriv.r03.1D'[4]' -stim_base 57 -stim_label 57 dL_07 \
    -stim_file 58 mot_deriv.r03.1D'[5]' -stim_base 58 -stim_label 58 dP_07 \
    -stim_file 59 mot_deriv.r04.1D'[0]' -stim_base 59 -stim_label 59 roll_08 \
    -stim_file 60 mot_deriv.r04.1D'[1]' -stim_base 60 -stim_label 60 pitch_08 \
    -stim_file 61 mot_deriv.r04.1D'[2]' -stim_base 61 -stim_label 61 yaw_08 \
    -stim_file 62 mot_deriv.r04.1D'[3]' -stim_base 62 -stim_label 62 dS_08 \
    -stim_file 63 mot_deriv.r04.1D'[4]' -stim_base 63 -stim_label 63 dL_08 \
    -stim_file 64 mot_deriv.r04.1D'[5]' -stim_base 64 -stim_label 64 dP_08 \
    -fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
    -x1D_uncensored X.nocensor.xmat.1D \
    -fitts fitts.$subj \
    -errts errts.${subj} \
    -bucket stats.$subj

if 3dDeconvolve fails, terminate the script

if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dDeconvolve error, failing...'
    echo '   (consider the file 3dDeconvolve.err)'

display any large pairwise correlations from the X-matrix

1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt

create an all_runs dataset to match the fitts, errts, etc.

3dTcat -prefix all_runs.$subj pb04.$subj.r*.scale+tlrc.HEAD


create a temporal signal to noise ratio dataset

signal: if ‘scale’ block, mean should be 100

noise : compute standard deviation of errts

3dTstat -mean -prefix rm.signal.all all_runs.$subj+tlrc"[$ktrs]"
3dTstat -stdev -prefix rm.noise.all errts.${subj}+tlrc"[$ktrs]"
3dcalc -a rm.signal.all+tlrc \
       -b rm.noise.all+tlrc \
       -c full_mask.$subj+tlrc \
       -expr 'c*a/b' -prefix TSNR.$subj


compute and store GCOR (global correlation average)

(sum of squares of global mean of unit errts)

3dTnorm -norm2 -prefix rm.errts.unit errts.${subj}+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc \
          > gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"


compute correlation volume

(per voxel: average correlation across masked brain)

(now just dot product with average unit time series)

3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc

create ideal files for fixed response stim types

1dcat X.nocensor.xmat.1D'[16]' > ideal_NonRevP_learn_corr_fed.1D
1dcat X.nocensor.xmat.1D'[17]' > ideal_NonRevP_learn_corr.1D
1dcat X.nocensor.xmat.1D'[18]' > ideal_NonRevP_learn_incor_fed.1D
1dcat X.nocensor.xmat.1D'[19]' > ideal_NonRevP_learn_incor.1D
1dcat X.nocensor.xmat.1D'[20]' > ideal_NonRevP_perf_corr_fed.1D
1dcat X.nocensor.xmat.1D'[21]' > ideal_NonRevP_perf_corr.1D
1dcat X.nocensor.xmat.1D'[22]' > ideal_NonRevP_perf_incor_fed.1D
1dcat X.nocensor.xmat.1D'[23]' > ideal_NonRevP_perf_incor.1D
1dcat X.nocensor.xmat.1D'[24]' > ideal_RevP_acq_corr_fed.1D
1dcat X.nocensor.xmat.1D'[25]' > ideal_RevP_acq_corr.1D
1dcat X.nocensor.xmat.1D'[26]' > ideal_RevP_acq_incor_fed.1D
1dcat X.nocensor.xmat.1D'[27]' > ideal_RevP_acq_incor.1D
1dcat X.nocensor.xmat.1D'[28]' > ideal_RevP_rev_corr_fed.1D
1dcat X.nocensor.xmat.1D'[29]' > ideal_RevP_rev_corr.1D
1dcat X.nocensor.xmat.1D'[30]' > ideal_RevP_rev_incor_fed.1D
1dcat X.nocensor.xmat.1D'[31]' > ideal_RevP_rev_incor.1D


compute sum of non-baseline regressors from the X-matrix

(use 1d_tool.py to get list of regressor columns)

set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"

also, create a stimulus-only X-matrix, for easy review

1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D

============================ blur estimation =============================

compute blur estimates

touch blur_est.$subj.1D # start with empty file

create directory for ACF curve files

mkdir files_ACF

– estimate blur for each run in epits –

touch blur.epits.1D

restrict to uncensored TRs, per run

foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                      -show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask full_mask.$subj+tlrc \
        -ACF files_ACF/out.3dFWHMx.ACF.epits.r$run.1D \
        all_runs.$subj+tlrc"[$trs]" >> blur.epits.1D

compute average FWHM blur (from every other row) and append

set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{0..$(2)}'\'` )
echo average epits FWHM blurs: $blurs
echo "$blurs   # epits FWHM blur estimates" >> blur_est.$subj.1D

compute average ACF blur (from every other row) and append

set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{1..$(2)}'\'` )
echo average epits ACF blurs: $blurs
echo "$blurs   # epits ACF blur estimates" >> blur_est.$subj.1D

– estimate blur for each run in errts –

touch blur.errts.1D

restrict to uncensored TRs, per run

foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                      -show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask full_mask.$subj+tlrc \
        -ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D \
        errts.${subj}+tlrc"[$trs]" >> blur.errts.1D

compute average FWHM blur (from every other row) and append

set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{0..$(2)}'\'` )
echo average errts FWHM blurs: $blurs
echo "$blurs   # errts FWHM blur estimates" >> blur_est.$subj.1D

compute average ACF blur (from every other row) and append

set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{1..$(2)}'\'` )
echo average errts ACF blurs: $blurs
echo "$blurs   # errts ACF blur estimates" >> blur_est.$subj.1D

================== auto block: generate review scripts ===================

generate a review script for the unprocessed EPI data

gen_epi_review.py -script @epi_review.$subj \
                  -dsets pb00.$subj.r*.tcat+orig.HEAD

generate scripts to review single subject results

(try with defaults, but do not allow bad exit status)

gen_ss_review_scripts.py -mot_limit 0.3 -exit0

========================== auto block: finalize ==========================

remove temporary files

\rm -f rm.*

if the basic subject review script is here, run it

(want this to be the last text output)

if ( -e @ss_review_basic ) ./@ss_review_basic |& tee out.ss_review.$subj.txt

return to parent directory

cd ..

echo "execution finished: `date`"


script generated by the command:

afni_proc.py -subj_id s_9444A -script proc.s_9444A -scr_overwrite -blocks \
    tshift align tlrc volreg blur mask scale regress -copy_anat \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/9444.anat+orig -dsets \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/9444.1+orig.HEAD \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/9444.2+orig.HEAD \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/9444.3+orig.HEAD \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/9444.4+orig.HEAD \
    -tcat_remove_first_trs 0 -volreg_align_to MIN_OUTLIER -volreg_align_e2a \
    -volreg_tlrc_warp -blur_size 4.0 -regress_stim_times \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_learn_corr_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_learn_corr.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_learn_incor_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_learn_incor.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_perf_corr_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_perf_corr.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_perf_incor_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/NonRevP_perf_incor.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_acq_corr_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_acq_corr.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_acq_incor_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_acq_incor.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_rev_corr_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_rev_corr.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_rev_incor_fed.txt \
    /home/bmiadmin/Tamara/PRR_Analysis/9444_test/RevP_rev_incor.txt \
    -regress_stim_labels NonRevP_learn_corr_fed NonRevP_learn_corr \
    NonRevP_learn_incor_fed NonRevP_learn_incor NonRevP_perf_corr_fed \
    NonRevP_perf_corr NonRevP_perf_incor_fed NonRevP_perf_incor \
    RevP_acq_corr_fed RevP_acq_corr RevP_acq_incor_fed RevP_acq_incor \
    RevP_rev_corr_fed RevP_rev_corr RevP_rev_incor_fed RevP_rev_incor \
    -regress_basis_multi 'BLOCK(1.4)' dmBLOCK 'BLOCK(1.4)' dmBLOCK \
    'BLOCK(1.4)' dmBLOCK 'BLOCK(1.4)' dmBLOCK 'BLOCK(1.4)' dmBLOCK \
    'BLOCK(1.4)' dmBLOCK 'BLOCK(1.4)' dmBLOCK 'BLOCK(1.4)' dmBLOCK \
    -regress_stim_types times AM1 times AM1 times AM1 times AM1 times AM1 \
    times AM1 times AM1 times AM1 -regress_censor_motion 0.3 \
    -regress_apply_mot_types demean deriv -regress_motion_per_run \
    -regress_make_ideal_sum sum_ideal.1D -regress_est_blur_epits \
    -regress_est_blur_errts -regress_run_clustsim no

What does NonRevP_learn_corr_fed.txt look like? If
every regressor has a duration attached, that would
suggest the intention was to use dmBLOCK or dmUBLOCK
for all of them.
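For reference, a timing file with "married" durations has rows like the following (made-up onsets, one row per run, each event written as onset:duration in seconds):

```text
10.2:2.5  35.7:4.0  62.1:5.5
12.4:2.5  40.0:4.0  66.3:5.5
```

A file like this must be paired with -stim_times_AM1 (type AM1); a file of bare onsets goes with plain -stim_times (type times).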

If all of the durations are the same for a given stim
timing file, it is not necessary to include them. But if
you have included them, then perhaps dmBLOCK or dmUBLOCK
should be used for all stim classes.
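If you want a quick way to check, a row with married values can be spotted with a few lines of Python. This is only an illustrative sketch (not an AFNI utility), and the example rows are made up:

```python
def row_has_aux_values(row: str) -> bool:
    """Return True if any event in a timing-file row carries an attached
    value, e.g. onset:duration (for dmBLOCK) or onset*amplitude."""
    for tok in row.split():
        if tok == "*":              # placeholder for a run with no events
            continue
        if ":" in tok or "*" in tok:
            return True
    return False

print(row_has_aux_values("10.2 35.7 62.1"))      # bare onsets -> False
print(row_has_aux_values("10.2:2.5 35.7:4.0"))   # married durations -> True
```

Rows that return True should be given via -stim_times_AM1 (type AM1), while rows of bare onsets go with -stim_times (type times).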

  • rick