current memory mallocated error

Hello,

I ran the script that I created using afni_proc.py, and it keeps giving me a "current memory malloc-ated" error. Part of the output is below, and I also attach my script.

Sungjin

3dDeconvolve -input pb05.265003.r01.scale+tlrc.HEAD pb05.265003.r02.scale+tlrc.HEAD pb05.265003.r03.scale+tlrc.HEAD pb05.265003.r04.scale+tlrc.HEAD pb05.265003.r05.scale+tlrc.HEAD pb05.265003.r06.scale+tlrc.HEAD -censor motion_265003_censor.1D -ortvec mot_demean.r01.1D mot_demean_r01 -ortvec mot_demean.r02.1D mot_demean_r02 -ortvec mot_demean.r03.1D mot_demean_r03 -ortvec mot_demean.r04.1D mot_demean_r04 -ortvec mot_demean.r05.1D mot_demean_r05 -ortvec mot_demean.r06.1D mot_demean_r06 -ortvec mot_deriv.r01.1D mot_deriv_r01 -ortvec mot_deriv.r02.1D mot_deriv_r02 -ortvec mot_deriv.r03.1D mot_deriv_r03 -ortvec mot_deriv.r04.1D mot_deriv_r04 -ortvec mot_deriv.r05.1D mot_deriv_r05 -ortvec mot_deriv.r06.1D mot_deriv_r06 -polort 3 -num_stimts 1 -stim_times 1 stimuli/Timing_CGE.txt GAM -stim_label 1 Timing_CGE.txt -jobs 4 -GOFORIT 5 -fout -tout -x1D X.xmat.1D -xjpeg X.jpg -x1D_uncensored X.nocensor.xmat.1D -errts errts.265003 -bucket stats.265003
++ 3dDeconvolve extending num_stimts from 1 to 73 due to -ortvec
++ 3dDeconvolve: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ current memory malloc-ated = 1,521,978 bytes (about 1.5 million)
++ loading dataset pb05.265003.r01.scale+tlrc.HEAD pb05.265003.r02.scale+tlrc.HEAD pb05.265003.r03.scale+tlrc.HEAD pb05.265003.r04.scale+tlrc.HEAD pb05.265003.r05.scale+tlrc.HEAD pb05.265003.r06.scale+tlrc.HEAD
Killed

#!/bin/tcsh -xef

echo "auto-generated by afni_proc.py, Tue Oct 15 15:34:29 2019"
echo "(version 6.32, February 22, 2019)"
echo "execution started: `date`"

# to execute via tcsh:
#   tcsh -xef proc.265001 |& tee output.proc.265001
#
# to execute via bash:
#   tcsh -xef proc.265001.s1 2>&1 | tee output.proc.265001.s1

# =========================== auto block: setup ============================
# script setup

# take note of the AFNI version
afni -ver

# check that the current AFNI version is recent enough
afni_history -check_date 17 Jan 2019
if ( $status ) then
    echo "** this script requires newer AFNI binaries (than 17 Jan 2019)"
    echo "   (consider: @update.afni.binaries -defaults)"
    exit
endif

# the user may specify a single subject to run with
if ( $#argv > 0 ) then
    set subj = $argv[1]
else
    set subj = 265001
endif

set sNUM = 1

# assign output directory name
set output_dir = ${subj}.s$sNUM.results

# verify that the results directory does not yet exist
if ( -d $output_dir ) then
    echo output dir "$subj.results" already exists
    exit
endif

# set list of runs
set runs = (`count -digits 2 1 6`)

# create results and stimuli directories
mkdir $output_dir
mkdir $output_dir/stimuli

# copy stim files into stimulus directory
cp /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt $output_dir/stimuli

# copy anatomy to results dir
3dcopy CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/anatSS.SSW.nii \
    $output_dir/anatSS.SSW

# copy external -tlrc_NL_warped_dsets datasets
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW.nii \
    $output_dir/anatQQ.SSW
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW.aff12.1D \
    $output_dir/anatQQ.SSW.aff12.1D
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW_WARP.nii \
    $output_dir/anatQQ.SSW_WARP.nii

# ============================ auto block: tcat ============================
# apply 3dTcat to copy input dsets to results dir,
# while removing the first 0 TRs
3dTcat -prefix $output_dir/pb00.$subj.r01.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/deob.rest1.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r02.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/deob.pv.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r03.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/deob.14p.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r04.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/deob.6p.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r05.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/deob.flk.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r06.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/deob.rest2.s1+orig'[0..$]'

# and make note of repetitions (TRs) per run
set tr_counts = ( 180 150 150 150 150 180 )

# -------------------------------------------------------
# enter the results directory (can begin processing data)
cd $output_dir

# ========================== auto block: outcount ==========================
# data check: compute outlier fraction for each volume
touch out.pre_ss_warn.txt
foreach run ( $runs )
    3dToutcount -automask -fraction -polort 3 -legendre                     \
                pb00.$subj.r$run.tcat+orig > outcount.r$run.1D

    # outliers at TR 0 might suggest pre-steady state TRs
    if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
        echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
            >> out.pre_ss_warn.txt
    endif
end

# catenate outlier counts into a single time series
cat outcount.r*.1D > outcount_rall.1D

# get run number and TR index for minimum outlier volume
set minindex = `3dTstat -argmin -prefix - outcount_rall.1D\'`
set ovals = ( `1d_tool.py -set_run_lengths $tr_counts \
                          -index_to_run_tr $minindex` )

# save run and TR indices for extraction of vr_base_min_outlier
set minoutrun = $ovals[1]
set minouttr  = $ovals[2]
echo "min outlier: run $minoutrun, TR $minouttr" | tee out.min_outlier.txt

# ================================= tshift =================================
# time shift data so all slice timing is the same
foreach run ( $runs )
    3dTshift -tzero 0 -quintic -prefix pb01.$subj.r$run.tshift \
             pb00.$subj.r$run.tcat+orig
end

# ================================ despike =================================
# apply 3dDespike to each run
foreach run ( $runs )
    3dDespike -NEW -nomask -prefix pb02.$subj.r$run.despike \
              pb01.$subj.r$run.tshift+orig
end

# --------------------------------
# extract volreg registration base
3dbucket -prefix vr_base_min_outlier \
    pb02.$subj.r$minoutrun.despike+orig"[$minouttr]"

# ================================= align ==================================
# for e2a: compute anat alignment transformation to EPI registration base
# (new anat will be current anatSS.SSW+orig)
align_epi_anat.py -anat2epi -anat anatSS.SSW+orig \
       -suffix _al_junk                           \
       -epi vr_base_min_outlier+orig -epi_base 0  \
       -epi_strip 3dAutomask                      \
       -anat_has_skull no                         \
       -cost lpc+ZZ                               \
       -volreg off -tshift off

# ================================== tlrc ==================================
# nothing to do: have external -tlrc_NL_warped_dsets
# warped anat     : anatQQ.SSW+tlrc
# affine xform    : anatQQ.SSW.aff12.1D
# non-linear warp : anatQQ.SSW_WARP.nii

# ================================= volreg =================================
# align each dset to base volume, to anat, warp to tlrc space

# verify that we have a +tlrc warp dataset
if ( ! -f anatQQ.SSW+tlrc.HEAD ) then
    echo "** missing +tlrc warp dataset: anatQQ.SSW+tlrc.HEAD"
    exit
endif

# register and warp
foreach run ( $runs )
    # register each volume to the base image
    3dvolreg -verbose -zpad 1 -base vr_base_min_outlier+orig \
             -1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run \
             -cubic \
             -1Dmatrix_save mat.r$run.vr.aff12.1D \
             pb02.$subj.r$run.despike+orig

# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb02.$subj.r$run.despike+orig -expr 1        \
       -prefix rm.epi.all1

# catenate volreg/epi2anat/tlrc xforms
cat_matvec -ONELINE                                               \
           anatQQ.SSW.aff12.1D                                    \
           anatSS.SSW_al_junk_mat.aff12.1D -I                     \
           mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D

# apply catenated xform: volreg/epi2anat/tlrc/NLtlrc
# then apply non-linear standard-space warp
3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1                      \
             -source pb02.$subj.r$run.despike+orig                \
             -nwarp "anatQQ.SSW_WARP.nii mat.r$run.warp.aff12.1D" \
             -prefix rm.epi.nomask.r$run

# warp the all-1 dataset for extents masking 
3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1                      \
             -source rm.epi.all1+orig                             \
             -nwarp "anatQQ.SSW_WARP.nii mat.r$run.warp.aff12.1D" \
             -interp cubic                                        \
             -ainterp NN -quiet                                   \
             -prefix rm.epi.1.r$run

# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc

end

# make a single file of registration params
cat dfile.r*.1D > dfile_rall.1D

# ----------------------------------------
# create the extents mask: mask_epi_extents+tlrc
# (this is a mask of voxels that have valid data at every TR)
3dMean -datum short -prefix rm.epi.mean rm.epi.min.r*.HEAD
3dcalc -a rm.epi.mean+tlrc -expr 'step(a-0.999)' -prefix mask_epi_extents

# and apply the extents mask to the EPI data
# (delete any time series with missing data)
foreach run ( $runs )
    3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
           -expr 'a*b' -prefix pb03.$subj.r$run.volreg
end

# warp the volreg base EPI dataset to make a final version
cat_matvec -ONELINE \
    anatQQ.SSW.aff12.1D \
    anatSS.SSW_al_junk_mat.aff12.1D -I > mat.basewarp.aff12.1D

3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1 \
             -source vr_base_min_outlier+orig \
             -nwarp "anatQQ.SSW_WARP.nii mat.basewarp.aff12.1D" \
             -prefix final_epi_vr_base_min_outlier

# create an anat_final dataset, aligned with stats
3dcopy anatQQ.SSW+tlrc anat_final.$subj

# record final registration costs
3dAllineate -base final_epi_vr_base_min_outlier+tlrc -allcostX \
            -input anat_final.$subj+tlrc |& tee out.allcostX.txt

# ================================== blur ==================================
# blur each volume of each run
foreach run ( $runs )
    3dmerge -1blur_fwhm 6.0 -doall -prefix pb04.$subj.r$run.blur \
            pb03.$subj.r$run.volreg+tlrc
end

# ================================== mask ==================================
# create 'full_mask' dataset (union mask)
foreach run ( $runs )
    3dAutomask -prefix rm.mask_r$run pb04.$subj.r$run.blur+tlrc
end

# create union of inputs, output type is byte
3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj

# ---- create subject anatomy mask, mask_anat.$subj+tlrc ----
#      (resampled from tlrc anat)
3dresample -master full_mask.$subj+tlrc -input anatQQ.SSW+tlrc \
           -prefix rm.resam.anat

# convert to binary anat mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
            -prefix mask_anat.$subj

# compute tighter EPI mask by intersecting with anat mask
3dmask_tool -input full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            -inter -prefix mask_epi_anat.$subj

# compute overlaps between anat and EPI masks
3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            |& tee out.mask_ae_overlap.txt

# note Dice coefficient of masks, as well
3ddot -dodice full_mask.$subj+tlrc mask_anat.$subj+tlrc \
      |& tee out.mask_ae_dice.txt

# ---- create group anatomy mask, mask_group+tlrc ----
#      (resampled from tlrc base anat, MNI152_2009_template_SSW.nii.gz)
3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
           -input /home/sungjin/abin/MNI152_2009_template_SSW.nii.gz

# convert to binary group mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
            -prefix mask_group

# ================================= scale ==================================
# scale each voxel time series to have a mean of 100
# (be sure no negatives creep in)
# (subject to a range of [0,200])
foreach run ( $runs )
    3dTstat -prefix rm.mean_r$run pb04.$subj.r$run.blur+tlrc
    3dcalc -a pb04.$subj.r$run.blur+tlrc -b rm.mean_r$run+tlrc \
           -c mask_epi_extents+tlrc                            \
           -expr 'c * min(200, a/b*100)*step(a)*step(b)'       \
           -prefix pb05.$subj.r$run.scale
end

# ================================ regress =================================

# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -demean -write motion_demean.1D

# compute motion parameter derivatives (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -derivative -demean -write motion_deriv.1D

# convert motion parameters for per-run regression
1d_tool.py -infile motion_demean.1D -set_run_lengths 180 150 150 150 150 180 \
           -split_into_pad_runs mot_demean

1d_tool.py -infile motion_deriv.1D -set_run_lengths 180 150 150 150 150 180 \
           -split_into_pad_runs mot_deriv

# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -show_censor_count -censor_prev_TR                             \
           -censor_motion 0.5 motion_${subj}

# note TRs that were not censored
set ktrs = `1d_tool.py -infile motion_${subj}_censor.1D \
                       -show_trs_uncensored encoded`

# ------------------------------
# run the regression analysis
3dDeconvolve -input pb05.$subj.r*.scale+tlrc.HEAD \
    -censor motion_${subj}_censor.1D              \
    -ortvec mot_demean.r01.1D mot_demean_r01      \
    -ortvec mot_demean.r02.1D mot_demean_r02      \
    -ortvec mot_demean.r03.1D mot_demean_r03      \
    -ortvec mot_demean.r04.1D mot_demean_r04      \
    -ortvec mot_demean.r05.1D mot_demean_r05      \
    -ortvec mot_demean.r06.1D mot_demean_r06      \
    -ortvec mot_deriv.r01.1D mot_deriv_r01        \
    -ortvec mot_deriv.r02.1D mot_deriv_r02        \
    -ortvec mot_deriv.r03.1D mot_deriv_r03        \
    -ortvec mot_deriv.r04.1D mot_deriv_r04        \
    -ortvec mot_deriv.r05.1D mot_deriv_r05        \
    -ortvec mot_deriv.r06.1D mot_deriv_r06        \
    -polort 3                                     \
    -num_stimts 1                                 \
    -stim_times 1 stimuli/Timing_CGE.txt 'GAM'    \
    -stim_label 1 Timing_CGE.txt                  \
    -jobs 4                                       \
    -GOFORIT 5                                    \
    -fout -tout -x1D X.xmat.1D -xjpeg X.jpg       \
    -x1D_uncensored X.nocensor.xmat.1D            \
    -errts errts.${subj}                          \
    -bucket stats.$subj

# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dDeconvolve error, failing...'
    echo '   (consider the file 3dDeconvolve.err)'
    exit
endif

# display any large pairwise correlations from the X-matrix
1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt

# display degrees of freedom info from X-matrix
1d_tool.py -show_df_info -infile X.xmat.1D |& tee out.df_info.txt

# -- execute the 3dREMLfit script, written by 3dDeconvolve --
tcsh -x stats.REML_cmd

# if 3dREMLfit fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dREMLfit error, failing...'
    exit
endif

# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb05.$subj.r*.scale+tlrc.HEAD

# --------------------------------------------------
# create a temporal signal to noise ratio dataset
#    signal: if 'scale' block, mean should be 100
#    noise : compute standard deviation of errts
3dTstat -mean -prefix rm.signal.all all_runs.$subj+tlrc"[$ktrs]"
3dTstat -stdev -prefix rm.noise.all errts.${subj}_REML+tlrc"[$ktrs]"
3dcalc -a rm.signal.all+tlrc   \
       -b rm.noise.all+tlrc    \
       -c full_mask.$subj+tlrc \
       -expr 'c*a/b' -prefix TSNR.$subj

# ---------------------------------------------------
# compute and store GCOR (global correlation average)
# (sum of squares of global mean of unit errts)
3dTnorm -norm2 -prefix rm.errts.unit errts.${subj}_REML+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc \
          > gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"

# ---------------------------------------------------
# compute correlation volume
# (per voxel: average correlation across masked brain)
# (now just dot product with average unit time series)
3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc

# create fitts dataset from all_runs and errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}+tlrc -expr a-b \
       -prefix fitts.$subj

# create fitts from REML errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}_REML+tlrc -expr a-b \
       -prefix fitts.${subj}_REML

# create ideal files for fixed response stim types
1dcat X.nocensor.xmat.1D'[24]' > ideal_Timing_CGE.txt.1D

# --------------------------------------------------------
# compute sum of non-baseline regressors from the X-matrix
# (use 1d_tool.py to get list of regressor columns)
set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"

# also, create a stimulus-only X-matrix, for easy review
1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D

# ============================ blur estimation =============================
# compute blur estimates
touch blur_est.$subj.1D   # start with empty file

# create directory for ACF curve files
mkdir files_ACF

# -- estimate blur for each run in epits --
touch blur.epits.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc                          \
            -ACF files_ACF/out.3dFWHMx.ACF.epits.r$run.1D                \
            all_runs.$subj+tlrc"[$trs]" >> blur.epits.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{0..$(2)}'\'` )
echo average epits FWHM blurs: $blurs
echo "$blurs   # epits FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{1..$(2)}'\'` )
echo average epits ACF blurs: $blurs
echo "$blurs   # epits ACF blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in errts --
touch blur.errts.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc                          \
            -ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D                \
            errts.${subj}+tlrc"[$trs]" >> blur.errts.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{0..$(2)}'\'` )
echo average errts FWHM blurs: $blurs
echo "$blurs   # errts FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{1..$(2)}'\'` )
echo average errts ACF blurs: $blurs
echo "$blurs   # errts ACF blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in err_reml --
touch blur.err_reml.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc                          \
            -ACF files_ACF/out.3dFWHMx.ACF.err_reml.r$run.1D             \
            errts.${subj}_REML+tlrc"[$trs]" >> blur.err_reml.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.err_reml.1D'{0..$(2)}'\'` )
echo average err_reml FWHM blurs: $blurs
echo "$blurs   # err_reml FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.err_reml.1D'{1..$(2)}'\'` )
echo average err_reml ACF blurs: $blurs
echo "$blurs   # err_reml ACF blur estimates" >> blur_est.$subj.1D

# add 3dClustSim results as attributes to any stats dset
mkdir files_ClustSim

# run Monte Carlo simulations using method 'ACF'
set params = ( `grep ACF blur_est.$subj.1D | tail -n 1` )
3dClustSim -both -mask full_mask.$subj+tlrc -acf $params[1-3] \
           -cmd 3dClustSim.ACF.cmd -prefix files_ClustSim/ClustSim.ACF

# run 3drefit to attach 3dClustSim results to stats
set cmd = ( `cat 3dClustSim.ACF.cmd` )
$cmd stats.$subj+tlrc stats.${subj}_REML+tlrc

# ================== auto block: generate review scripts ===================

# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
    -dsets pb00.$subj.r*.tcat+orig.HEAD

# generate scripts to review single subject results
# (try with defaults, but do not allow bad exit status)
gen_ss_review_scripts.py -mot_limit 0.5 -exit0 \
    -ss_review_dset out.ss_review.$subj.txt    \
    -write_uvars_json out.ss_review_uvars.json

# ========================== auto block: finalize ==========================

# remove temporary files
\rm -f rm.*

# if the basic subject review script is here, run it
# (want this to be the last text output)
if ( -e @ss_review_basic ) then
    ./@ss_review_basic |& tee out.ss_review.$subj.txt

    # generate html ss review pages
    # (akin to static images from running @ss_review_driver)
    apqc_make_tcsh.py -review_style basic -subj_dir . \
        -uvar_json out.ss_review_uvars.json
    tcsh @ss_review_html |& tee out.review_html
    apqc_make_html.py -qc_dir QC_$subj

    echo "\nconsider running: \n\n    afni_open -b $subj.results/QC_$subj/index.html\n"
endif

# return to parent directory (just in case...)
cd ..

echo "execution finished: `date`"

# ==========================================================================
# script generated by the command:
#
# afni_proc.py -subj_id 265001 -script proc.265001 -scr_overwrite -blocks    \
#     tshift despike align tlrc volreg blur mask scale regress -copy_anat    \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatSS.SSW.nii \
#     -anat_has_skull no -dsets                                              \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest1.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.pv.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.14p.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.6p.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.flk.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest2.s1+orig.HEAD \
#     -tcat_remove_first_trs 0 -align_opts_aea -cost lpc+ZZ -volreg_align_to \
#     MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp -tlrc_base             \
#     MNI152_2009_template_SSW.nii.gz -tlrc_NL_warp -tlrc_NL_warped_dsets    \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.nii \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.aff12.1D \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW_WARP.nii \
#     -blur_size 6.0 -regress_stim_times                                     \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt -regress_stim_labels \
#     Timing_CGE.txt -regress_basis 'BLOCK(6,1)' -regress_censor_motion 0.5  \
#     -regress_apply_mot_types demean deriv -regress_motion_per_run          \
#     -regress_opts_3dD -jobs 4 -GOFORIT 5 -regress_reml_exec                \
#     -regress_compute_fitts -regress_make_ideal_sum sum_ideal.1D            \
#     -regress_est_blur_epits -regress_est_blur_errts

Hi-

Do you have a loooot of time points in your combined dsets? That "Killed" message is coming from your OS; it means the system ran out of memory trying to perform that command (and your computer is not happy about it...).

How many time points do you have? And is there a “larger” computer you have access to for analysis?
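For reference, the run lengths in the attached script's `tr_counts` sum up like this (a quick check in plain sh, not tcsh; running `3dinfo -nt` on each pb file would confirm the actual counts):

```shell
# sum the per-run TR counts taken from the script's tr_counts
total=0
for n in 180 150 150 150 150 180; do
    total=$(( total + n ))
done
echo "total TRs across runs: $total"   # prints 960
```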

-pt

Hello pt,

Data consist of 6 blocks, and each block has either 150 or 180 time points. Do you think these are too many? I also want to let you know that I kept getting a warning that the data are oblique. I used to use 3dWarp, but recently a colleague of mine suggested using 3drefit, which made the data plumb. This is the only change that I made, and after this change, I got this "memory mallocated" error.

Sungjin

Hi, Sungjin-

Offhand, I wouldn’t expect the “deobliquing” with 3drefit to make any appreciable change to the memory demands of the data. Note the difference between deobliquing with those programs:

  • with 3dWarp, “-deoblique” means to apply the obliquity matrix info to resample the dset and put the brain back into its physical coordinate (i.e., scanner coordinate) location.
  • with 3drefit, “-deoblique” means to purge the obliquity matrix info, so the data is not resampled (only header info changes), and from hereon we pretend like the oblique coordinates were the scanner ones.
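In command form, the contrast looks like this (the dataset name `epi+orig` is just a placeholder, not one of your files):

```shell
# 3dWarp: apply the obliquity transform; voxel data are resampled
# (regridded, hence slightly smoothed) into scanner coordinates
3dWarp -deoblique -prefix epi_deob epi+orig

# 3drefit: purge the obliquity info from the header only; voxel
# values are untouched, and the oblique grid is treated as plumb
3drefit -deoblique epi+orig
```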

That doesn't sound like "too much" data either, although that measure is necessarily relative to the specific machine you have. If it were particularly high spatial resolution, that would increase memory demands.

Is this being run on a laptop? Do you have access to a larger machine?

–pt

Hello pt,

It sounds like you suggest using 3dWarp, so I am re-running my script. I am also using an Intel Core i5 Linux desktop with 12 GB of memory, so I don't think I am underpowered, right?

Sungjin

Hi, Sungjin-

Well, as long as your EPI and anatomical dsets can be aligned, what you did is fine. (Was your anatomical dset also acquired with oblique coords, and, if so, did you deoblique it in the same manner with 3drefit?)

I would not recommend just applying 3dWarp -deoblique to your EPIs before processing, because that will resample your dset, which is a smoothing process. afni_proc.py should be able to deal with all of this internally, perhaps only needing an option or two for the "align" block -- but were you having trouble with EPI-anatomical alignment?

12 GB for processing sounds reasonable to me… what is your EPI voxel size? Actually, what is your afni_proc.py command?
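Voxel size matters here because the volreg block warps each run onto a ~1 mm grid (the `-dxyz 1` in the script's 3dNwarpApply calls), which inflates the data a lot. As a back-of-envelope sketch in plain sh, using a purely illustrative 64x64x35 EPI matrix and the same field of view regridded to 1 mm (both grids are assumptions, not your actual values):

```shell
# float32 storage = nx * ny * nz * nTR * 4 bytes (grid numbers illustrative)
trs=960                                   # 180+150+150+150+150+180 TRs

native=$(( 64 * 64 * 35 * trs * 4 ))      # hypothetical native EPI grid
warped=$(( 192 * 192 * 140 * trs * 4 ))   # same FOV regridded to 1 mm

echo "native grid: $(( native / 1024 / 1024 )) MiB"   # 525 MiB
echo "1 mm grid:   $(( warped / 1024 / 1024 )) MiB"   # 18900 MiB
```

Under those assumptions, six runs resampled to 1 mm could easily exceed 12 GB once 3dDeconvolve loads them all at once, even though the native-grid data are modest.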

–pt

Whenever I get an oblique warning, the obliquity is about 1 degree or less; do I need to bother with deobliquing in that case? I attach screenshots of raw EPI data superimposed on raw anatomical data, and if you can make a suggestion, that would be nice.

The EPI voxel size is 3x3x4 mm, and since I don't know what you mean by "afni_proc.py command", my script is the same one I attached in my first post.

#!/bin/tcsh -xef

echo “auto-generated by afni_proc.py, Tue Oct 15 15:34:29 2019”
echo “(version 6.32, February 22, 2019)”
echo “execution started: date

to execute via tcsh:

tcsh -xef proc.265001 |& tee output.proc.265001

to execute via bash:

tcsh -xef proc.265001.s1 2>&1 | tee output.proc.265001.s1

=========================== auto block: setup ============================

script setup

take note of the AFNI version

afni -ver

check that the current AFNI version is recent enough

afni_history -check_date 17 Jan 2019
if ( $status ) then
echo “** this script requires newer AFNI binaries (than 17 Jan 2019)”
echo " (consider: @update.afni.binaries -defaults)"
exit
endif

the user may specify a single subject to run with

if ( $#argv > 0 ) then
set subj = $argv[1]
else
set subj = 265001
endif

set sNUM = 1

assign output directory name

set output_dir = ${subj}.s$sNUM.results

verify that the results directory does not yet exist

if ( -d $output_dir ) then
echo output dir “$subj.results” already exists
exit
endif

set list of runs

set runs = (count -digits 2 1 6)

create results and stimuli directories

mkdir $output_dir
mkdir $output_dir/stimuli

copy stim files into stimulus directory

cp /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt $output_dir/stimuli

copy anatomy to results dir

3dcopy CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/anatSS.SSW.nii
$output_dir/anatSS.SSW

copy external -tlrc_NL_warped_dsets datasets

3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW.nii
$output_dir/anatQQ.SSW
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW.aff12.1D
$output_dir/anatQQ.SSW.aff12.1D
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW_WARP.nii
$output_dir/anatQQ.SSW_WARP.nii

============================ auto block: tcat ============================

apply 3dTcat to copy input dsets to results dir,

while removing the first 0 TRs

3dTcat -prefix $output_dir/pb00.subj.r01.tcat \ CGE_Raw_Data/Session1/{subj}_S$sNUM/subject_raw/deob.rest1.s1+orig’[0…$]’
3dTcat -prefix $output_dir/pb00.subj.r02.tcat \ CGE_Raw_Data/Session1/{subj}_S$sNUM/subject_raw/deob.pv.s1+orig’[0…$]’
3dTcat -prefix $output_dir/pb00.subj.r03.tcat \ CGE_Raw_Data/Session1/{subj}_S$sNUM/subject_raw/deob.14p.s1+orig’[0…$]’
3dTcat -prefix $output_dir/pb00.subj.r04.tcat \ CGE_Raw_Data/Session1/{subj}_S$sNUM/subject_raw/deob.6p.s1+orig’[0…$]’
3dTcat -prefix $output_dir/pb00.subj.r05.tcat \ CGE_Raw_Data/Session1/{subj}_S$sNUM/subject_raw/deob.flk.s1+orig’[0…$]’
3dTcat -prefix $output_dir/pb00.subj.r06.tcat \ CGE_Raw_Data/Session1/{subj}_S$sNUM/subject_raw/deob.rest2.s1+orig’[0…$]’

and make note of repetitions (TRs) per run

set tr_counts = ( 180 150 150 150 150 180 )

-------------------------------------------------------

enter the results directory (can begin processing data)

cd $output_dir

========================== auto block: outcount ==========================

data check: compute outlier fraction for each volume

touch out.pre_ss_warn.txt
foreach run ( $runs )
3dToutcount -automask -fraction -polort 3 -legendre
pb00.$subj.r$run.tcat+orig > outcount.r$run.1D

# outliers at TR 0 might suggest pre-steady state TRs
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
    echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
        >> out.pre_ss_warn.txt
endif

end

catenate outlier counts into a single time series

cat outcount.r*.1D > outcount_rall.1D

get run number and TR index for minimum outlier volume

set minindex = 3dTstat -argmin -prefix - outcount_rall.1D\'
set ovals = ( 1d_tool.py -set_run_lengths $tr_counts \ -index_to_run_tr $minindex )

save run and TR indices for extraction of vr_base_min_outlier

set minoutrun = $ovals[1]
set minouttr = $ovals[2]
echo “min outlier: run $minoutrun, TR $minouttr” | tee out.min_outlier.txt

================================= tshift =================================

time shift data so all slice timing is the same

foreach run ( $runs )
3dTshift -tzero 0 -quintic -prefix pb01.$subj.r$run.tshift
pb00.$subj.r$run.tcat+orig
end

================================ despike =================================

apply 3dDespike to each run

foreach run ( $runs )
3dDespike -NEW -nomask -prefix pb02.$subj.r$run.despike
pb01.$subj.r$run.tshift+orig
end

--------------------------------

extract volreg registration base

3dbucket -prefix vr_base_min_outlier
pb02.$subj.r$minoutrun.despike+orig"[$minouttr]"

================================= align ==================================

for e2a: compute anat alignment transformation to EPI registration base

(new anat will be current anatSS.SSW+orig)

align_epi_anat.py -anat2epi -anat anatSS.SSW+orig
-suffix _al_junk
-epi vr_base_min_outlier+orig -epi_base 0
-epi_strip 3dAutomask
-anat_has_skull no
-cost lpc+ZZ
-volreg off -tshift off

================================== tlrc ==================================

nothing to do: have external -tlrc_NL_warped_dsets

warped anat : anatQQ.SSW+tlrc

affine xform : anatQQ.SSW.aff12.1D

non-linear warp : anatQQ.SSW_WARP.nii

================================= volreg =================================

align each dset to base volume, to anat, warp to tlrc space

verify that we have a +tlrc warp dataset

if ( ! -f anatQQ.SSW+tlrc.HEAD ) then
echo “** missing +tlrc warp dataset: anatQQ.SSW+tlrc.HEAD”
exit
endif

register and warp

foreach run ( $runs )
# register each volume to the base image
3dvolreg -verbose -zpad 1 -base vr_base_min_outlier+orig
-1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run
-cubic
-1Dmatrix_save mat.r$run.vr.aff12.1D
pb02.$subj.r$run.despike+orig

# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb02.$subj.r$run.despike+orig -expr 1        \
       -prefix rm.epi.all1

# catenate volreg/epi2anat/tlrc xforms
cat_matvec -ONELINE                                               \
           anatQQ.SSW.aff12.1D                                    \
           anatSS.SSW_al_junk_mat.aff12.1D -I                     \
           mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D

# apply catenated xform: volreg/epi2anat/tlrc/NLtlrc
# then apply non-linear standard-space warp
3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1                      \
             -source pb02.$subj.r$run.despike+orig                \
             -nwarp "anatQQ.SSW_WARP.nii mat.r$run.warp.aff12.1D" \
             -prefix rm.epi.nomask.r$run

# warp the all-1 dataset for extents masking 
3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1                      \
             -source rm.epi.all1+orig                             \
             -nwarp "anatQQ.SSW_WARP.nii mat.r$run.warp.aff12.1D" \
             -interp cubic                                        \
             -ainterp NN -quiet                                   \
             -prefix rm.epi.1.r$run

# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc

end

# make a single file of registration params
cat dfile.r*.1D > dfile_rall.1D

# ----------------------------------------
# create the extents mask: mask_epi_extents+tlrc
# (this is a mask of voxels that have valid data at every TR)
3dMean -datum short -prefix rm.epi.mean rm.epi.min.r*.HEAD
3dcalc -a rm.epi.mean+tlrc -expr 'step(a-0.999)' -prefix mask_epi_extents

# and apply the extents mask to the EPI data
# (delete any time series with missing data)
foreach run ( $runs )
    3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
           -expr 'a*b' -prefix pb03.$subj.r$run.volreg
end

# warp the volreg base EPI dataset to make a final version
cat_matvec -ONELINE                               \
           anatQQ.SSW.aff12.1D                    \
           anatSS.SSW_al_junk_mat.aff12.1D -I > mat.basewarp.aff12.1D

3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1      \
             -source vr_base_min_outlier+orig     \
             -nwarp "anatQQ.SSW_WARP.nii mat.basewarp.aff12.1D" \
             -prefix final_epi_vr_base_min_outlier

# create an anat_final dataset, aligned with stats
3dcopy anatQQ.SSW+tlrc anat_final.$subj

# record final registration costs
3dAllineate -base final_epi_vr_base_min_outlier+tlrc -allcostX \
            -input anat_final.$subj+tlrc |& tee out.allcostX.txt

# ================================== blur ==================================

# blur each volume of each run
foreach run ( $runs )
    3dmerge -1blur_fwhm 6.0 -doall -prefix pb04.$subj.r$run.blur \
            pb03.$subj.r$run.volreg+tlrc
end

# ================================== mask ==================================

# create 'full_mask' dataset (union mask)
foreach run ( $runs )
    3dAutomask -prefix rm.mask_r$run pb04.$subj.r$run.blur+tlrc
end

# create union of inputs, output type is byte
3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj

# ---- create subject anatomy mask, mask_anat.$subj+tlrc ----
#      (resampled from tlrc anat)
3dresample -master full_mask.$subj+tlrc -input anatQQ.SSW+tlrc \
           -prefix rm.resam.anat

# convert to binary anat mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
            -prefix mask_anat.$subj

# compute tighter EPI mask by intersecting with anat mask
3dmask_tool -input full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            -inter -prefix mask_epi_anat.$subj

# compute overlaps between anat and EPI masks
3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            |& tee out.mask_ae_overlap.txt

# note Dice coefficient of masks, as well
3ddot -dodice full_mask.$subj+tlrc mask_anat.$subj+tlrc \
      |& tee out.mask_ae_dice.txt

# ---- create group anatomy mask, mask_group+tlrc ----
#      (resampled from tlrc base anat, MNI152_2009_template_SSW.nii.gz)
3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
           -input /home/sungjin/abin/MNI152_2009_template_SSW.nii.gz

# convert to binary group mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
            -prefix mask_group

# ================================= scale ==================================

# scale each voxel time series to have a mean of 100
# (be sure no negatives creep in)
# (subject to a range of [0,200])
foreach run ( $runs )
    3dTstat -prefix rm.mean_r$run pb04.$subj.r$run.blur+tlrc
    3dcalc -a pb04.$subj.r$run.blur+tlrc -b rm.mean_r$run+tlrc \
           -c mask_epi_extents+tlrc                            \
           -expr 'c * min(200, a/b*100)*step(a)*step(b)'       \
           -prefix pb05.$subj.r$run.scale
end

# ================================ regress =================================

# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -demean -write motion_demean.1D

# compute motion parameter derivatives (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -derivative -demean -write motion_deriv.1D

# convert motion parameters for per-run regression
1d_tool.py -infile motion_demean.1D -set_run_lengths 180 150 150 150 150 180 \
           -split_into_pad_runs mot_demean

1d_tool.py -infile motion_deriv.1D -set_run_lengths 180 150 150 150 150 180 \
           -split_into_pad_runs mot_deriv

# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -show_censor_count -censor_prev_TR                             \
           -censor_motion 0.5 motion_${subj}

# note TRs that were not censored
set ktrs = `1d_tool.py -infile motion_${subj}_censor.1D \
                       -show_trs_uncensored encoded`

# ------------------------------
# run the regression analysis
3dDeconvolve -input pb05.$subj.r*.scale+tlrc.HEAD  \
    -censor motion_${subj}_censor.1D               \
    -ortvec mot_demean.r01.1D mot_demean_r01       \
    -ortvec mot_demean.r02.1D mot_demean_r02       \
    -ortvec mot_demean.r03.1D mot_demean_r03       \
    -ortvec mot_demean.r04.1D mot_demean_r04       \
    -ortvec mot_demean.r05.1D mot_demean_r05       \
    -ortvec mot_demean.r06.1D mot_demean_r06       \
    -ortvec mot_deriv.r01.1D mot_deriv_r01         \
    -ortvec mot_deriv.r02.1D mot_deriv_r02         \
    -ortvec mot_deriv.r03.1D mot_deriv_r03         \
    -ortvec mot_deriv.r04.1D mot_deriv_r04         \
    -ortvec mot_deriv.r05.1D mot_deriv_r05         \
    -ortvec mot_deriv.r06.1D mot_deriv_r06         \
    -polort 3                                      \
    -num_stimts 1                                  \
    -stim_times 1 stimuli/Timing_CGE.txt 'GAM'     \
    -stim_label 1 Timing_CGE.txt                   \
    -jobs 4                                        \
    -GOFORIT 5                                     \
    -fout -tout -x1D X.xmat.1D -xjpeg X.jpg        \
    -x1D_uncensored X.nocensor.xmat.1D             \
    -errts errts.${subj}                           \
    -bucket stats.$subj

# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dDeconvolve error, failing...'
    echo '   (consider the file 3dDeconvolve.err)'
    exit
endif

# display any large pairwise correlations from the X-matrix
1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt

# display degrees of freedom info from X-matrix
1d_tool.py -show_df_info -infile X.xmat.1D |& tee out.df_info.txt

# -- execute the 3dREMLfit script, written by 3dDeconvolve --
tcsh -x stats.REML_cmd

# if 3dREMLfit fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dREMLfit error, failing...'
    exit
endif

# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb05.$subj.r*.scale+tlrc.HEAD

# --------------------------------------------------
# create a temporal signal to noise ratio dataset
#    signal: if 'scale' block, mean should be 100
#    noise : compute standard deviation of errts
3dTstat -mean  -prefix rm.signal.all all_runs.$subj+tlrc"[$ktrs]"
3dTstat -stdev -prefix rm.noise.all errts.${subj}_REML+tlrc"[$ktrs]"
3dcalc -a rm.signal.all+tlrc   \
       -b rm.noise.all+tlrc    \
       -c full_mask.$subj+tlrc \
       -expr 'c*a/b' -prefix TSNR.$subj

# ---------------------------------------------------
# compute and store GCOR (global correlation average)
# (sum of squares of global mean of unit errts)
3dTnorm -norm2 -prefix rm.errts.unit errts.${subj}_REML+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc \
          > gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"

# ---------------------------------------------------
# compute correlation volume
# (per voxel: average correlation across masked brain)
# (now just dot product with average unit time series)
3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc

# create fitts dataset from all_runs and errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}+tlrc -expr a-b \
       -prefix fitts.$subj

# create fitts from REML errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}_REML+tlrc -expr a-b \
       -prefix fitts.${subj}_REML

# create ideal files for fixed response stim types
1dcat X.nocensor.xmat.1D'[24]' > ideal_Timing_CGE.txt.1D

# --------------------------------------------------------
# compute sum of non-baseline regressors from the X-matrix
# (use 1d_tool.py to get list of regressor columns)
set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"

# also, create a stimulus-only X-matrix, for easy review
1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D

# ============================ blur estimation =============================
# compute blur estimates
touch blur_est.$subj.1D   # start with empty file

# create directory for ACF curve files
mkdir files_ACF

# -- estimate blur for each run in epits --
touch blur.epits.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc           \
            -ACF files_ACF/out.3dFWHMx.ACF.epits.r$run.1D \
            all_runs.$subj+tlrc"[$trs]" >> blur.epits.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{0..$(2)}'\'` )
echo average epits FWHM blurs: $blurs
echo "$blurs   # epits FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{1..$(2)}'\'` )
echo average epits ACF blurs: $blurs
echo "$blurs   # epits ACF blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in errts --
touch blur.errts.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc           \
            -ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D \
            errts.${subj}+tlrc"[$trs]" >> blur.errts.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{0..$(2)}'\'` )
echo average errts FWHM blurs: $blurs
echo "$blurs   # errts FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{1..$(2)}'\'` )
echo average errts ACF blurs: $blurs
echo "$blurs   # errts ACF blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in err_reml --
touch blur.err_reml.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc              \
            -ACF files_ACF/out.3dFWHMx.ACF.err_reml.r$run.1D \
            errts.${subj}_REML+tlrc"[$trs]" >> blur.err_reml.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.err_reml.1D'{0..$(2)}'\'` )
echo average err_reml FWHM blurs: $blurs
echo "$blurs   # err_reml FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.err_reml.1D'{1..$(2)}'\'` )
echo average err_reml ACF blurs: $blurs
echo "$blurs   # err_reml ACF blur estimates" >> blur_est.$subj.1D

# add 3dClustSim results as attributes to any stats dset
mkdir files_ClustSim

# run Monte Carlo simulations using method 'ACF'
set params = ( `grep ACF blur_est.$subj.1D | tail -n 1` )
3dClustSim -both -mask full_mask.$subj+tlrc -acf $params[1-3] \
           -cmd 3dClustSim.ACF.cmd -prefix files_ClustSim/ClustSim.ACF

# run 3drefit to attach 3dClustSim results to stats
set cmd = ( `cat 3dClustSim.ACF.cmd` )
$cmd stats.$subj+tlrc stats.${subj}_REML+tlrc

# ================== auto block: generate review scripts ===================

# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
    -dsets pb00.$subj.r*.tcat+orig.HEAD

# generate scripts to review single subject results
# (try with defaults, but do not allow bad exit status)
gen_ss_review_scripts.py -mot_limit 0.5 -exit0 \
    -ss_review_dset out.ss_review.$subj.txt    \
    -write_uvars_json out.ss_review_uvars.json

# ========================== auto block: finalize ==========================

# remove temporary files
\rm -f rm.*

# if the basic subject review script is here, run it
# (want this to be the last text output)
if ( -e @ss_review_basic ) then
    ./@ss_review_basic |& tee out.ss_review.$subj.txt

    # generate html ss review pages
    # (akin to static images from running @ss_review_driver)
    apqc_make_tcsh.py -review_style basic -subj_dir . \
        -uvar_json out.ss_review_uvars.json
    tcsh @ss_review_html |& tee out.review_html
    apqc_make_html.py -qc_dir QC_$subj

    echo "\nconsider running: \n\n    afni_open -b $subj.results/QC_$subj/index.html\n"
endif

# return to parent directory (just in case...)
cd ..

echo "execution finished: `date`"

# ==========================================================================
# script generated by the command:
#
# afni_proc.py -subj_id 265001 -script proc.265001 -scr_overwrite -blocks    \
#     tshift despike align tlrc volreg blur mask scale regress -copy_anat    \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatSS.SSW.nii \
#     -anat_has_skull no -dsets                                              \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest1.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.pv.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.14p.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.6p.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.flk.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest2.s1+orig.HEAD \
#     -tcat_remove_first_trs 0 -align_opts_aea -cost lpc+ZZ -volreg_align_to \
#     MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp -tlrc_base             \
#     MNI152_2009_template_SSW.nii.gz -tlrc_NL_warp -tlrc_NL_warped_dsets    \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.nii \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.aff12.1D \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW_WARP.nii \
#     -blur_size 6.0 -regress_stim_times                                     \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt -regress_stim_labels \
#     Timing_CGE.txt -regress_basis 'BLOCK(6,1)' -regress_censor_motion 0.5  \
#     -regress_apply_mot_types demean deriv -regress_motion_per_run          \
#     -regress_opts_3dD -jobs 4 -GOFORIT 5 -regress_reml_exec                \
#     -regress_compute_fitts -regress_make_ideal_sum sum_ideal.1D            \
#     -regress_est_blur_epits -regress_est_blur_errts

(attached screenshots: 1.jpg, 2.jpg)

Hi, Sungjin-

That amount of obliquity shouldn't matter at all; I would not bother with deobliquing it.

Those voxels are standard+large, so I don’t think those dsets are so huge.

The script that you attached (several hundred lines of commented beauty) was generated by a single AFNI command, “afni_proc.py”. If you notice, at the top of that script, it says this:


echo "auto-generated by afni_proc.py, Tue Oct 15 15:34:29 2019"

So, I wanted the command used to generate that script; I am noticing now that the script itself contains the afni_proc.py command used to make it, stored in a comment at the bottom (Rick Reynolds, who wrote the afni_proc.py functionality, thought of everything); the command was (I have removed the comment “#” symbols):


afni_proc.py -subj_id 265001 -script proc.265001 -scr_overwrite -blocks \
 tshift despike align tlrc volreg blur mask scale regress -copy_anat \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatSS.SSW.nii \
 -anat_has_skull no -dsets \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest1.s1+orig.HEAD \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.pv.s1+orig.HEAD \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.14p.s1+orig.HEAD \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.6p.s1+orig.HEAD \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.flk.s1+orig.HEAD \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest2.s1+orig.HEAD \
 -tcat_remove_first_trs 0 -align_opts_aea -cost lpc+ZZ -volreg_align_to \
 MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp -tlrc_base \
 MNI152_2009_template_SSW.nii.gz -tlrc_NL_warp -tlrc_NL_warped_dsets \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.nii \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.aff12.1D \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW_WARP.nii \
 -blur_size 6.0 -regress_stim_times \
 /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt -regress_stim_labels \
 Timing_CGE.txt -regress_basis 'BLOCK(6,1)' -regress_censor_motion 0.5 \
 -regress_apply_mot_types demean deriv -regress_motion_per_run \
 -regress_opts_3dD -jobs 4 -GOFORIT 5 -regress_reml_exec \
 -regress_compute_fitts -regress_make_ideal_sum sum_ideal.1D \
 -regress_est_blur_epits -regress_est_blur_errts

I don’t see anything that particularly jumps out at me; I guess the REML part is somewhat memory intensive. I will ping Rick to see if he has any ideas on the memory aspect.
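A quick back-of-the-envelope check: 3dDeconvolve must hold all the input time series in RAM as floats, so the footprint scales as voxels × time points × 4 bytes. The grid dimensions below are illustrative assumptions, not values read from these datasets (the real ones can be checked with 3dinfo on a pb05 file); the arithmetic is plain POSIX shell:

```shell
# RAM needed to hold one float32 time series dataset: nx*ny*nz*TRs*4 bytes.
# Grid sizes below are hypothetical examples, not measured from this data.

# A typical native EPI grid (64x64x35) with 960 TRs across six runs:
echo "$(( 64 * 64 * 35 * 960 * 4 / 1024 / 1024 )) MiB"          # -> 525 MiB

# The same 960 TRs regridded to 1 mm isotropic on an anatomy-sized
# grid (192x228x192), as 3dNwarpApply with -dxyz 1 would produce:
echo "$(( 192 * 228 * 192 * 960 * 4 / 1024 / 1024 / 1024 )) GiB"  # -> 30 GiB
```

If the pb05 files ended up on a fine (e.g. 1 mm) grid, simply loading them can exceed typical desktop RAM, which would be consistent with both a very large results folder and a "Killed" process.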

-pt

The afni_proc.py command is what you are using to generate the script. The command itself (after shell expansions) is included at the end of the script, so we can see it.

What is the output of these commands:

ls -lh pb05.265003.r*.scale+tlrc.BRIK*
3dinfo -prefix -datum pb05.265003.r01.scale+tlrc.HEAD
free -h

Also, is this run on a local server, or is it from some compute cluster? If the latter, you might have to specify memory requirements up front.
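A bare "Killed" with no program error message is the usual signature of the kernel's out-of-memory (OOM) killer. One way to check for that on a Linux box right after it happens (reading the kernel log may require root on some systems):

```shell
# Show total vs. available RAM and swap; a nearly exhausted "available"
# column plus heavy swap use points to memory pressure.
free -h

# Look for OOM-killer messages in the kernel log; this prints nothing
# if no kill happened, or if the log is not readable without root.
dmesg 2>/dev/null | grep -i -E 'out of memory|oom-kill' || true
```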

- rick

Hello Rick,

I am sorry, I already deleted the results folder because it was very large (over 150 GB) and I had to free up disk space for other work. I have now run the script again, and this time the process was killed during warping. I copy the output file below. Could this be a hardware problem? I also notice that this Linux machine gets noticeably slow when I open a new script or web browser.

Sungjin

echo auto-generated by afni_proc.py, Tue Oct 15 15:34:29 2019
auto-generated by afni_proc.py, Tue Oct 15 15:34:29 2019
echo (version 6.32, February 22, 2019)
(version 6.32, February 22, 2019)
echo execution started: date
date
execution started: Thu Oct 24 12:06:21 EDT 2019
afni -ver
Precompiled binary linux_ubuntu_16_64: Mar 20 2019 (Version AFNI_19.0.26 'Tiberius')
afni_history -check_date 17 Jan 2019
– is current: afni_history as new as: 17 Jan 2019
most recent entry is: 20 Mar 2019
if ( 0 ) then
if ( 0 > 0 ) then
set subj = 265001
endif
set sNUM = 1
set output_dir = 265001.s1.results
if ( -d 265001.s1.results ) then
set runs = ( `count -digits 2 1 6` )
count -digits 2 1 6
mkdir 265001.s1.results
mkdir 265001.s1.results/stimuli
cp /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt 265001.s1.results/stimuli
3dcopy CGE_Raw_Data/Session1/265001_S1/subject_raw/anatSS.SSW.nii 265001.s1.results/anatSS.SSW
++ 3dcopy: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
3dcopy CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.nii 265001.s1.results/anatQQ.SSW
++ 3dcopy: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
3dcopy CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.aff12.1D 265001.s1.results/anatQQ.SSW.aff12.1D
++ 3dcopy: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
3dcopy CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW_WARP.nii 265001.s1.results/anatQQ.SSW_WARP.nii
++ 3dcopy: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
3dTcat -prefix 265001.s1.results/pb00.265001.r01.tcat CGE_Raw_Data/Session1/265001_S1/subject_raw/raw.rest1.s1+orig"[0..$]"
++ 3dTcat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ elapsed time = 0.8 s
3dTcat -prefix 265001.s1.results/pb00.265001.r02.tcat CGE_Raw_Data/Session1/265001_S1/subject_raw/raw.pv.s1+orig"[0..$]"
++ 3dTcat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ elapsed time = 0.6 s
3dTcat -prefix 265001.s1.results/pb00.265001.r03.tcat CGE_Raw_Data/Session1/265001_S1/subject_raw/raw.14p.s1+orig"[0..$]"
++ 3dTcat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ elapsed time = 0.6 s
3dTcat -prefix 265001.s1.results/pb00.265001.r04.tcat CGE_Raw_Data/Session1/265001_S1/subject_raw/raw.6p.s1+orig"[0..$]"
++ 3dTcat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ elapsed time = 0.6 s
3dTcat -prefix 265001.s1.results/pb00.265001.r05.tcat CGE_Raw_Data/Session1/265001_S1/subject_raw/raw.flk.s1+orig"[0..$]"
++ 3dTcat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ elapsed time = 0.6 s
3dTcat -prefix 265001.s1.results/pb00.265001.r06.tcat CGE_Raw_Data/Session1/265001_S1/subject_raw/raw.rest2.s1+orig"[0..$]"
++ 3dTcat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ elapsed time = 0.7 s
set tr_counts = ( 180 150 150 150 150 180 )
cd 265001.s1.results
touch out.pre_ss_warn.txt
foreach run ( 01 02 03 04 05 06 )
3dToutcount -automask -fraction -polort 3 -legendre pb00.265001.r01.tcat+orig
++ 3dToutcount: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r01.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r01.tcat+orig.BRIK is 0.799908 degrees from plumb.
++ 33485 voxels passed mask/clip
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
1deval -a outcount.r01.1D{0} -expr step(a-0.4)
end
3dToutcount -automask -fraction -polort 3 -legendre pb00.265001.r02.tcat+orig
++ 3dToutcount: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r02.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r02.tcat+orig.BRIK is 0.799908 degrees from plumb.
++ 33500 voxels passed mask/clip
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
1deval -a outcount.r02.1D{0} -expr step(a-0.4)
end
3dToutcount -automask -fraction -polort 3 -legendre pb00.265001.r03.tcat+orig
++ 3dToutcount: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r03.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r03.tcat+orig.BRIK is 0.799908 degrees from plumb.
++ 33501 voxels passed mask/clip
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
1deval -a outcount.r03.1D{0} -expr step(a-0.4)
end
3dToutcount -automask -fraction -polort 3 -legendre pb00.265001.r04.tcat+orig
++ 3dToutcount: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r04.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r04.tcat+orig.BRIK is 0.799908 degrees from plumb.
++ 33658 voxels passed mask/clip
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
1deval -a outcount.r04.1D{0} -expr step(a-0.4)
end
3dToutcount -automask -fraction -polort 3 -legendre pb00.265001.r05.tcat+orig
++ 3dToutcount: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r05.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r05.tcat+orig.BRIK is 0.799908 degrees from plumb.
++ 33509 voxels passed mask/clip
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
1deval -a outcount.r05.1D{0} -expr step(a-0.4)
end
3dToutcount -automask -fraction -polort 3 -legendre pb00.265001.r06.tcat+orig
++ 3dToutcount: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r06.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r06.tcat+orig.BRIK is 0.799908 degrees from plumb.
++ 33587 voxels passed mask/clip
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
1deval -a outcount.r06.1D{0} -expr step(a-0.4)
end
cat outcount.r01.1D outcount.r02.1D outcount.r03.1D outcount.r04.1D outcount.r05.1D outcount.r06.1D
set minindex = `3dTstat -argmin -prefix - outcount_rall.1D\'`
3dTstat -argmin -prefix - outcount_rall.1D\'
++ 3dTstat: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: KR Hammett & RW Cox
*+ WARNING: Input dataset is not 3D+time; assuming TR=1.0
set ovals = ( `1d_tool.py -set_run_lengths $tr_counts -index_to_run_tr $minindex` )
1d_tool.py -set_run_lengths 180 150 150 150 150 180 -index_to_run_tr 15
set minoutrun = 01
set minouttr = 15
echo min outlier: run 01, TR 15
tee out.min_outlier.txt
min outlier: run 01, TR 15
foreach run ( 01 02 03 04 05 06 )
3dTshift -tzero 0 -quintic -prefix pb01.265001.r01.tshift pb00.265001.r01.tcat+orig
++ 3dTshift: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r01.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r01.tcat+orig.BRIK is 0.799908 degrees from plumb.
end
3dTshift -tzero 0 -quintic -prefix pb01.265001.r02.tshift pb00.265001.r02.tcat+orig
++ 3dTshift: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r02.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r02.tcat+orig.BRIK is 0.799908 degrees from plumb.
end
3dTshift -tzero 0 -quintic -prefix pb01.265001.r03.tshift pb00.265001.r03.tcat+orig
++ 3dTshift: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r03.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r03.tcat+orig.BRIK is 0.799908 degrees from plumb.
end
3dTshift -tzero 0 -quintic -prefix pb01.265001.r04.tshift pb00.265001.r04.tcat+orig
++ 3dTshift: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r04.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r04.tcat+orig.BRIK is 0.799908 degrees from plumb.
end
3dTshift -tzero 0 -quintic -prefix pb01.265001.r05.tshift pb00.265001.r05.tcat+orig
++ 3dTshift: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r05.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r05.tcat+orig.BRIK is 0.799908 degrees from plumb.
end
3dTshift -tzero 0 -quintic -prefix pb01.265001.r06.tshift pb00.265001.r06.tcat+orig
++ 3dTshift: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb00.265001.r06.tcat+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb00.265001.r06.tcat+orig.BRIK is 0.799908 degrees from plumb.
end
foreach run ( 01 02 03 04 05 06 )
3dDespike -NEW -nomask -prefix pb02.265001.r01.despike pb01.265001.r01.tshift+orig
++ 3dDespike: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb01.265001.r01.tshift+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb01.265001.r01.tshift+orig.BRIK is 0.799908 degrees from plumb.
++ Input dataset is in short format, but output will be in float format
++ ignoring first 0 time points, using last 180
++ using 180 time points => -corder 6
++ Loading dataset pb01.265001.r01.tshift+orig
++ processing all 143360 voxels in dataset
++ Procesing time series with NEW model fit algorithm
++ smash edit thresholds: 3.1 .. 5.0 MADs

 + [ 3.457% .. 0.072% of normal distribution]
 + [ 8.839% .. 3.125% of Laplace distribution]
++ start OpenMP thread #0
++ start OpenMP thread #1
++ start OpenMP thread #2
++ start OpenMP thread #3
++ Elapsed despike time = 905ms

++ FINAL: 25004700 data points, 1285900 edits [5.143%], 233777 big edits [0.935%]
++ Output dataset ./pb02.265001.r01.despike+orig.BRIK
end
3dDespike -NEW -nomask -prefix pb02.265001.r02.despike pb01.265001.r02.tshift+orig
++ 3dDespike: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb01.265001.r02.tshift+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb01.265001.r02.tshift+orig.BRIK is 0.799908 degrees from plumb.
++ Input dataset is in short format, but output will be in float format
++ ignoring first 0 time points, using last 150
++ using 150 time points => -corder 5
++ Loading dataset pb01.265001.r02.tshift+orig
++ processing all 143360 voxels in dataset
++ Procesing time series with NEW model fit algorithm
++ smash edit thresholds: 3.1 .. 5.0 MADs

 + [ 3.457% .. 0.072% of normal distribution]
 + [ 8.839% .. 3.125% of Laplace distribution]
++ start OpenMP thread #0
++ start OpenMP thread #3
++ start OpenMP thread #2
++ start OpenMP thread #1
++ Elapsed despike time = 691ms

++ FINAL: 20837250 data points, 979211 edits [4.699%], 149477 big edits [0.717%]
++ Output dataset ./pb02.265001.r02.despike+orig.BRIK
end
3dDespike -NEW -nomask -prefix pb02.265001.r03.despike pb01.265001.r03.tshift+orig
++ 3dDespike: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb01.265001.r03.tshift+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb01.265001.r03.tshift+orig.BRIK is 0.799908 degrees from plumb.
++ Input dataset is in short format, but output will be in float format
++ ignoring first 0 time points, using last 150
++ using 150 time points => -corder 5
++ Loading dataset pb01.265001.r03.tshift+orig
++ processing all 143360 voxels in dataset
++ Procesing time series with NEW model fit algorithm
++ smash edit thresholds: 3.1 … 5.0 MADs

  • [ 3.457% … 0.072% of normal distribution]
  • [ 8.839% … 3.125% of Laplace distribution]
    ++ start OpenMP thread #1
    ++ start OpenMP thread #0
    ++ start OpenMP thread #2
    ++ start OpenMP thread #3

    ++ Elapsed despike time = 671ms

++ FINAL: 20837250 data points, 932750 edits [4.476%], 100598 big edits [0.483%]
++ Output dataset ./pb02.265001.r03.despike+orig.BRIK
end
3dDespike -NEW -nomask -prefix pb02.265001.r04.despike pb01.265001.r04.tshift+orig
++ 3dDespike: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb01.265001.r04.tshift+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb01.265001.r04.tshift+orig.BRIK is 0.799908 degrees from plumb.
++ Input dataset is in short format, but output will be in float format
++ ignoring first 0 time points, using last 150
++ using 150 time points => -corder 5
++ Loading dataset pb01.265001.r04.tshift+orig
++ processing all 143360 voxels in dataset
++ Procesing time series with NEW model fit algorithm
++ smash edit thresholds: 3.1 … 5.0 MADs

  • [ 3.457% … 0.072% of normal distribution]
  • [ 8.839% … 3.125% of Laplace distribution]
    ++ start OpenMP thread #1
    ++ start OpenMP thread #3
    ++ start OpenMP thread #0
    ++ start OpenMP thread #2

    ++ Elapsed despike time = 667ms

++ FINAL: 20837250 data points, 806934 edits [3.873%], 62777 big edits [0.301%]
++ Output dataset ./pb02.265001.r04.despike+orig.BRIK
end
3dDespike -NEW -nomask -prefix pb02.265001.r05.despike pb01.265001.r05.tshift+orig
++ 3dDespike: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb01.265001.r05.tshift+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb01.265001.r05.tshift+orig.BRIK is 0.799908 degrees from plumb.
++ Input dataset is in short format, but output will be in float format
++ ignoring first 0 time points, using last 150
++ using 150 time points => -corder 5
++ Loading dataset pb01.265001.r05.tshift+orig
++ processing all 143360 voxels in dataset
++ Procesing time series with NEW model fit algorithm
++ smash edit thresholds: 3.1 … 5.0 MADs

  • [ 3.457% … 0.072% of normal distribution]
  • [ 8.839% … 3.125% of Laplace distribution]
    ++ start OpenMP thread #0
    ++ start OpenMP thread #2
    ++ start OpenMP thread #1
    ++ start OpenMP thread #3

    ++ Elapsed despike time = 719ms

++ FINAL: 20837250 data points, 1004426 edits [4.820%], 214664 big edits [1.030%]
++ Output dataset ./pb02.265001.r05.despike+orig.BRIK
end
3dDespike -NEW -nomask -prefix pb02.265001.r06.despike pb01.265001.r06.tshift+orig
++ 3dDespike: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./pb01.265001.r06.tshift+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./pb01.265001.r06.tshift+orig.BRIK is 0.799908 degrees from plumb.
++ Input dataset is in short format, but output will be in float format
++ ignoring first 0 time points, using last 180
++ using 180 time points => -corder 6
++ Loading dataset pb01.265001.r06.tshift+orig
++ processing all 143360 voxels in dataset
++ Procesing time series with NEW model fit algorithm
++ smash edit thresholds: 3.1 … 5.0 MADs

  • [ 3.457% … 0.072% of normal distribution]
  • [ 8.839% … 3.125% of Laplace distribution]
    ++ start OpenMP thread #2
    ++ start OpenMP thread #0
    ++ start OpenMP thread #3
    ++ start OpenMP thread #1

    ++ Elapsed despike time = 826ms

++ FINAL: 25004700 data points, 1076068 edits [4.303%], 128650 big edits [0.515%]
++ Output dataset ./pb02.265001.r06.despike+orig.BRIK
end
3dbucket -prefix vr_base_min_outlier pb02.265001.r01.despike+orig[15]
++ 3dbucket: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
align_epi_anat.py -anat2epi -anat anatSS.SSW+orig -suffix _al_junk -epi vr_base_min_outlier+orig -epi_base 0 -epi_strip 3dAutomask -anat_has_skull no -cost lpc+ZZ -volreg off -tshift off
#++ align_epi_anat version: 1.58
#++ turning off volume registration
#Script is running (command trimmed):
3dAttribute DELTA ./vr_base_min_outlier+orig
#Script is running (command trimmed):
3dAttribute DELTA ./vr_base_min_outlier+orig
#Script is running:
3dAttribute DELTA /home/sungjin/fMRI/CGE/265001.s1.results/anatSS.SSW+orig
#++ Multi-cost is lpc+ZZ
#++ Removing all the temporary files
#Script is running:
\rm -f ./__tt_vr_base_min_outlier*
#Script is running:
\rm -f ./__tt_anatSS.SSW*
#Script is running (command trimmed):
3dcopy ./anatSS.SSW+orig ./__tt_anatSS.SSW+orig
++ 3dcopy: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
#Script is running (command trimmed):
3dinfo ./__tt_anatSS.SSW+orig | \grep 'Data Axes Tilt:'|\grep 'Oblique'
#++ Dataset /home/sungjin/fMRI/CGE/265001.s1.results/__tt_anatSS.SSW+orig is not oblique
#Script is running (command trimmed):
3dinfo ./vr_base_min_outlier+orig | \grep 'Data Axes Tilt:'|\grep 'Oblique'
#++ Dataset /home/sungjin/fMRI/CGE/265001.s1.results/vr_base_min_outlier+orig is oblique*
#Script is running:
3dAttribute DELTA /home/sungjin/fMRI/CGE/265001.s1.results/__tt_anatSS.SSW+orig
#++ Spacing for anat to oblique epi alignment is 1.000000
#++ Matching obliquity of anat to epi
#Script is running (command trimmed):
3dWarp -verb -card2oblique ./vr_base_min_outlier+orig -prefix ./__tt_anatSS.SSW_ob -newgrid 1.000000 ./__tt_anatSS.SSW+orig | \grep -A 4 '# mat44 Obliquity Transformation ::' > ./__tt_anatSS.SSW_obla2e_mat.1D
++ 3dWarp: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
#++ using 0th sub-brick because only one found
#Script is running (command trimmed):
3dbucket -prefix ./__tt_vr_base_min_outlier_ts ./vr_base_min_outlier+orig'[0]'
++ 3dbucket: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
#++ resampling epi to match anat data
#Script is running (command trimmed):
3dresample -master ./__tt_anatSS.SSW_ob+orig -prefix ./__tt_vr_base_min_outlier_ts_rs -inset ./__tt_vr_base_min_outlier_ts+orig'' -rmode Cu
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./__tt_vr_base_min_outlier_ts+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./__tt_vr_base_min_outlier_ts+orig.BRIK is 0.799908 degrees from plumb.
#++ removing skull or area outside brain
#Script is running (command trimmed):
3dAutomask -apply_prefix ./__tt_vr_base_min_outlier_ts_rs_ns ./__tt_vr_base_min_outlier_ts_rs+orig
++ 3dAutomask: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: Emperor Zhark
++ Loading dataset ./__tt_vr_base_min_outlier_ts_rs+orig
++ Forming automask

  • Fixed clip level = 417.475769
  • Used gradual clip level = 139.019440 … 478.463043
  • Number voxels above clip level = 98910
  • Clustering voxels …
  • Largest cluster has 86317 voxels
  • Clustering voxels …
  • Largest cluster has 85339 voxels
  • Filled 142 voxels in small holes; now have 85481 voxels
  • Filled 16044 voxels in large holes; now have 101525 voxels
  • Clustering voxels …
  • Largest cluster has 101460 voxels
  • Clustering non-brain voxels …
  • Clustering voxels …
  • Largest cluster has 11879879 voxels
  • Mask now has 101461 voxels
    ++ 101461 voxels in the mask [out of 11981340: 0.85%]
    ++ first 156 x-planes are zero [from R]
    ++ last 0 x-planes are zero [from L]
    ++ first 61 y-planes are zero [from A]
    ++ last 68 y-planes are zero [from P]
    ++ first 84 z-planes are zero [from I]
    ++ last 79 z-planes are zero [from S]
    ++ applying mask to original data
    ++ Writing masked data
    ++ CPU time = 0.000000 sec
    #++ Computing weight mask
    #Script is running (command trimmed):
    3dBrickStat -automask -percentile 90.000000 1 90.000000 ./__tt_vr_base_min_outlier_ts_rs_ns+orig
    #++ Applying threshold of 1024.127319 on /home/sungjin/fMRI/CGE/265001.s1.results/__tt_vr_base_min_outlier_ts_rs_ns+orig
    #Script is running (command trimmed):
    3dcalc -datum float -prefix ./__tt_vr_base_min_outlier_ts_rs_ns_wt -a ./__tt_vr_base_min_outlier_ts_rs_ns+orig -expr 'min(1,(a/1024.127319))'
    ++ 3dcalc: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
    ++ Authored by: A cast of thousands
    #++ Aligning anat data to epi data
    #Script is running (command trimmed):
    3dAllineate -lpc+ZZ -wtprefix ./__tt_anatSS.SSW_ob_al_junk_wtal -weight ./__tt_vr_base_min_outlier_ts_rs_ns_wt+orig -source ./__tt_anatSS.SSW_ob+orig -prefix ./__tt_anatSS.SSW_ob_temp_al_junk -base ./__tt_vr_base_min_outlier_ts_rs_ns+orig -nocmass -1Dmatrix_save ./anatSS.SSW_al_junk_e2a_only_mat.aff12.1D -master SOURCE -weight_frac 1.0 -maxrot 6 -maxshf 10 -VERB -warp aff -source_automask+4 -onepass
    ++ 3dAllineate: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
    ++ Authored by: Zhark the Registrator
    ++ lpc+ parameters: hel=0.40 mi=0.20 nmi=0.20 crA=0.40 ov=0.40 [to be zeroed at Final iteration]
    ++ Source dataset: ./__tt_anatSS.SSW_ob+orig.HEAD
    ++ Base dataset: ./__tt_vr_base_min_outlier_ts_rs_ns+orig.HEAD
    ++ Loading datasets
    ++ 1481954 voxels in -source_automask+4
    ++ Zero-pad: xbot=0 xtop=4
    ++ 101398 voxels [0.8%] in weight mask
    ++ Number of points for matching = 101398
    ++ Local correlation: blok type = ‘RHDD(6.54321)’
    ++ lpc+ parameters: hel=0.40 mi=0.20 nmi=0.20 crA=0.40 ov=0.40 [to be zeroed at Final iteration]
    ++ shift param auto-range: -58.7…58.7 -82.2…82.2 -82.8…82.8
  • Range param#4 [z-angle] = -6.000000 … 6.000000
  • Range param#5 [x-angle] = -6.000000 … 6.000000
  • Range param#6 [y-angle] = -6.000000 … 6.000000
  • Range param#1 [x-shift] = -10.000000 … 10.000000
  • Range param#2 [y-shift] = -10.000000 … 10.000000
  • Range param#3 [z-shift] = -10.000000 … 10.000000
  • 12 free parameters
    ++ Normalized convergence radius = 0.001000
    ++ OpenMP thread count = 4
    ++ ======= Allineation of 1 sub-bricks using Local Pearson Signed + Others =======
    ++ ========== sub-brick #0 ========== [total CPU to here=0.0 s]
    ++ *** Fine pass begins ***
    • Enter alignment setup routine
    • copying base image
    • copying source image
    • copying weight image
    • using 101398 points from base image [use_all=0]
    • Exit alignment setup routine
    • histogram: source clip 0 … 0; base clip 0 … 0
    • versus source range 0 … 964.198; base range -43.9971 … 1367.43
  • 83524 total points stored in 177 ‘RHDD(6.54321)’ bloks
    • Initial cost = 40.199402
    • Initial fine Parameters = 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 1.0000 1.0000 0.0000 0.0000 0.0000

    • Finalish cost = 39.493938 ; 186 funcs
    • Set lpc+ parameters back to purity before Final iterations
    • histogram: source clip 0 … 0; base clip 0 … 0
    • versus source range 0 … 964.198; base range -43.9971 … 1367.43

    • Final cost = -0.543333 ; 179 funcs
  • Final fine fit Parameters:
    x-shift= 5.7113 y-shift= 0.0663 z-shift= -0.2459 … enorm= 5.7170 mm
    z-angle= 0.6580 x-angle= 0.1489 y-angle= -0.2553 … total= 0.7212 deg
    x-scale= 0.9087 y-scale= 0.9985 z-scale= 1.0007 … vol3D= 0.9079
    y/x-shear= 0.0031 z/x-shear= -0.0096 z/y-shear= -0.0009
    • Fine net CPU time = 0.0 s
      ++ Computing output image
      ++ image warp: parameters = 5.7113 0.0663 -0.2459 0.6580 0.1489 -0.2553 0.9087 0.9985 1.0007 0.0031 -0.0096 -0.0009
      ++ Wrote -1Dmatrix_save ./anatSS.SSW_al_junk_e2a_only_mat.aff12.1D
      ++ 3dAllineate: total CPU time = 0.0 sec Elapsed = 13.1
      ++ ###########################################################
      ++ # Please check results visually for alignment quality #
      ++ ###########################################################
      ++ # ‘-autoweight’ is recommended when using -lpc or -lpa #
      ++ # If your results are not good, please try again. #
      ++ ###########################################################
      #Script is running (command trimmed):
      cat_matvec -ONELINE ./anatSS.SSW_al_junk_e2a_only_mat.aff12.1D ./__tt_anatSS.SSW_obla2e_mat.1D -I > ./anatSS.SSW_al_junk_mat.aff12.1D
      #++ Combining anat to epi and oblique transformations
      #Script is running (command trimmed):
      3dAllineate -base ./__tt_vr_base_min_outlier_ts_rs_ns+orig -1Dmatrix_apply ./anatSS.SSW_al_junk_mat.aff12.1D -prefix ./anatSS.SSW_al_junk -input ./__tt_anatSS.SSW+orig -master SOURCE -weight_frac 1.0 -maxrot 6 -maxshf 10 -VERB -warp aff -source_automask+4 -onepass
      ++ 3dAllineate: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
      ++ Authored by: Zhark the Registrator
      ++ Source dataset: ./__tt_anatSS.SSW+orig.HEAD
      ++ Base dataset: ./__tt_vr_base_min_outlier_ts_rs_ns+orig.HEAD
      ++ Loading datasets
  • Range param#4 [z-angle] = -6.000000 … 6.000000
  • Range param#5 [x-angle] = -6.000000 … 6.000000
  • Range param#6 [y-angle] = -6.000000 … 6.000000
  • Range param#1 [x-shift] = -10.000000 … 10.000000
  • Range param#2 [y-shift] = -10.000000 … 10.000000
  • Range param#3 [z-shift] = -10.000000 … 10.000000
    ++ OpenMP thread count = 4
    ++ ========== Applying transformation to 1 sub-bricks ==========
    ++ ========== sub-brick #0 ========== [total CPU to here=0.0 s]
    • Enter alignment setup routine
    • copying base image
    • copying source image
    • no weight image
    • using 11 points from base image [use_all=0]
    • Exit alignment setup routine
      ++ using -1Dmatrix_apply
      ++ Computing output image
      ++ image warp: parameters = -0.9084 -0.0088 -0.0179 29.2348 -0.0072 0.9984 0.0026 0.6169 -0.0257 -0.0038 1.0006 1.3968
      ++ 3dAllineate: total CPU time = 0.0 sec Elapsed = 1.0
      ++ ###########################################################
      #++ Creating final output: anat data aligned to epi

copy is not necessary

#++ Saving history
#Script is running (command trimmed):
3dNotes -h "align_epi_anat.py -anat2epi -anat anatSS.SSW+orig -suffix
_al_junk -epi vr_base_min_outlier+orig -epi_base 0 -epi_strip 3dAutomask
-anat_has_skull no -cost lpc+ZZ -volreg off -tshift off"
./anatSS.SSW_al_junk+orig

#++ Removing all the temporary files
#Script is running:
\rm -f ./__tt_vr_base_min_outlier*
#Script is running:
\rm -f ./__tt_anatSS.SSW*

Finished alignment successfully

if ( ! -f anatQQ.SSW+tlrc.HEAD ) then
foreach run ( 01 02 03 04 05 06 )
3dvolreg -verbose -zpad 1 -base vr_base_min_outlier+orig -1Dfile dfile.r01.1D -prefix rm.epi.volreg.r01 -cubic -1Dmatrix_save mat.r01.vr.aff12.1D pb02.265001.r01.despike+orig
++ 3dvolreg: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
++ Authored by: RW Cox
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./vr_base_min_outlier+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./vr_base_min_outlier+orig.BRIK is 0.799908 degrees from plumb.
++ Reading in base dataset ./vr_base_min_outlier+orig.BRIK
++ Oblique dataset:./pb02.265001.r01.despike+orig.BRIK is 0.799908 degrees from plumb.
++ Reading input dataset ./pb02.265001.r01.despike+orig.BRIK
++ Edging: x=3 y=3 z=2
++ Creating mask for -maxdisp

  • Automask has 42045 voxels
  • 8524 voxels left in -maxdisp mask after erosion
    ++ Initializing alignment base
    ++ Starting final pass on 180 sub-bricks: 0…1…2…3…4…5…6…7…8…9…10…11…12…13…14…15…16…17…18…19…20…21…22…23…24…25…26…27…28…29…30…31…32…33…34…35…36…37…38…39…40…41…42…43…44…45…46…47…48…49…50…51…52…53…54…55…56…57…58…59…60…61…62…63…64…65…66…67…68…69…70…71…72…73…74…75…76…77…78…79…80…81…82…83…84…85…86…87…88…89…90…91…92…93…94…95…96…97…98…99…100…101…102…103…104…105…106…107…108…109…110…111…112…113…114…115…116…117…118…119…120…121…122…123…124…125…126…127…128…129…130…131…132…133…134…135…136…137…138…139…140…141…142…143…144…145…146…147…148…149…150…151…152…153…154…155…156…157…158…159…160…161…162…163…164…165…166…167…168…169…170…171…172…173…174…175…176…177…178…179…
    ++ CPU time for realignment=0 s [=0 s/sub-brick]
    ++ Min : roll=-1.408 pitch=-0.099 yaw=-1.571 dS=-0.090 dL=-0.354 dP=-0.187
    ++ Mean: roll=-0.359 pitch=+0.627 yaw=-0.027 dS=+0.342 dL=+0.016 dP=+0.004
    ++ Max : roll=+0.026 pitch=+1.639 yaw=+0.459 dS=+1.065 dL=+0.413 dP=+0.161
    ++ Max displacements (mm) for each sub-brick:
    0.52(0.00) 0.30(0.34) 0.19(0.17) 0.29(0.18) 0.34(0.12) 0.19(0.25) 0.24(0.15) 0.33(0.14) 0.17(0.21) 0.19(0.12) 0.33(0.20) 0.17(0.19) 0.11(0.12) 0.27(0.21) 0.15(0.15) 0.00(0.15) 0.18(0.18) 0.25(0.15) 0.11(0.17) 0.20(0.15) 0.26(0.12) 0.20(0.18) 0.29(0.13) 0.33(0.15) 0.28(0.22) 0.44(0.19) 0.40(0.18) 0.30(0.12) 0.39(0.23) 0.43(0.13) 0.33(0.19) 0.40(0.17) 0.54(0.16) 0.55(0.19) 0.50(0.19) 0.43(0.12) 0.35(0.19) 0.49(0.29) 0.43(0.19) 0.39(0.13) 0.53(0.20) 0.47(0.17) 0.45(0.11) 0.58(0.16) 0.48(0.26) 0.58(0.11) 0.76(0.56) 0.78(0.23) 0.56(0.37) 0.57(0.14) 0.64(0.20) 0.60(0.24) 0.76(0.31) 0.94(0.33) 0.78(0.35) 2.30(1.79) 2.62(0.35) 1.88(0.83) 2.01(0.17) 1.95(0.16) 1.87(0.27) 1.84(0.18) 1.77(0.10) 1.76(0.23) 1.78(0.16) 1.71(0.13) 1.74(0.28) 1.74(0.19) 1.69(0.10) 1.72(0.23) 1.78(0.19) 1.74(0.09) 1.77(0.20) 1.77(0.20) 1.69(0.18) 1.68(0.12) 1.75(0.14) 1.69(0.16) 1.74(0.16) 1.74(0.19) 1.69(0.11) 1.67(0.14) 1.56(0.35) 1.55(0.25) 1.64(0.19) 1.61(0.20) 1.57(0.27) 1.66(0.17) 1.62(0.16) 1.64(0.30) 1.72(0.31) 1.66(0.18) 1.69(0.15) 1.72(0.16) 1.63(0.20) 1.70(0.10) 1.77(0.26) 1.75(0.25) 1.78(0.19) 1.75(0.20) 1.68(0.25) 1.74(0.10) 1.75(0.17) 1.67(0.15) 1.77(0.45) 1.86(0.54) 1.68(0.48) 1.61(0.48) 1.52(0.26) 1.51(0.24) 1.54(0.20) 1.57(0.27) 1.59(0.13) 1.58(0.21) 1.57(0.25) 1.56(0.11) 1.68(0.18) 1.65(0.21) 1.60(0.24) 1.69(0.14) 1.70(0.23) 1.65(0.22) 1.80(0.33) 1.63(0.45) 1.58(0.38) 1.57(0.37) 1.45(0.28) 1.48(0.18) 1.46(0.23) 1.55(0.13) 1.58(0.17) 1.60(0.14) 1.68(0.20) 1.66(0.23) 1.80(0.63) 1.80(0.19) 1.65(0.25) 1.80(0.21) 4.22(2.97) 5.80(1.66) 5.61(0.21) 5.46(0.15) 5.43(0.12) 5.45(0.60) 4.58(0.99) 4.72(0.39) 5.09(0.61) 4.71(0.64) 4.99(0.28) 4.96(0.27) 3.07(2.37) 2.83(0.32) 2.51(0.36) 2.31(0.42) 2.40(0.16) 2.35(0.21) 2.47(0.21) 2.45(0.10) 2.41(0.19) 2.29(0.26) 2.92(0.71) 3.35(0.47) 2.68(0.72) 2.80(0.19) 2.64(0.23) 2.64(0.16) 2.53(0.23) 2.09(0.46) 2.34(0.28) 2.49(0.24) 2.26(0.29) 2.59(0.54) 2.31(0.39) 2.42(0.17) 2.46(0.16) 2.21(0.30) 2.47(0.28) 2.36(0.19) 2.63(0.51) 2.59(0.23)
    ++ Max displacement in automask = 5.80 (mm) at sub-brick 139
    ++ Max delta displ in automask = 2.97 (mm) at sub-brick 138
    ++ Wrote dataset to disk in ./rm.epi.volreg.r01+orig.BRIK
    3dcalc -overwrite -a pb02.265001.r01.despike+orig -expr 1 -prefix rm.epi.all1
    ++ 3dcalc: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
    ++ Authored by: A cast of thousands
    *+ WARNING: input 'a' is not used in the expression
    cat_matvec -ONELINE anatQQ.SSW.aff12.1D anatSS.SSW_al_junk_mat.aff12.1D -I mat.r01.vr.aff12.1D
    3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1 -source pb02.265001.r01.despike+orig -nwarp anatQQ.SSW_WARP.nii mat.r01.warp.aff12.1D -prefix rm.epi.nomask.r01
    ++ 3dNwarpApply: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]
    ++ Authored by: Zhark the Warped
    ++ -master dataset is ‘anatQQ.SSW+tlrc’
    ++ output grid size = 1 mm
    *+ WARNING: If you are performing spatial transformations on an oblique dset,
    such as ./pb02.265001.r01.despike+orig.BRIK,
    or viewing/combining it with volumes of differing obliquity,
    you should consider running:
    3dWarp -deoblique
    on this and other oblique datasets in the same session.
    See 3dWarp -help for details.
    ++ Oblique dataset:./pb02.265001.r01.despike+orig.BRIK is 0.799908 degrees from plumb.
    ++ opened source dataset ‘pb02.265001.r01.despike+orig’
    ++ Processing -nwarp
    ++ Warping:…Killed
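The bare "Killed" at the end of that 3dNwarpApply run is typically the Linux kernel's out-of-memory (OOM) killer terminating the process, not an AFNI error as such. A minimal diagnostic sketch, assuming a Linux machine (`dmesg` may require root, and the exact log wording varies by kernel version):

```shell
# Look for OOM-killer records in the kernel log:
dmesg | grep -i -E 'out of memory|killed process' | tail -n 5

# Check installed RAM and current free memory / swap headroom:
grep '^MemTotal:' /proc/meminfo
free -h
```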

A technician came out and replaced the old RAM with new modules. My machine now seems to be working fine, and I notice a difference in processing speed.

However, when I ran my script again, I got an error message as follows:

*** failure while opening brick file ./pb05.265001.r02.scale+tlrc.BRIK - do you have permission?
*** Unix error message: Cannot allocate memory
** Memory usage: chunks=6464 bytes=6143425630
THD_load_datablock
THD_load_tcat
THD_load_datablock
read_input_data
initialize_program
3dDeconvolve main
** Command line was:
3dDeconvolve -input pb05.265001.r01.scale+tlrc.HEAD pb05.265001.r02.scale+tlrc.HEAD pb05.265001.r03.scale+tlrc.HEAD pb05.265001.r04.scale+tlrc.HEAD pb05.265001.r05.scale+tlrc.HEAD pb05.265001.r06.scale+tlrc.HEAD -censor motion_265001_censor.1D -ortvec mot_demean.r01.1D mot_demean_r01 -ortvec mot_demean.r02.1D mot_demean_r02 -ortvec mot_demean.r03.1D mot_demean_r03 -ortvec mot_demean.r04.1D mot_demean_r04 -ortvec mot_demean.r05.1D mot_demean_r05 -ortvec mot_demean.r06.1D mot_demean_r06 -ortvec mot_deriv.r01.1D mot_deriv_r01 -ortvec mot_deriv.r02.1D mot_deriv_r02 -ortvec mot_deriv.r03.1D mot_deriv_r03 -ortvec mot_deriv.r04.1D mot_deriv_r04 -ortvec mot_deriv.r05.1D mot_deriv_r05 -ortvec mot_deriv.r06.1D mot_deriv_r06 -polort 3 -num_stimts 1 -stim_times 1 stimuli/Timing_CGE.txt GAM -stim_label 1 Timing_CGE.txt -jobs 4 -GOFORIT 5 -fout -tout -x1D X.xmat.1D -xjpeg X.jpg -x1D_uncensored X.nocensor.xmat.1D -errts errts.265001 -bucket stats.265001
** FATAL ERROR: Can't load dataset './tcat+tlrc.BRIK': is it complete?

I searched the AFNI board, but could not find a thread related to this topic. Could you make a suggestion? I also copy my script below for further review.

Sungjin
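For a sense of scale: 3dDeconvolve must hold every input time series in RAM. A back-of-the-envelope sketch, assuming float (4-byte) storage; the voxel count comes from the 1 mm master grid reported in the 3dAutomask log above (11,981,340 voxels), the run lengths from the script, and `estimate_input_bytes` is just an illustrative name:

```python
def estimate_input_bytes(n_voxels, trs_per_run, dtype_bytes=4):
    """Lower bound on 3dDeconvolve's input size: every voxel's
    time series stored as 4-byte floats (no regression workspace)."""
    return n_voxels * sum(trs_per_run) * dtype_bytes

# 1 mm grid of the warped EPI (from the log) and the six run lengths
need = estimate_input_bytes(11_981_340, [180, 150, 150, 150, 150, 180])
print(round(need / 2**30, 1), "GiB")  # about 42.8 GiB
```

That is roughly 43 GiB before 3dDeconvolve allocates any regression workspace, which would explain the allocation failure even with healthy new RAM. If memory stays tight, warping to a coarser grid closer to the EPI's native resolution (e.g. via afni_proc.py's -volreg_warp_dxyz option) shrinks this roughly with the cube of the voxel size.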

==========================================================================

script generated by the command:

afni_proc.py -subj_id 265001 -script proc.265001 -scr_overwrite -blocks \
    tshift despike align tlrc volreg blur mask scale regress            \
    -copy_anat                                                          \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatSS.SSW.nii \
    -anat_has_skull no -dsets                                           \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest1.s1+orig.HEAD \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.pv.s1+orig.HEAD \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.14p.s1+orig.HEAD \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.6p.s1+orig.HEAD \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.flk.s1+orig.HEAD \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest2.s1+orig.HEAD \
    -tcat_remove_first_trs 0 -align_opts_aea -cost lpc+ZZ               \
    -volreg_align_to MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp    \
    -tlrc_base MNI152_2009_template_SSW.nii.gz -tlrc_NL_warp            \
    -tlrc_NL_warped_dsets                                               \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.nii \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.aff12.1D \
    /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW_WARP.nii \
    -blur_size 6.0                                                      \
    -regress_stim_times /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt \
    -regress_stim_labels Timing_CGE.txt -regress_basis 'BLOCK(6,1)'     \
    -regress_censor_motion 0.5                                          \
    -regress_apply_mot_types demean deriv -regress_motion_per_run       \
    -regress_opts_3dD -jobs 4 -GOFORIT 5 -regress_reml_exec             \
    -regress_compute_fitts -regress_make_ideal_sum sum_ideal.1D         \
    -regress_est_blur_epits -regress_est_blur_errts

#!/bin/tcsh -xef

echo "auto-generated by afni_proc.py, Tue Oct 15 15:34:29 2019"
echo "(version 6.32, February 22, 2019)"
echo "execution started: `date`"

# to execute via tcsh:
#   tcsh -xef proc.265001 |& tee output.proc.265001
#
# to execute via bash:
#   tcsh -xef proc.265001.s1 2>&1 | tee output.proc.265001.s1

# =========================== auto block: setup ============================

# script setup

# take note of the AFNI version
afni -ver

# check that the current AFNI version is recent enough
afni_history -check_date 17 Jan 2019
if ( $status ) then
echo "** this script requires newer AFNI binaries (than 17 Jan 2019)"
echo " (consider: @update.afni.binaries -defaults)"
exit
endif

# the user may specify a single subject to run with

if ( $#argv > 0 ) then
set subj = $argv[1]
else
set subj = 265001
endif

set sNUM = 1

# assign output directory name

set output_dir = ${subj}.s$sNUM.results

# verify that the results directory does not yet exist

if ( -d $output_dir ) then
echo output dir "$subj.results" already exists
exit
endif

# set list of runs

set runs = (`count -digits 2 1 6`)

# create results and stimuli directories

mkdir $output_dir
mkdir $output_dir/stimuli

# copy stim files into stimulus directory

cp /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt $output_dir/stimuli

# copy anatomy to results dir

3dcopy CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/anatSS.SSW.nii \
    $output_dir/anatSS.SSW

# copy external -tlrc_NL_warped_dsets datasets

3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW.nii \
    $output_dir/anatQQ.SSW
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW.aff12.1D \
    $output_dir/anatQQ.SSW.aff12.1D
3dcopy CGE_Raw_Data/Session$sNUM/${subj}_S$sNUM/subject_raw/anatQQ.SSW_WARP.nii \
    $output_dir/anatQQ.SSW_WARP.nii

# ============================ auto block: tcat ============================

# apply 3dTcat to copy input dsets to results dir,
# while removing the first 0 TRs

3dTcat -prefix $output_dir/pb00.$subj.r01.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/raw.rest1.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r02.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/raw.pv.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r03.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/raw.14p.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r04.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/raw.6p.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r05.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/raw.flk.s1+orig'[0..$]'
3dTcat -prefix $output_dir/pb00.$subj.r06.tcat \
    CGE_Raw_Data/Session1/${subj}_S$sNUM/subject_raw/raw.rest2.s1+orig'[0..$]'

# and make note of repetitions (TRs) per run

set tr_counts = ( 180 150 150 150 150 180 )

# -------------------------------------------------------

# enter the results directory (can begin processing data)

cd $output_dir

# ========================== auto block: outcount ==========================

# data check: compute outlier fraction for each volume

touch out.pre_ss_warn.txt
foreach run ( $runs )
    3dToutcount -automask -fraction -polort 3 -legendre \
        pb00.$subj.r$run.tcat+orig > outcount.r$run.1D

# outliers at TR 0 might suggest pre-steady state TRs
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
    echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
        >> out.pre_ss_warn.txt
endif

end

# catenate outlier counts into a single time series

cat outcount.r*.1D > outcount_rall.1D

# get run number and TR index for minimum outlier volume

set minindex = `3dTstat -argmin -prefix - outcount_rall.1D\'`
set ovals = ( `1d_tool.py -set_run_lengths $tr_counts \
                          -index_to_run_tr $minindex` )

# save run and TR indices for extraction of vr_base_min_outlier

set minoutrun = $ovals[1]
set minouttr  = $ovals[2]
echo "min outlier: run $minoutrun, TR $minouttr" | tee out.min_outlier.txt

# ================================= tshift =================================

# time shift data so all slice timing is the same

foreach run ( $runs )
    3dTshift -tzero 0 -quintic -prefix pb01.$subj.r$run.tshift \
        pb00.$subj.r$run.tcat+orig
end

# ================================ despike =================================

# apply 3dDespike to each run

foreach run ( $runs )
    3dDespike -NEW -nomask -prefix pb02.$subj.r$run.despike \
        pb01.$subj.r$run.tshift+orig
end

# --------------------------------
# extract volreg registration base

3dbucket -prefix vr_base_min_outlier \
    pb02.$subj.r$minoutrun.despike+orig"[$minouttr]"

# ================================= align ==================================

# for e2a: compute anat alignment transformation to EPI registration base
# (new anat will be current anatSS.SSW+orig)

align_epi_anat.py -anat2epi -anat anatSS.SSW+orig \
    -suffix _al_junk                              \
    -epi vr_base_min_outlier+orig -epi_base 0     \
    -epi_strip 3dAutomask                         \
    -anat_has_skull no                            \
    -cost lpc+ZZ                                  \
    -volreg off -tshift off

# ================================== tlrc ==================================

# nothing to do: have external -tlrc_NL_warped_dsets
# warped anat     : anatQQ.SSW+tlrc
# affine xform    : anatQQ.SSW.aff12.1D
# non-linear warp : anatQQ.SSW_WARP.nii

# ================================= volreg =================================

# align each dset to base volume, to anat, warp to tlrc space

# verify that we have a +tlrc warp dataset
if ( ! -f anatQQ.SSW+tlrc.HEAD ) then
    echo "** missing +tlrc warp dataset: anatQQ.SSW+tlrc.HEAD"
    exit
endif

# register and warp

foreach run ( $runs )
    # register each volume to the base image
    3dvolreg -verbose -zpad 1 -base vr_base_min_outlier+orig \
             -1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run \
             -cubic                                             \
             -1Dmatrix_save mat.r$run.vr.aff12.1D               \
             pb02.$subj.r$run.despike+orig

# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb02.$subj.r$run.despike+orig -expr 1        \
       -prefix rm.epi.all1

# catenate volreg/epi2anat/tlrc xforms
cat_matvec -ONELINE                                               \
           anatQQ.SSW.aff12.1D                                    \
           anatSS.SSW_al_junk_mat.aff12.1D -I                     \
           mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D

# apply catenated xform: volreg/epi2anat/tlrc/NLtlrc
# then apply non-linear standard-space warp
3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1                      \
             -source pb02.$subj.r$run.despike+orig                \
             -nwarp "anatQQ.SSW_WARP.nii mat.r$run.warp.aff12.1D" \
             -prefix rm.epi.nomask.r$run

# warp the all-1 dataset for extents masking 
3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1                      \
             -source rm.epi.all1+orig                             \
             -nwarp "anatQQ.SSW_WARP.nii mat.r$run.warp.aff12.1D" \
             -interp cubic                                        \
             -ainterp NN -quiet                                   \
             -prefix rm.epi.1.r$run

# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc

end

# make a single file of registration params

cat dfile.r*.1D > dfile_rall.1D

----------------------------------------

create the extents mask: mask_epi_extents+tlrc

(this is a mask of voxels that have valid data at every TR)

3dMean -datum short -prefix rm.epi.mean rm.epi.min.r*.HEAD
3dcalc -a rm.epi.mean+tlrc -expr 'step(a-0.999)' -prefix mask_epi_extents
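As a side note, the extents-mask logic (minimum over time within each run, then a step() threshold on the cross-run mean) can be sketched with NumPy; the toy arrays below are invented for illustration, not real EPI data:

```python
import numpy as np

# toy data: 2 runs of 5 time points over 4 voxels; 0 marks a voxel
# that fell outside the field of view at some TR after warping
run1 = np.array([[1, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1]])
run2 = np.ones((5, 4), dtype=int)

# per-run extents mask: voxel must be valid at every TR (3dTstat -min)
min1 = run1.min(axis=0)          # [1, 1, 1, 0]
min2 = run2.min(axis=0)          # [1, 1, 1, 1]

# combine runs: mean across runs, then step(a-0.999) keeps only
# voxels valid in ALL runs (3dMean + 3dcalc)
extents = (np.mean([min1, min2], axis=0) > 0.999).astype(int)
print(extents)                   # [1 1 1 0]
```

The last voxel is dropped because it had missing data at one TR of one run.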

# and apply the extents mask to the EPI data
# (delete any time series with missing data)

foreach run ( $runs )
    3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
           -expr 'a*b' -prefix pb03.$subj.r$run.volreg
end

# warp the volreg base EPI dataset to make a final version
cat_matvec -ONELINE \
           anatQQ.SSW.aff12.1D \
           anatSS.SSW_al_junk_mat.aff12.1D -I > mat.basewarp.aff12.1D

3dNwarpApply -master anatQQ.SSW+tlrc -dxyz 1 \
             -source vr_base_min_outlier+orig \
             -nwarp "anatQQ.SSW_WARP.nii mat.basewarp.aff12.1D" \
             -prefix final_epi_vr_base_min_outlier

# create an anat_final dataset, aligned with stats
3dcopy anatQQ.SSW+tlrc anat_final.$subj

# record final registration costs
3dAllineate -base final_epi_vr_base_min_outlier+tlrc -allcostX \
            -input anat_final.$subj+tlrc |& tee out.allcostX.txt

# ================================== blur ==================================

# blur each volume of each run
foreach run ( $runs )
    3dmerge -1blur_fwhm 6.0 -doall -prefix pb04.$subj.r$run.blur \
            pb03.$subj.r$run.volreg+tlrc
end

# ================================== mask ==================================

# create 'full_mask' dataset (union mask)
foreach run ( $runs )
    3dAutomask -prefix rm.mask_r$run pb04.$subj.r$run.blur+tlrc
end

# create union of inputs, output type is byte
3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj

# ---- create subject anatomy mask, mask_anat.$subj+tlrc ----
#      (resampled from tlrc anat)
3dresample -master full_mask.$subj+tlrc -input anatQQ.SSW+tlrc \
           -prefix rm.resam.anat

# convert to binary anat mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
            -prefix mask_anat.$subj

# compute tighter EPI mask by intersecting with anat mask
3dmask_tool -input full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            -inter -prefix mask_epi_anat.$subj

# compute overlaps between anat and EPI masks
3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
            |& tee out.mask_ae_overlap.txt

# note Dice coefficient of masks, as well
3ddot -dodice full_mask.$subj+tlrc mask_anat.$subj+tlrc \
      |& tee out.mask_ae_dice.txt

# ---- create group anatomy mask, mask_group+tlrc ----
#      (resampled from tlrc base anat, MNI152_2009_template_SSW.nii.gz)
3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
           -input /home/sungjin/abin/MNI152_2009_template_SSW.nii.gz

# convert to binary group mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
            -prefix mask_group

# ================================= scale ==================================

# scale each voxel time series to have a mean of 100
# (be sure no negatives creep in)
# (subject to a range of [0,200])
foreach run ( $runs )
    3dTstat -prefix rm.mean_r$run pb04.$subj.r$run.blur+tlrc
    3dcalc -a pb04.$subj.r$run.blur+tlrc -b rm.mean_r$run+tlrc \
           -c mask_epi_extents+tlrc \
           -expr 'c * min(200, a/b*100)*step(a)*step(b)' \
           -prefix pb05.$subj.r$run.scale
end
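The scaling expression is easy to check with NumPy; this is just a toy one-voxel sketch of the 3dcalc formula, not AFNI code:

```python
import numpy as np

# toy voxel time series (arbitrary scanner units)
ts = np.array([950.0, 1000.0, 1050.0, 1000.0])
mean = ts.mean()                      # the per-voxel mean (3dTstat output)

# scale to percent of mean, capped at 200; step() zeroes non-positive
# values, mirroring 'min(200, a/b*100)*step(a)*step(b)'
scaled = np.minimum(200.0, ts / mean * 100.0) * (ts > 0) * (mean > 0)
print(scaled)                         # [ 95. 100. 105. 100.]
```

After scaling, the time series has mean 100, so regression betas are directly interpretable as percent signal change.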

# ================================ regress =================================

# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -demean -write motion_demean.1D

# compute motion parameter derivatives (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -derivative -demean -write motion_deriv.1D

# convert motion parameters for per-run regression
1d_tool.py -infile motion_demean.1D -set_run_lengths 180 150 150 150 150 180 \
           -split_into_pad_runs mot_demean

1d_tool.py -infile motion_deriv.1D -set_run_lengths 180 150 150 150 150 180 \
           -split_into_pad_runs mot_deriv

# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_run_lengths 180 150 150 150 150 180 \
           -show_censor_count -censor_prev_TR \
           -censor_motion 0.5 motion_${subj}

# note TRs that were not censored
set ktrs = `1d_tool.py -infile motion_${subj}_censor.1D \
                       -show_trs_uncensored encoded`
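For reference, the censoring rule (Euclidean norm of the motion-parameter derivatives over the 0.5 limit, plus -censor_prev_TR) can be sketched like this; the motion values are made up:

```python
import numpy as np

# toy motion-derivative matrix: 6 TRs x 6 parameters (3 rot, 3 trans)
deriv = np.zeros((6, 6))
deriv[3] = [0.4, 0.3, 0.2, 0.0, 0.0, 0.0]   # a large motion at TR 3

# Euclidean norm per TR, as 1d_tool.py's -censor_motion uses
enorm = np.sqrt((deriv ** 2).sum(axis=1))

censor = (enorm < 0.5).astype(int)           # 1 = keep, 0 = censor
# -censor_prev_TR: also censor the TR preceding each censored one
censor[:-1] = censor[:-1] * censor[1:]
print(censor)                                # TRs 2 and 3 censored
```

Here sqrt(0.4² + 0.3² + 0.2²) ≈ 0.54 exceeds the 0.5 limit, so TR 3 and the TR before it are both marked 0.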

# ------------------------------
# run the regression analysis

3dDeconvolve -input pb05.$subj.r*.scale+tlrc.HEAD \
    -censor motion_${subj}_censor.1D \
    -ortvec mot_demean.r01.1D mot_demean_r01 \
    -ortvec mot_demean.r02.1D mot_demean_r02 \
    -ortvec mot_demean.r03.1D mot_demean_r03 \
    -ortvec mot_demean.r04.1D mot_demean_r04 \
    -ortvec mot_demean.r05.1D mot_demean_r05 \
    -ortvec mot_demean.r06.1D mot_demean_r06 \
    -ortvec mot_deriv.r01.1D mot_deriv_r01 \
    -ortvec mot_deriv.r02.1D mot_deriv_r02 \
    -ortvec mot_deriv.r03.1D mot_deriv_r03 \
    -ortvec mot_deriv.r04.1D mot_deriv_r04 \
    -ortvec mot_deriv.r05.1D mot_deriv_r05 \
    -ortvec mot_deriv.r06.1D mot_deriv_r06 \
    -polort 3 \
    -num_stimts 1 \
    -stim_times 1 stimuli/Timing_CGE.txt 'GAM' \
    -stim_label 1 Timing_CGE.txt \
    -jobs 4 \
    -GOFORIT 5 \
    -fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
    -x1D_uncensored X.nocensor.xmat.1D \
    -errts errts.${subj} \
    -bucket stats.$subj

# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dDeconvolve error, failing...'
    echo '   (consider the file 3dDeconvolve.err)'
    exit
endif

# display any large pairwise correlations from the X-matrix
1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt

# display degrees of freedom info from X-matrix
1d_tool.py -show_df_info -infile X.xmat.1D |& tee out.df_info.txt

# -- execute the 3dREMLfit script, written by 3dDeconvolve --
tcsh -x stats.REML_cmd

# if 3dREMLfit fails, terminate the script
if ( $status != 0 ) then
    echo '---------------------------------------'
    echo '** 3dREMLfit error, failing...'
    exit
endif

# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb05.$subj.r*.scale+tlrc.HEAD

# --------------------------------------------------
# create a temporal signal to noise ratio dataset
#    signal: if 'scale' block, mean should be 100
#    noise : compute standard deviation of errts
3dTstat -mean -prefix rm.signal.all all_runs.$subj+tlrc"[$ktrs]"
3dTstat -stdev -prefix rm.noise.all errts.${subj}_REML+tlrc"[$ktrs]"
3dcalc -a rm.signal.all+tlrc \
       -b rm.noise.all+tlrc \
       -c full_mask.$subj+tlrc \
       -expr 'c*a/b' -prefix TSNR.$subj
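A minimal NumPy sketch of the TSNR computation (mean of the scaled data over the standard deviation of the residuals, inside the mask), with simulated data standing in for the AFNI datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy per-voxel data: all_runs scaled to mean ~100, errts = residuals
all_runs = 100.0 + rng.normal(0.0, 2.0, size=(180, 3))  # 180 TRs, 3 voxels
errts = rng.normal(0.0, 2.0, size=(180, 3))

signal = all_runs.mean(axis=0)          # 3dTstat -mean
noise = errts.std(axis=0, ddof=1)       # 3dTstat -stdev
mask = np.array([1, 1, 0])              # full_mask: last voxel outside brain

tsnr = mask * signal / noise            # 3dcalc -expr 'c*a/b'
```

With a mean of ~100 and residual SD of ~2, in-mask TSNR comes out around 50; out-of-mask voxels are zeroed.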

# ---------------------------------------------------
# compute and store GCOR (global correlation average)
# (sum of squares of global mean of unit errts)
3dTnorm -norm2 -prefix rm.errts.unit errts.${subj}_REML+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc \
          > gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"
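The GCOR definition in the comment (sum of squares of the global mean of the unit-variance residuals) is equivalent to the average of all voxel-pair correlations; a small NumPy sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
errts = rng.normal(size=(100, 50))    # 100 TRs x 50 in-mask voxels

# normalize each voxel time series to unit L2 norm (3dTnorm -norm2)
unit = errts / np.linalg.norm(errts, axis=0)

gmean = unit.mean(axis=1)             # global mean series (3dmaskave)
gcor = (gmean ** 2).sum()             # sum of squares (3dTstat -sos)

# identical (up to float error) to the mean of the full
# voxel-by-voxel inner-product (correlation) matrix
corr = unit.T @ unit
assert np.isclose(gcor, corr.mean())
```

The identity follows by expanding the square: the sum over t of gmean(t)² equals (1/N²) times the sum over all voxel pairs of their inner products.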

# ---------------------------------------------------
# compute correlation volume
# (per voxel: average correlation across masked brain)
# (now just dot product with average unit time series)
3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc

# create fitts dataset from all_runs and errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}+tlrc -expr a-b \
       -prefix fitts.$subj

# create fitts from REML errts
3dcalc -a all_runs.$subj+tlrc -b errts.${subj}_REML+tlrc -expr a-b \
       -prefix fitts.${subj}_REML

# create ideal files for fixed response stim types
1dcat X.nocensor.xmat.1D'[24]' > ideal_Timing_CGE.txt.1D

# --------------------------------------------------------
# compute sum of non-baseline regressors from the X-matrix
# (use 1d_tool.py to get list of regressor columns)
set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"

# also, create a stimulus-only X-matrix, for easy review
1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D

# ============================ blur estimation =============================
# compute blur estimates
touch blur_est.$subj.1D   # start with empty file

# create directory for ACF curve files
mkdir files_ACF

# -- estimate blur for each run in epits --
touch blur.epits.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc \
            -ACF files_ACF/out.3dFWHMx.ACF.epits.r$run.1D \
            all_runs.$subj+tlrc"[$trs]" >> blur.epits.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{0..$(2)}'\'` )
echo average epits FWHM blurs: $blurs
echo "$blurs   # epits FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.epits.1D'{1..$(2)}'\'` )
echo average epits ACF blurs: $blurs
echo "$blurs   # epits ACF blur estimates" >> blur_est.$subj.1D
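The '{0..$(2)}' / '{1..$(2)}' selectors work because 3dFWHMx appends two rows per run to the blur file: the classic FWHM estimates first, then the ACF parameters. A toy NumPy sketch of the every-other-row averaging (the numbers are invented):

```python
import numpy as np

# toy blur.epits.1D contents: two rows per run, FWHM row then ACF row
blur = np.array([
    [6.1, 6.0, 5.9],   # run 1, FWHM estimates (x, y, z)
    [0.6, 3.1, 9.0],   # run 1, ACF parameters (a, b, c)
    [6.3, 6.2, 6.1],   # run 2, FWHM estimates
    [0.7, 3.0, 8.8],   # run 2, ACF parameters
])

# rows {0..$(2)}: every other row starting at 0 -> FWHM rows
fwhm_avg = blur[0::2].mean(axis=0)
# rows {1..$(2)}: every other row starting at 1 -> ACF rows
acf_avg = blur[1::2].mean(axis=0)
```

The per-run rows are averaged columnwise, giving one FWHM triple and one ACF triple per dataset, which is what gets appended to blur_est.$subj.1D.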

# -- estimate blur for each run in errts --
touch blur.errts.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc \
            -ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D \
            errts.${subj}+tlrc"[$trs]" >> blur.errts.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{0..$(2)}'\'` )
echo average errts FWHM blurs: $blurs
echo "$blurs   # errts FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{1..$(2)}'\'` )
echo average errts ACF blurs: $blurs
echo "$blurs   # errts ACF blur estimates" >> blur_est.$subj.1D

# -- estimate blur for each run in err_reml --
touch blur.err_reml.1D

# restrict to uncensored TRs, per run
foreach run ( $runs )
    set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
                          -show_trs_run $run`
    if ( $trs == "" ) continue
    3dFWHMx -detrend -mask full_mask.$subj+tlrc \
            -ACF files_ACF/out.3dFWHMx.ACF.err_reml.r$run.1D \
            errts.${subj}_REML+tlrc"[$trs]" >> blur.err_reml.1D
end

# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.err_reml.1D'{0..$(2)}'\'` )
echo average err_reml FWHM blurs: $blurs
echo "$blurs   # err_reml FWHM blur estimates" >> blur_est.$subj.1D

# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.err_reml.1D'{1..$(2)}'\'` )
echo average err_reml ACF blurs: $blurs
echo "$blurs   # err_reml ACF blur estimates" >> blur_est.$subj.1D

# add 3dClustSim results as attributes to any stats dset
mkdir files_ClustSim

# run Monte Carlo simulations using method 'ACF'
set params = ( `grep ACF blur_est.$subj.1D | tail -n 1` )
3dClustSim -both -mask full_mask.$subj+tlrc -acf $params[1-3] \
           -cmd 3dClustSim.ACF.cmd -prefix files_ClustSim/ClustSim.ACF

# run 3drefit to attach 3dClustSim results to stats
set cmd = ( `cat 3dClustSim.ACF.cmd` )
$cmd stats.$subj+tlrc stats.${subj}_REML+tlrc

# ================== auto block: generate review scripts ===================

# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
    -dsets pb00.$subj.r*.tcat+orig.HEAD

# generate scripts to review single subject results
# (try with defaults, but do not allow bad exit status)
gen_ss_review_scripts.py -mot_limit 0.5 -exit0 \
    -ss_review_dset out.ss_review.$subj.txt \
    -write_uvars_json out.ss_review_uvars.json

# ========================== auto block: finalize ==========================

# remove temporary files
\rm -f rm.*

# if the basic subject review script is here, run it
# (want this to be the last text output)
if ( -e @ss_review_basic ) then
    ./@ss_review_basic |& tee out.ss_review.$subj.txt

    # generate html ss review pages
    # (akin to static images from running @ss_review_driver)
    apqc_make_tcsh.py -review_style basic -subj_dir . \
        -uvar_json out.ss_review_uvars.json
    tcsh @ss_review_html |& tee out.review_html
    apqc_make_html.py -qc_dir QC_$subj

    echo "\nconsider running: \n\n    afni_open -b $subj.results/QC_$subj/index.html\n"
endif

# return to parent directory (just in case...)
cd ..

echo "execution finished: `date`"

# ==========================================================================
# script generated by the command:
#
# afni_proc.py -subj_id 265001 -script proc.265001 -scr_overwrite -blocks \
#     tshift despike align tlrc volreg blur mask scale regress -copy_anat \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatSS.SSW.nii \
#     -anat_has_skull no -dsets \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest1.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.pv.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.14p.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.6p.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.flk.s1+orig.HEAD \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/deob.rest2.s1+orig.HEAD \
#     -tcat_remove_first_trs 0 -align_opts_aea -cost lpc+ZZ -volreg_align_to \
#     MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp -tlrc_base \
#     MNI152_2009_template_SSW.nii.gz -tlrc_NL_warp -tlrc_NL_warped_dsets \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.nii \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW.aff12.1D \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Session1/265001_S1/subject_raw/anatQQ.SSW_WARP.nii \
#     -blur_size 6.0 -regress_stim_times \
#     /home/sungjin/fMRI/CGE/CGE_Raw_Data/Timing_CGE.txt -regress_stim_labels \
#     Timing_CGE.txt -regress_basis 'BLOCK(6,1)' -regress_censor_motion 0.5 \
#     -regress_apply_mot_types demean deriv -regress_motion_per_run \
#     -regress_opts_3dD -jobs 4 -GOFORIT 5 -regress_reml_exec \
#     -regress_compute_fitts -regress_make_ideal_sum sum_ideal.1D \
#     -regress_est_blur_epits -regress_est_blur_errts

Hi Sungjin,

It looks like there were problems writing the previous files. On top of that, the latest "Cannot allocate memory" error suggests you still do not have enough RAM. That would also explain why your computer gets so slow: the RAM itself is probably not bad, the system is just swapping because it is running out of memory, and swapping makes everything much slower.

It would help to include the output of the commands I mentioned, expanded a bit:

ls -lh pb0*BRIK*
3dinfo -n4 -datum -prefix pb05.*.r01.scale+tlrc.HEAD
3dinfo pb05.*.r01.scale+tlrc.HEAD | head -n 30
free -h

That this produces 160 GB of output suggests these are large datasets.

  • rick

Sorry! I must have misread your previous message. Here is the output.

As for the data size, I previously used 3dresample to change the orientation, which changed the grid and thus the file size. I did not use it this time, and the whole results folder is about 7 GB, FYI.

sungjin@SMHCAS211AL01:~/fMRI/CGE/265001.s1.results.v1$ ls -lh pb0*BRIK*
-rw-r--r-- 1 sungjin sungjin  50M Oct 26 12:44 pb00.265001.r01.tcat+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:44 pb00.265001.r02.tcat+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:44 pb00.265001.r03.tcat+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:44 pb00.265001.r04.tcat+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:44 pb00.265001.r05.tcat+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  50M Oct 26 12:44 pb00.265001.r06.tcat+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  50M Oct 26 12:45 pb01.265001.r01.tshift+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:45 pb01.265001.r02.tshift+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:45 pb01.265001.r03.tshift+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:45 pb01.265001.r04.tshift+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  42M Oct 26 12:45 pb01.265001.r05.tshift+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  50M Oct 26 12:45 pb01.265001.r06.tshift+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  99M Oct 26 12:45 pb02.265001.r01.despike+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  83M Oct 26 12:45 pb02.265001.r02.despike+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  83M Oct 26 12:45 pb02.265001.r03.despike+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  83M Oct 26 12:45 pb02.265001.r04.despike+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  83M Oct 26 12:45 pb02.265001.r05.despike+orig.BRIK
-rw-r--r-- 1 sungjin sungjin  99M Oct 26 12:45 pb02.265001.r06.despike+orig.BRIK
-rw-r--r-- 1 sungjin sungjin 151M Oct 26 16:09 pb03.265001.r01.volreg+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 126M Oct 26 16:10 pb03.265001.r02.volreg+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 126M Oct 26 16:11 pb03.265001.r03.volreg+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 126M Oct 26 16:13 pb03.265001.r04.volreg+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 126M Oct 26 16:14 pb03.265001.r05.volreg+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 150M Oct 26 16:15 pb03.265001.r06.volreg+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 354M Oct 26 16:17 pb04.265001.r01.blur+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 295M Oct 26 16:18 pb04.265001.r02.blur+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 295M Oct 26 16:19 pb04.265001.r03.blur+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 295M Oct 26 16:20 pb04.265001.r04.blur+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 295M Oct 26 16:22 pb04.265001.r05.blur+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 354M Oct 26 16:23 pb04.265001.r06.blur+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 145M Oct 26 16:28 pb05.265001.r01.scale+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 121M Oct 26 16:30 pb05.265001.r02.scale+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 121M Oct 26 16:32 pb05.265001.r03.scale+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 121M Oct 26 16:34 pb05.265001.r04.scale+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 121M Oct 26 16:36 pb05.265001.r05.scale+tlrc.BRIK.gz
-rw-r--r-- 1 sungjin sungjin 145M Oct 26 16:39 pb05.265001.r06.scale+tlrc.BRIK.gz
sungjin@SMHCAS211AL01:~/fMRI/CGE/265001.s1.results.v1$ 3dinfo -n4 -datum -prefix pb05.*.r01.scale+tlrc.HEAD
193 229 193 180 float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float|float pb05.265001.r01.scale
sungjin@SMHCAS211AL01:~/fMRI/CGE/265001.s1.results.v1$ 3dinfo pb05.*.r01.scale+tlrc.HEAD | head -n 30
++ 3dinfo: AFNI version=AFNI_19.0.26 (Mar 20 2019) [64-bit]

Dataset File: pb05.265001.r01.scale+tlrc
Identifier Code: AFN_CLiLFuBLihjL1q0LTbCOPQ Creation Date: Sat Oct 26 16:26:53 2019
Template Space: MNI
Dataset Type: Echo Planar (-epan)
Byte Order: LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode: BRIK
Storage Space: 6,141,615,120 (6.1 billion) bytes
Geometry String: “MATRIX(-1,0,0,96,0,-1,0,132,0,0,1,-78):193,229,193”
Data Axes Tilt: Plumb
Data Axes Orientation:
first (x) = Left-to-Right
second (y) = Posterior-to-Anterior
third (z) = Inferior-to-Superior [-orient LPI]
R-to-L extent: -96.000 [R] -to- 96.000 [L] -step- 1.000 mm [193 voxels]
A-to-P extent: -96.000 [A] -to- 132.000 [P] -step- 1.000 mm [229 voxels]
I-to-S extent: -78.000 [I] -to- 114.000 [S] -step- 1.000 mm [193 voxels]
Number of time steps = 180 Time step = 2.00000s Origin = 0.00000s
-- At sub-brick #0 '#0' datum type is float: 0 to 138.931
-- At sub-brick #1 '#1' datum type is float: 0 to 160.561
-- At sub-brick #2 '#2' datum type is float: 0 to 147.413
** For info on all 180 sub-bricks, use '3dinfo -verb' **

----- HISTORY -----
[sungjin@SMHCAS211AL01: Sat Oct 26 16:26:53 2019] ===================================
[sungjin@SMHCAS211AL01: Sat Oct 26 16:26:53 2019] === History of inputs to 3dcalc ===
[sungjin@SMHCAS211AL01: Sat Oct 26 16:26:53 2019] === Input a:
[sungjin@SMHCAS211AL01: Sat Oct 26 16:26:53 2019] [sungjin@SMHCAS211AL01: Sat Oct 26 16:08:14 2019] ===================================
[sungjin@SMHCAS211AL01: Sat Oct 26 16:08:14 2019] === History of inputs to 3dcalc ===
[sungjin@SMHCAS211AL01: Sat Oct 26 16:08:14 2019] === Input a:

Fatal Signal 13 (SIGPIPE) received
3dinfo main
Bottom of Debug Stack
** AFNI version = AFNI_19.0.26 Compile date = Mar 20 2019
** [[Precompiled binary linux_ubuntu_16_64: Mar 20 2019]]
** Program Death **
** If you report this crash to the AFNI message board,
** please copy the error messages EXACTLY, and give
** the command line you used to run the program, and
** any other information needed to repeat the problem.
** You may later be asked to upload data to help debug.
** Crash log is appended to file /home/sungjin/.afni.crashlog
sungjin@SMHCAS211AL01:~/fMRI/CGE/265001.s1.results.v1$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        1.1G        7.7G        450M        6.8G         13G
Swap:          2.0G        780K        2.0G

If I am reading that correctly, each input to 3dDeconvolve is about 6 GB once the .gz file is uncompressed. Since you have 6 runs, that is about 36 GB just to hold the input, plus at least another 36 GB for processing and output. You would need somewhere between 70 and 100 GB of RAM to process it.

Your computer has 15 GB of RAM, which is not remotely close.
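As a sanity check, the per-run figure follows directly from the grid in the 3dinfo output above (193 × 229 × 193 voxels, 180 TRs, float datum):

```python
# back-of-envelope check of the per-run size, using the grid from the
# 3dinfo output: 193 x 229 x 193 voxels, 180 TRs, float32 data
nx, ny, nz, nt = 193, 229, 193, 180
bytes_per_run = nx * ny * nz * nt * 4    # 4 bytes per float
print(bytes_per_run)                     # 6141615120, i.e. ~6.1 GB

# this matches the "Storage Space: 6,141,615,120 bytes" that 3dinfo reports
total_input_gb = 6 * bytes_per_run / 1e9  # 6 runs -> ~36.8 GB of input alone
```

Going back to a 3 mm grid would cut each dimension's voxel count by roughly 3, shrinking the datasets by about a factor of 27.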

Are the voxels originally 1mm in size? Is this 7T data?

Or perhaps is it partial coverage at a high resolution, which blows up to a big volume once warped? The original BRIK files are not that big.

  • rick

Hello Rick,

The original voxel size is 3 mm, not 1 mm. When I overlaid the original EPI data on the anatomical data (skull-stripped and transformed to standard space) in AFNI, it looked fine. However, after applying align_epi_anat.py, the EPI data did not overlap with the anatomical data at all (see attached). I tried the -giant_move option, but that did not make any difference either. So I guess the issue is alignment. Can you make any suggestions?

Sungjin

pb03.volreg.jpg

Hi Sungjin,

Sorry for being slow. We have just finished a bootcamp and are still in the middle of a subsequent hackathon, and I am losing track of the details here...

Ooooh! Going back, I just noticed your earlier comment about using 3dresample to change the orientation and grid size, and that you are not doing that now.

Have you re-run afni_proc.py using the original 3 mm data? The 3dNwarpApply commands should not be using "-dxyz 1" anymore. But to be sure, what is the output of:

3dinfo -d3 -n4 -prefix pb00.265001.r01.tcat+orig

Getting to your alignment question, what is the output of:

3dinfo -extents -prefix DATASET

run on the anat and EPI DATASETs?

And to be sure, have you run 3dWarp or 3dresample on anything before running afni_proc.py?

Thanks,

  • rick