Hey,
I’m trying to run an analysis script generated by an afni_proc.py invocation (see below), but cat_matvec is crashing. Any idea what could be causing this problem?
I’m happy to upload the offending files if it would help.
This is using AFNI version AFNI_17.1.01 (Precompiled binary linux_openmp_64: Apr 12 2017).
afni_proc.py -subj_id CMIT_01A \
-script ${preprocessingScript} \
-out_dir ${outputDir} \
-blocks despike tshift align tlrc volreg blur mask regress \
-copy_anat /data/jain/preApril/CMIT_01A/MPRAGE.nii \
-dsets /data/jain/preApril/CMIT_01A/NT.nii \
-tcat_remove_first_trs 3 \
-tlrc_base MNI_caez_N27+tlrc \
-volreg_align_to MIN_OUTLIER \
-volreg_tlrc_warp -tlrc_NL_warp \
-tlrc_NL_warped_dsets /data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/MPRAGE_al_keep+tlrc.HEAD \
/data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/anat.un.aff.Xat.1D \
/data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/anat.un.aff.qw_WARP.nii.gz \
-blur_size 8 \
-blur_to_fwhm \
-blur_opts_B2FW "-ACF -rate 0.2 -temper" \
-mask_apply group \
-mask_segment_anat yes \
-mask_segment_erode yes \
-regress_anaticor \
-regress_bandpass 0.01 0.1 \
-regress_apply_mot_types demean deriv \
-regress_ROI WMe \
-regress_censor_motion $motionThreshold \
-regress_censor_outliers $outlierThreshold \
-regress_run_clustsim no \
-regress_est_blur_errts
cat_matvec crashes as follows:
cat_matvec -ONELINE anat.un.aff.qw_WARP.nii.gz anat.un.aff.Xat.1D MPRAGE_al_keep_mat.aff12.1D
Fatal Signal 11 (SIGSEGV) received
mri_read_double_ascii
THD_read_dvecmat
cat_matvec
Bottom of Debug Stack
** AFNI version = AFNI_17.1.01 Compile date = Apr 12 2017
** [[Precompiled binary linux_openmp_64: Apr 12 2017]]
** Program Death **
** If you report this crash to the AFNI message board,
** please copy the error messages EXACTLY, and give
** the command line you used to run the program, and
** any other information needed to repeat the problem.
** You may later be asked to upload data to help debug.
** Crash log is appended to file /home/colmconn/.afni.crashlog
I’ve tried running cat_matvec from the command line with an uncompressed version of anat.un.aff.qw_WARP.nii.gz, with the same result. The crash log from that run of cat_matvec with the uncompressed warp file is as follows:
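Roughly, that retry looked like this (illustrative, run from the results directory):
# make an uncompressed copy of the warp and run the same command (same crash)
gunzip -c anat.un.aff.qw_WARP.nii.gz > anat.un.aff.qw_WARP.nii
cat_matvec -ONELINE anat.un.aff.qw_WARP.nii anat.un.aff.Xat.1D \
    MPRAGE_al_keep_mat.aff12.1D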
*********------ CRASH LOG ------------------------------***********
Fatal Signal 11 (SIGSEGV) received
.......... recent internal history .........................................
++cat_matvec [2]: {ENTRY (file=cat_matvec.c line=150) from Bottom of Debug Stack
+++AFNI_process_environ [3]: {ENTRY (file=afni_environ.c line=112) from cat_matvec
++++AFNI_suck_file [4]: {ENTRY (file=afni_environ.c line=25) from AFNI_process_environ
----AFNI_suck_file [4]: EXIT} (file=afni_environ.c line=30) to AFNI_process_environ
---AFNI_process_environ [3]: EXIT} (file=afni_environ.c line=132) to cat_matvec
+++AFNI_process_environ [3]: {ENTRY (file=afni_environ.c line=112) from cat_matvec
++++AFNI_suck_file [4]: {ENTRY (file=afni_environ.c line=25) from AFNI_process_environ
----AFNI_suck_file [4]: EXIT} (file=afni_environ.c line=30) to AFNI_process_environ
---AFNI_process_environ [3]: EXIT} (file=afni_environ.c line=132) to cat_matvec
+++AFNI_process_environ [3]: {ENTRY (file=afni_environ.c line=112) from cat_matvec
++++AFNI_suck_file [4]: {ENTRY (file=afni_environ.c line=25) from AFNI_process_environ
----AFNI_suck_file [4]: EXIT} (file=afni_environ.c line=30) to AFNI_process_environ
---AFNI_process_environ [3]: EXIT} (file=afni_environ.c line=132) to cat_matvec
+++THD_read_dvecmat [3]: {ENTRY (file=thd_read_vecmat.c line=16) from cat_matvec
++++mri_read_double_ascii [4]: {ENTRY (file=mri_read.c line=2663) from THD_read_dvecmat
+++++AFNI_process_environ [5]: {ENTRY (file=afni_environ.c line=112) from mri_read_double_ascii
++++++AFNI_suck_file [6]: {ENTRY (file=afni_environ.c line=25) from AFNI_process_environ
------AFNI_suck_file [6]: EXIT} (file=afni_environ.c line=30) to AFNI_process_environ
-----AFNI_process_environ [5]: EXIT} (file=afni_environ.c line=132) to mri_read_double_ascii
............................................................................
mri_read_double_ascii
THD_read_dvecmat
cat_matvec
** AFNI compile date = Apr 12 2017
** [[Precompiled binary linux_openmp_64: Apr 12 2017]]
** Program Crash **
rickr
April 25, 2017, 6:51pm
Hi Colm,
There should not be any WARP.nii.gz in a cat_matvec command. Is that coming from the original proc script produced by that afni_proc.py command?
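For reference, cat_matvec concatenates affine matrices only. When there is a nonlinear warp like anat.un.aff.qw_WARP.nii.gz, afni_proc.py normally keeps just the affine pieces in cat_matvec and passes the nonlinear warp to 3dNwarpApply via its -nwarp string. A rough sketch of that pattern (dataset names illustrative):
# concatenate the affine transforms only ...
cat_matvec -ONELINE anat.un.aff.Xat.1D mat.r01.vr.aff12.1D > mat.r01.warp.aff12.1D
# ... then apply the nonlinear warp together with that affine result
3dNwarpApply -master MPRAGE_al_keep+tlrc -dxyz 3 \
    -source pb02.CMIT_01A.r01.tshift+orig \
    -nwarp "anat.un.aff.qw_WARP.nii.gz mat.r01.warp.aff12.1D" \
    -prefix rm.epi.nomask.r01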
Yep.
Attached is the whole tcsh script generated by afni_proc.py.
I can provide the log file produced by this script as well if you’d like.
#!/bin/tcsh -xef
echo "auto-generated by afni_proc.py, Mon Apr 24 17:44:50 2017"
echo "(version 5.14, April 11, 2017)"
echo "execution started: `date`"
# execute via :
# tcsh -xef CMIT_01A.afniNeedlePreprocess.NL.csh |& tee output.CMIT_01A.afniNeedlePreprocess.NL.csh
# =========================== auto block: setup ============================
# script setup
# take note of the AFNI version
afni -ver
# check that the current AFNI version is recent enough
afni_history -check_date 23 Sep 2016
if ( $status ) then
echo "** this script requires newer AFNI binaries (than 23 Sep 2016)"
echo " (consider: @update.afni.binaries -defaults)"
exit
endif
# the user may specify a single subject to run with
if ( $#argv > 0 ) then
set subj = $argv[1]
else
set subj = CMIT_01A
endif
# assign output directory name
set output_dir = afniNeedlePreprocessed.NL
# verify that the results directory does not yet exist
if ( -d $output_dir ) then
echo output dir "$subj.results" already exists
exit
endif
# set list of runs
set runs = (`count -digits 2 1 1`)
# create results and stimuli directories
mkdir $output_dir
mkdir $output_dir/stimuli
# copy anatomy to results dir
3dcopy MPRAGE.nii $output_dir/MPRAGE
# copy external -tlrc_NL_warped_dsets datasets
3dcopy afniGriefPreprocessed.NL/MPRAGE_al_keep+tlrc $output_dir/MPRAGE_al_keep
3dcopy afniGriefPreprocessed.NL/anat.un.aff.Xat.1D \
$output_dir/anat.un.aff.Xat.1D
3dcopy afniGriefPreprocessed.NL/anat.un.aff.qw_WARP.nii.gz \
$output_dir/anat.un.aff.qw_WARP.nii.gz
# ============================ auto block: tcat ============================
# apply 3dTcat to copy input dsets to results dir, while
# removing the first 3 TRs
3dTcat -prefix $output_dir/pb00.$subj.r01.tcat NT.nii'[3..$]'
# and make note of repetitions (TRs) per run
set tr_counts = ( 189 )
# -------------------------------------------------------
# enter the results directory (can begin processing data)
cd $output_dir
# ========================== auto block: outcount ==========================
# data check: compute outlier fraction for each volume
touch out.pre_ss_warn.txt
foreach run ( $runs )
3dToutcount -automask -fraction -polort 4 -legendre \
pb00.$subj.r$run.tcat+orig > outcount.r$run.1D
# censor outlier TRs per run, ignoring the first 0 TRs
# - censor when more than 0.1 of automask voxels are outliers
# - step() defines which TRs to remove via censoring
1deval -a outcount.r$run.1D -expr "1-step(a-0.1)" > rm.out.cen.r$run.1D
# outliers at TR 0 might suggest pre-steady state TRs
if ( `1deval -a outcount.r$run.1D"{0}" -expr "step(a-0.4)"` ) then
echo "** TR #0 outliers: possible pre-steady state TRs in run $run" \
>> out.pre_ss_warn.txt
endif
end
# catenate outlier counts into a single time series
cat outcount.r*.1D > outcount_rall.1D
# catenate outlier censor files into a single time series
cat rm.out.cen.r*.1D > outcount_${subj}_censor.1D
# get run number and TR index for minimum outlier volume
set minindex = `3dTstat -argmin -prefix - outcount_rall.1D\'`
set ovals = ( `1d_tool.py -set_run_lengths $tr_counts \
-index_to_run_tr $minindex` )
# save run and TR indices for extraction of vr_base_min_outlier
set minoutrun = $ovals[1]
set minouttr = $ovals[2]
echo "min outlier: run $minoutrun, TR $minouttr" | tee out.min_outlier.txt
# ================================ despike =================================
# apply 3dDespike to each run
foreach run ( $runs )
3dDespike -NEW -nomask -prefix pb01.$subj.r$run.despike \
pb00.$subj.r$run.tcat+orig
end
# ================================= tshift =================================
# time shift data so all slice timing is the same
foreach run ( $runs )
3dTshift -tzero 0 -quintic -prefix pb02.$subj.r$run.tshift \
pb01.$subj.r$run.despike+orig
end
# --------------------------------
# extract volreg registration base
3dbucket -prefix vr_base_min_outlier \
pb02.$subj.r$minoutrun.tshift+orig"[$minouttr]"
# ================================= align ==================================
# a2e: align anatomy to EPI registration base
# (new anat will be aligned and stripped, MPRAGE_al_keep+orig)
align_epi_anat.py -anat2epi -anat MPRAGE+orig \
-suffix _al_keep \
-epi vr_base_min_outlier+orig -epi_base 0 \
-epi_strip 3dAutomask \
-volreg off -tshift off
# ================================== tlrc ==================================
# nothing to do: have external -tlrc_NL_warped_dsets
# warped anat : MPRAGE_al_keep+tlrc
# affine xform : anat.un.aff.Xat.1D
# non-linear warp : anat.un.aff.qw_WARP.nii.gz
# ================================= volreg =================================
# align each dset to base volume, warp to tlrc space
# verify that we have a +tlrc warp dataset
if ( ! -f MPRAGE_al_keep+tlrc.HEAD ) then
echo "** missing +tlrc warp dataset: MPRAGE_al_keep+tlrc.HEAD"
exit
endif
# register and warp
foreach run ( $runs )
# register each volume to the base
3dvolreg -verbose -zpad 1 -base vr_base_min_outlier+orig \
-1Dfile dfile.r$run.1D -prefix rm.epi.volreg.r$run \
-cubic \
-1Dmatrix_save mat.r$run.vr.aff12.1D \
pb02.$subj.r$run.tshift+orig
# create an all-1 dataset to mask the extents of the warp
3dcalc -overwrite -a pb02.$subj.r$run.tshift+orig -expr 1 \
-prefix rm.epi.all1
# catenate volreg/tlrc xforms
cat_matvec -ONELINE \
anat.un.aff.Xat.1D \
mat.r$run.vr.aff12.1D > mat.r$run.warp.aff12.1D
# apply catenated xform: volreg/tlrc
# then apply non-linear standard-space warp
3dNwarpApply -master MPRAGE_al_keep+tlrc -dxyz 3 \
-source pb02.$subj.r$run.tshift+orig \
-nwarp "anat.un.aff.qw_WARP.nii.gz mat.r$run.warp.aff12.1D" \
-prefix rm.epi.nomask.r$run
# warp the all-1 dataset for extents masking
3dNwarpApply -master MPRAGE_al_keep+tlrc -dxyz 3 \
-source rm.epi.all1+orig \
-nwarp "anat.un.aff.qw_WARP.nii.gz mat.r$run.warp.aff12.1D" \
-interp cubic \
-ainterp NN -quiet \
-prefix rm.epi.1.r$run
# make an extents intersection mask of this run
3dTstat -min -prefix rm.epi.min.r$run rm.epi.1.r$run+tlrc
end
# make a single file of registration params
cat dfile.r*.1D > dfile_rall.1D
# ----------------------------------------
# create the extents mask: mask_epi_extents+tlrc
# (this is a mask of voxels that have valid data at every TR)
# (only 1 run, so just use 3dcopy to keep naming straight)
3dcopy rm.epi.min.r01+tlrc mask_epi_extents
# and apply the extents mask to the EPI data
# (delete any time series with missing data)
foreach run ( $runs )
3dcalc -a rm.epi.nomask.r$run+tlrc -b mask_epi_extents+tlrc \
-expr 'a*b' -prefix pb03.$subj.r$run.volreg
end
# warp the volreg base EPI dataset to make a final version
cat_matvec -ONELINE anat.un.aff.Xat.1D > mat.basewarp.aff12.1D
3dNwarpApply -master MPRAGE_al_keep+tlrc -dxyz 3 \
-source vr_base_min_outlier+orig \
-nwarp "anat.un.aff.qw_WARP.nii.gz mat.basewarp.aff12.1D" \
-prefix final_epi_vr_base_min_outlier
# create an anat_final dataset, aligned with stats
3dcopy MPRAGE_al_keep+tlrc anat_final.$subj
# record final registration costs
3dAllineate -base final_epi_vr_base_min_outlier+tlrc -allcostX \
-input anat_final.$subj+tlrc |& tee out.allcostX.txt
# -----------------------------------------
# warp anat follower datasets (affine)
# catenate all transformations
cat_matvec -ONELINE \
anat.un.aff.qw_WARP.nii.gz \
anat.un.aff.Xat.1D \
MPRAGE_al_keep_mat.aff12.1D > warp.all.anat.aff12.1D
3dAllineate -source MPRAGE+orig \
-master anat_final.$subj+tlrc \
-final wsinc5 -1Dmatrix_apply warp.all.anat.aff12.1D \
-prefix anat_w_skull_warped
# ================================== blur ==================================
# blur each volume of each run
foreach run ( $runs )
3dBlurToFWHM -FWHM 8 -mask mask_epi_extents+tlrc \
-ACF -rate 0.2 -temper \
-input pb03.$subj.r$run.volreg+tlrc \
-prefix pb04.$subj.r$run.blur
end
# ================================== mask ==================================
# create 'full_mask' dataset (union mask)
foreach run ( $runs )
3dAutomask -dilate 1 -prefix rm.mask_r$run pb04.$subj.r$run.blur+tlrc
end
# create union of inputs, output type is byte
3dmask_tool -inputs rm.mask_r*+tlrc.HEAD -union -prefix full_mask.$subj
# ---- create subject anatomy mask, mask_anat.$subj+tlrc ----
# (resampled from tlrc anat)
3dresample -master full_mask.$subj+tlrc -input MPRAGE_al_keep+tlrc \
-prefix rm.resam.anat
# convert to binary anat mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.anat+tlrc \
-prefix mask_anat.$subj
# compute overlaps between anat and EPI masks
3dABoverlap -no_automask full_mask.$subj+tlrc mask_anat.$subj+tlrc \
|& tee out.mask_ae_overlap.txt
# note Dice coefficient of masks, as well
3ddot -dodice full_mask.$subj+tlrc mask_anat.$subj+tlrc \
|& tee out.mask_ae_dice.txt
# ---- create group anatomy mask, mask_group+tlrc ----
# (resampled from tlrc base anat, MNI_caez_N27+tlrc)
3dresample -master full_mask.$subj+tlrc -prefix ./rm.resam.group \
-input /data/software/afni/MNI_caez_N27+tlrc
# convert to binary group mask; fill gaps and holes
3dmask_tool -dilate_input 5 -5 -fill_holes -input rm.resam.group+tlrc \
-prefix mask_group
# ---- segment anatomy into classes CSF/GM/WM ----
3dSeg -anat anat_final.$subj+tlrc -mask AUTO -classes 'CSF ; GM ; WM'
# copy resulting Classes dataset to current directory
3dcopy Segsy/Classes+tlrc .
# make individual ROI masks for regression (CSF GM WM and CSFe GMe WMe)
foreach class ( CSF GM WM )
# unitize and resample individual class mask from composite
3dmask_tool -input Segsy/Classes+tlrc"<$class>" \
-prefix rm.mask_${class}
3dresample -master pb04.$subj.r01.blur+tlrc -rmode NN \
-input rm.mask_${class}+tlrc -prefix mask_${class}_resam
# also, generate eroded masks
3dmask_tool -input Segsy/Classes+tlrc"<$class>" -dilate_input -1 \
-prefix rm.mask_${class}e
3dresample -master pb04.$subj.r01.blur+tlrc -rmode NN \
-input rm.mask_${class}e+tlrc -prefix mask_${class}e_resam
end
# ================================ regress =================================
# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_nruns 1 \
-demean -write motion_demean.1D
# compute motion parameter derivatives (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_nruns 1 \
-derivative -demean -write motion_deriv.1D
# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_nruns 1 \
-show_censor_count -censor_prev_TR \
-censor_motion 0.2 motion_${subj}
# combine multiple censor files
1deval -a motion_${subj}_censor.1D -b outcount_${subj}_censor.1D \
-expr "a*b" > censor_${subj}_combined_2.1D
# create bandpass regressors (instead of using 3dBandpass, say)
1dBport -nodata 189 2.5 -band 0.01 0.1 -invert -nozero > bandpass_rall.1D
# create ROI regressor: WMe
# (get each ROI average time series and remove resulting mean)
foreach run ( $runs )
3dmaskave -quiet -mask mask_WMe_resam+tlrc \
pb03.$subj.r$run.volreg+tlrc \
| 1d_tool.py -infile - -demean -write rm.ROI.WMe.r$run.1D
end
# and catenate the demeaned ROI averages across runs
cat rm.ROI.WMe.r*.1D > ROI.WMe_rall.1D
# note TRs that were not censored
set ktrs = `1d_tool.py -infile censor_${subj}_combined_2.1D \
-show_trs_uncensored encoded`
# ------------------------------
# run the regression analysis
3dDeconvolve -input pb04.$subj.r*.blur+tlrc.HEAD \
-mask mask_group+tlrc \
-censor censor_${subj}_combined_2.1D \
-ortvec bandpass_rall.1D bandpass \
-ortvec ROI.WMe_rall.1D ROI.WMe \
-polort 4 \
-num_stimts 12 \
-stim_file 1 motion_demean.1D'[0]' -stim_base 1 -stim_label 1 roll_01 \
-stim_file 2 motion_demean.1D'[1]' -stim_base 2 -stim_label 2 pitch_01 \
-stim_file 3 motion_demean.1D'[2]' -stim_base 3 -stim_label 3 yaw_01 \
-stim_file 4 motion_demean.1D'[3]' -stim_base 4 -stim_label 4 dS_01 \
-stim_file 5 motion_demean.1D'[4]' -stim_base 5 -stim_label 5 dL_01 \
-stim_file 6 motion_demean.1D'[5]' -stim_base 6 -stim_label 6 dP_01 \
-stim_file 7 motion_deriv.1D'[0]' -stim_base 7 -stim_label 7 roll_02 \
-stim_file 8 motion_deriv.1D'[1]' -stim_base 8 -stim_label 8 pitch_02 \
-stim_file 9 motion_deriv.1D'[2]' -stim_base 9 -stim_label 9 yaw_02 \
-stim_file 10 motion_deriv.1D'[3]' -stim_base 10 -stim_label 10 dS_02 \
-stim_file 11 motion_deriv.1D'[4]' -stim_base 11 -stim_label 11 dL_02 \
-stim_file 12 motion_deriv.1D'[5]' -stim_base 12 -stim_label 12 dP_02 \
-fout -tout -x1D X.xmat.1D -xjpeg X.jpg \
-x1D_uncensored X.nocensor.xmat.1D \
-fitts fitts.$subj \
-errts errts.${subj} \
-x1D_stop \
-bucket stats.$subj
# -- use 3dTproject to project out regression matrix --
3dTproject -polort 0 -input pb04.$subj.r*.blur+tlrc.HEAD \
-mask mask_group+tlrc \
-censor censor_${subj}_combined_2.1D -cenmode ZERO \
-ort X.nocensor.xmat.1D -prefix errts.${subj}.tproject
# if 3dDeconvolve fails, terminate the script
if ( $status != 0 ) then
echo '---------------------------------------'
echo '** 3dDeconvolve error, failing...'
echo ' (consider the file 3dDeconvolve.err)'
exit
endif
# display any large pairwise correlations from the X-matrix
1d_tool.py -show_cormat_warnings -infile X.xmat.1D |& tee out.cormat_warn.txt
# create an all_runs dataset to match the fitts, errts, etc.
3dTcat -prefix all_runs.$subj pb04.$subj.r*.blur+tlrc.HEAD
# --------------------------------------------------
# generate ANATICOR result: errts.$subj.anaticor+tlrc
# --------------------------------------------------
# ANATICOR: generate local WMe time series averages
# create catenated volreg dataset
3dTcat -prefix rm.all_runs.volreg pb03.$subj.r*.volreg+tlrc.HEAD
3dLocalstat -stat mean -nbhd 'SPHERE(45)' -prefix Local_WMe_rall \
-mask mask_WMe_resam+tlrc -use_nonmask \
rm.all_runs.volreg+tlrc
# -- use 3dTproject to project out regression matrix --
3dTproject -polort 0 -input pb04.$subj.r*.blur+tlrc.HEAD \
-mask mask_group+tlrc \
-censor censor_${subj}_combined_2.1D -cenmode ZERO \
-dsort Local_WMe_rall+tlrc \
-ort X.nocensor.xmat.1D -prefix errts.$subj.anaticor
# --------------------------------------------------
# create a temporal signal to noise ratio dataset
# signal: if 'scale' block, mean should be 100
# noise : compute standard deviation of errts
3dTstat -mean -prefix rm.signal.all all_runs.$subj+tlrc"[$ktrs]"
3dTstat -stdev -prefix rm.noise.all errts.$subj.anaticor+tlrc"[$ktrs]"
3dcalc -a rm.signal.all+tlrc \
-b rm.noise.all+tlrc \
-c mask_group+tlrc \
-expr 'c*a/b' -prefix TSNR.$subj
# ---------------------------------------------------
# compute and store GCOR (global correlation average)
# (sum of squares of global mean of unit errts)
3dTnorm -norm2 -prefix rm.errts.unit errts.$subj.anaticor+tlrc
3dmaskave -quiet -mask full_mask.$subj+tlrc rm.errts.unit+tlrc \
> gmean.errts.unit.1D
3dTstat -sos -prefix - gmean.errts.unit.1D\' > out.gcor.1D
echo "-- GCOR = `cat out.gcor.1D`"
# ---------------------------------------------------
# compute correlation volume
# (per voxel: average correlation across masked brain)
# (now just dot product with average unit time series)
3dcalc -a rm.errts.unit+tlrc -b gmean.errts.unit.1D -expr 'a*b' -prefix rm.DP
3dTstat -sum -prefix corr_brain rm.DP+tlrc
# --------------------------------------------------------
# compute sum of non-baseline regressors from the X-matrix
# (use 1d_tool.py to get list of regressor colums)
set reg_cols = `1d_tool.py -infile X.nocensor.xmat.1D -show_indices_interest`
3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D"[$reg_cols]"
# also, create a stimulus-only X-matrix, for easy review
1dcat X.nocensor.xmat.1D"[$reg_cols]" > X.stim.xmat.1D
# ============================ blur estimation =============================
# compute blur estimates
touch blur_est.$subj.1D # start with empty file
# create directory for ACF curve files
mkdir files_ACF
# -- estimate blur for each run in errts --
touch blur.errts.1D
# restrict to uncensored TRs, per run
foreach run ( $runs )
set trs = `1d_tool.py -infile X.xmat.1D -show_trs_uncensored encoded \
-show_trs_run $run`
if ( $trs == "" ) continue
3dFWHMx -detrend -mask mask_group+tlrc \
-ACF files_ACF/out.3dFWHMx.ACF.errts.r$run.1D \
errts.$subj.anaticor+tlrc"[$trs]" >> blur.errts.1D
end
# compute average FWHM blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{0..$(2)}'\'` )
echo average errts FWHM blurs: $blurs
echo "$blurs # errts FWHM blur estimates" >> blur_est.$subj.1D
# compute average ACF blur (from every other row) and append
set blurs = ( `3dTstat -mean -prefix - blur.errts.1D'{1..$(2)}'\'` )
echo average errts ACF blurs: $blurs
echo "$blurs # errts ACF blur estimates" >> blur_est.$subj.1D
# ================== auto block: generate review scripts ===================
# generate a review script for the unprocessed EPI data
gen_epi_review.py -script @epi_review.$subj \
-dsets pb00.$subj.r*.tcat+orig.HEAD
# generate scripts to review single subject results
# (try with defaults, but do not allow bad exit status)
gen_ss_review_scripts.py -mot_limit 0.2 -out_limit 0.1 \
-errts_dset errts.$subj.anaticor+tlrc.HEAD -exit0
# ========================== auto block: finalize ==========================
# remove temporary files
\rm -fr rm.* Segsy
# if the basic subject review script is here, run it
# (want this to be the last text output)
if ( -e @ss_review_basic ) ./@ss_review_basic |& tee out.ss_review.$subj.txt
# return to parent directory
cd ..
echo "execution finished: `date`"
# ==========================================================================
# script generated by the command:
#
# afni_proc.py -subj_id CMIT_01A -script CMIT_01A.afniNeedlePreprocess.NL.csh \
# -out_dir afniNeedlePreprocessed.NL -blocks despike tshift align tlrc \
# volreg blur mask regress -copy_anat \
# /data/jain/preApril/CMIT_01A/MPRAGE.nii -dsets \
# /data/jain/preApril/CMIT_01A/NT.nii -tcat_remove_first_trs 3 -tlrc_base \
# MNI_caez_N27+tlrc -volreg_align_to MIN_OUTLIER -volreg_tlrc_warp \
# -tlrc_NL_warp -tlrc_NL_warped_dsets \
# /data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/MPRAGE_al_keep+tlrc.HEAD \
# /data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/anat.un.aff.Xat.1D \
# /data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/anat.un.aff.qw_WARP.nii.gz \
# -blur_size 8 -blur_to_fwhm -blur_opts_B2FW '-ACF -rate 0.2 -temper' \
# -mask_apply group -mask_segment_anat yes -mask_segment_erode yes \
# -regress_anaticor -regress_bandpass 0.01 0.1 -regress_apply_mot_types \
# demean deriv -regress_ROI WMe -regress_censor_motion 0.2 \
# -regress_censor_outliers 0.1 -regress_run_clustsim no \
# -regress_est_blur_errts
rickr
April 25, 2017, 7:58pm
Ah, this is another case of automatic compression of NIFTI datasets biting us. For now, uncompress those WARP datasets and generate a new proc script. I will try to catch this; sorry for the trouble.
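Something along these lines should work for now (illustrative; adjust the path to your data):
# uncompress the nonlinear warp so the generated proc script refers to a plain .nii
gunzip /data/jain/preApril/CMIT_01A/afniGriefPreprocessed.NL/anat.un.aff.qw_WARP.nii.gz
# then re-run the afni_proc.py command with -tlrc_NL_warped_dsets pointing at the
# resulting anat.un.aff.qw_WARP.nii, and execute the newly generated proc script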
Thanks,
Thanks for taking a look. I’ll work around it.
rickr
April 26, 2017, 1:30am
I did get a little fix in for that today, which will be available next time we update the binaries. Please let me know if it would be helpful to get that tomorrow, say.
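Updating would be the usual command, the same one the proc script suggests in its version check:
# fetch the current precompiled binaries
@update.afni.binaries -defaults
# confirm the new build afterwards
afni -ver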
Thanks for bringing it up!
Hi Rick,
I’m having the same problem as Colm, as far as the errors listed above, and I’m wondering: when you say “For now, uncompress those WARP datasets and generate a new proc script”, what do you mean by uncompressing those WARP datasets? I’m inputting them as they exist in my directory, as standard .nii files that aren’t compressed in any way. Is there some way I can go into the proc script itself to specify whatever uncompression is needed to get cat_matvec to run?
Thanks!
Mikey
rickr
June 24, 2020, 6:03pm
Hi Mikey,
Can I just confirm a few things?
What does “afni -ver” show?
What is the -tlrc_NL_warped_dsets option that you are applying?
Can you provide the failing command and error message?
If that doesn’t clarify things, I might ask you to mail or upload more details.
Thanks,