AFNI_proc – 1 of 25 subjects shows no visual .errts results


I am having a small problem here with only 1 subject out of a dataset that contains 25 subjects. I ran the following afni_proc.py command for a resting-state session (one run per subject):

afni_proc.py \
    -subj_id ${subject}_Rest \
    -out_dir $directory/Results \
    -blocks despike ricor tshift align tlrc volreg mask blur regress \
    -copy_anat $directory_SSwarper/anatSS.$subject.nii \
    -anat_has_skull no \
    -ricor_regs $directory/Physiological_Regressors.slibase.1D \
    -ricor_regress_method per-run \
    -tcat_remove_first_trs 4 \
    -align_opts_aea -cost lpa -big_move \
    -volreg_align_to MIN_OUTLIER \
    -volreg_tlrc_warp -tlrc_base MNI152_2009_template_SSW.nii.gz \
    -volreg_post_vr_allin yes \
    -volreg_pvra_base_index MIN_OUTLIER \
    -mask_segment_anat yes \
    -mask_segment_erode yes \
    -regress_ROI WMe \
    -regress_apply_mot_types demean deriv \
    -regress_censor_motion 0.2 \
    -regress_censor_outliers 0.05 \
    -blur_size 6.0 \
    -html_review_style pythonic

The results are good, except for one single subject (subject 6). When I open the .errts output file created by afni_proc.py to inspect the time series and visual results, all image views (axial, sagittal, and coronal) remain fully black on all slices. I only noticed this by accident, because further measurements (like the power-law exponent) showed extremely low values for this subject. Only then did I recheck the errts file for this subject and see the problem.

Some more information for subject 6 (the one with the problem):

  • The raw data of the functional and anatomical scans look good.
  • Results of SSwarper and all anatomical files look good.
  • All previous steps of the preprocessing (volreg, ricor, blur, etc.) look good.
  • Only the .errts output (both the anaticor and tproject versions) suddenly appears fully “gone”.
  • The time series in the .errts file is still there, though!

What could the problem be? I already ran the preprocessing again, in case something went wrong with my computer, but the problem remains. I would like to use this subject rather than excluding it from the study.

Thanks for any input,

Hi, Philipp–

In most of our examples for running the proc script, we make a text file that is a copy of all the terminal text produced during execution. This is generally done with a “tee” command like the following:

# tcsh syntax
tcsh -xef proc.sub-001 |& tee output.ap.cmd.sub-001

# bash syntax
tcsh -xef proc.sub-001 2>&1 | tee output.ap.cmd.sub-001

You can look through that output* text file for the first error/failure that occurs. If it isn’t clear, you can email it in (it is typically a bit long to paste into these help messages, but we can continue the discussion about it here).


Hi Philipp,

It sounds like time point #0 was censored for that subject, leaving an all-zero volume there. Move forward in time, or open a graph window as a guide.

Does that seem like what happened?
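One way to confirm this outside the GUI is to inspect the censor file that afni_proc.py writes (here a minimal Python sketch, not AFNI code; the real file is usually named like censor_${subj}_combined_2.1D in the results directory, with one value per TR, 1 = kept and 0 = censored):

```python
def read_censor(text):
    """Parse .1D censor contents: one value per TR, 1 = kept, 0 = censored."""
    return [int(float(line)) for line in text.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]

# Toy stand-in for the contents of a censor_${subj}_combined_2.1D file
toy = "0\n0\n1\n1\n0\n1\n"
censor = read_censor(toy)
censored_trs = [i for i, v in enumerate(censor) if v == 0]

print("TR 0 censored:", censor[0] == 0)  # if True, volume 0 of errts is all zeros
print("censored TRs:", censored_trs)
```

In practice you would pass open("censor_sub-006_combined_2.1D").read() instead of the toy string. A censored (zeroed) index renders as a black image in the viewer, which is why stepping forward in time reveals the data.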

  • rick


Thanks for the responses (also via email). Rick, you are right. I thought that censored volumes/TRs would also be removed from the errts file (visually, in the AFNI GUI!), so that I would see a “string” of only the remaining volumes in the errts file after preprocessing.

One more question came up. Out of my 25 subjects, 2 showed heavy head motion:

  • Subject 6: Censored 108 of 356 total time points: 30.3%
  • Subject 21: Censored 187 of 356 total time points: 52.5%

I think that I should exclude subject 21 from the study, since I already lost 52% of the DOF after preprocessing. But what about subject 6? Would you recommend excluding it too, based on the loss of 30% of the DOF, or is that fraction of 356 time points still kind of OK? Is there a rule of thumb?
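The bookkeeping behind those percentages can be sketched as follows (the regressor count of 29 is purely an illustrative assumption, not a number from this thread; the real count comes from your X-matrix, e.g. via 1d_tool.py -show_rows_cols on X.xmat.1D):

```python
def censor_fraction(n_censored, n_trs):
    """Fraction of time points lost to censoring."""
    return n_censored / n_trs

def remaining_dof(n_trs, n_censored, n_regressors):
    """DOF left after dropping censored TRs and fitting the regressors."""
    return n_trs - n_censored - n_regressors

N_REGRESSORS = 29  # hypothetical (motion + derivatives + ricor + WMe + baseline)
for subj, n_cen in [("subject 6", 108), ("subject 21", 187)]:
    print(f"{subj}: censored {censor_fraction(n_cen, 356):.1%}, "
          f"DOF remaining ~ {remaining_dof(356, n_cen, N_REGRESSORS)}")
```

With these assumed numbers, subject 21 would retain far fewer residual degrees of freedom than subject 6, which is one concrete way to frame the exclusion decision.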


Hi Philipp,

The censored volumes were originally not kept in the 3dTproject output, but then the errts would not match the fitts and all_runs datasets in length, so it was changed to leave the zero volumes in. They do not affect voxelwise correlations.

Regarding the high-motion subjects, it depends on how much of an outlier a subject’s censoring is, and on how many degrees of freedom remain in the result. Censoring 30% might be a droppable offense, but that depends on the other subjects. And depending on the type of subjects (kids? healthy adults?), these numbers tend to be fairly stable.

You don’t really want to look over all the subjects and then make a choice. Optimally, this sort of thing is decided before any subjects are acquired. That is not always feasible in practice, but it might not require many subjects to come up with a reasonable choice.

  • rick