3dttest++ -Clustsim complains it can't read *minmax.1D, but continues happily

Dear gurus,

I’m running the following 3dttest++ command:


3dttest++ \
  -setA $INDIR/sub-*/*MNI*deconvolve*.nii.gz'[$BRICK]' \
  -mask $BASEDIR/derivatives/unionmask/group/group_space-MNI152NLin2009cAsym_unionbrainmask.nii.gz \
  -Clustsim \
  -prefix $OUTDIR/clust_brick$BRICK\.nii.gz \
  -prefix_clustsim clust_brick$BRICK

It finishes happily, and when I load the results in the AFNI GUI I can see all the p-values, etc. However, when I inspect the logs, I see these errors:


...
 + 3dttest++ ===== simulation jobs have finished (414.4 s elapsed)
** ERROR: Can't read file ./clust_brick122.0001.minmax.1D
** ERROR: Can't read file ./clust_brick122.0003.minmax.1D
** ERROR: Can't read file ./clust_brick122.0004.minmax.1D
** ERROR: Can't read file ./clust_brick122.0005.minmax.1D
** ERROR: Can't read file ./clust_brick122.0006.minmax.1D
** ERROR: Can't read file ./clust_brick122.0008.minmax.1D
** ERROR: Can't read file ./clust_brick122.0009.minmax.1D
** ERROR: Can't read file ./clust_brick122.0011.minmax.1D
** ERROR: Can't read file ./clust_brick122.0015.minmax.1D
 + 3dttest++ ===== starting 3dClustSim A: elapsed = 437.9 s
++ 3dClustSim: AFNI version=AFNI_17.3.09 (Dec 22 2017) [64-bit]
++ Authored by: RW Cox and BD Ward
++ Loading -insdat datasets
!
++ saving main effect t-stat MIN/MAX values in ./clust_brick122.0004.minmax.1D
!
++ saving main effect t-stat MIN/MAX values in ./clust_brick122.0001.minmax.1D
1++ output short-ized file ./clust_brick122.0004.sdat
++ output short-ized file ./clust_brick122.0001.sdat
!
++ saving main effect t-stat MIN/MAX values in ./clust_brick122.0008.minmax.1D
++ output short-ized file ./clust_brick122.0008.sdat
...

When the program is done, some *minmax.1D files remain (which, interestingly, are exactly the ones 3dttest++ complained it could not read):


[castello@discovery group]$ ls *minmax* | nl
     1  clust_brick122.0001.minmax.1D
     2  clust_brick122.0003.minmax.1D
     3  clust_brick122.0004.minmax.1D
     4  clust_brick122.0005.minmax.1D
     5  clust_brick122.0006.minmax.1D
     6  clust_brick122.0008.minmax.1D
     7  clust_brick122.0009.minmax.1D
     8  clust_brick122.0011.minmax.1D
     9  clust_brick122.0015.minmax.1D

and they do contain data:


[castello@discovery group]$ head clust_brick122.0001.minmax.1D
       -4.27043        4.81231
       -4.35104        3.87422
       -4.16067          3.788
       -3.69115        3.74613
       -4.45349        4.16302
       -4.63851        4.21939
       -5.09003        4.42739
        -4.3851        4.33245
       -3.89163        4.43233
       -4.07312        3.62941

I wonder if it’s somehow a filesystem problem on our side. However, I’m surprised that 3dttest++ keeps going despite these errors. What should I do?

Thanks!
Matteo

P.S. Here’s the full log of 3dttest++. Apologies for the verbosity, but I’m running everything in a Singularity container and I’m paranoid: logttest.txt

I’ve seen this happen a couple of times, and I don’t know what causes it. I have only seen it when using a networked filesystem, so it is possible that something goes wrong there – a timeout when the filesystem is busy?

The program can proceed because the minmax.1D files are a side calculation: they are used to compute the single-voxel threshold that would give a 5% (say) false positive rate globally. I only put this feature in because someone (whose initials are TN) asked for it – and, as far as I know, never used it. If the program fails to read the minmax.1D files, it simply skips this calculation and goes on with the main Clustsim work.

To be clear, the threshold computed this way (had it actually worked) would be the per-voxel z-statistic threshold giving a 5% chance of ANY false positive voxel inside the brain mask. This value is somewhat below what you would get from the Bonferroni correction – choosing a per-voxel z threshold such that the corresponding per-voxel p-value is 0.05/Number_of_brain_voxels. A threshold computed either way is still very strict, and so probably not useful unless you really ARE interested in single-voxel results – and have really good data.

Thanks for the explanation, Bob. The filesystem on our cluster is indeed mounted over NFS. I will happily ignore these errors, then.