Error reading brick on biowulf but not locally

When running abids_tool.py, an error is triggered on biowulf but not locally:
++ WARNING: nifti_read_buffer(/gpfs/gsfs12/users/tevesjb/Frontiers_QC_2022/osfstorage-archive/fmri-open-qc-task/sub-001/func/sub-001_task-pamenc_bold.nii.gz):
   data bytes needed = 278528
   data bytes input  = 216336
   number missing    = 62192 (set to 0)
** NIFTI load bricks: cannot read brick 19 from '/gpfs/gsfs12/users/tevesjb/Frontiers_QC_2022/osfstorage-archive/fmri-open-qc-task/sub-001/func/sub-001_task-pamenc_bold.nii.gz'
** NIFTI load bricks: cannot read brick 19 from '/gpfs/gsfs12/users/tevesjb/Frontiers_QC_2022/osfstorage-archive/fmri-open-qc-task/sub-001/func/sub-001_task-pamenc_bold.nii.gz'
** ERROR: Can't write NIfTI file since dataset isn't loaded: /gpfs/gsfs12/users/tevesjb/Frontiers_QC_2022/osfstorage-archive/fmri-open-qc-task/sub-001/func/sub-001_task-pamenc_bold.nii.gz

Whereas locally, no such errors occur using a macOS build and the exact same file (confirmed by copying it over via sftp). Any thoughts on how to diagnose this?

Addendum: what is happening is not quite what I reported. Because I’m using abids_tool.py, a read error from the filesystem ends up producing a corrupted file in the same location: the read error does not cause the program to terminate, so it continues on and writes the existing, corrupted data back to disk under the same filename. Using pigz does seem to decrease the chances of this happening quite dramatically, but it appears to be a problem on biowulf nonetheless, which I imagine is tied to the network drives, since I simply cannot replicate the issue using local storage.
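For anyone hitting the same thing: a quick way to test whether a .nii.gz has been truncated is to check that the gzip stream decompresses end to end, which is exactly what fails when bytes go missing. A minimal sketch (gzip_readable is just an illustrative helper name, not anything in AFNI):

import gzip
import sys

def gzip_readable(path, chunk=1 << 20):
    # A truncated file (like the nifti_read_buffer warning above)
    # surfaces here as EOFError or gzip.BadGzipFile (an OSError).
    try:
        with gzip.open(path, "rb") as f:
            while f.read(chunk):
                pass
        return True
    except (EOFError, OSError):
        return False

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "OK" if gzip_readable(path) else "TRUNCATED/CORRUPTED")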

Hi, Josh-

Thanks for pointing this out. If I understand correctly, we should edit abids_tool.py to terminate when this error occurs—is that right? That would make things more consistent, and just be nicer behavior when a dataset runs into a problem.

I might ping you for the dset+exact command being run, in that case…

thanks,
pt

The thing is that the error happens intermittently. I didn’t think to save a copy of the corrupted dset, but simply replacing it with the original file did the trick, and re-running made the error go away. I agree that it would be good for abids_tool.py to terminate, but poking around, it looks like the issue is that 3drefit is not actually giving a non-zero exit status, so abids_tool.py couldn’t detect a failure unless it scanned the output for the word “ERROR.” Going further down the rabbit hole, it looks like 3drefit does not give an error because the NIfTI reader is not giving one. In fact, in thd_niftiwrite.c, all of the error branches print a message but still lead to an exit code of 0.
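In the meantime, the only workaround I can see for a wrapper is to scan the output itself. A rough sketch (run_afni_cmd is just an illustrative name, not anything in the AFNI codebase):

import subprocess
import sys

def run_afni_cmd(cmd):
    # Treat the string "ERROR" in the output as failure, since the
    # exit status from 3drefit is (currently) 0 even on error.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    out = proc.stdout + proc.stderr
    if proc.returncode != 0 or "ERROR" in out:
        sys.exit("command failed: " + " ".join(cmd) + "\n" + out)
    return out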

A useful rabbit hole of information, thanks; we will look into fixing this further upstream.

I usually copy a file before 3drefit-ing it, because it is changed in place. I guess abids_tool.py should do the same: work on a copy, get a correct error code to interpret, and then copy the result back to replace the original, say.
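Something like this flow, perhaps; just a sketch (refit_safely is an illustrative name, and it falls back on scanning the output for “ERROR”, since the exit status is not reliable yet):

import os
import shutil
import subprocess

def refit_safely(dset, refit_args):
    # Work on a scratch copy next to the original so os.replace()
    # stays on the same filesystem (and is therefore atomic).
    work = dset + ".refit_tmp.nii.gz"
    shutil.copy2(dset, work)
    proc = subprocess.run(["3drefit"] + refit_args + [work],
                          capture_output=True, text=True)
    if proc.returncode == 0 and "ERROR" not in (proc.stdout + proc.stderr):
        os.replace(work, dset)   # success: swap the refitted copy in
        return True
    os.remove(work)              # failure: original left untouched
    return False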

What is the command you were using, just so I can play around with that functionality?

–pt

Sure,
3drefit -Tslices 0.0 0.05319 0.10638 0.15957 0.21277 0.26596 0.31915 0.37234 0.42553 0.47872 0.53191 0.58511 0.6383 0.69149 0.74468 0.79787 0.85106 0.90426 0.95745 1.01064 1.06383 1.11702 1.17021 1.2234 1.2766 1.32979 1.38298 1.43617 1.48936 1.54255 1.59574 1.64894 1.70213 1.75532 1.80851 1.8617 1.91489 1.96809 2.02128 2.07447 2.12766 2.18085 2.23404 2.28723 2.34043 2.39362 2.44681 /gpfs/gsfs12/users/tevesjb/Frontiers_QC_2022/raw/fmri-open-qc-rest/sub-101/ses-01/func/sub-101_ses-01_task-rest_run-01_bold.nii.gz

or the appropriate abids_tool call:
abids_tool.py -input sub-101_ses-01_task-rest_run-01_bold.nii.gz -add_slice_times

From my POV, it would be useful to allow copying the newly refitted data (with the slice times) elsewhere as a user option, as I imagine many users would prefer to write-protect the raw data and then copy out data which can be edited for processing. Perhaps an option like -prefix, which would allow the copy plus the other operations?
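That is, something along these lines (hypothetical, since no such option exists yet, and copies_for_proc/ is a made-up destination):
abids_tool.py -input sub-101_ses-01_task-rest_run-01_bold.nii.gz -add_slice_times -prefix copies_for_proc/sub-101_ses-01_task-rest_run-01_bold.nii.gz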

A suspicious-looking dataset, but OK…

And I agree about a “-prefix …” option.

thanks,
pt

Yeah, some joker on the internet gave it to me! You can trust these things without looking at them at all.

It’s not too complex to add one; I could do that today, if I can remember how to set up the dev environment correctly for the Python scripts.

Well, on second thought, that “-prefix …” idea is a bit tricky, because BIDS-formatted data have rigidly encoded filenames. So, by definition, the input dset will have to be replaced by the output, to maintain BIDSiness.

Will ponder a bit more.

–pt

I would argue that it’s acceptable to let somebody copy it out of the BIDS tree. Another alternative would be a -debids_to option, where as much information as possible from the sidecar is put into the AFNI extension, and a new dataset is written that is NOT intended to be BIDS-compliant.
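For example (again, purely hypothetical syntax, with derived/ as a made-up destination):
abids_tool.py -input sub-101_ses-01_task-rest_run-01_bold.nii.gz -add_slice_times -debids_to derived/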