I’m receiving an error with the -Rwherr option to 3dREMLfit.
The program seems to run OK, but the residual dataset is not readable and there are the following messages in the program output:
NIML: write abort P
NIML: write abort s
** failed to write NIML output file './err.niml.dset'
** write_niml_file failed for './err.niml.dset'
++ Output dataset ./err.niml.dset
The resulting err.niml.dset file is 2897399968 bytes.
The same problem occurs with -Rerrts, and also occurs when using -usetemp.
However, it works OK (no error) if I reduce the number of -input runs from 29 to 9. Is there a memory or storage constraint in the program? Neither should be an issue on the system side.
So it stops AFTER writing 2.8 GB of the output file?
How big is a completed file with 9 input runs? Then multiply by 29/9 to estimate the size of the desired file?
The result has been computed in memory, and the problem is converting it to the .niml.dset format and writing that to the output.
One possibility is that the output function keeps track of the bytes output in a 32-bit integer. Your number of bytes exceeds 2^31, so the output function returns a negative value to the caller, which is the flag for an error. That is, you are getting the correct output file, but an incorrect error message.
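To illustrate the idea, here is a minimal C sketch (not the actual NIML code, just the mechanism): a 64-bit byte count that gets squeezed into a plain 'int' on the way back to the caller. The 2897399968 figure is the file size reported above; everything else is made up for the example.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Byte count from this thread vs. the largest value a signed
       32-bit int can hold. */
    int64_t nbytes_written = 2897399968LL;   /* size of err.niml.dset */
    int64_t int32_max      = 2147483647LL;   /* 2^31 - 1              */

    /* If the writer keeps its running total in a plain 'int', the value
       handed back to the caller is this 64-bit count squeezed into 32
       bits; the conversion is implementation-defined, but on the usual
       two's-complement systems it wraps to a negative number. */
    int reported = (int)nbytes_written;

    printf("bytes actually written : %lld\n", (long long)nbytes_written);
    printf("32-bit signed maximum  : %lld\n", (long long)int32_max);
    printf("count seen by caller   : %d (%s)\n", reported,
           reported < 0 ? "looks like a write failure" : "looks fine");
    return 0;
}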
If I am correct, then this is not a 3dREMLfit problem per se, but rather a failure in the NIML library. Frankly, it never occurred to me 19 years ago – when I created NIML – that anyone would write a 2+ GB file. Lack of imagination, I guess. Stick a fork in me, I’m done.
I can change this bookkeeping, but have to look the code over to see if there are other obvious places where something like this could happen. I’ll post back here when a new version with this fix is available – later this week.
So it stops AFTER writing 2.8 GB of the output file?
Yes, that’s correct.
How big is a completed file with 9 input runs?
With 9 runs, it is 899607339 bytes.
Then multiply by 29/9 to estimate the size of the desired file?
That gives 2898734759 bytes.
The result has been computed in memory, and the problem is converting it to the .niml.dset format and writing that to the output.
One possibility is that the output function keeps track of the bytes output in a 32-bit integer. Your number of bytes exceeds 2^31, so the output function returns a negative value to the caller, which is the flag for an error. That is, you are getting the correct output file, but an incorrect error message.
I think this explanation is correct. With my dataset, the byte count crosses the 32-bit integer limit between 21 and 22 runs, which coincides with when I don't and do receive the error message (no error with 21 runs, the error with 22 runs).
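To check that boundary, here is a small C sketch that scales the 9-run file size reported above, assuming runs of roughly equal length and ignoring header overhead (both assumptions, not measurements):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t nine_run_bytes = 899607339LL;         /* size of the 9-run file */
    int64_t per_run        = nine_run_bytes / 9;  /* ~100 MB per run        */
    int64_t int32_max      = 2147483647LL;        /* 2^31 - 1               */

    for (int runs = 20; runs <= 23; runs++) {
        int64_t est = per_run * runs;             /* estimated output size  */
        printf("%2d runs: ~%lld bytes (%s 2^31)\n",
               runs, (long long)est, est > int32_max ? "over" : "under");
    }
    return 0;
}

The estimate stays under 2^31 through 21 runs and goes over at 22, matching where the error message appears.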
I can change this bookkeeping, but have to look the code over to see if there are other obvious places where something like this could happen. I’ll post back here when a new version with this fix is available – later this week.
OK, the error is fixed (I hope) in the latest release of AFNI = AFNI_21.2.02 (just released now). There was a problem reading files larger than 2 GB – in a different place, this time.
It takes about 30s to read your 2+ GB file, which has the data stored in binary format. I converted it to a text-only file, at 5+ GB, and that took 150s to read into memory – showing the increased efficiency of binary storage for large datasets.
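For anyone curious about that binary-versus-text gap, here is a small self-contained C sketch that times a bulk binary read against a parsed text read of the same values. The element count and the file names (demo.bin, demo.txt) are invented for the example and are far smaller than the real dataset.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000   /* one million floats; a real .niml.dset holds far more */

int main(void)
{
    float *v = malloc(N * sizeof *v);
    if (v == NULL) return 1;
    for (int i = 0; i < N; i++) v[i] = (float)i * 0.001f;

    /* Binary: one bulk fwrite, one bulk fread, 4 bytes per value. */
    FILE *fp = fopen("demo.bin", "wb");
    fwrite(v, sizeof *v, N, fp);
    fclose(fp);

    clock_t t0 = clock();
    fp = fopen("demo.bin", "rb");
    size_t nbin = fread(v, sizeof *v, N, fp);
    fclose(fp);
    double tbin = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Text: every value printed and re-parsed, usually several more bytes
       per value plus a number conversion for each one. */
    fp = fopen("demo.txt", "w");
    for (int i = 0; i < N; i++) fprintf(fp, "%g\n", v[i]);
    fclose(fp);

    t0 = clock();
    fp = fopen("demo.txt", "r");
    size_t ntxt = 0;
    float x;
    while (ntxt < N && fscanf(fp, "%f", &x) == 1) v[ntxt++] = x;
    fclose(fp);
    double ttxt = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("binary read: %zu values in %.3f s\n", nbin, tbin);
    printf("text read  : %zu values in %.3f s\n", ntxt, ttxt);

    free(v);
    return 0;
}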
What you will DO with such a big dataset is a different story, of course. I did not test your file in SUMA, just tested that it can be read successfully.
It all works well now - thanks!