`-Rwherr` file output error with 3dREMLfit

Hi,

I’m receiving an error with the -Rwherr option to 3dREMLfit.

The program seems to run OK, but the residual dataset is not readable, and the following messages appear in the program output:


NIML: write abort P
NIML: write abort s
** failed to write NIML output file './err.niml.dset'
** write_niml_file failed for './err.niml.dset'
++ Output dataset ./err.niml.dset

The resulting err.niml.dset file is 2897399968 bytes.

The same problem occurs with -Rerrts, and also occurs when using -usetemp.

However, it works OK (no error) if I reduce the number of -input runs (from 29 to 9). Is there a memory or storage constraint in the program? There shouldn't be any such constraint on the system side.

Thanks for any help.

Damien.

So it stops AFTER writing 2.8 GB of the output file?
How big is a completed file with 9 input runs? Then multiply by 29/9 to estimate the size of the desired file?

The result has been computed in memory, and the problem is converting it to the .niml.dset format and writing that to the output.

One possibility is that the output function is tracking the number of bytes written in a 32-bit integer. Your byte count exceeds 2^31, so the output function returns a negative value to the caller, which is the flag for an error. That is, you are getting the correct output file, but an incorrect error message.
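
A minimal sketch of that suspected failure mode (hypothetical code, not the actual NIML routines), assuming the writer reports its running byte count through a 32-bit int return value:

#include <stdio.h>

/* Hypothetical stand-in for a writer whose return value is the running
   byte count; the int return is the suspected 32-bit bookkeeping. */
static int write_chunk( long long *total , long long nbytes )
{
   *total += nbytes ;
   return (int)(*total) ;   /* on typical systems, goes negative past 2^31-1 */
}

int main(void)
{
   long long total   = 0 ;
   long long per_run = 99956371 ;   /* about 899607339 bytes / 9 runs */

   for( int run=1 ; run <= 29 ; run++ ){
      int ret = write_chunk( &total , per_run ) ;
      if( ret < 0 ){   /* a negative return is taken as a failed write */
         printf("run %d: true total = %lld bytes, but int return = %d\n",
                run , total , ret ) ;
         return 0 ;
      }
   }
   printf("all runs written, total = %lld bytes\n", total ) ;
   return 0 ;
}

With these numbers the return value first goes negative once the running total passes 2^31 bytes, even though every byte has in fact been written.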

If I am correct, then this is not a 3dREMLfit problem per se, but rather a failure in the NIML library. Frankly, it never occurred to me 19 years ago – when I created NIML – that anyone would write a 2+ GB file. Lack of imagination, I guess. Stick a fork in me, I’m done.

I can change this bookkeeping, but have to look the code over to see if there are other obvious places where something like this could happen. I’ll post back here when a new version with this fix is available – later this week.

Thanks for looking into it!

So it stops AFTER writing 2.8 GB of the output file?

Yes, that’s correct.

How big is a completed file with 9 input runs?

With 9 runs, it is 899607339 bytes.

Then multiply by 29/9 to estimate the size of the desired file?

That gives 2898734759 bytes.
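That estimate is within about 1.3 MB (roughly 0.05%) of the 2897399968-byte file that was actually produced, so the output does appear to be written essentially to completion before the error is reported.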

The result has been computed in memory, and the problem is converting it to the .niml.dset format and writing that to the output.

One possibility is that the output function is tracking the number of bytes written in a 32-bit integer. Your byte count exceeds 2^31, so the output function returns a negative value to the caller, which is the flag for an error. That is, you are getting the correct output file, but an incorrect error message.

I think this explanation is correct. With my dataset, the crossover between staying within and exceeding a 32-bit integer byte count falls between 21 and 22 runs, and that coincides exactly with when I do and don't receive the error message (i.e., no error message with 21 runs, error message with 22 runs).
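
As a rough check, assuming each run contributes about the same amount of data as in the 9-run file: 899607339 / 9 ≈ 99.96 MB per run, so 21 runs come to about 2.10 GB, just under the 2^31-byte limit of 2147483648 bytes, while 22 runs come to about 2.20 GB, just over it.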

I can change this bookkeeping, but have to look the code over to see if there are other obvious places where something like this could happen. I’ll post back here when a new version with this fix is available – later this week.

Great, thank you!

Howdy-

Bob made some updates yesterday, which should be available now if you update your AFNI:


@update.afni.binaries -d

—> getting ver 21.1.20 (or any later version, if you are reading this in The Future).

Please let us know how running your code works with that newer version.

–pt

Hi,

Great, thanks for the update.

I no longer receive the error message in the output of 3dREMLfit, but the resulting file still does not seem readable:


> 3dinfo err.niml.dset
++ 3dinfo: AFNI version=AFNI_21.1.20 (Jun 28 2021) [64-bit]
** FATAL ERROR: Can't open dataset err.niml.dset
** Program compile date = Jun 28 2021

The file is 2897399991 bytes in size.

If I try to load it in SUMA, I get:


Note SUMA_LoadNimlDset: err.niml.dset has no element and no group. 
Perhaps it is a .1D read in as a niml dset.

The start of the file looks normal:


> head -8 err.niml.dset
<AFNI_dataset
  dset_type="Node_Bucket"
  self_idcode="XYZ_NwV4LJN2QcDA0pidqxumVg"
  filename="./err.niml.dset"
  ni_form="ni_group" >
<AFNI_atr
  ni_type="String"
  ni_dimen="1"

OK, the error is fixed (I hope) in the latest release of AFNI = AFNI_21.2.02 (just released now). There was a problem reading files larger than 2 GB – in a different place, this time.

It takes about 30s to read your 2+ GB file, which has the data stored in binary format. I converted it to a text-only file, at 5+ GB, and that took 150s to read into memory – showing the increased efficiency of binary storage for large datasets.
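
In rough terms, going by the sizes and times above, that is roughly 95 MB/s for the binary file versus roughly 35 MB/s for the text version, in addition to the text encoding nearly doubling the file size.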

What you will DO with such a big dataset is a different story, of course. I did not test your file in SUMA, just tested that it can be read successfully.

OK, the error is fixed (I hope) in the latest release of AFNI = AFNI_21.2.02 (just released now). There was a problem reading files larger than 2 GB – in a different place, this time.

It all works well now - thanks!