**Error: Line too long for buffer of 5048576 chars.** (3dmaskdump)

Hi,

I am trying to extract voxel-based numeric values into .1D and .txt files using 3dmaskdump.

The extraction works fine for all ROIs except one. The ROI where 3dmaskdump fails also contains the largest number of voxels of all the ROIs, and I assume this is why 3dmaskdump fails with the following message.


**Error: Line too long for buffer of 5048576 chars.** ERROR: mri_read_ascii: can't read any valid data from file XYZ

I have 32 GB of RAM, so I assume this is a bug rather than a RAM limitation of my computer. The script processes 23 subjects. Interestingly, when I remove one subject, the code runs fine without the error message shown above (due to the slightly lower number of voxels after one subject is removed).

Here is the relevant part of my AFNI script:


3dmaskdump \
    -mask $directory_ROIs/ROI.nii \
    -noijk \
    $directory_PD/subject1/XYZ_file+tlrc \
    $directory_PD/subject2/XYZ_file+tlrc \
    # ... and so on through subject23
    > $directory_results/AllSubjects_ROI.1D

1dcat \
    $directory_results/AllSubjects_ROI.1D'[0]'\' \
    $directory_results/AllSubjects_ROI.1D'[1]'\' \
    # ... and so on through '[22]' for subject 23
    > $directory_results/Transpose_ROI.1D

1dcat \
    $directory_results/Transpose_ROI.1D\' > $directory_results/ROI.txt


The script fails at the very last step, that is:


1dcat \
    $directory_results/Transpose_ROI.1D\' > $directory_results/ROI.txt

Is this indeed a bug? Please let me know what you think, and whether there is a solution to “save” the last subject.

Update: I just realized that I can simply process the “last” subject manually, i.e., by running 3dmaskdump again just for this subject. In a second step, I can then add this subject’s results to the .1D or .txt result file that contains the results of all other subjects.
Of course, it would still be nicer and cleaner if the script simply ran for all subjects.
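
For illustration, something along these lines should do it (just a rough bash sketch; the file names follow my script above, and it assumes XYZ_file+tlrc has a single sub-brick and that ROI.txt already holds the values of the first 22 subjects, one value per line):

3dmaskdump \
    -mask $directory_ROIs/ROI.nii \
    -noijk \
    $directory_PD/subject23/XYZ_file+tlrc \
    > $directory_results/Subject23_ROI.1D

# with a single input dataset, 3dmaskdump writes one value per line,
# so this column can simply be appended below the other subjects' values
cat $directory_results/Subject23_ROI.1D >> $directory_results/ROI.txt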

Thanks,
Philipp

Hi, Philipp-

Well, it’s not a bug; it’s a maximum buffer length defined in the code, which determines the maximum number of columns a *.1D file can have and still be read in.

This means that you have a single line with over 5 million characters; I guess each of your datasets is a time series of considerable length. Can I ask what you are aiming to do with this gargantuan line? Is there a way to compress it earlier in your calcs? For example, you could use a for-loop and 1dcat/1dtranspose them separately.
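
As a starting point, a loop like the one below might do it (just a sketch in bash; the file names are the ones from your script, the loop bounds assume 23 subject columns, and the per-column output names are made up):

# handle each subject's column separately instead of gluing all 23
# transposed columns into one enormous single-row file
for i in $(seq 0 22); do
    # pull out column $i and transpose it into its own small file
    1dcat $directory_results/AllSubjects_ROI.1D"[$i]" \
        > $directory_results/subject_col_${i}.1D
    1dtranspose $directory_results/subject_col_${i}.1D \
        $directory_results/subject_row_${i}.1D
done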

Also, I don’t think this would be affected by the number of voxels in your ROI mask; this is really about the number of columns you are attempting to put into the file (not rows, which can exceed this buffer maximum). Or is it the 1dcat command that is producing the error, not 3dmaskdump?

–pt

Hi Paul,

Oh yes, you are right. The error must stem from 1dcat and not from 3dmaskdump.
As I understand you, the problem comes from a limitation of the code that handles .1D files; it is not a bug, just a “limitation”, so to speak.

You asked why I am working with such a massive number of voxels. I have two ROIs that span the whole cerebral cortex; consequently, they are very large and contain a huge number of voxels.

I computed two variables/measurements at the voxel level. My aim is to compare these voxel-based variables across all subjects using a Pearson correlation. That’s why I am dealing with two .1D (or .txt) files, each of which contains a massive number of numeric values.

But you are right: extracting the values for each subject individually, then loading these files into Python via a for loop and finally concatenating them all into one list or array, also solves the problem.

Philipp

Hi Philipp,

I do not see where the correlations are being computed here. Are the correlations across subjects, and one correlation per voxel, or is this a spatial correlation (e.g. 3ddot -demean)?
Either way, it might make sense to do this in volume space.
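
For a single spatial correlation per subject, done in volume space, that could look roughly like the line below (a sketch only; the dataset names are placeholders, and the exact options should be checked against 3ddot -help):

3ddot -demean -mask $directory_ROIs/ROI.nii \
    variableA_subject1+tlrc variableB_subject1+tlrc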

- rick

Hi Rick,

The Pearson correlation is computed in Python using the two extracted AFNI data lists that stem from the two computed variables.
That is, two giant lists of voxel values are correlated against each other, yielding one correlation result.

You are right insofar as one could also correlate the data directly in AFNI, e.g. with 3dTcorrelate; the number of voxels (numeric values) per file would nonetheless remain the same.

I think that using a for loop, as Paul suggested (probably already in the AFNI shell script), to extract the values per subject, instead of extracting the values from all subjects into one file, is a good solution.
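
In shell terms, I am thinking of something roughly like this (just a sketch; the directory and file names follow my script, and the exact loop syntax may differ, e.g. foreach in tcsh):

# dump each subject's ROI values into its own small .1D file
for i in $(seq 1 23); do
    3dmaskdump \
        -mask $directory_ROIs/ROI.nii \
        -noijk \
        $directory_PD/subject${i}/XYZ_file+tlrc \
        > $directory_results/Subject${i}_ROI.1D
done
# the per-subject files are then read and concatenated in Python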

Anyway, my problem is solved!

Philipp