out of memory during afni_proc.py

Hi AFNI experts.
I ran afni_proc.py, which created the proc.sub script. Then I tried to run it with tcsh -xef proc.sub. However, it printed the error message below (see 1.png) after several steps.
My AFNI version is AFNI_17.3.01, running under CentOS 5.11 Final (x64), and the server has ~40 GB of memory.

What is the size of the input? While 1200 is a lot of time points,
3dDespike is a fairly low-overhead program.

In any case, 40 GB or not, you certainly seem to be running
out of memory. What is the output from these commands?

ls -l pb00*.BRIK
free -m

Note that the amount of memory in use will fluctuate. If there
are lots of other processes running on that machine, it may
fail at any point. The ‘top’ command might give you an idea
what is going on (with other programs).

  • rick

The input is ~1.2 GB of resting-state data with 1200 volumes.
The output of those commands is in 2.png.
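For scale, a back-of-envelope sketch of the peak memory such a step might need (the "holds a float copy of the input and of the output at once" model is an assumption for illustration, not something stated in the thread):

```shell
# Rough peak-memory estimate for a 1.2 GB short-valued dataset,
# assuming the program keeps float copies of input and output in RAM.
bytes_in=1200000000            # ~1.2 GB on disk (assumed 16-bit shorts)
float_copy=$((bytes_in * 2))   # the same voxels stored as 4-byte floats
peak=$((float_copy * 2))       # one input copy + one output copy
echo "peak ~ $((peak / 1000000000)) GB"   # prints: peak ~ 4 GB
```

Even with generous assumptions, that is a few GB, well under 40 GB, which is why a malloc failure here is surprising.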

Have you tried just running it again? And to be sure,
are you running just one subject at a time?

I don’t see any reason for a malloc failure here, so it
seems likely that there were just other things going
on. If you run it again does it get farther?

  • rick

I have tried it several times, one subject at a time.
It is always terminated at that step…

I have split my resting data into two parts, and then the program worked. The two preprocessed datasets will be concatenated with 3dTcat for the functional connectivity analysis. Is that OK?
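The concatenation step described above might look like the following (the prefixes and results-directory names here are hypothetical placeholders, not from the thread; substitute your actual per-part outputs):

```shell
# Concatenate the two preprocessed halves along the time axis.
# part1.results/... and part2.results/... are assumed example names.
3dTcat -prefix rest.all \
       part1.results/errts.part1+tlrc \
       part2.results/errts.part2+tlrc
```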

In the case of resting state, that should be mostly okay, with
just a question of how the 2 runs are registered. Right now,
you are probably relying on registration to the anatomy (which
might then be to the template). It is very strange that you would
get malloc errors for a 1 GB dataset on a system with 40 GB,
though. I wonder if you need AFNI_NOMMAP set to YES…

  • rick
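If AFNI_NOMMAP is worth trying, it can be set before re-running the proc script. A minimal sketch (the ~/.afnirc route assumes the standard ***ENVIRONMENT section of that file):

```shell
# Tell AFNI to read datasets into RAM instead of memory-mapping them.
# In tcsh (the shell the proc script runs under):
#   setenv AFNI_NOMMAP YES
# Or persistently, add this line to ~/.afnirc under ***ENVIRONMENT:
#   AFNI_NOMMAP = YES
# sh/bash equivalent:
export AFNI_NOMMAP=YES
```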