I’m doing seed correlation analysis of resting-state fMRI data. I’m having problems with the bandpass filtering for some of my subjects… My command is:
3dBandpass -prefix vol0000_warp_merged_bp.nii.gz -mask <MASK_FILE> -despike -ort <NUISANCE_TABLE> 0.005000 0.200000 <INPUT_DATA>
and I get the following error message:
++ 3dBandpass: AFNI version=AFNI_16.0.00 (Jan 1 2016) [64-bit]
++ Authored by: RW Cox
++ Number of voxels in mask = 2143696
++ Data length = 1058 FFT length = 1080
- bandpass: ntime=1058 nFFT=1080 dt=0.85 dFreq=0.00108932 Nyquist=0.588235 passband indexes=5…184
++ Loading input dataset time series
++ Checking dataset for initial transients [use '-notrans' to skip this test]
- No widespread initial positive transient detected
Fatal Signal 11 (SIGSEGV) received
Bottom of Debug Stack
** AFNI version = AFNI_16.0.00 Compile date = Jan 1 2016
** [[Precompiled binary linux_openmp_64: Jan 1 2016]]
** Program Death **
** If you report this crash to the AFNI message board,
** please copy the error messages EXACTLY, and give
** the command line you used to run the program, and
** any other information needed to repeat the problem.
** You may later be asked to upload data to help debug.
** Crash log is appended to file /homes_unix/pepe/.afni.crashlog
Return code: 1
Interface Bandpass failed to run.
I’m not very familiar with AFNI; I’m using a command that a colleague gave me. I have processed about 350 subjects and only get this error for 5 of them.
Thanks in advance,
It is hard to tell without the data in hand. But let
me offer some random comments…
Your AFNI version is almost 2 years old. I guess if
you are just running 3dBandpass, that does not matter.
I see you are including a -ort option. That is good.
How many terms are in that?
Band passing (either before or after regression of
signals of no interest) does not work well with censoring.
They should be done at the same time, as afni_proc.py does.
Your TR seems to be 0.85, is that right? Using a
low-pass cutoff of 0.2 keeps 0.2/0.588235 = 0.34 of your
original DoF, which isn’t too bad (losing 66% of them).
But to be clear, out of 1058 time points, that is like
using 700 regressors of no interest for band passing.
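As a sanity check, the numbers in your log and the DoF estimate above can be reproduced with a small sketch (plain Python arithmetic, not AFNI code; all values are taken from the log and command line):

```python
# Values from the 3dBandpass log above.
TR = 0.85      # seconds (dt in the log)
ntime = 1058   # time points
nFFT = 1080    # zero-padded FFT length

dFreq = 1.0 / (nFFT * TR)     # frequency step: ~0.00108932 Hz
nyquist = 1.0 / (2.0 * TR)    # ~0.588235 Hz

fbot, ftop = 0.005, 0.200     # passband from the command line
ibot = round(fbot / dFreq)    # lowest kept FFT index  -> 5
itop = round(ftop / dFreq)    # highest kept FFT index -> 184

kept = ftop / nyquist                 # ~0.34 of the DoF retained
lost = round(ntime * (1.0 - kept))    # ~700 "regressors of no interest"
```

The computed passband indexes (5…184) match the log line exactly.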
Perhaps my main question is, are you censoring before
(or even after) this?
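If you do censor, one way to apply band-passing, nuisance regression, and censoring in a single projection is 3dTproject. A hedged sketch, reusing the placeholders from your command (the censor file name is hypothetical):

```shell
# Band-pass, nuisance regression, and censoring in one projection,
# so the steps stay mutually consistent. File names are placeholders;
# censor_file.1D is a hypothetical 0/1 censor time series.
3dTproject -input <INPUT_DATA>                   \
           -prefix vol0000_warp_merged_bp.nii.gz \
           -mask <MASK_FILE>                     \
           -ort <NUISANCE_TABLE>                 \
           -passband 0.005 0.200                 \
           -censor censor_file.1D
```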
Would it be possible for you to upload the files needed
to duplicate the failed command (the mask, nuisance file
and EPI input)?
Thanks for your answer. I have uploaded the input files to my Dropbox (the EPI file was too large); you should be able to access them. Let me know if that’s not the case.
So yes, I’m only running 3dBandpass.
I have a 26x1058 regressor table (24 for motion + 1 for WM + 1 for CSF).
I’m not doing any censoring before or after this band-passing …
Yes my TR = 0.85
Ok, it’s still a lot, I should really keep that in mind
Thanks, Marie. I will give that a try. At a glance though,
that is a big dataset. Perhaps you are just running out of
RAM. Was it really acquired at 1x1x1? Maybe it does
not need to be so big.
Anyway, I will try it out.
Yes, I thought about that too and tried to run it on the most powerful computer I have (the one with the most RAM) and it did not work.
The data were acquired at 2x2x2 and upsampled to the T1 resolution at some point. I will try to recompute the data with the original resolution to see if it solves the problem with the resources I have
Thank you very much !
It ran successfully on my computer, using about 30 GB of RAM.
I also ran it via 3dTproject to see if that might be better. But they
both use the same amount of memory (converting the results to
float, so 10 GB for input, 20 GB for output).
In general, we (e.g. via afni_proc.py) do not upsample data when
going to standard space (or when just aligning to the anat). The
approximate original resolution is kept. If you ran this with 2 mm^3
data, it would only need about 4 GB RAM, and would probably
finish quickly (especially using 3dTproject, though that has no
-despike preprocessing option).
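A back-of-envelope version of those numbers (my assumptions: short-int input and float output, chosen to be consistent with the ~10 GB / ~20 GB figures quoted above, not a statement about AFNI internals):

```python
# Hedged memory estimate: time-series storage scales as
# grid_voxels * ntime * bytes_per_value.
ntime = 1058

out_bytes = 20e9                        # float (4-byte) output reported above
voxels_1mm = out_bytes / (ntime * 4)    # ~4.7 million grid voxels at 1 mm
in_gb = voxels_1mm * ntime * 2 / 1e9    # 2-byte shorts in -> ~10 GB

# Going from 1 mm to 2 mm voxels cuts the count by 2^3 = 8:
voxels_2mm = voxels_1mm / 8
total_2mm_gb = voxels_2mm * ntime * (2 + 4) / 1e9   # input + output together
```

At 2 mm the combined input + output footprint comes out under 4 GB, matching the "about 4 GB RAM" estimate above.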
I ran it again to check how much RAM I am using, and it’s actually 20 GB (then it crashes), while I reserved 30 GB on my computer cluster. So I’m not sure it’s really a memory problem.
I could check with the 2x2x2 mm data by next week to see what happens, but I’m not the only one working with those data, so I probably won’t be able to change the procedure and work at 2x2x2 mm. I will ask anyway, to be sure we really need to work at 1x1x1 mm!
How much RAM is there on the computer that is crashing?
If it only has maybe 24 GB, with other programs running,
it would make sense to crash at 20 GB. Even 30 GB might
not be quite enough.
Did your computer cluster crash at 30?
You could also try running a version that is a little newer.
But it sounds like you may still just not have enough RAM.
Sorry for my late answer; I was not able to work on this sooner.
I have 120 GB on my computer. I reserved 30 GB just for this process (to be sure I don’t have conflicts with other programs) and it crashes; it actually only used 20 GB. I also tried reserving larger amounts of memory for this process (up to 60 GB), with the same result. I also tried on the 2 other computers I have access to, with the same results (I reserved 40 GB for band-passing).
I tried with a newer version (17.2.16) and it succeeded, so I guess whatever (memory?) problem I had was fixed in the newer version.
Thank you very much for all the suggestions; they helped me a lot.
That is great, Marie, thanks for the update!