I am noticing some warning messages that I don’t fully understand, and I am worried they are affecting the output. I run 3dDeconvolve after using 3dTcat to combine the functional runs (following intensity normalization) into one file. The 3dDeconvolve command I’m using is:
3dDeconvolve \
    -input $data_dir/$s/fmriprep/sub-${s}/func/smooth_scale_all \
    -polort A \
    -jobs 1 \
    -ortvec $data_dir/$s/fmriprep/sub-${s}/func/sub-${s}*confounds_noheader_all.1D \
    -local_times \
    -concat '1D: 0 870 1740 2610 3480 4350 5220 5513' \
    -num_stimts 2 \
    -stim_times_AM2 1 $data_dir/$s/derivatives/Timing_Files/AM/mut.txt 'dmBLOCK(1)' -stim_label 1 mut \
    -stim_times_AM2 2 $data_dir/$s/derivatives/Timing_Files/AM/neu.txt 'dmBLOCK(1)' -stim_label 2 neu \
    -num_glt 1 \
    -gltsym 'SYM: mut -neu' -glt_label 1 M-N \
    -fout -tout \
    -x1D $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}.xmat.1D \
    -x1D_uncensored $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}.uncensor.xmat.1D \
    -errts $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}_errts \
    -fitts $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}_fitts \
    -cbucket $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}_Decon_betas.nii.gz \
    -bucket $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}_Decon.nii.gz
This is some of the output I’m getting on the command line:
++ '-stim_times_AM2 1 /N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/mut.txt' has 1 auxiliary values per time point
++ '-stim_times_AM2 1': basis function model 'dmBLOCK(1)' uses 1 parameters,
out of the 1 found in timing file '/N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/mut.txt'
*+ WARNING: '-stim_times_AM2 1 /N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/mut.txt' has 0 amplitude modulation parameters; ==> converting to be like -stim_times_AM1
++ '-stim_times_AM2 1 /N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/mut.txt' will have 1 regressors
++ '-stim_times_AM2 2 /N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/neu.txt' has 1 auxiliary values per time point
++ '-stim_times_AM2 2': basis function model 'dmBLOCK(1)' uses 1 parameters,
out of the 1 found in timing file '/N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/neu.txt'
*+ WARNING: '-stim_times_AM2 2 /N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/neu.txt' has 0 amplitude modulation parameters; ==> converting to be like -stim_times_AM1
++ '-stim_times_AM2 2 /N/dc2/scratch/dlevitas/fMRI_data/std/002/derivatives/Timing_Files/AM/neu.txt' will have 1 regressors
++ 3dDeconvolve extending num_stimts from 2 to 31 due to -ortvec
++ 3dDeconvolve: AFNI version=AFNI_18.0.12 (Feb 5 2018) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ current memory malloc-ated = 3,528,121 bytes (about 3.5 million [mega])
++ loading dataset /N/dc2/scratch/dlevitas/fMRI_data/std/002/fmriprep/sub-002/func/sub-002_task-std_bold_space-MNI152NLin2009cAsym_preproc_smooth_scale_all.nii.gz
++ current memory malloc-ated = 3,528,121 bytes (about 3.5 million [mega])
++ STAT automask has 325321 voxels (out of 325325 = 100.0%)
++ Skipping check for initial transients
++ Imaging duration=365.4 s; Automatic polort=3
++ -stim_times using TR=0.42 s for stimulus timing conversion
++ -stim_times using TR=0.42 s for any -iresp output datasets
++ [you can alter the -iresp TR via the -TR_times option]
++ ** -stim_times NOTE ** guessing GLOBAL times if 1 time per line; LOCAL otherwise
++ ** GUESSED ** -stim_times_AM2 1 using LOCAL times
*+ WARNING: '-stim_times_AM2 1' (LOCAL) run#7 has 5 times outside range 0 .. 122.64 [PSFB syndrome]
 dataset TR being used is 0.42 s -- unusable times follow
 135.32 194.16 211.6 255.4 289.36
++ ** GUESSED ** -stim_times_AM2 2 using LOCAL times
*+ WARNING: '-stim_times_AM2 2' (LOCAL) run#7 has 4 times outside range 0 .. 122.64 [PSFB syndrome]
 dataset TR being used is 0.42 s -- unusable times follow
 124.36 172.48 231.33 318.64
The first set of warnings relates to stim_times_AM2. I am interested in both duration and amplitude modulation (which is why I’m using stim_times_AM2); however, the warning says the program will fall back to stim_times_AM1. Is there a way to prevent this?
The second set of warnings pertains to the times in my run #7 that are apparently outside the range 0 .. 122.64. This is confusing to me because my run #7 contains the same number of volumes (870) as the previous ones, so unless there is an issue with my timing files, is there something else I need to do?
In order to evaluate the timing files, it would help to see
at least part of one. 3dDeconvolve is not finding any
amplitude modulation parameters, so there is likely a problem
there.
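For reference, the married timing format attaches the amplitude(s) with '*' and the duration with ':'. A quick way to see how one entry decomposes, using only shell parameter expansion (illustrative; AFNI of course does its own parsing):

```shell
# Decompose a married-timing entry of the form time*amp:dur
event='10.15*1.83:16.78'
onset=${event%%\**}                   # text before '*'
amps=${event#*\*}; amps=${amps%%:*}   # between '*' and ':'
dur=${event##*:}                      # text after ':'
echo "$onset $amps $dur"              # -> 10.15 1.83 16.78
```

If the '*' part is missing from every entry, there are no amplitude modulators for 3dDeconvolve to find.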
Regarding run 7, the last -concat offset, 5513, is not a
multiple of 870. It makes run 7 only 293 time points long
(5513-5220), which is about 123 s (given that TR=0.42 s).
If run 7 is complete (870 time points), then run 8 should
begin at index 6090.
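The offsets are easy to sanity-check in the shell (assuming 8 complete runs of 870 TRs each, as described above):

```shell
# Expected -concat offsets for 8 complete runs of 870 TRs each:
echo $(for i in 0 1 2 3 4 5 6 7; do echo $((i * 870)); done)
# -> 0 870 1740 2610 3480 4350 5220 6090

# Run 7 as originally listed is only this many TRs long:
echo $((5513 - 5220))    # -> 293
```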
The reason I’m also interested in amplitude modulation is that my task involves presenting neutral and aversive images, and I worry that habituation to the aversive stimuli over the course of the experiment will attenuate the response amplitude. Assuming this logic is correct, it looks like I would need to adjust my timing files to include the AM parameter(s). In one of your responses, you provided the example '10.15*1.83:16.78', and the 3dDeconvolve help file provides the example '30*5,3:12'. My question is: is there a mathematical formula or rule of thumb for determining the AM parameter(s)? Or would I be better off just focusing on duration modulation instead?
I have one last issue that I forgot to include in the initial post. Since my input dataset is very large (6383 sub-bricks), I’m getting the following memory error:
++ total shared memory needed = 16,784,167,400 bytes (about 17 billion [giga])
++ current memory malloc-ated = 4,223,042 bytes (about 4.2 million [mega])
** FATAL ERROR: Can't create shared mmap() segment
** Unix message: Cannot allocate memory
I’m running this on my university’s HPC, where I requested 20 GB of virtual memory, but the error still persists.
If you are just looking for attenuation, it is a little hard to model,
since the modulator would just increase (or decrease) and the
only question would be the shape. Since the exact shape is
very hard to predict, and since the modulator is just linear, it
is hard to capture.
You might break it into early and late attenuation cases, which
is much easier to model. But the model still depends on when
you think the attenuation effect is changing most rapidly. That
might be good to save for a little later in any case.
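If you do try the early/late split, the bookkeeping is simple; e.g., one run's onsets (one line per run in AFNI timing files) divided at the run midpoint (182.7 s of the 365.4 s run; the midpoint criterion is just an assumption for illustration):

```shell
# Split one run's onsets into early/late halves at 182.7 s
line='10.2 55.0 190.7 301.3'
early=$(echo "$line" | tr ' ' '\n' | awk '$1 < 182.7')
late=$(echo "$line" | tr ' ' '\n' | awk '$1 >= 182.7')
echo $early    # -> 10.2 55.0
echo $late     # -> 190.7 301.3
```

The early and late lists would then go into separate timing files, each with its own regressor.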
Regarding memory, it looks like you are not even getting off
the ground with memory mapped data. Try running it again
with no memory mapping (set AFNI_NOMMAP to YES).
Also, it is not necessary to have 3dDeconvolve output the
fitts data. It is easy to compute that afterwards, and it drops
the memory requirement by about 1/3.
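For reference, the fit can be recovered afterwards with 3dcalc, since fitts = input - errts (a sketch using the file names from the command above; the errts view suffix, +tlrc here, may differ on your system):

```shell
# Recompute the fitted time series as (input - residuals)
3dcalc -a $data_dir/$s/fmriprep/sub-${s}/func/smooth_scale_all \
       -b $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}_errts+tlrc \
       -expr 'a-b' \
       -prefix $data_dir/$s/fmriprep/sub-${s}/func/zsub-${s}_fitts
```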
Since it seems that modeling attenuation is a bit difficult (particularly since I don’t have a great idea of when or how it would affect the response amplitude), would it be more sensible to focus on duration modulation instead? If so, I can switch the stim_times lines to the following:
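That is, switching _AM2 to _AM1 and pointing at the duration-modulated timing files (these lines match the full command echoed later in the thread):

```shell
-stim_times_AM1 1 $data_dir/$s/derivatives/Timing_Files/DM/mut.txt 'dmBLOCK(1)' -stim_label 1 mut \
-stim_times_AM1 2 $data_dir/$s/derivatives/Timing_Files/DM/neu.txt 'dmBLOCK(1)' -stim_label 2 neu \
```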
If I do it this way, I assume my timing files are now properly formatted.
Regarding the memory mapping, I tried adding this on the command line:
AFNI_NOMMAP=YES ; export AFNI_NOMMAP
before getting this error…
*** MCW_malloc(1301300) from 3dDeconvolve.c#5606 FAILS!
*** current total usage=221187614 bytes
++ current memory malloc-ated = 221,187,614 bytes (about 221 million [mega])
** ERROR: Memory allocation for output sub-bricks fails!
** ERROR: Have allocated 216,015,800 bytes (about 216 million [mega]) for output, up to now
** ERROR: Potential lenitives or palliatives:
++ Use 3dZcutup to cut input dataset into
smaller volumes, then 3dZcat to put
the results datasets back together.
++ Reduce the number of output sub-bricks.
++ Use a system with more memory and/or swap space.
** FATAL ERROR: Alas, 3dDeconvolve cannot continue under these circumstances.
** Program compile date = Feb 5 2018
Am I able to make this change directly in the command line (using bash shell)? I’m using the AFNI version on my university’s server, so I don’t have permission to alter the .afnirc file.
Stick with _AM1; this is not an IM case (where you would
want a beta estimate for every event).
That memory error is disturbing. Memory allocation is
failing at just over 200 MB, while the error in your previous
post showed 3dDeconvolve expecting to need 17 GB.
How many jobs is this using? You might want to keep
that small (1?), since the program requires so much memory.
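As for setting the variable in bash without touching .afnirc, either form works:

```shell
# Set it for the current shell session:
export AFNI_NOMMAP=YES
echo "$AFNI_NOMMAP"    # prints YES

# ...or set it for a single command only:
# AFNI_NOMMAP=YES 3dDeconvolve ...
```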
I re-ran everything, requesting one core (job), 20 GB of virtual memory, and a wall time of 2 hours. I was able to get the output from the -x1D, -x1D_uncensored, -cbucket, and -bucket options, but not the -errts output, due to this error:
Fatal Signal 15 (SIGTERM) received
Last STATUS: convert to p-value
mri_fdrize
mri_fdr_curve
THD_create_one_fdrcurve
THD_create_all_fdrcurves
3dDeconvolve main
Bottom of Debug Stack
** Command line was:
3dDeconvolve -input /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/sub-002_task-std_bold_space-MNI152NLin2009cAsym_preproc_smooth_scale_all.nii.gz -polort A -jobs 1 -ortvec /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/sub-002_task-std_bold_confounds_noheader_all.1D -local_times -concat '1D: 0 870 1740 2610 3480 4350 5220 6090' -num_stimts 2 -stim_times_AM1 1 /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/derivatives/Timing_Files/DM/mut.txt 'dmBLOCK(1)' -stim_label 1 mut -stim_times_AM1 2 /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/derivatives/Timing_Files/DM/neu.txt 'dmBLOCK(1)' -stim_label 2 neu -num_glt 1 -gltsym 'SYM: mut -neu' -glt_label 1 M-N -fout -tout -x1D /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/zsub-002.xmat.1D -x1D_uncensored /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/zsub-002.uncensor.xmat.1D -errts /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/zsub-002_errts -cbucket /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/zsub-002_Decon_betas.nii.gz -bucket /N/dc2/scratch/dlevitas/fMRI_data/std/002_noica/fmriprep/sub-002/func/zsub-002_Decon.nii.gz
** AFNI version = AFNI_18.0.12 Compile date = Feb 5 2018
** [[Precompiled binary linux_openmp_64: Feb 5 2018]]
** Program Death **
I assume this was because I ran out of wall time, so the job aborted prematurely. I’ll try again and request more processing time.
Moving forward, I’ll probably want to try 3dMEMA, whose help suggests using 3dREMLfit instead. If I pursue that route, would incorporating the -x1D_stop option reduce the amount of processing in 3dDeconvolve, since it would only be generating the matrix file?
Yes, 3dDeconvolve -x1D_stop will progress far enough to
run 3dREMLfit. It will create both an X-matrix and a usable
REML command script.
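The sequence would look roughly like this (a sketch; the 3dREMLfit options shown are illustrative, and you can instead just execute the REML script that 3dDeconvolve writes out):

```shell
# Build only the X-matrix and the REML script; no voxelwise regression is run:
3dDeconvolve ... -x1D zsub-002.xmat.1D -x1D_stop

# Then fit with generalized least squares using that matrix:
3dREMLfit -matrix zsub-002.xmat.1D \
          -input sub-002_task-std_bold_space-MNI152NLin2009cAsym_preproc_smooth_scale_all.nii.gz \
          -fout -tout \
          -Rbuck zsub-002_REML \
          -Rerrts zsub-002_REML_errts
```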
rick