"Killed" during 3dTrackID

Dear experts,

I am running deterministic tractography with a -netrois file that has 318 ROIs. I am getting the “Killed” message, and everything stops abruptly.

Some possibly relevant info: I’m running AFNI on a Linux virtual machine (8 cores / 32 GB RAM). I haven’t had memory issues or seen the “Killed” message before, but perhaps I was not running such computationally expensive tasks.

Below you will see the commands and the output before it stops abruptly. Could you let me know if this issue is related to my computer or the commands? Many thanks!

eji@nu1:~/projects/spins/preproc/fsl/subjects/w0/SPN01_CMH_0001_01/afni$ 3dTrackID \
    -mode DET \
    -netrois $dtipath/SPN01_CMH_0001_01/mri/parc5002dwi.nii.gz \
    -prefix 3dTrackID_det \
    -dti_in 3dDWItoDT_ \
    -logic OR

++ ROI logic type is: OR
++ Tracking mode: DET

++ Number of ROIs in netw[0] = 318
++ No refset labeltable for naming things.
++ SEARCHING for vector files with prefix '3dDWItoDT_':
FINDING: 'V1' 'V2' 'V3'
++ SEARCHING for scalar files with prefix '3dDWItoDT_':
FINDING: not:'DT' 'FA' 'L1' not:'L2' not:'L3' 'MD' 'RD'
++ Done with scalar search, found: 4 parameters
→ so will have 15 output data matrices.
++ With '-logic OR', the '-cut_at_rois' option will be automatically turned off (-> have '-uncut_at_rois').
++ Tracking progress count: start …
++ Done tracking, tidying up outputs…
++ From tracking, net[0] has 245598 tracks.
++ Network [0]: [21075]th WM bundle has only one tract!
++ Network [0]: [43818]th WM bundle has only one tract!


“Killed” specifically means that your computer has reached a memory limit, and your OS has stopped the process. Things that increase the memory demands are: a larger number of ROIs, N_roi (memory usage goes up as (N_roi)^2), and having more voxels (either via a larger mask or smaller voxels). But memory is a fairly hard-and-fast limitation on the computer.
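To give a rough feel for that (N_roi)^2 scaling, here is a back-of-envelope count of pairwise matrix entries for 318 targets; this is only illustrative, since actual 3dTrackID memory use also depends on mask size and track storage:

```shell
# Rough illustration of (N_roi)^2 growth: the number of ROI-pair
# entries (upper triangle plus diagonal) is N*(N+1)/2, and with
# 15 output data matrices there are ~15x that many values to hold.
nroi=318
npair=$(( nroi * (nroi + 1) / 2 ))
echo "ROI pairs: ${npair}"                    # 50721
echo "matrix entries: $(( 15 * npair ))"      # 760815
```

Doubling the number of targets would roughly quadruple those counts, which is why a large netrois file can push a 32 GB machine over its limit.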

I don’t see a mask used in your analysis, so you could try adding that and see if it helps.

What voxel size do you have?

I would also use this option, which should save memory:

-no_indipair_out  :Switch off outputting *INDIMAP* and *PAIRMAP* volumes.
                     This is probably just if you want to save file space;
                     also, for connectome-y studies with many (>100) target
                     regions, the output INDI and PAIR maps can be quite
                     large and/or difficult to write out. In some cases, it
                     might be better to just use '-dump_rois AFNI' instead.
                     Default is to output the INDI and PAIR map files.

You might also want to add this option, to output volumetric datasets of each WM connection ROI that you find (as compressed NIFTI):

-nifti -dump_rois AFNI

(Try this after you get the memory problem solved; each output file is small, so this shouldn’t create a big memory burden for your computer.)
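For concreteness, your original command with those two options folded in might look like the following; this is a sketch reusing the paths from your session, and assumes $dtipath is set as before:

```shell
# Sketch: the earlier deterministic command, plus the memory-saving
# '-no_indipair_out' and the '-nifti -dump_rois AFNI' output options
# discussed above. Paths/prefixes are the ones from your run.
3dTrackID \
    -mode     DET \
    -netrois  $dtipath/SPN01_CMH_0001_01/mri/parc5002dwi.nii.gz \
    -prefix   3dTrackID_det \
    -dti_in   3dDWItoDT_ \
    -logic    OR \
    -no_indipair_out \
    -nifti \
    -dump_rois AFNI
```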

On a tracking note, I would expect that with a network you would want “-logic AND”, to find connections between pairs of regions?

It is possible that if you want to track among so many ROIs at once that you would need to use a machine with larger memory, but there may be ways to reduce the memory burden.


Thanks Paul! This is all very helpful. A couple questions before I try your suggestions:

Regarding the addition of a mask, I’m trying to understand what type of mask would be most appropriate. Do you mean a white matter skeleton? Or perhaps a whole brain mask of that subject? Any hints would be helpful.

(Regarding voxel size you asked, it is 2x2x2.)

Regarding -logic (AND/OR), I’m interested in the FA of any tracts passing through the rois and not looking at connectivity between regions. So I think I should keep it as -logic OR?

OK, sure:

Re. “Regarding the addition of a mask, I’m trying to understand what type of mask would be most appropriate. Do you mean a white matter skeleton? Or perhaps a whole brain mask of that subject?”
I think just a whole brain mask will be best to start with. If we can’t get the memory usage down low enough, a possible way to go would be to:

  1. Make a mask where FA>0.2 (assuming FA>0.2 is your tracking threshold, which is standard for human subjects > 5yrs old), and then
  2. Use 3dmask_tool or 3dROIMaker to inflate it by 1 voxel.
    Why would this second approach be useful? It would be the tightest mask that still contains aaalllll the trackable WM volume (where tracts can go to/from), as well as aaallll the bordering regions where your trackable target ROIs are (only targets bordering the trackable WM can be reached by tracts).

Re. 2 mm isotropic voxel size: OK, that seems quite standard. Fine.

Re. “Regarding -logic (AND/OR), I’m interested in the FA of any tracts passing through the rois and not looking at connectivity between regions. So I think I should keep it as -logic OR?”
Oh, OK, then you would want “OR”, if you aren’t interested in pairwise connections.
Note: this fact opens up another possibility: you don’t need to track all target regions at once, since the results of any individual tracking will be independent of having other targets around. Soooo, one way to lower the memory load of each individual tracking would be to track subsets of target regions at a time. (I think you could actually do this all in a single 3dTrackID run, but since deterministic tracking is so fast, that shouldn’t matter very much.) For example, you could do the following (I am assuming that your target ROIs are numbered 1-318-- they don’t have to be; you could adjust the ROI selection ranges below, and replace DSET_OF_ALL_TARGETS with your actual file name):


set dset_targets = DSET_OF_ALL_TARGETS  # assume this is single vol
set dset_pref    = o.subset    # some prefix for new ROI dsets

# what is max integer value in target dset
set max_int = `3dinfo -max ${dset_targets}`

# what is that divided into 4 (and round up a bit)
set nsubset   = 4
@ nsubsetm1   = ${nsubset} - 1
set step_size = `echo "scale=3; 1/${nsubset}" | bc`

set all_rbot = ( 1 ) 
set all_rtop = ( )
foreach ii ( `seq 1 1 ${nsubsetm1}` )
    set val  = `echo "scale=0; ${step_size} * ${ii} * ${max_int}/1.0" | bc`
    @ valm1  = ${val} - 1
    set all_rbot = ( ${all_rbot} ${val} )
    set all_rtop = ( ${all_rtop} ${valm1} )
end
set all_rtop = ( ${all_rtop} ${max_int} )

echo "++ Max integer is    :  ${max_int}"
echo "++ Step size will be : ${step_size}"
echo "++ All lower range values: ${all_rbot}"
echo "++ All upper range values: ${all_rtop}"

# get labeltable info from original dset
set lab_tab = `3dinfo -labeltable "${dset_targets}"`

foreach idx ( `seq  1 1 ${nsubset}` )

    set rbot = ${all_rbot[${idx}]}
    set rtop = ${all_rtop[${idx}]}

    set nn = `printf "%05d" ${idx}`               # zeropadded counters
    set dset_subset = ${dset_pref}_${nn}.nii.gz   # new target dset filename

    # NB: using double quotes matters here!
    3dcalc                                             \
        -a       "${dset_targets}"                     \
        -expr    "within(a,${rbot},${rtop})"           \
        -prefix  "${dset_subset}"                      \
        -datum   short

    # copy labeltable from original dset, if it exists
    if ( "${lab_tab}" != "NO_LABEL_TABLE" ) then
        3drefit -copytables "${dset_targets}" "${dset_subset}"
    endif

    # make auto-integer colorbar in AFNI GUI, for easier visualization
    3drefit -cmap INT_CMAP "${dset_subset}"

    # Could put 3dTrackID command for ${dset_subset} here
end


Note that you can put your 3dTrackID command in the loop here; for slower tracking types, like full probabilistic, you could concatenate the dsets of targets so that only one tracking run is needed… but since deterministic tracking is fast, looping is fine here.


Hi Paul,

I tried the first suggestion of using a whole brain mask. That didn’t solve the memory problem.

I’m now onto your second suggestion of creating an FA mask and inflating it by 1 voxel. My mask doesn’t look correct, likely because I have misunderstood the commands to get there. Perhaps you could help identify where I’ve gone wrong? Big thank you!

-prefix 3dDWItoDT.nii.gz
-max_iter 10
-max_iter_rw 10
-bmatrix_FULL bvecs_matA.txt

Inflate by 1 voxel:
-inset 3dDWItoDT_FA.nii.gz
-wm_skel 3dDWItoDT_FA.nii.gz
-skel_thr 0.2
-inflate 1
-prefix mask/FA02


It would actually be a 2 step process to make the inflated mask:

  • binarize where FA>0.2
  • inflate that binary mask:

Using your file names, it would be:

# make binary mask where FA>0.2
3dcalc  \
    -a 3dDWItoDT_FA.nii.gz \
    -expr 'step(a-0.2)'  \
    -prefix  3dDWItoDT_FA_02.nii.gz

# inflate FA>0.2 mask by 1 voxel, using face-only voxelwise expansion
3dROIMaker    \
    -nifti \
    -inset 3dDWItoDT_FA_02.nii.gz \
    -refset 3dDWItoDT_FA_02.nii.gz \
    -inflate 1 \
    -neigh_face_only \
    -prefix 3dDWItoDT_FA_02_infl1_FaceOnly 

# ... or inflate FA>0.2 mask by 1 voxel, using either face or face+edge voxelwise expansion
3dROIMaker    \
    -nifti \
    -inset 3dDWItoDT_FA_02.nii.gz \
    -refset 3dDWItoDT_FA_02.nii.gz \
    -inflate 1 \
    -neigh_face_edge \
    -prefix 3dDWItoDT_FA_02_infl1_FaceEdge 

… where the “-refset …” is necessary to keep the inflated output (the GMI file) all unit-valued.

Now, the interesting thing about this is: on the test dataset where I tried this on my computer, the number of voxels in the inflated WM mask was 139811 and 174266 voxels, respectively (face+edge expansion made the larger inflated mask), while the number of voxels in the whole brain mask was 168681. That is, the face+edge expansion ended up being larger than the whole brain mask. That last thing could be protected against by adding “-mask …” to the 3dROIMaker command, so that the expansion can’t push out beyond the brain mask. Note that this probably happened because the dataset I used had FA>0.2 voxels around the edge of the brain, which would not always happen, depending on processing. But importantly, this idea prooobably won’t end up saving a huge amount of memory in practice: the WM mask is typically about 30-40% of the brain volume, and inflating it is really going to lead to a very large fraction of the whole brain being included.
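For reference, the masked version of that inflation step might look like the following; this is a sketch, where “brain_mask.nii.gz” is a placeholder name for whatever whole brain mask you have, and the FA>0.2 mask is the one made above:

```shell
# Sketch: inflate the FA>0.2 mask by 1 voxel, but constrain the
# expansion to a whole brain mask so it cannot push outside the brain.
# 'brain_mask.nii.gz' is a placeholder for your own whole brain mask.
3dROIMaker    \
    -nifti \
    -inset   3dDWItoDT_FA_02.nii.gz \
    -refset  3dDWItoDT_FA_02.nii.gz \
    -mask    brain_mask.nii.gz \
    -inflate 1 \
    -neigh_face_edge \
    -prefix  3dDWItoDT_FA_02_infl1_FaceEdge_masked
```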

So, you can try this and see if it helps with your memory limits, but particularly since you are only using “OR-logic” connections (I’m curious-- why just these? Most often people are interested in the more constrained AND-logic ones), you could use my suggestion of tracking some target ROIs in separate groupings.


Great, thank you for detailing these steps! I did manage to make a mask.

In the meantime, the good news is that we increased the RAM on our virtual machine from 32GB to 128GB. That definitely solved the memory issue and I was able to run 3dTrackID successfully.

I forgot to ask earlier: should results differ if skull-stripping was not performed versus if it was performed? I realized I omitted this step and am wondering how that may affect my results.

Regarding the OR-logic: our goal is to create a morphometric similarity network similar to Seidlitz et al. Neuron 2018 that integrates several indices in gm and wm within hundreds of ROIs of approx equal size. In our case, for each of the 318 ROIs, we already extracted 7 gm features from FreeSurfer (cortical thickness, gm volume, etc) and we would like to extract wm features (specifically FA and MD) using AFNI tracking. These 9 indices will be compiled to form a morphometric similarity matrix, which will hopefully be more informative than either index alone. If you notice any flaw in using the OR-logic in this case, feel free to let me know :slight_smile:

Finally, I do have some questions about the output, but perhaps it’s better to post this in a new thread with a different subject name, as we have solved the memory issue?

Hi, Ellen-

That is great about getting an upgrade of memory on your computer.

Re. masking/skullstripping the data: it shouldn’t really matter in practice for your results; there shouldn’t be target ROIs outside the brain.
However, one thing to check: there can be FA>0.2 at the edge of, or outside, the brain, due to noise. Since those areas are so noisy, one typically doesn’t see tract-like things there. But this might be something to watch out for in your case, because you are using OR-logic tracking, which is not constrained by having 2 target ROI endpoints. It is worth keeping in mind as you visually check some results of tracking.
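If helpful, one quick way to check for such noisy FA>0.2 voxels outside the brain would be something like the following sketch, where “brain_mask.nii.gz” is again a placeholder name for your whole brain mask:

```shell
# Sketch: make a map of FA>0.2 voxels that fall OUTSIDE the whole
# brain mask, then count them. 'brain_mask.nii.gz' is a placeholder
# for your own whole brain mask.
3dcalc \
    -a      3dDWItoDT_FA.nii.gz \
    -b      brain_mask.nii.gz \
    -expr   'step(a-0.2)*not(b)' \
    -prefix FA_02_outside_mask.nii.gz

# count of surviving (non-zero) voxels
3dBrickStat -count -non-zero FA_02_outside_mask.nii.gz
```

A count near zero would suggest edge noise is not a big concern for your OR-logic tracking; a large count would argue for masking.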

Re. the purpose of OR-logic: OK, that is a fairly special case, and different from the most standard tracking purpose (which is to parcellate the WM skeleton similarly across subjects). OR-logic might work for that purpose. Some things to note about tracking in general: AND-logic tends to be more constrained and specific, and as such it is probably more consistent across a group study. Additionally, combining AND-logic with probabilistic tracking provides the most consistent results (results are less susceptible to the quirks of DWI noise). I haven’t looked a lot at OR-logic with mini-probabilistic or full-probabilistic tracking. It might be worth considering using probabilistic tracking with the OR-logic, perhaps with a reasonably high fractional threshold (say, 0.1 or 0.2). That might help with having more consistent results.
Additionally, you mention a “network”-- to me, this means edges between nodes, which could be tractographic connections between target ROIs. That would make AND-logic tracking seem to be useful?

Re. output questions: yes, probably having another thread would be useful. We have already started discussing some study design questions here, so things are a bit mixed… perhaps you could link to this discussion in the new thread.