3dLMEr trial-wise memory error

Hello - I am trying to use 3dLMEr to run a trial-wise LME analysis (N=90 subjects x 60 trials, extracted using an IM model).
Whenever I try to do this I get a memory error while reading in the images:
“Error: vector memory exhausted (limit reached?)”
Any suggestions? Or a way to tell how much memory might be needed?

way to tell how much might be needed?

Trial-level modeling is quite demanding on memory. You should be able to estimate the full memory requirement from each subject's file size.

Any suggestions?

Are the input data stored in the full output from the subject-level regression analysis? Estimate the total input size, and see whether your computer has enough memory to handle it. If the inputs are the full datasets, one possible solution is to extract just the individual betas using 3dbucket and then use those smaller files as input.
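As a back-of-envelope sketch of that estimate: assuming each in-mask value becomes an 8-byte double inside R, and that working copies inflate the footprint by roughly 3x (that multiplier is a guess, not a measured figure), the arithmetic looks like this:

```python
# Rough RAM estimate for trial-level 3dLMEr input.
# Assumptions (not measured): values are stored as 8-byte doubles in R,
# and working copies inflate the footprint by roughly 3x.
def estimate_ram_gb(n_subj, n_trials, n_voxels_in_mask,
                    bytes_per_value=8, overhead=3.0):
    total = n_subj * n_trials * n_voxels_in_mask * bytes_per_value
    return total * overhead / 1e9

# e.g., 90 subjects x 60 trials x ~150k in-mask voxels
print(f"{estimate_ram_gb(90, 60, 150_000):.1f} GB")
```

A tighter mask shrinks `n_voxels_in_mask` and therefore the estimate roughly proportionally.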

I'm in a similar situation. I notice that an OOM event can happen even after AFNI reports "Reading input files for effect estimates: Done!".

Adding together the file sizes of all the trial-level beta maps I'd like to analyze gives 10-25 GB depending on the contrast, but that assumes whole-brain maps. I assume that applying a smaller mask also factors into it? Is there a rough estimate of how much RAM one needs, given avg_effectmap_size x mask x other_memory_requirements_by_lme?

Relatedly, does increasing the number of threads (-jobs) factor into the total RAM requirement?

Also, I noticed that I tend to get a segfault after the 3dLMEr estimation finishes, during what I assume is the writing of the residuals. Is there anything that can be done about that other than further increasing memory?

Here are a couple of suggestions:

  • Eliminating the option for computing residuals would significantly reduce memory usage. Furthermore, these residuals aren't particularly useful.

  • Divide each of your input files into multiple slabs along the z-axis using 3dZcutup, run the analysis on each slab separately, and reassemble the results using 3dZcat at the end.
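For the second suggestion, a small helper can generate the per-slab command lines. The slab size, prefixes, and output names below are hypothetical placeholders; the actual 3dLMEr call on each slab would use your own model table and options:

```python
# Sketch: build 3dZcutup / 3dZcat command lines for slab-wise processing.
# Slab size, prefixes, and the "lme_" output naming are illustrative assumptions.
def slab_commands(dset, nz, slab):
    cut, results = [], []
    for i, b in enumerate(range(0, nz, slab)):
        t = min(b + slab - 1, nz - 1)          # last z-slice kept in this slab
        prefix = f"slab{i:02d}_{dset}"
        cut.append(f"3dZcutup -prefix {prefix} -keep {b} {t} {dset}")
        # hypothetical name of the 3dLMEr output for this slab
        results.append(f"lme_{prefix}")
    cat = f"3dZcat -prefix merged_{dset} " + " ".join(results)
    return cut, cat

cut_cmds, cat_cmd = slab_commands("stats.nii", 60, 20)
for c in cut_cmds:
    print(c)        # cut each input; then run 3dLMEr per slab
print(cat_cmd)      # finally, glue the per-slab results back together
```

Each slab then needs only a fraction of the full-volume memory footprint at any one time.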

Gang

Hi Gang,

Thank you!

  • Eliminating the option for computing residuals would significantly reduce memory usage. Furthermore, these residuals aren't particularly useful.

I thought we needed the residuals for estimating -acf via 3dFWHMx and then running 3dClustSim? If I want to estimate noise-only clusters, is there a better way to do it for the 3dLMEr output?

Current methods employing spatial cluster thresholding are excessively conservative, often resulting in substantial information loss, as explained in this article. You might want to explore an alternative visualization technique akin to what's presented in Fig. 1F of the paper, or as detailed in another paper.

Gang

  1. Thank you. I read and agree with the conclusions of both papers. My plan was to use 3dLMEr, calculate the cluster FPR via 3dClustSim, and use the translucent statistical thresholding advocated in the "Highlight, not hide" paper - but for this approach I would still need a way to estimate the -acf parameters.

  2. My other option - if I understand correctly - would be to abandon the voxel-wise analysis plan, focus on the mean ROI signal for each trial, and fit a multilevel Bayesian regression?

Voxel-level analysis offers computational convenience and avoids the complexities of region delineation. However, the modeling approach is inefficient because it does not take the hierarchical structure of the brain into consideration. As a result, stringent clustering is not a suitable methodology for reporting results, for several reasons.

First, cluster-based thresholding tends to be excessively conservative due to the questionable assumptions inherent in mass univariate modeling. Second, cluster boundaries can be arbitrary, lacking direct neurological relevance. Third, a cluster can extend across multiple anatomical regions, which causes problems when a single peak voxel is taken to represent the entire cluster; this can result in substantial information loss.

In common practice, statistical models tend to dominate the processes of model construction, result interpretation, and reporting, while the influence of domain knowledge is often downplayed. For a more appropriate approach, consider using a voxel-level threshold (e.g., 0.02) and a minimum cluster size (e.g., 20 voxels), as shown in Fig. 1F of the aforementioned paper. When combined with the translucent approach, this method helps maintain information continuity and highlights regions with strong evidence. Instead of allowing statistical evidence to unilaterally dictate reporting criteria, it's crucial to consider the continuum of statistical evidence in conjunction with domain-specific knowledge, including previous research findings and anatomical structures. This approach leads to more well-informed and robust conclusions.
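As a minimal illustration of the translucent ("highlight, don't hide") idea, opacity can ramp continuously with statistical evidence instead of switching off at a hard cutoff. The ramp endpoints below are arbitrary choices for the sketch, not values prescribed by the paper:

```python
import numpy as np

# Map |stat| to an opacity in [0, 1]: fully transparent below `soft`,
# fully opaque above `full`, with a linear ramp in between.
# The endpoints (1.0, 3.0) are illustrative, not prescribed values.
def opacity(stat, soft=1.0, full=3.0):
    alpha = (np.abs(np.asarray(stat, dtype=float)) - soft) / (full - soft)
    return np.clip(alpha, 0.0, 1.0)

print(opacity([0.5, 2.0, 4.0]))
```

The resulting alpha values can be fed to a plotting backend so that sub-threshold voxels stay faintly visible rather than being hidden entirely.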

Gang
