3dMVM -residuals error

Hi again,

Running 3dMVM on a 500 GB RAM system. I can’t tell if this is a memory (RAM) issue or something else.

3dMVM with -resid has worked before on this server with about 1800 subjects. Now I’m running with 2700 subjects. The results files are generated but not the residuals. I get this error:


*** caught segfault ***
address 0x7f79f8cbf820, cause 'memory not mapped'

Traceback:
 1: write.c.AFNI(filename, dset = brk, label = label, space = space,     note = note, origin = origin, delta = delta, orient = orient,     idcode = idcode, defhead = defhead, verb = verb, maskinf = maskinf,     scale = scale, addFDR = addFDR, statsym = statsym, view = view,     com_hist = com_hist, type = type, TR = TR, overwrite = overwrite)
 2: write.AFNI(lop$resid, out[, , , (NoBrick + 1):(NoBrick + (!is.null(lop$resid)) *     nrow(lop$dataStr)), drop = FALSE], label = NULL, defhead = head,     idcode = newid.AFNI(), com_hist = lop$com_history, type = "MRI_float",     scale = FALSE)
An irrecoverable exception occurred. R is aborting now ...
Segmentation fault

We are trying an analysis with quite a few covariates that we want to correct for, so this is the command:


3dMVM -prefix results  -jobs 15 -dataTable @table.txt -mask ../GM_mask.nii.gz -resid residuals \
-bsVars "PRS*MDDstatus*array+age+drinking+smoking+PC1+PC2+PC3+PC4+PC5+PC6+X_loc+Y_loc+Z_loc+X_fov+Y_fov+Z_fov" \
-qVars "age,drinking,smoking,PC1,PC2,PC3,PC4,PC5,PC6,X_loc,Y_loc,Z_loc,X_fov,Y_fov,Z_fov"

We are interested in PRS and MDDstatus.
array is a categorical variable that we want to “correct” for.
The following are all quantitative variables that we want to “correct” for as well:
age+drinking+smoking+PC1+PC2+PC3+PC4+PC5+PC6+X_loc+Y_loc+Z_loc+X_fov+Y_fov+Z_fov

This is a snippet from the table:


Subj PRS MDDstatus age drinking smoking array PC1 PC2 PC3 PC4 PC5 PC6 X_loc Y_loc Z_loc X_fov Y_fov Z_fov InputFile
1001008 HighPRS cMDD 49 4 0 one -12.0098 4.05069 -2.28523 1.3164200000000001 -4.06671 -1.6778 1.9178400000000002 58 -44.1087 167 222 210 ../../struc/s_2_T1_1001008_20252_2_0_HighPRS_cMDD_female_struc_GM_to_template_GM_mod.nii.gz
1001794 HighPRS CTL 77 3 0 one -14.285 4.48078 -3.0890400000000002 3.11305 11.2623 0.775404 -2.2406 74 -55.1268 177 240 228 ../../struc/s_2_T1_1001794_20252_2_0_HighPRS_CTL_female_struc_GM_to_template_GM_mod.nii.gz
1004009 HighPRS rMDD 56 4 0 one -12.4654 3.3837300000000003 -1.7773700000000001 1.47738 -8.0986 -3.11829 0.25755500000000003 64 -22.6047 161 222 181 ../../struc/s_2_T1_1004009_20252_2_0_HighPRS_rMDD_female_struc_GM_to_template_GM_mod.nii.gz

My installation looks good:


afni_system_check.py -check_all
-------------------------------- general ---------------------------------
architecture:         64bit 
system:               Linux
release:              3.10.0-1160.45.1.el7.x86_64
version:              #1 SMP Wed Oct 13 17:20:51 UTC 2021
distribution:         CentOS Linux 7.9.2009 Core
number of CPUs:       72
apparent login shell: bash
shell RC file:        .bashrc (exists)

--------------------- AFNI and related program tests ---------------------
which afni           : /usr/local/afni/afni
afni version         : Precompiled binary linux_centos_7_64: Oct 12 2021 
                     : AFNI_21.3.01 'Trajan'
AFNI_version.txt     : AFNI_21.3.01, linux_centos_7_64, Oct 12 2021
which python         : /opt/conda/3/2019.10/bin/python
python version       : 3.7.4
which R              : /bin/R
R version            : R version 3.6.0 (2019-04-26) -- "Planting of a Tree"
which tcsh           : /bin/tcsh

instances of various programs found in PATH:
    afni    : 1   (/usr/local/afni/afni)
    R       : 1   (/usr/bin/R)
    python  : 2 
      /opt/conda/3/2019.10/bin/python3.7
      /usr/bin/python2.7
    python2 : 1   (/usr/bin/python2.7)
    python3 : 2 
      /opt/conda/3/2019.10/bin/python3.7
      /usr/bin/python3.6


testing ability to start various programs...
    afni                 : success
    suma                 : success
    3dSkullStrip         : success
    uber_subject.py      : success
    3dAllineate          : success
    3dRSFC               : success
    SurfMesh             : success
    3dClustSim           : success
    3dMVM                : success

checking for R packages...
    rPkgsInstall -pkgs ALL -check : success

R RHOME : /usr/lib64/R

checking for $HOME files...
    .afnirc                   : missing
    .sumarc                   : missing
    .afni/help/all_progs.COMP : missing

------------------------------ python libs -------------------------------
** failed to load module PyQt4
-- PyQt4 is no longer needed for an AFNI bootcamp

++ module loaded: matplotlib.pyplot
   module file : /opt/conda/3/2019.10/lib/python3.7/site-packages/matplotlib/pyplot.py

-------------------------------- env vars --------------------------------
PATH = /usr/local/cuda-10.2/bin:/usr/local/cuda-10.2/bin:/opt/conda/3/2019.10/bin:/opt/conda/3/2019.10/condabin:/usr/lib/qt-3.3/bin:/bin:/usr/bin:/opt/thinlinc/bin:/usr/local/bin:/usr/bin/X11:/sbin:/usr/sbin:/usr/local/sbin:/opt/puppetlabs/bin:/usr/local/afni:/usr/local/afni

PYTHONPATH = 
R_LIBS = /home/robka95/R/x86_64-redhat-linux-gnu-library/3.6/

LD_LIBRARY_PATH = /usr/local/cuda-10.2/lib64

DYLD_LIBRARY_PATH = 
DYLD_FALLBACK_LIBRARY_PATH = 

------------------------------ data checks -------------------------------
data dir : missing AFNI_data6
data dir : missing AFNI_demos
data dir : missing suma_demo
data dir : missing afni_handouts
atlas    : found TT_N27+tlrc  under /usr/local/afni

------------------------------ OS specific -------------------------------
which yum            : /bin/yum
yum version          : 3.4.3


=========================  summary, please fix:  =========================
*  just be aware: login shell 'bash', but our code examples use 'tcsh'
*  shell bash: MISSING login setup file, e.g. .bash_profile
*  shell bash: consider sourcing (non-login) .bashrc from (login) .bash_profile
*  please run: cp /usr/local/afni/AFNI.afnirc ~/.afnirc
*  please run: "suma -update_env" for .sumarc
*  please run: apsearch -update_all_afni_help
*  insufficient data for AFNI bootcamp

Is this simply running out of memory or is it something else?

Bonus question:
In the results file, I see the expected main effects of PRS, MDDstatus, array and their interactions. I also see the result for each q-var. Do the results for the q-vars also take the full model into account? I.e., if a q-var like drinking is significant, that’s not biased by the fact that drinkers also tend to smoke, since we also have the smoking q-var in there?

Thanks!

I can’t tell if this is a memory (RAM) issue or something else.

Most likely it’s a memory issue.

Do the results for the q-vars also take the full model into account?

It assesses a “marginal effect” in the sense that all other variables are either averaged over (across levels, for factors) or controlled for (for quantitative variables). So, for example, a significant drinking effect is the partial effect of drinking with smoking and the other covariates held fixed, not a simple unadjusted association.

Thanks Gang!

I’m just thinking out loud, and maybe this is unreasonable. Big data is getting more and more popular; it’s not rare to run statistics on 2,000-10,000 subjects. 3dMVM seems to be able to run huge datasets, but the -resid option cannot. And this is even on a 500 GB RAM system, and larger systems are rare.

Are you planning to update these functionalities in the future to support big datasets? Maybe make them create temporary files on the hard drive instead of keeping everything in RAM? I may have no idea what I’m talking about, but I think this is a relevant topic for the future =).
FSL’s randomise also has a memory issue if you want to parallelize it (randomise_parallel). And running it normally with 6k subjects takes days for a single contrast. Big data has its issues…

Another 3dMVM -resid question:
When running a standard AFNI pipeline with group analysis in 3dMVM, the process has been to use the residuals from preprocessing to calculate the smoothness of the data → 3dFWHMx -acf → 3dClustSim (sketched below).
In this standard approach, would it also be OK to use the 3dMVM residuals instead of the pre-proc residuals?
Is this reasonable for
task: input is a stats-file sub-brick
rest: input is e.g. an R-map based on the residuals (errts)
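
For concreteness, the flow I have in mind is something like this (a minimal sketch; the residuals dataset name follows the -resid prefix from the command above, and the ACF values are placeholders to be copied from the 3dFWHMx output):

3dFWHMx -mask GM_mask.nii.gz -acf NULL residuals+tlrc      # prints the ACF parameters a b c
3dClustSim -mask GM_mask.nii.gz -acf 0.57 2.9 11.5 -prefix ClustSim      # a b c from the line above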

Thanks!

Robin,

Are you planning to update these functionalities in the future to support big datasets? Maybe make
them create temporary files on the hard drive instead of keeping everything in RAM?

It seems that the memory crash happened when 3dMVM was trying to save the results to the output file. One easy solution is to break the input files into a few chunks. For example, suppose there are 90 slices along the Z-axis; use 3dZcutup to create 3 separate files: one for slices 1-30, one for 31-60, and one for 61-90. Do this in a loop for all the input files. Then run 3dMVM on each of the three chunks separately. In the end, glue them back together with 3dZcat.
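
A rough sketch of that loop (the glob pattern and prefixes are placeholders; note that -keep takes 0-based slice indices, and the mask and the data tables need the same chunking):

for f in ../../struc/*_GM_mod.nii.gz ; do
    base=$(basename $f .nii.gz)
    3dZcutup -prefix ${base}_chunk1.nii.gz -keep  0 29 $f
    3dZcutup -prefix ${base}_chunk2.nii.gz -keep 30 59 $f
    3dZcutup -prefix ${base}_chunk3.nii.gz -keep 60 89 $f
done
# run 3dMVM once per chunk (with -prefix results_chunk1, etc.),
# then glue the outputs back together:
3dZcat -prefix results_all results_chunk1+tlrc results_chunk2+tlrc results_chunk3+tlrc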

In this standard approach, would it also be OK to use the 3dMVM residuals instead of the pre-proc residuals?

FWIW, there is no such approach that could be considered “standard”. You’re trying to draw a rigorous line in the sand, but there are so many factors up for debate.

It seems that the memory crash happened when 3dMVM was trying to save the results to the output file. One easy solution is to break the input files into a few chunks. For example, suppose there are 90 slices along the Z-axis; use 3dZcutup to create 3 separate files: one for slices 1-30, one for 31-60, and one for 61-90. Do this in a loop for all the input files. Then run 3dMVM on each of the three chunks separately. In the end, glue them back together with 3dZcat.

Thanks, good idea! And yes, it crashes when trying to save the resulting residuals: I do get the output files, but not the output residuals files.

FWIW, there is no such approach that could be considered “standard”. You’re trying to draw a rigorous line in the sand, but there are so many factors up for debate.

I understand your position here and I agree. And from previous debates, I gather that your approach (which I like) is to report “everything” that is reasonable and interesting and to be open about the stats (publish all scores, effect sizes, p-values).

But AFNI, SPM and FSL are used by researchers who want to publish. And you want to communicate your results to the journal that you submit to. When using SPM, you can check a box and it automatically gives you results that are FWE corrected (it usually feels less conservative, but that’s another discussion). Then you simply write that these results came from SPM and are FWE corrected/adjusted, and the reviewer will understand this and see the results as “significant”. With AFNI, we have to calculate smoothness (from the residuals), run simulations, and then look at a cluster table. Then we read some papers and documentation to realize that the cluster sizes are only reasonable with p-values below 0.002 (from the “FMRI Clustering in AFNI: False Positive Rates Redux” paper).

ACF parameters (which are computed by the standard afni_proc.py pipeline), and a voxelwise p=0.001 or p=0.002 is reasonably safe with this method

So, in order to be able to write that the p-values are adjusted/corrected for multiple comparisons / FWE, you have to do all this (or use the permutation testing in 3dttest++, which in my experience is super conservative). The reviewer wants to know (right or wrong) whether the findings are significant.

And don’t get me wrong, I love AFNI, and it feels so much more honest and in control this way. But I also hope you understand that it is kind of frustrating not to have an “official way” of doing these things =).

I just hope reviewers are getting a bit more open-minded about what a significant finding is.

Thanks again Gang!

AFNI, SPM and FSL are used by researchers who want to publish. And you want to communicate your results to the journal
that you submit to. When using SPM, you can check a box and it automatically gives you results that are FWE corrected (it
usually feels less conservative, but that’s another discussion). Then you simply write that these results came from SPM and
are FWE corrected/adjusted, and the reviewer will understand this and see the results as “significant”…

I also hope you understand that it is kind of frustrating not to have an “official way” of doing these things =).

In general, it is to some extent desirable to standardize a modeling/processing pipeline for easy communication as well as for reproducibility. However, when the modeling/processing approach under discussion remains fluid and has not yet reached maturity, it is questionable and unproductive to enforce an imprimatur by drawing a line in the sand. For this reason, would it be more beneficial to the field to adopt and nurture an open culture, as you said below?

I gather that your approach (which I like) is to report “everything” that is reasonable and interesting and to be open about the stats (publish all scores, effect sizes, p-values).

I just hope reviewers are getting a bit more open-minded about what a significant finding is.

It is hard to swim against the waves, but maybe it is also worth turning the tide?

Hi again Gang!

I wanted to try this. We are using a different dataset, but again, it is quite large (7450 dual-regression maps). I’m using 3dMVM and want to find out significant cluster sizes vs. per-voxel p-values; I need the residuals to be able to run 3dClustSim. 3dMVM finishes but fails at creating the residuals (memory not mapped).

I wanted to split the images, like you suggested, using 3dZcutup.

I can’t get it to work, though. My idea was to loop through all the input files to create sub01_cut0.nii.gz, sub01_cut1.nii.gz, and sub01_cut2.nii.gz, run 3 separate 3dMVMs, and then glue the results together.

When looking at the documentation I would think this is the way to do it:


3dZcutup -prefix test0 -keep 0 16 1017436_20227_2_0_25_dr_stage2.nii.gz

This, however, gives me:


++ 3dZcutup: AFNI version=AFNI_21.3.01 (Oct 12 2021) [64-bit]
*** Nonsense values after -keep!

It works when I just want it to give me one slice:


3dZcutup -prefix test0 -keep 5 5 1017436_20227_2_0_25_dr_stage2.nii.gz 
++ 3dZcutup: AFNI version=AFNI_21.3.01 (Oct 12 2021) [64-bit]
++ Output dataset ./test0+tlrc.BRIK

When looking at examples, I don’t see why the first command would fail. The only thing I can think of is that these are 3D volumes, not 4D volumes. They are 3D correlation maps from a dual regression process.

I can work around this by looping through all 91 slices one at a time and merging them into 3 chunks (see the sketch below), but that is a lot of for-looping for 7450 subjects with 91 slices each.
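
That workaround would look something like this per subject and per chunk (a rough sketch; the prefixes are placeholders):

for k in $(seq 0 29) ; do
    3dZcutup -prefix slice_${k}.nii.gz -keep $k $k 1017436_20227_2_0_25_dr_stage2.nii.gz
done
3dZcat -prefix chunk0.nii.gz slice_{0..29}.nii.gz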
3dinfo on the data:


Dataset File:    1017436_20227_2_0_25_dr_stage2.nii.gz
Identifier Code: NII_F-0DxhtkHnSPAt0_jFec6Q  Creation Date: Wed May  4 13:38:28 2022
Template Space:  MNI
Dataset Type:    Anat Bucket (-abuc)
Byte Order:      LSB_FIRST {assumed} [this CPU native = LSB_FIRST]
Storage Mode:    NIFTI
Storage Space:   3,610,516 (3.6 million) bytes
Geometry String: "MATRIX(2,0,0,-90,0,-2,0,126,0,0,2,-72):91,109,91"
Data Axes Tilt:  Plumb
Data Axes Orientation:
  first  (x) = Right-to-Left
  second (y) = Posterior-to-Anterior
  third  (z) = Inferior-to-Superior   [-orient RPI]
R-to-L extent:   -90.000 [R] -to-    90.000 [L] -step-     2.000 mm [ 91 voxels]
A-to-P extent:   -90.000 [A] -to-   126.000 [P] -step-     2.000 mm [109 voxels]
I-to-S extent:   -72.000 [I] -to-   108.000 [S] -step-     2.000 mm [ 91 voxels]
Number of values stored at each pixel = 1
  -- At sub-brick #0 '?' datum type is float

Thanks!

Hi Robin,

There was a goofy mistake with that -keep test that was fixed this past Feb.
Are you able to update the binaries? It might be worth doing anyway, though maybe you want to stick with one version for this analysis.

Please let me know which way you want to go.

  • rick

Thanks!

This was on a computer that I’m not managing, but I had them update the binaries and now it works! Thanks Rick!

I have a related question; I will try to make it clear - it might be difficult, haha.

Task: run 3dMVM on a huge number of datasets. Getting the stats: no problem, it does them slice by slice. Problem: using the -resid option, which keeps all images (4D) in memory.

The solution: Use 3dZcutup and split the data into 9 stacks of about 10 slices each. Run 9 instances of 3dMVM and glue the stats and residuals together afterwards.

New Problem:
This works for all stacks apart from the bottom-most and top-most ones (slices 0-9 and slices 82-91) - then I get the classic 3dMVM error suggesting that the model might be wrong or the table might be wrongly formatted. This is run via a script, and it works for all the other stacks. My guess is that there are a lot of zero voxels and very little brain in the top 10 and bottom 10 slices.

If I enlarge the bottom and top stacks (e.g. merge stacks 0, 1, and 2), then the stats work again - but then I hit the memory limit and it can’t create the residuals.

My question:
My end goal is to get the stats + residuals for the bottom 10 slices. Can I make a “fake” dataset containing 20 slices, where the first 10 are the bottom slices that I’m interested in and the next 10 could be from the middle of the brain, just to trick 3dMVM into thinking that the model/stats are correct?

Or does this screw up the residuals? Do they depend on the adjacent slices? If the residuals are independent of adjacent slices, then I can probably do this.

Thanks!

Robin,

My end goal is to get the stats + residuals for the bottom 10 slices. Can I make a “fake” dataset
containing 20 slices, where the first 10 are the bottom slices that I’m interested in and the next 10 could be from the
middle of the brain, just to trick 3dMVM into thinking that the model/stats are correct?

Yes, that should be fine. Alternatively, in this case, you can temporarily comment out those few lines in 3dMVM.R that perform the model checking, and then run 3dMVM on those first and last 10 slices.
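
For the “fake” dataset route, something along these lines should work (the slice indices and file names are placeholders; since the model is estimated independently at each voxel, the padding slices do not affect the residuals of the bottom slices):

for f in *_dr_stage2.nii.gz ; do
    base=$(basename $f .nii.gz)
    # the bottom 10 slices of interest (0-based indices)
    3dZcutup -prefix ${base}_bot.nii.gz -keep  0  9 $f
    # 10 mid-brain slices, just to give 3dMVM a workable test voxel
    3dZcutup -prefix ${base}_pad.nii.gz -keep 40 49 $f
    # stack them into a 20-slice "fake" dataset
    3dZcat -prefix ${base}_fake.nii.gz ${base}_bot.nii.gz ${base}_pad.nii.gz
done
# after running 3dMVM on the fake datasets, cut the output back down,
# keeping only slices 0-9 of the stats and residuals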

Thanks Gang,

I liked this:

Alternatively, in this case, you can temporarily comment out those few lines in 3dMVM.R that perform the model checking, and then run 3dMVM on those first and last 10 slices.

Do you mind helping me identify those lines?

It was not these (lines 1558-1569):


# pick up a test voxel
if(!is.na(lop$maskFN)) {
  idx <- which(lop$maskData == 1, arr.ind = T)
  idx <- idx[floor(dim(idx)[1]/2),1:3]
  xinit <- idx[1]; yinit <- idx[2]; zinit <- idx[3]
  ii <- xinit; jj <- yinit; kk <- zinit
} else {
   xinit <- dimx%/%3
   if(dimy==1) {xinit <-1; yinit <- 1} else yinit <- dimy%/%2
   if(dimz==1) {xinit <-1; zinit <- 1} else zinit <- dimz%/%2
   ii <- xinit; jj <- yinit; kk <- zinit
}

It complained about not finding variable jj.

Instead, I tried removing the while-loop containing the model warning strings:


while(is.null(fm)) {
   fm<-NULL
   if (all(abs(inData[ii, jj, kk,]) < 10e-8)) fm<-NULL else {
   lop$dataStr$Beta<-inData[ii, jj, kk,1:lop$NoFile]
   if(any(!is.null(lop$vVars))) {
      #lop$dataStr <- assVV(lop$dataStr, lop$vQV[1], vQV[ii,jj,kk,])
      lop$dataStr <- assVV(lop$dataStr, lop$vQV[1], inData[ii,jj,kk,(lop$NoFile+1):(lop$NoFile+lop$nSubj)], lop$vVarCenters[1])
      #if(all(is.na(lop$vVarCenters)))
      #   lop$dataStr[,lop$QV] <- scale(lop$dataStr[,lop$QV], center=TRUE, scale=F) else
      #   lop$dataStr[,lop$QV] <- scale(lop$dataStr[,lop$QV], center=lop$vVarCenters, scale=F)
   }
   #options(warn=-1)
   if(lop$robust) {
       suppressMessages(try(fm <- lmrob(lop$ModelForm, data=lop$dataStr), silent=TRUE))
       if(!fm$converged) fm <- NULL
   } else {
   if(lop$afex_new) suppressMessages(try(fm <- aov_car(ModelForm, data=lop$dataStr, factorize=FALSE, type=lop$SS_type), silent=TRUE)) else
   suppressMessages(try(fm <- aov.car(ModelForm, data=lop$dataStr, factorize=FALSE, type=lop$SS_type, return='full'), silent=TRUE)) }

   if(!is.null(fm)) {
      if(lop$robust) uvfm <- Anova(fm, test.statistic="F", type=3) else {
         if(lop$afex_new) {
            uvfm  <- summary(fm)
            uvfm0 <- anova(fm, intercept=T) # contains intercept when no within-subject factors involved, and provides GES
         } else uvfm <- univ(fm$Anova)  # univariate modeling
          if(!is.na(lop$mVar)) if(is.na(lop$wsVars)) mvfm <- Anova(fm$lm, type=lop$SS_type, test='Pillai')
      }
   }

   if(!is.null(fm)) if((lop$num_glt > 0) | (lop$num_glf > 0)) if(lop$robust) {
      gltIn <- fm; iData <- NULL } else {
      gltIn <- fm$lm; if(lop$afex_new) iData <- fm$data$idata else iData <- fm$idata
   }

   if(!is.null(fm)) if (lop$num_glt > 0) {
      n <- 1
      while(!is.null(fm) & (n <= lop$num_glt)) {
         if(all(is.na(lop$gltList[[n]]))) {  # Covariate testing only without factors involved
            gltRes[[n]] <- tryCatch(testInteractions(gltIn, pairwise=NULL,
               covariates=lop$covValList[[n]], slope=lop$slpList[[n]], adjustment="none", idata = iData),
               error=function(e) NA) } else {     # Involving factors
            tryCatch(gltRes[[n]] <- testInteractions(gltIn, custom=lop$gltList[[n]], slope=lop$slpList[[n]],
               covariates=lop$covValList[[n]], adjustment="none", idata = iData), error=function(e) NULL)
         }
         if(any(is.null(gltRes[[n]]))) {
            fm <- NULL
            errex.AFNI(paste("Failed at GLT No. ", n, "! Make sure that the model or GLT specification syntax is correct.", sep=''))
         }
         n <- n+1
      }
   }

   if(!is.null(fm)) if (lop$num_glf > 0) {
      n <- 1
      while(!is.null(fm) & (n <= lop$num_glf)) { # this part may need fixes!
         if(all(is.na(lop$glfList[[n]]))) { # Covariate testing only without factors involved: possible?
            glfRes[[n]] <- tryCatch(testFactors(gltIn, pairwise=NULL,
               covariates=lop$covValListF[[n]], slope=lop$slpListF[[n]], adjustment="none", idata = iData)$terms$`(Intercept)`$test,
               error=function(e) NA) } else {  # Involving factors
            glfRes[[n]] <- tryCatch(testFactors(gltIn, levels=lop$glfList[[n]], slope=lop$slpListF[[n]],
               covariates=lop$covValListF[[n]], adjustment="none", idata = iData)$terms$`(Intercept)`$test, error=function(e) NULL)
         }
         if(any(is.null(glfRes[[n]]))) fm <- NULL
         n <- n+1
      }
   }

   }
   if(!is.null(fm))  {
      print(sprintf("Great, test run passed at voxel (%i, %i, %i)!", ii, jj, kk))
   } else if(ii<dimx) ii<-ii+1 else if(jj<dimy) {ii<-xinit; jj <- jj+1} else if(kk<dimz) {
      ii<-xinit; jj <- yinit; kk <- kk+1 } else {
      cat('~~~~~~~~~~~~~~~~~~~ Model test failed! ~~~~~~~~~~~~~~~~~~~\n')
      cat('Possible reasons:\n\n')
      cat('0) Make sure that R packages afex and phia have been installed. See the 3dMVM\n')
      cat('help documentation for more details.\n\n')
      cat('1) Inappropriate model specification with options -bsVars, -wsVars, or -qVars.\n')
      cat('Note that within-subject or repeated-measures variables have to be declared\n')
      cat('with -wsVars.\n\n')
      cat('2) Incorrect specifications in general linear test coding with -gltCode.\n\n')
      cat('3) Mistakes in data table. Check the data structure shown above, and verify\n')
      cat('whether there are any inconsistencies.\n\n')
      cat('4) Inconsistent variable names which are case sensitive. For example, factor\n')
      cat('named Group in model specification and then listed as group in the table header\n')
      cat('would cause grief for 3dMVM.\n\n')
      cat('5) Not enough number of subjects. This may happen when there are two or more\n')
      cat('withi-subject factors. For example, a model with two within-subject factors with\n')
      cat('m and n levels respectively requires more than (m-1)*(n-1) subjects to be able to\n')
      cat('model the two-way interaction with the multivariate approach.\n\n')
      errex.AFNI("Quitting due to model test failure...")
   }
}

That did not work either.
Then I instead commented out the errex.AFNI parts in the while-loop above, but then it got stuck in the loop.

Can you point me to the lines? =)

AFNI version:


afni version         : Precompiled binary linux_centos_7_64: May 10 2022 
                     : AFNI_22.1.09 'Antoninus Pius'

It looks like lop$outInit needs to be initialized during the model-testing process at some voxel. Since in this case you don’t have any meaningful voxels among those initial slices, add the following line

lop$outInit <- rep(0, NoBrick)

(replace NoBrick with the total number of sub-bricks in the output - you can figure this out, right?)

before the following lines:

print(sprintf("Start to compute %s slices along Z axis. You can monitor the progress", dimz))
print("and estimate the total run time as shown below.")
print(format(Sys.time(), "%D %H:%M:%OS3"))

Good luck!

Thanks Gang!
Is it the number of sub-bricks in the results output (i.e., depending on the stats used in 3dMVM) or the number of sub-bricks in the residuals (the number of subjects)?

In any case, I tried it and it does not work.
I installed a fresh copy of AFNI to avoid any of my old edits.
I then found the lines:


print(sprintf("Start to compute %s slices along Z axis. You can monitor the progress", dimz))
print("and estimate the total run time as shown below.")
print(format(Sys.time(), "%D %H:%M:%OS3"))

And I added this above them (I tried 10 and 7350, depending on the answer to my first question above).


lop$outInit <- rep(0, 10)

And I still get:


~~~~~~~~~~~~~~~~~~~ Model test failed! ~~~~~~~~~~~~~~~~~~~
Possible reasons:

0) Make sure that R packages afex and phia have been installed. See the 3dMVM
help documentation for more details.

1) Inappropriate model specification with options -bsVars, -wsVars, or -qVars.
Note that within-subject or repeated-measures variables have to be declared
with -wsVars.

2) Incorrect specifications in general linear test coding with -gltCode.

3) Mistakes in data table. Check the data structure shown above, and verify
whether there are any inconsistencies.

4) Inconsistent variable names which are case sensitive. For example, factor
named Group in model specification and then listed as group in the table header
would cause grief for 3dMVM.

5) Not enough number of subjects. This may happen when there are two or more
withi-subject factors. For example, a model with two within-subject factors with
m and n levels respectively requires more than (m-1)*(n-1) subjects to be able to
model the two-way interaction with the multivariate approach.


** Error: 
   Quitting due to model test failure...

Did you mean that I should remove the whole while-loop and then add this line, or just add this line?
My single code edit:


...
...
   lop$outInit <- rep(0, NoBrick)  # initialization for the voxel-wise output
   lop$crit    <- 0.75  # switch between GG and HF
}
lop$outInit <- rep(0, 10)
print(sprintf("Start to compute %s slices along Z axis. You can monitor the progress", dimz))
print("and estimate the total run time as shown below.")
print(format(Sys.time(), "%D %H:%M:%OS3"))

EDIT:
You don’t have to answer this; I went the other way instead, using a middle slice together with the edge slices to trick the program into thinking there are good voxels.

The situation is more complex than I originally realized. 3dMVM needs to find a workable voxel to extract a few parameters such as degrees of freedom. It might be better to go back to your original plan of making a “fake” dataset.