3dLME output looks like "the mask"

Hello,

I ran 3dLME and got the following log message, as well as an output image that looks exactly like the group mask I used.

*+ WARNING: Smallest FDR q [3 cond:group F] = 0.466427 ==> few true single voxel detections

Does it simply mean that there was no significant result (e.g., no interaction effect)? I thought that the output image/map represents the F-score for each voxel, irrespective of whether it is significant or not.

Sophia

Sounds like something may not be quite right in your 3dLME command. Can you please post the full command and output?

Also the info from:


afni_system_check.py -check_all

Thank you for your quick reply, Peter.

Here is the full command and output. (I tried to examine the condition-by-group interaction: 2 groups, ag and ctr, and 2 repeated conditions, r1 and r2; the input files are resting-state fMRI datasets.) I was puzzled when I saw the output file/image, which looks like (in fact is) the group mask. Thank you! Sophia

sh -x 3dLME.txt

3dLME -prefix DBD_adult_28 -model 'cond*group' \
      -mask group_mask_to_standard_2mm_gm_25pc_all_28_R1.nii.gz \
      -jobs 2 -ranEff '~1' -SS_type 3 \
      -num_glt 2 \
      -gltLabel 1 r1-r2    -gltCode 1 'cond : 1*r1 -1*r2' \
      -gltLabel 2 ag_r1-r2 -gltCode 2 'cond : 1*r1 -1*r2 group : 1*ag' \
      -num_glf 1 \
      -glfLabel 1 r1-r2 -glfCode 1 'group : 1*ctr & 1*ag cond : 1*r1 -1*r2' \
      -dataTable \
      Subj    cond group InputFile          \
      DBD_A00 r1   ag    DBD_A00_r1.nii.gz  \
      DBD_A00 r2   ag    DBD_A00_r2.nii.gz  \
      DBD_A03 r1   ag    DBD_A03_r1.nii.gz  \
      DBD_A03 r2   ag    DBD_A03_r2.nii.gz  \
      DBD_A05 r1   ag    DBD_A05_r1.nii.gz  \
      DBD_A05 r2   ag    DBD_A05_r2.nii.gz  \
      DBD_A06 r1   ag    DBD_A06_r1.nii.gz  \
      DBD_A06 r2   ag    DBD_A06_r2.nii.gz  \
      DBD_A07 r1   ag    DBD_A07_r1.nii.gz  \
      DBD_A07 r2   ag    DBD_A07_r2.nii.gz  \
      DBD_A08 r1   ag    DBD_A08_r1.nii.gz  \
      DBD_A08 r2   ag    DBD_A08_r2.nii.gz  \
      DBD_A10 r1   ag    DBD_A10_r1.nii.gz  \
      DBD_A10 r2   ag    DBD_A10_r2.nii.gz  \
      DBD_A11 r1   ag    DBD_A11_r1.nii.gz  \
      DBD_A11 r2   ag    DBD_A11_r2.nii.gz  \
      DBD_A15 r1   ag    DBD_A15_r1.nii.gz  \
      DBD_A15 r2   ag    DBD_A15_r2.nii.gz  \
      DBD_A20 r1   ag    DBD_A20_r1.nii.gz  \
      DBD_A20 r2   ag    DBD_A20_r2.nii.gz  \
      DBD_A21 r1   ag    DBD_A21_r1.nii.gz  \
      DBD_A21 r2   ag    DBD_A21_r2.nii.gz  \
      DBD_A22 r1   ag    DBD_A22_r1.nii.gz  \
      DBD_A22 r2   ag    DBD_A22_r2.nii.gz  \
      DBD_A23 r1   ag    DBD_A23_r1.nii.gz  \
      DBD_A23 r2   ag    DBD_A23_r2.nii.gz  \
      DBD_A24 r1   ag    DBD_A24_r1.nii.gz  \
      DBD_A24 r2   ag    DBD_A24_r2.nii.gz  \
      DBD_A25 r1   ag    DBD_A25_r1.nii.gz  \
      DBD_A25 r2   ag    DBD_A25_r2.nii.gz  \
      DBD_A26 r1   ag    DBD_A26_r1.nii.gz  \
      DBD_A26 r2   ag    DBD_A26_r2.nii.gz  \
      DBD_A27 r1   ag    DBD_A27_r1.nii.gz  \
      DBD_A27 r2   ag    DBD_A27_r2.nii.gz  \
      DBD_A28 r1   ag    DBD_A28_r1.nii.gz  \
      DBD_A28 r2   ag    DBD_A28_r2.nii.gz  \
      DBD_A01 r1   ctr   DBD_A01_r1.nii.gz  \
      DBD_A01 r2   ctr   DBD_A01_r2.nii.gz  \
      DBD_A02 r1   ctr   DBD_A02_r1.nii.gz  \
      DBD_A02 r2   ctr   DBD_A02_r2.nii.gz  \
      DBD_A04 r1   ctr   DBD_A04_r1.nii.gz  \
      DBD_A04 r2   ctr   DBD_A04_r2.nii.gz  \
      DBD_A09 r1   ctr   DBD_A09_r1.nii.gz  \
      DBD_A09 r2   ctr   DBD_A09_r2.nii.gz  \
      DBD_A12 r1   ctr   DBD_A12_r1.nii.gz  \
      DBD_A12 r2   ctr   DBD_A12_r2.nii.gz  \
      DBD_A14 r1   ctr   DBD_A14_r1.nii.gz  \
      DBD_A14 r2   ctr   DBD_A14_r2.nii.gz  \
      DBD_A16 r1   ctr   DBD_A16_r1.nii.gz  \
      DBD_A16 r2   ctr   DBD_A16_r2.nii.gz  \
      DBD_A17 r1   ctr   DBD_A17_r1.nii.gz  \
      DBD_A17 r2   ctr   DBD_A17_r2.nii.gz  \
      DBD_A18 r1   ctr   DBD_A18_r1.nii.gz  \
      DBD_A18 r2   ctr   DBD_A18_r2.nii.gz  \
      DBD_A19 r1   ctr   DBD_A19_r1.nii.gz  \
      DBD_A19 r2   ctr   DBD_A19_r2.nii.gz
    Loading required package: nlme
    Package nlme loaded successfully!

Loading required package: phia
Loading required package: car
Package phia loaded successfully!

++++++++++++++++++++++++++++++++++++++++++++++++++++
***** Summary information of data structure *****
28 subjects : DBD_A00 DBD_A01 DBD_A02 DBD_A03 DBD_A04 DBD_A05 DBD_A06 DBD_A07 DBD_A08 DBD_A09 DBD_A10 DBD_A11 DBD_A12 DBD_A14 DBD_A15 DBD_A16 DBD_A17 DBD_A18 DBD_A19 DBD_A20 DBD_A21 DBD_A22 DBD_A23 DBD_A24 DBD_A25 DBD_A26 DBD_A27 DBD_A28
56 response values
2 levels for factor cond : r1 r2
2 levels for factor group : ag ctr
2 post hoc tests

Contingency tables of subject distributions among the categorical variables:

Tabulation of subjects against all categorical variables

Subj vs cond:
         
          r1 r2
  DBD_A00  1  1
  DBD_A01  1  1
  DBD_A02  1  1
  DBD_A03  1  1
  DBD_A04  1  1
  DBD_A05  1  1
  DBD_A06  1  1
  DBD_A07  1  1
  DBD_A08  1  1
  DBD_A09  1  1
  DBD_A10  1  1
  DBD_A11  1  1
  DBD_A12  1  1
  DBD_A14  1  1
  DBD_A15  1  1
  DBD_A16  1  1
  DBD_A17  1  1
  DBD_A18  1  1
  DBD_A19  1  1
  DBD_A20  1  1
  DBD_A21  1  1
  DBD_A22  1  1
  DBD_A23  1  1
  DBD_A24  1  1
  DBD_A25  1  1
  DBD_A26  1  1
  DBD_A27  1  1
  DBD_A28  1  1

Subj vs group:

          ag ctr
  DBD_A00  2   0
  DBD_A01  0   2
  DBD_A02  0   2
  DBD_A03  2   0
  DBD_A04  0   2
  DBD_A05  2   0
  DBD_A06  2   0
  DBD_A07  2   0
  DBD_A08  2   0
  DBD_A09  0   2
  DBD_A10  2   0
  DBD_A11  2   0
  DBD_A12  0   2
  DBD_A14  0   2
  DBD_A15  2   0
  DBD_A16  0   2
  DBD_A17  0   2
  DBD_A18  0   2
  DBD_A19  0   2
  DBD_A20  2   0
  DBD_A21  2   0
  DBD_A22  2   0
  DBD_A23  2   0
  DBD_A24  2   0
  DBD_A25  2   0
  DBD_A26  2   0
  DBD_A27  2   0
  DBD_A28  2   0
***** End of data structure information *****
++++++++++++++++++++++++++++++++++++++++++++++++++++

Reading input files now…

** AFNI converts NIFTI_datatype=64 (FLOAT64) in file DBD_A00_r1.nii.gz to FLOAT32
Warnings of this type will be muted for this session.
Set AFNI_NIFTI_TYPE_WARN to YES to see them all, NO to see none.
Reading input files: Done!

If the program hangs here for more than, for example, half an hour,
kill the process because the model specification or the GLT coding
is likely inappropriate.

[1] "Great, test run passed at voxel (30, 54, 45)!"
[1] "Start to compute 91 slices along Z axis. You can monitor the progress"
[1] "and estimate the total run time as shown below."
[1] "05/30/17 08:18:19.119"
Loading required package: snow
Package snow loaded successfully!

Z slice 1 done: 05/30/17 08:18:21.556
Z slice 2 done: 05/30/17 08:18:21.662
Z slice 3 done: 05/30/17 08:18:21.769
Z slice 4 done: 05/30/17 08:18:21.875
Z slice 5 done: 05/30/17 08:18:21.997
Z slice 6 done: 05/30/17 08:18:22.202
Z slice 7 done: 05/30/17 08:18:22.298
Z slice 8 done: 05/30/17 08:18:22.400
Z slice 9 done: 05/30/17 08:18:22.497
Z slice 10 done: 05/30/17 08:18:25.435
Z slice 11 done: 05/30/17 08:18:33.180
Z slice 12 done: 05/30/17 08:18:48.088
Z slice 13 done: 05/30/17 08:19:07.090
Z slice 14 done: 05/30/17 08:19:29.185
Z slice 15 done: 05/30/17 08:19:54.980
Z slice 16 done: 05/30/17 08:20:23.455
Z slice 17 done: 05/30/17 08:20:52.848
Z slice 18 done: 05/30/17 08:21:24.617
Z slice 19 done: 05/30/17 08:21:57.767
Z slice 20 done: 05/30/17 08:22:41.384
Z slice 21 done: 05/30/17 08:23:19.419
Z slice 22 done: 05/30/17 08:23:52.841
Z slice 23 done: 05/30/17 08:24:25.709
Z slice 24 done: 05/30/17 08:24:59.024
Z slice 25 done: 05/30/17 08:25:33.097
Z slice 26 done: 05/30/17 08:26:08.041
Z slice 27 done: 05/30/17 08:26:45.467
Z slice 28 done: 05/30/17 08:27:26.569
Z slice 29 done: 05/30/17 08:28:12.389
Z slice 30 done: 05/30/17 08:29:02.163
Z slice 31 done: 05/30/17 08:29:55.729
Z slice 32 done: 05/30/17 08:30:53.142
Z slice 33 done: 05/30/17 08:31:52.559
Z slice 34 done: 05/30/17 08:32:53.215
Z slice 35 done: 05/30/17 08:33:54.616
Z slice 36 done: 05/30/17 08:34:55.184
Z slice 37 done: 05/30/17 08:35:53.571
Z slice 38 done: 05/30/17 08:36:50.664
Z slice 39 done: 05/30/17 08:37:48.548
Z slice 40 done: 05/30/17 09:10:17.537
Z slice 41 done: 05/30/17 09:50:12.618
Z slice 42 done: 05/30/17 09:51:30.868
Z slice 43 done: 05/30/17 09:52:42.245
Z slice 44 done: 05/30/17 09:53:47.710
Z slice 45 done: 05/30/17 09:54:40.091
Z slice 46 done: 05/30/17 09:55:30.858
Z slice 47 done: 05/30/17 09:56:20.685
Z slice 48 done: 05/30/17 09:57:09.979
Z slice 49 done: 05/30/17 09:57:57.863
Z slice 50 done: 05/30/17 09:58:44.115
Z slice 51 done: 05/30/17 09:59:28.897
Z slice 52 done: 05/30/17 10:00:12.713
Z slice 53 done: 05/30/17 10:00:58.083
Z slice 54 done: 05/30/17 10:01:45.855
Z slice 55 done: 05/30/17 10:02:34.650
Z slice 56 done: 05/30/17 10:03:23.399
Z slice 57 done: 05/30/17 10:06:09.674
Z slice 58 done: 05/30/17 10:06:59.327
Z slice 59 done: 05/30/17 10:07:48.152
Z slice 60 done: 05/30/17 10:08:39.180
Z slice 61 done: 05/30/17 10:09:37.058
Z slice 62 done: 05/30/17 10:10:21.446
Z slice 63 done: 05/30/17 10:11:03.414
Z slice 64 done: 05/30/17 10:11:43.502
Z slice 65 done: 05/30/17 10:12:21.494
Z slice 66 done: 05/30/17 10:12:56.267
Z slice 67 done: 05/30/17 10:13:29.747
Z slice 68 done: 05/30/17 10:14:00.211
Z slice 69 done: 05/30/17 10:14:27.549
Z slice 70 done: 05/30/17 10:14:52.128
Z slice 71 done: 05/30/17 10:15:13.823
Z slice 72 done: 05/30/17 10:15:32.963
Z slice 73 done: 05/30/17 10:15:48.905
Z slice 74 done: 05/30/17 10:16:00.249
Z slice 75 done: 05/30/17 10:16:05.957
Z slice 76 done: 05/30/17 10:16:06.691
Z slice 77 done: 05/30/17 10:16:06.786
Z slice 78 done: 05/30/17 10:16:06.878
Z slice 79 done: 05/30/17 10:16:06.980
Z slice 80 done: 05/30/17 10:16:07.104
Z slice 81 done: 05/30/17 10:16:07.195
Z slice 82 done: 05/30/17 10:16:07.289
Z slice 83 done: 05/30/17 10:16:07.401
Z slice 84 done: 05/30/17 10:16:07.615
Z slice 85 done: 05/30/17 10:16:07.712
Z slice 86 done: 05/30/17 10:16:07.809
Z slice 87 done: 05/30/17 10:16:07.918
Z slice 88 done: 05/30/17 10:16:08.032
Z slice 89 done: 05/30/17 10:16:08.158
Z slice 90 done: 05/30/17 10:16:08.357
Z slice 91 done: 05/30/17 10:16:08.449
++ Smallest FDR q [0 (Intercept) F] = 2.12007e-10
++ Smallest FDR q [1 cond F] = 0.0117753
++ Smallest FDR q [2 group F] = 0.00939132
*+ WARNING: Smallest FDR q [3 cond:group F] = 0.466427 ==> few true single voxel detections
++ Smallest FDR q [5 r1-r2 Z] = 2.37392e-07
++ Smallest FDR q [7 ag_r1-r2 Z] = 0.00132016
++ Smallest FDR q [8 r1-r2 Chisq] = 1.07256e-07
[1] "Congratulations! You've got an output DBD_adult_28+tlrc"

Hi Sophia,

You are basically right; the warning means that only a
very small fraction of voxels in that volume have low
p-values, as detected by FDR.

So you are unlikely to see a significant result from
that particular test.

  • rick
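For readers wondering what that q-value warning measures: AFNI's FDR curves are built on the Benjamini-Hochberg adjustment. A minimal pure-Python sketch (not AFNI's actual implementation, which interpolates a curve over the whole volume) shows how the smallest q stays large when no voxel has a convincingly small p-value:

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg step-up q-values: q_i is the smallest FDR level
    at which voxel i would still be declared a detection."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, taking the running minimum of p*m/rank
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        q[i] = running_min
    return q

# Toy example: mostly "null" p-values, no strong signal anywhere.
pvals = [0.02, 0.30, 0.45, 0.60, 0.75, 0.90]
qvals = bh_qvalues(pvals)
print(min(qvals))  # smallest q stays large, so nothing survives q < 0.05
```

When the cond:group F map behaves like this toy example (a flat, null-like p distribution), the smallest q in the whole volume ends up around 0.5, which is exactly what the 3dLME warning reports.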

Thanks, Rick.

So, when there is no significant result (as indicated by the log messages), the output image would typically look like the group mask (or like the standard MNI mask, when I run 3dLME without a group mask), effectively saying "no significant result". Is this correct?

Thanks again, Sophia

One more question.

As Rick said, it is unlikely that there is any significant result from this dataset using 3dLME (a two-way repeated-measures ANOVA: 2 groups and 2 conditions/scans, i.e., pre- and post-training scans). Before running 3dLME, I ran a group analysis (i.e., repeated measures) for each group using FSL (GRF-corrected for multiple comparisons, cluster-level thresholding at Z > 2.3, p < 0.05). In the experimental group ("ag"), there was a significant effect of the perceptual training (post/r2 scan > pre/r1 scan) in a few brain regions/clusters, but there was none in the control group. Thus I expected to see some significant results from 3dLME, possibly in similar regions/clusters. Can you help me understand the potential reasons for this discrepancy?

As AFNI can handle more complex group-analysis approaches, I want to understand AFNI's methods clearly. Your input will be greatly appreciated!

Best, Sophia

Sophia,

You have a pretty simple and straightforward model, and either 3dMVM or 3dLME should work fine with your data structure. Even simple t-tests with 3dttest++ would work in your case if you know how to set it up.

I ran a group analysis (i.e., repeated measure) for each group, using FSL (GRF corrected for multiple comparisons,
cluster-level thresholding Z > 2.3, p < 0.05).

Just curious: How did you run the analysis in FSL? If you perform the analysis for each group separately, that would be a simple paired t-test.

Hi Sophia,

That mask aspect is different from the FDR one. Just to be clear,
is it only volume #3 (the cond:group F) that looks like the mask, or all of them?

And how are you viewing it? If the threshold is very low (or
even 0.0), then the result should indeed look like the mask, since
the voxels outside the mask are all 0. But that has nothing to do
with the high FDR warning.

Anyway, I am sure the comments from Gang will clarify things.

  • rick

Thanks, Gang.

I was specifically asked to show the "group x condition/scan" interaction, and I thought 3dLME was the option for that. Since you suggested this can also be done with 3dMVM, I will take a look at that program.

In FSL, I ran a repeated-measures analysis for each group, which is a simple paired t-test, as follows:

The first explanatory variable (EV) was for the condition/scan differences [1, 1, 1, … -1, -1, -1 …], plus one extra EV for each subject (with 8 subjects, that is EV2-EV9). Contrast 1, with "1" for EV1 and "0" elsewhere, and Contrast 2, with "-1" for EV1 and "0" elsewhere, tested the paired differences (i.e., Contrast 1 implies Pre-scan > Post-scan; Contrast 2 implies Post-scan > Pre-scan).
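The paired design described above can be sketched as a small matrix builder (a hypothetical helper for illustration; the row ordering, all r1 scans followed by all r2 scans, is an assumption):

```python
def paired_design(n_subjects):
    """FSL-style paired t-test design matrix:
    column 0 (EV1) codes the scan difference (+1 for r1 rows, -1 for r2 rows),
    columns 1..n are one indicator column per subject."""
    rows = []
    for scan_sign in (1, -1):
        for subj in range(n_subjects):
            subj_evs = [1 if s == subj else 0 for s in range(n_subjects)]
            rows.append([scan_sign] + subj_evs)
    return rows

design = paired_design(3)
for row in design:
    print(row)
# Contrast 1 = [ 1, 0, 0, 0] tests r1 > r2;
# Contrast 2 = [-1, 0, 0, 0] tests r2 > r1.
```

The subject columns absorb each subject's mean, so EV1 is left testing only the within-subject scan difference, which is why this design is equivalent to a paired t-test.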

This method yielded significant results/clusters (Contrast 2, an increase in local functional connectivity in a few brain regions) only in the experimental group, and post-hoc analysis (e.g., in SPSS) using connectivity values extracted from these clusters (defined from the experimental group) confirmed no significant difference (neither increase nor decrease) in the control group.

Hence, I expected that 3dLME would yield a group x scan interaction effect. Unfortunately that was not the case, and I wonder why!

Sophia

Hi Rick,

From your comment, "That mask aspect is different from the FDR one. Just to be clear, it is only for volume #3 (for cond:group F), or all of them?", I wonder whether there could be multiple output images (+tlrc). I've got only one output image, "DBD_adult_28+tlrc", which looks like the mask.

I viewed the output in the AFNI viewer (Underlay = MNI_avg152T1; Overlay = the output image). 3dinfo provided the following information for the output image. One line of the output is "Number of values stored at each pixel = 9", but I am not sure how to see these values in the mask-looking output image.


Dataset File: DBD_adult_28+orig
Identifier Code: XYZ_8PUb46gn02n1876r9RGx88 Creation Date:
Template Space: ORIG
Dataset Type: Func-Bucket (-fbuc)
Byte Order: LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode: BRIK
Storage Space: 16,247,322 (16 million [mega]) bytes
Geometry String: "MATRIX(2,0,0,-90,0,-2,0,126,0,0,2,-72):91,109,91"
Data Axes Tilt: Plumb
Data Axes Orientation:
first (x) = Right-to-Left
second (y) = Posterior-to-Anterior
third (z) = Inferior-to-Superior [-orient RPI]
R-to-L extent: -90.000 [R] -to- 90.000 [L] -step- 2.000 mm [ 91 voxels]
A-to-P extent: -90.000 [A] -to- 126.000 [P] -step- 2.000 mm [109 voxels]
I-to-S extent: -72.000 [I] -to- 108.000 [S] -step- 2.000 mm [ 91 voxels]
Number of values stored at each pixel = 9
-- At sub-brick #0 '(Intercept) F' datum type is short: 0 to 100
statcode = fift; statpar = 1 26
-- At sub-brick #1 'cond F' datum type is short: 0 to 32767 [internal]
[* 0.00148736] 0 to 48.7362 [scaled]
statcode = fift; statpar = 1 26
-- At sub-brick #2 'group F' datum type is short: 0 to 32767 [internal]
[* 0.0015869] 0 to 51.9979 [scaled]
statcode = fift; statpar = 1 26
** For info on all 9 sub-bricks, use '3dinfo -verb' **
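The [internal]/[scaled] lines describe how AFNI stores these sub-bricks: 16-bit integers plus one scale factor per sub-brick, with the true value recovered as stored_short * scale. A small sketch of that scheme (hypothetical helper names, not AFNI code):

```python
def encode_shorts(values):
    """Sketch of AFNI-style short storage: floats are saved as 16-bit ints
    plus one scale factor per sub-brick (value = stored_short * scale)."""
    top = max(abs(v) for v in values)
    scale = top / 32767.0 if top > 0 else 1.0
    stored = [round(v / scale) for v in values]
    return stored, scale

def decode_shorts(stored, scale):
    """Recover the floating-point values from the shorts."""
    return [s * scale for s in stored]

# e.g. the 'cond F' sub-brick above, whose max F is 48.7362
fstats = [0.0, 3.2, 48.7362]
stored, scale = encode_shorts(fstats)
print(scale)  # ~0.00148736, matching the scale factor in the 3dinfo listing
```

Note that the scale factor is chosen so the sub-brick's maximum maps to 32767, which is why each sub-brick lists its own multiplier.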

I am unfamiliar with AFNI group analyses, so I am sorry if my questions are too basic or unclear!

Sophia

[attachment: output.png]

Sophia,

Theoretically speaking, the repeated-measures (within-subject) ANOVA for one group that you ran in FSL should be the same as a paired t-test. Are you comparing such a paired t-test from FSL with one of the post hoc tests from 3dLME? If so, one possible source of the discrepancy is the following: some other software packages only perform one-sided (one-tailed) t-tests. It is debatable whether such a one-sided approach is valid, but the default in AFNI is two-sided unless the user knowingly switches to one-sided testing in the AFNI GUI.

Thanks for your explanation, Gang.

I think that the result I got from FSL is based on a one-tailed t-test. More specifically, my FSL result came from one of the two contrasts, the "A-B" contrast, which showed a significant effect in the experimental group only; there was no significant result for the B-A contrast in either group. I assume this may have contributed to the discrepancy (i.e., 3dLME yielding no significant interaction effect).

The AFNI team is amazingly responsive to all the questions. Thanks again!

Best, Sophia

Hi Sophia,

It seems like you need to spend time exploring your
results. The output probably looks like a mask
because no reasonable threshold has been set.

Plus, you are looking at volume #0, the Intercept
F-stat. As you have noted, there are 9 volumes in
that data. From the Define Overlay panel, there are
menu buttons on the right to choose the OLay and
Thr volumes (for color Overlay and Threshold
volumes, respectively).

Consider reviewing afni03_interactive.pdf, starting
on slide 26.

  • rick

Dear Rick,

Oh, thank you very much. That is very useful information that I did not clearly know. I will go through the document you provided (afni03_interactive.pdf) to understand it. Many thanks!

Sophia

I assume that these may have contributed to the discrepancy (i.e., 3dLME yielded no significant interaction effect).

You can verify whether the two analysis results match by thresholding the 3dLME output one-tailed. One approach is to match the FSL one-tailed voxel-wise p-value (e.g., 0.005) with a p-value twice as large (e.g., 0.01) in the AFNI viewer. A second (better) approach to one-tailed thresholding in AFNI is to right-click the area just above the threshold slider bar and choose Pos Only (or Neg Only) for the Sign option.
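The "twice as large" rule rests on the fact that, for a symmetric statistic, the two-sided p-value is exactly twice the one-sided tail. A quick numerical check for a Z statistic, using only the standard math module:

```python
import math

def z_to_p_one_sided(z):
    """One-sided (upper-tail) p-value for a Z statistic, via the normal CDF."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def z_to_p_two_sided(z):
    """Two-sided p-value: twice the one-sided tail (symmetric distribution)."""
    return 2.0 * z_to_p_one_sided(abs(z))

# FSL's cluster-forming threshold Z > 2.3 corresponds to:
p1 = z_to_p_one_sided(2.3)   # one-sided p, roughly 0.011
p2 = z_to_p_two_sided(2.3)   # two-sided p, exactly 2 * p1
```

So a one-sided FSL threshold and a two-sided AFNI threshold select the same positive-tail voxels when the AFNI p is set to double the FSL p.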

Hi Sophia and Rick,
My colleague and I also noticed that, for some sub-bricks in our respective studies, the maximum F value listed by 3dclust was 100. Does the program max out at 100? Is there some conversion we should use to get the exact F value (if it is indeed higher than 100)?

As a side note, we both ran first-level models in SPM but then switched to AFNI later down the road (for me, 3dMVM seemed more appropriate). I don't know if this is relevant to the current issue, but we wanted to share, just in case.

Thank you,
Katie

for some sub-bricks in our respective studies, the max F value listed when using 3dclust was 100. Does the program max out at 100?

Yes, when an F-stat value is greater than 100, there is little reason to keep the exact value, so it is automatically truncated for storage purposes.

Is there some sort of conversion we should use to get the exact F value (if it is indeed higher than 100)?

Why do you need the exact F-stat value when it’s higher than 100?

Hi Gang,
Sorry for my delayed reply. I just saw your response!

I wanted to report the F stat: the intercept is meaningful in my case.

Thank you

I wanted to report the F stat: the intercept is meaningful in my case.

When an F-stat value is larger than 100, there is no point in reporting the exact value, because precision is unimportant at such magnitudes; you can simply report F(n1, n2) > 100.