I have some questions regarding the interpretation of comparing amplitude-modulated data to non-amplitude-modulated data.
I am comparing two 3dLME analyses with the following model, where PF = four values associated with a power function. This model was used to analyze four repeated-measures vectors representing different memory ages (hour, day, week, month).
3dLME -prefix myOutput \
      -model  'PF'     \
      -qVars  'PF'     \
      -ranEff '~1+PF'
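For reference, a fuller 3dLME call would also include a -dataTable with one input row per subject per memory age. A minimal sketch, where the subject labels, PF values, and input file names are placeholders:

3dLME -prefix myOutput         \
      -model  'PF'             \
      -qVars  'PF'             \
      -ranEff '~1+PF'          \
      -dataTable               \
      Subj PF   InputFile      \
      s01  0.10 s01_hour+tlrc  \
      s01  0.25 s01_day+tlrc   \
      s01  0.50 s01_week+tlrc  \
      s01  0.90 s01_month+tlrc \
      s02  0.10 s02_hour+tlrc  \
      ...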
I have results from this model that are 1) non-amplitude modulated and 2) amplitude modulated (AM2) with two continuous variables (response time and confidence).
When comparing these two models, I am wondering how to interpret voxels that were significant in the non-amplitude-modulated analysis but non-significant in the AM2 analysis. I am also wondering about the reverse: how would I interpret voxels that were not significant in the non-amplitude-modulated analysis but are significant in the AM2 analysis? My main point of confusion is that the variables in the amplitude-modulated analysis are demeaned within condition.
> When comparing these two models, I am wondering how to interpret voxels that were significant in the non-amplitude-modulated analysis but non-significant in the AM2 analysis. I am also wondering about the reverse: how would I interpret voxels that were not significant in the non-amplitude-modulated analysis but are significant in the AM2 analysis?
I would be a little careful about the comparisons. The demarcation between "significant" and "non-significant" results is like drawing a line in the sand: it depends on some underlying assumptions of the adopted model. Without knowing the model specifics and the detailed results, it is hard to make accurate assessments about the comparisons between the two modeling pipelines. In addition, both the LME model at the population level and the AM2 approach at the subject level implicitly assume linearity in the slopes, and cross-trial variability is assumed to be negligible.
> My main point of confusion is that the variables in the amplitude-modulated analysis are demeaned within condition.
With the assumption that your LME model took the intercept effects (instead of the slopes) from the AM2 output as input at the population level, demeaning would be appropriate in the current context.
> With the assumption that your LME model took the intercept effects (instead of the slopes) from the AM2 output as input at the population level, demeaning would be appropriate in the current context.
How would one go about using the intercept effects instead of the slopes? I used the coefficients from a 3dDeconvolve output, which used the AM2 stimulus timing files, as inputs to the population-level analysis.
With AM2, you would have a set of regression coefficients associated with each condition: the first is the "intercept," which corresponds to the condition effect when each modulatory variable is adjusted to its center value (e.g., mean), and the rest are the slope effects (i.e., the modulations, one per modulator). I assume that you provided the first beta (intercept) for your population-level analysis.
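As a concrete sketch (not the actual command from this analysis; the timing file name and basis function are hypothetical), an AM2 regressor with two modulators produces one intercept beta plus one slope beta per modulator for each basis function:

# minimal sketch with a single-parameter BLOCK basis; with a TENT basis,
# each of these betas becomes a set of betas, one per tent
3dDeconvolve ...                                      \
    -stim_times_AM2 1 hour_targets_AM.1D 'BLOCK(2,1)' \
    -stim_label 1 hour_targets                        \
    ...
# hour_targets_AM.1D holds each onset "married" to its two modulator values
# (RT and confidence); see 3dDeconvolve -help for the exact married format.
# Resulting betas for the condition:
#   hour_targets#0 : condition effect, with modulators at their center values
#   hour_targets#1 : RT slope (modulation)
#   hour_targets#2 : confidence slope (modulation)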
Thank you, that makes sense. I ran the following 3dDeconvolve command to generate the coefficients and have pasted the output for one of the conditions. There are 24 sub-bricks associated with one condition, and I was wondering which one represents the "intercept" that corresponds to the condition effect when the modulatory variable is adjusted to its center value. In addition, within the 3dDeconvolve command, does the -gltsym flag automatically use sub-brick 0 to create the GLT? As an example, I am currently using coefficients generated from glt_label 9 as input to the population level and assumed it was the condition effect when the modulatory variable is adjusted to its center value. Is this correct?
-- At sub-brick #50 'hour_targets#0_Coef' datum type is float: -10.4489 to 12.8519
-- At sub-brick #51 'hour_targets#0_Tstat' datum type is float: -7.96445 to 10.5607
   statcode = fitt; statpar = 1374
-- At sub-brick #52 'hour_targets#1_Coef' datum type is float: -14.9663 to 11.7733
-- At sub-brick #53 'hour_targets#1_Tstat' datum type is float: -11.5405 to 14.3313
   statcode = fitt; statpar = 1374
-- At sub-brick #54 'hour_targets#2_Coef' datum type is float: -11.499 to 13.5667
-- At sub-brick #55 'hour_targets#2_Tstat' datum type is float: -9.02144 to 24.0982
   statcode = fitt; statpar = 1374
-- At sub-brick #56 'hour_targets#3_Coef' datum type is float: -17.0224 to 10.1095
-- At sub-brick #57 'hour_targets#3_Tstat' datum type is float: -10.5102 to 26.6022
   statcode = fitt; statpar = 1374
-- At sub-brick #58 'hour_targets#4_Coef' datum type is float: -21.7766 to 13.1005
-- At sub-brick #59 'hour_targets#4_Tstat' datum type is float: -10.0235 to 18.7194
   statcode = fitt; statpar = 1374
-- At sub-brick #60 'hour_targets#5_Coef' datum type is float: -14.5087 to 12.9383
-- At sub-brick #61 'hour_targets#5_Tstat' datum type is float: -7.91893 to 9.23477
   statcode = fitt; statpar = 1374
-- At sub-brick #62 'hour_targets#6_Coef' datum type is float: -8.64132 to 9.26615
-- At sub-brick #63 'hour_targets#6_Tstat' datum type is float: -5.35049 to 5.07596
   statcode = fitt; statpar = 1374
-- At sub-brick #64 'hour_targets#7_Coef' datum type is float: -6.8888 to 7.93592
-- At sub-brick #65 'hour_targets#7_Tstat' datum type is float: -4.55072 to 4.17768
   statcode = fitt; statpar = 1374
-- At sub-brick #66 'hour_targets#8_Coef' datum type is float: -0.026589 to 0.0286437
-- At sub-brick #67 'hour_targets#8_Tstat' datum type is float: -4.11187 to 5.41255
   statcode = fitt; statpar = 1374
-- At sub-brick #68 'hour_targets#9_Coef' datum type is float: -0.0281863 to 0.0315514
-- At sub-brick #69 'hour_targets#9_Tstat' datum type is float: -3.79148 to 4.3071
   statcode = fitt; statpar = 1374
-- At sub-brick #70 'hour_targets#10_Coef' datum type is float: -0.0288239 to 0.0401153
-- At sub-brick #71 'hour_targets#10_Tstat' datum type is float: -4.09525 to 4.27468
   statcode = fitt; statpar = 1374
-- At sub-brick #72 'hour_targets#11_Coef' datum type is float: -0.0301019 to 0.0427169
-- At sub-brick #73 'hour_targets#11_Tstat' datum type is float: -3.97246 to 5.50767
   statcode = fitt; statpar = 1374
-- At sub-brick #74 'hour_targets#12_Coef' datum type is float: -0.0275101 to 0.0282634
-- At sub-brick #75 'hour_targets#12_Tstat' datum type is float: -4.53677 to 5.98939
   statcode = fitt; statpar = 1374
-- At sub-brick #76 'hour_targets#13_Coef' datum type is float: -0.0256773 to 0.0365262
-- At sub-brick #77 'hour_targets#13_Tstat' datum type is float: -3.90988 to 4.54588
   statcode = fitt; statpar = 1374
-- At sub-brick #78 'hour_targets#14_Coef' datum type is float: -0.0301191 to 0.0580204
-- At sub-brick #79 'hour_targets#14_Tstat' datum type is float: -4.39122 to 4.33509
   statcode = fitt; statpar = 1374
-- At sub-brick #80 'hour_targets#15_Coef' datum type is float: -0.0282869 to 0.0264222
-- At sub-brick #81 'hour_targets#15_Tstat' datum type is float: -3.80869 to 4.21798
   statcode = fitt; statpar = 1374
-- At sub-brick #82 'hour_targets#16_Coef' datum type is float: -13.7903 to 10.0837
-- At sub-brick #83 'hour_targets#16_Tstat' datum type is float: -4.12466 to 4.50353
   statcode = fitt; statpar = 1374
-- At sub-brick #84 'hour_targets#17_Coef' datum type is float: -10.6915 to 10.858
-- At sub-brick #85 'hour_targets#17_Tstat' datum type is float: -4.01089 to 4.65436
   statcode = fitt; statpar = 1374
-- At sub-brick #86 'hour_targets#18_Coef' datum type is float: -15.2205 to 10.6691
-- At sub-brick #87 'hour_targets#18_Tstat' datum type is float: -4.0562 to 4.32744
   statcode = fitt; statpar = 1374
-- At sub-brick #88 'hour_targets#19_Coef' datum type is float: -11.9866 to 11.6637
-- At sub-brick #89 'hour_targets#19_Tstat' datum type is float: -3.89083 to 4.91938
   statcode = fitt; statpar = 1374
-- At sub-brick #90 'hour_targets#20_Coef' datum type is float: -10.585 to 15.5468
-- At sub-brick #91 'hour_targets#20_Tstat' datum type is float: -3.97106 to 4.45029
   statcode = fitt; statpar = 1374
-- At sub-brick #92 'hour_targets#21_Coef' datum type is float: -12.8017 to 10.3374
-- At sub-brick #93 'hour_targets#21_Tstat' datum type is float: -3.96983 to 4.52196
   statcode = fitt; statpar = 1374
-- At sub-brick #94 'hour_targets#22_Coef' datum type is float: -7.64403 to 9.91451
-- At sub-brick #95 'hour_targets#22_Tstat' datum type is float: -3.68258 to 3.85035
   statcode = fitt; statpar = 1374
-- At sub-brick #96 'hour_targets#23_Coef' datum type is float: -11.5217 to 11.9084
-- At sub-brick #97 'hour_targets#23_Tstat' datum type is float: -4.00909 to 3.8033
I am revisiting the discussion of the question above. The 3dDeconvolve command is pasted at the end of the post. For the amplitude modulation, there are two behavioral measures associated with each trial (RT and confidence). Above, there are 24 sub-bricks associated with one condition. I was wondering which sub-bricks represent the "intercept" that corresponds to the condition effect when the modulatory variables are adjusted to their center values, meaning that the condition effect has been controlled for the behavioral variables. In addition, within the 3dDeconvolve command, does the -gltsym flag automatically use the intercept sub-bricks to create the GLT? As an example, I am currently using coefficients generated from glt_label 9 as input to the population level and assumed it represented the condition effect when the modulatory variables are adjusted to their center values. Is this correct?
As a follow-up to that, we are also interested in the slope effects, i.e., the modulation by the two behavioral variables. Specifically, we would like to look at the modulation of each variable separately, so that we generate one brain map of only the RT modulation and a second brain map of only the confidence modulation. Is this possible? If so, how would we go about separating these effects?
The TENT betas should be contiguous, ordered by mean response, then modulators.
So in this case, the 8 mean-response betas for 'hour_targets' should be #0..#7, sub-bricks [50..64(2)]; then come the 8 for the first modulator (RT), and then the 8 for confidence.
The -gltsym specification, when not provided with an index list for a condition, will use the sum of all of its betas. I think the output of the 3dDeconvolve execution should indicate that, if you would like to verify. For example, the all_targets GLT is probably 0.25 times the sum of 96 betas(!), 24 for each condition.
To get just the sum of the mean TENT response betas, use '0.25*hour_targets[0..7] + ...'.
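For a single condition, that sum could be requested explicitly with a GLT along these lines (the GLT index and label are hypothetical):

# inside the 3dDeconvolve command
-gltsym 'SYM: +hour_targets[0..7]' \
-glt_label 1 hour_mean_resp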
I am not sure about the follow-up. Do you mean you would like to look at the 8 betas of each modulation term, or something like an F-stat for it? Would you like to extract each of the 8 betas into a new dataset? If so, using 3dbucket with a selector like the above, [50..64(2)], should work. That means to extract volume indices 50 through 64, with a step of 2 (i.e., 50, 52, 54, ..., 62, 64).
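A minimal sketch, assuming the output dataset is named stats.subj+tlrc (a placeholder):

# extract the 8 mean-response Coef volumes (50, 52, ..., 64) into a new dataset
3dbucket -prefix hour_mean_betas 'stats.subj+tlrc[50..64(2)]'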
Alternatively, you could run 3dDeconvolve with the -iresp option, but that would give you a TR-grid time series of only the mean response.
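That option takes the stimulus index and an output prefix, along the lines of (hypothetical prefix):

-iresp 1 hour_targets_IRF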
I believe you addressed our first question regarding how to isolate the condition effect after controlling for RT and confidence. To clarify, the sub-brick listing above shows the mean-response betas and the two modulators for one condition, hour_targets. If we are interested in the betas associated with sub-bricks 56 and 58 only, would we designate that as -gltsym 'SYM: +hour_targets[3..4]' in our 3dDeconvolve script? Would that then represent the condition effect for these two betas after controlling for RT and confidence?
For our follow-up question, we are interested in the betas associated with RT and confidence, separately, though we are only interested in specific betas for each of those modulators. For the first modulator (RT), that would be sub-bricks 70 and 72; therefore, would we designate that as -gltsym 'SYM: +hour_targets[11..12]'? Would that correctly identify those two betas associated only with RT after controlling for the mean response and confidence?
Extending this logic to the second modulator (confidence), that would be sub-bricks 88 and 90, designated as 'SYM: +hour_targets[19..20]'. Would that correctly identify those two betas associated only with confidence after controlling for the mean response and RT?
Thank you,
Catherine
Sure, using hour_targets[3..4] would mean the sum of the betas in sub-bricks 56 and 58 (assuming it is actually the sum that you want).
As a side note, it is often nice to scale contrasts so the weights sum to one, using averages rather than sums. This would not affect any statistics (t/F, say), but it might make the resulting values more interpretable. For example, consider '+0.5*hour_targets[3..4]'.
Yes, the corresponding betas for RT (TENT indices 3 and 4 within the RT set, i.e., zero-based betas #11 and #12) would be specified as hour_targets[11..12]. Note, though, that those coefficients sit in sub-bricks 72 and 74 rather than 70 and 72, since the RT betas start at #8 (sub-brick 66).
Yes, it looks like sub-bricks 88 and 90 correspond to the same pair for the confidence modulator, specified as hour_targets[19..20].
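Putting those together, the three contrasts might be specified as in the sketch below; the GLT numbers and labels are hypothetical, and the 0.5 weights follow the averaging suggestion above:

# inside the 3dDeconvolve command; labels are hypothetical
-gltsym 'SYM: +0.5*hour_targets[3..4]'   -glt_label 1 hour_cond_t3t4 \
-gltsym 'SYM: +0.5*hour_targets[11..12]' -glt_label 2 hour_RT_t3t4   \
-gltsym 'SYM: +0.5*hour_targets[19..20]' -glt_label 3 hour_conf_t3t4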
I know this is a little tedious. It gets so much messier when TENTs are used.
Some things to note:
One can verify the index list for regressors using something like 1d_tool.py:
1d_tool.py -infile X.xmat.1D -show_group_labels
For verification, this can be compared with the text output from 3dDeconvolve, such as the sample output in AFNI_data6/FT_analysis.
If you would prefer a multi-line contrast (generating an F-stat) to the single-line one (generating a t-stat on the contrast sum), use double brackets [[ ]], as in: hour_targets[[11..12]]
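For instance (hypothetical label):

# two-row GLT: tests the two RT betas jointly, yielding an F-stat
-gltsym 'SYM: hour_targets[[11..12]]' -glt_label 4 hour_RT_t3t4_F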