3dMEMA paired test

Hello,

I am wondering how to implement a paired t-test in 3dMEMA.

I have a dataset in which participants are exposed to the same stimulus during two different tasks (taskA and taskB). Ten participants perform taskA in run 1 and taskB in run 2. During each run they are exposed to six 30-second blocks of the stimulus.

For the first-level analysis I have modeled the runs separately (as the baseline conditions in each run are different) using 3dDeconvolve and 3dREMLfit. I would now like to use 3dMEMA to examine the effects of the stimulus in taskA vs. taskB.

Would it be correct to compute the differences between the betas for taskA and taskB for each participant from the stats.reml output and use these as input for 3dMEMA? If this is the case, how would I compute the within-subject variability for the input?

Thanks and Cheers,
Gerome

Gerome,

It might be possible to mechanically perform such an analysis with separate effects and their t-statistics as input (e.g., some software does allow this kind of analysis), but it would be incorrect because the correlation between the two tasks would not be properly handled.

To be able to use 3dMEMA, you would have to provide both the contrast between taskA and taskB and the associated t-statistic. This means that you have to run 3dDeconvolve and 3dREMLfit on the two runs of data concatenated.
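A minimal sketch of that pipeline follows; all subject IDs, labels, prefixes, and file names are hypothetical placeholders, not taken from this thread (check your actual sub-brick labels with `3dinfo -verb`):

```shell
# Hedged sketch: labels (taskA/taskB), prefixes, and file names are
# placeholders, not taken from this thread.

# 1) In the concatenated single-subject model, request the A-vs-B contrast:
3dDeconvolve ... \
    -num_glt 1 \
    -gltsym 'SYM: taskA -taskB' -glt_label 1 AvsB \
    -x1D X.xmat.1D -bucket stats.$subj

3dREMLfit -matrix X.xmat.1D -input all_runs.$subj+tlrc \
    -tout -Rbuck stats.${subj}_REML

# 2) Give 3dMEMA each subject's contrast beta and its t-statistic:
3dMEMA -prefix MEMA_AvsB \
    -set AvsB \
        s01 stats.s01_REML+tlrc'[AvsB_GLT#0_Coef]' stats.s01_REML+tlrc'[AvsB_GLT#0_Tstat]' \
        s02 stats.s02_REML+tlrc'[AvsB_GLT#0_Coef]' stats.s02_REML+tlrc'[AvsB_GLT#0_Tstat]' \
    -missing_data 0
```

With more subjects, add one subjectID/beta/t-stat triple per subject under `-set`.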

Thanks for the response.

Would concatenating the two runs of data be correct if the baseline conditions in each run were different?

I actually tried running a model in which the two runs are concatenated; 3dDeconvolve in this case outputs collinearity warnings (because run 1 is highly correlated with the regressor for task A and run 2 is highly correlated with the regressor for task B). I am unsure how this would affect the beta estimates of task activity.

Cheers,
Gerome

Would concatenating the two runs of data be correct if the baseline conditions in each run were different?

If specified properly, the two runs would have their own baselines (and slow drifting effects).
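For instance (a hypothetical sketch; dataset and timing-file names are placeholders), giving 3dDeconvolve the runs as separate -input datasets makes it build per-run baseline regressors, which is where labels like Run#1Pol#0 and Run#2Pol#0 come from:

```shell
# Sketch with hypothetical names: with multiple -input datasets,
# 3dDeconvolve detects the run breaks and creates a separate set of
# polynomial baseline (drift) regressors for each run.
3dDeconvolve \
    -input pb04.$subj.r01+tlrc pb04.$subj.r02+tlrc \
    -polort 3 \
    -local_times \
    -num_stimts 2 \
    -stim_times_AM1 1 taskA_times.txt 'dmUBLOCK(0)' -stim_label 1 taskA \
    -stim_times_AM1 2 taskB_times.txt 'dmUBLOCK(0)' -stim_label 2 taskB \
    -x1D X.xmat.1D -bucket stats.$subj
```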

I actually tried running a model in which the two runs are concatenated; 3dDeconvolve in this case outputs collinearity warnings (because run 1 is highly correlated with the regressor for task A and run 2 is highly correlated with the regressor for task B).

Are you saying that you didn’t have collinearity when analyzing each run separately but you did when the two runs were concatenated? If so, something was not properly specified in the latter situation.

This seems okay. If the tasks are run-specific, there will typically be modest correlations because of that, but the betas should come out the same as when the runs are analyzed separately (assuming motion and other regressors are also applied per run).

  • rick

Thank you both for your responses. I was getting the following warnings when concatenating the runs, where the task in run 1 is “step” and the task in run 2 is “stand”:

++ Wrote matrix values to file X.nocensor.xmat.1D
++ ----- Signal+Baseline matrix condition [X] (1278x155): 3.58672 ++ VERY GOOD ++
*+ WARNING: !! in Signal+Baseline matrix:
 * Largest singular value=1.99236
 * 2 singular values are less than cutoff=1.99236e-07
 * Implies strong collinearity in the matrix columns!
++ Signal+Baseline matrix singular values:
 0 2.99845e-08 0.154872 0.252929 0.267126
 0.269645 0.283077 0.29791 0.316453 0.330186
 0.341668 0.342179 0.349798 0.36136 0.376046
 0.384083 0.390261 0.392637 0.398798 0.406672
 0.416771 0.436802 0.447039 0.450877 0.460095
 0.468965 0.47245 0.482391 0.488204 0.488812
 0.502547 0.524036 0.532249 0.549107 0.551849
 0.554817 0.563324 0.582939 0.59116 0.591652
 0.602114 0.602379 0.605666 0.60847 0.61215
 0.614238 0.615671 0.624534 0.656948 0.663289
 0.669554 0.672267 0.684408 0.687056 0.698804
 0.718081 0.727385 0.734869 0.743858 0.760883
 0.761613 0.764951 0.767428 0.770707 0.77948
 0.782271 0.788038 0.817865 0.835145 0.843293
 0.844543 0.854562 0.857737 0.858406 0.869383
 0.879436 0.893794 0.897641 0.899713 0.916866
 0.928522 0.945479 0.955541 0.958643 0.973911
 0.977395 0.977843 0.991498 1.004 1.00762
 1.00825 1.01852 1.01985 1.03094 1.03686
 1.03754 1.04878 1.05049 1.05411 1.06561
 1.06978 1.08776 1.10565 1.10678 1.11304
 1.11314 1.11315 1.11781 1.1338 1.15188
 1.15495 1.16197 1.17808 1.18682 1.23882
 1.24677 1.24769 1.24847 1.25219 1.26929
 1.27469 1.27684 1.29279 1.2955 1.29881
 1.31165 1.33317 1.33667 1.34196 1.34841
 1.35985 1.36439 1.37349 1.38997 1.40454
 1.42434 1.44648 1.44826 1.45726 1.45848
 1.4711 1.48664 1.51152 1.52194 1.5451
 1.55 1.55583 1.57761 1.6568 1.67749
 1.70381 1.71769 1.7346 1.7999 1.99236
++ ----- Signal-only matrix condition [X] (1278x11): 1.18921 ++ VERY GOOD ++
*+ WARNING: !! in Signal-only matrix:
 * Largest singular value=1.41421
 * 2 singular values are less than cutoff=1.41421e-07
 * Implies strong collinearity in the matrix columns!
++ Signal-only matrix singular values:
 0 0 1 1 1
 1 1 1 1 1.41421
 1.41421
++ ----- Baseline-only matrix condition [X] (1278x144): 3.57446 ++ VERY GOOD ++
++ ----- stim_base-only matrix condition [X] (1278x108): 2.65822 ++ VERY GOOD ++
++ ----- polort-only matrix condition [X] (1278x36): 1.01608 ++ VERY GOOD ++
++ +++++ Matrix inverse average error = 0.000457856 ++ OK ++
++ Matrix setup time = 22.29 s

Warnings regarding Correlation Matrix: X.xmat.1D

  severity   correlation   cosine   regressor pair
  ----------------------------------------------------------------------
  high:      0.705         0.725    (16 vs. 41)  Run#1Pol#0 vs. step15_on#0
  high:      0.705         0.725    (12 vs. 40)  Run#2Pol#0 vs. stand30_on#0

Are you saying that you didn’t have collinearity when analyzing each run separately but you did when the two runs were concatenated? If so, something was not properly specified in the latter situation.

Yes, this was indeed the case. In my afni_proc.py command for the concatenated multi-task runs, this is what I used to define the model:

-regress_stim_times ${subj}/afni_folder/Bafni_task.txt \
-regress_stim_types AM1 AM1 \
-regress_stim_labels step stand \
-regress_local_times \
-regress_basis 'dmUBLOCK(0)' \
-regress_apply_mot_types demean deriv \
-regress_motion_per_run \
-regress_censor_motion 1.0 \

Let me know if anything looks off. Task A in this case is step, and task B is stand.

Cheers,
Gerome

What is the output from:

timing_tool.py -multi_timing afni_folder/Bafni_task.txt -multi_show_isi_stats

  • rick

This is what it output (although I asked about two tasks to simplify my question, my experiment actually contains 9 tasks with 9 different conditions):

** ISI error: stimuli overlap at run 1, time 46.0, overlap 30.0
** ISI error: stimuli overlap at run 1, time 104.0, overlap 33.0
** ISI error: stimuli overlap at run 1, time 167.0, overlap 36.0
** ISI error: stimuli overlap at run 1, time 239.0, overlap 39.0
** ISI error: stimuli overlap at run 1, time 310.0, overlap 33.0
** ISI error: stimuli overlap at run 1, time 383.0, overlap 39.0
** ISI error: stimuli overlap at run 2, time 46.0, overlap 30.0
** ISI error: stimuli overlap at run 2, time 104.0, overlap 33.0
** ISI error: stimuli overlap at run 2, time 167.0, overlap 36.0
** ISI error: stimuli overlap at run 2, time 239.0, overlap 39.0
** ISI error: stimuli overlap at run 2, time 310.0, overlap 33.0
** bailing...

This shows very strong overlap in the event timing. I am not sure whether you would want to post the full details, but you could mail them to me (or even the actual timing files). But consider what the event list form of that command shows:

timing_tool.py -multi_timing afni_folder/Bafni_task.txt -multi_timing_to_event_list GE:ALL -

It should make the overlap between stimuli very clear.

  • rick

Hi Gang,
What is the consequence of using 3dDeconvolve output rather than 3dREMLfit output in 3dMEMA, in terms of outcomes? Is it acceptable to use 3dDeconvolve output in such a case? What is the alternative if one only has beta images?
Thanks,
Matt

What is the consequence of using 3dDeconvolve output rather than 3dREMLfit output in 3dMEMA, in terms of outcomes? Is it acceptable to use 3dDeconvolve output in such a case?

Essentially, 3dMEMA plays the role of differentiating the subjects based on their relative reliability. So yes, it would still be helpful to use the results from 3dDeconvolve as input to 3dMEMA for that differentiation purpose.

What is the alternative if one only has beta images?

Just use the typical population-analysis programs, such as 3dttest++, 3dMVM, 3dLME, etc.
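For a paired comparison with only beta images, a 3dttest++ call could look roughly like this (subject and dataset names hypothetical; -paired matches the two sets subject by subject, in the order listed):

```shell
# Hedged sketch: paired t-test on per-subject beta images only
# (no t-statistics needed), with hypothetical dataset names and
# sub-brick labels.
3dttest++ -prefix ttest_AvsB -paired \
    -setA taskA \
        s01 stats.s01+tlrc'[taskA#0_Coef]' \
        s02 stats.s02+tlrc'[taskA#0_Coef]' \
    -setB taskB \
        s01 stats.s01+tlrc'[taskB#0_Coef]' \
        s02 stats.s02+tlrc'[taskB#0_Coef]'
```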