I'm analyzing task-related fMRI data from two similar experiments that share the same baseline task. The data were acquired with the same acquisition parameters on the same scanner (and from mostly the same subjects). My goal is to be able to make general comparisons about brain regions that are significant in both studies.
The main concern is that the task trial lengths differ between the two studies (6.4 s versus 12.8 s), which influences our GLM modeling. Currently we're using the AUC approach: I've selected several TENTs to include in the GLTs where I believe the peak activation occurs. Because of the differences in experimental design, I'm considering including a different number of "peak" TENTs in the GLTs for one study versus the other. I believe this would create a scaling issue if we wanted to compare the two studies, since the study modeled with more TENTs would have a greater summed AUC. Is there a way to compare the betas of two studies if their GLTs contain different numbers of TENTs, maybe by transforming to z-scores?
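To make the scaling concern concrete, here's a minimal sketch with made-up beta values (not real data), showing that a summed AUC grows with the number of TENTs included, while averaging the same betas keeps the two studies on a comparable scale:

```python
# Hypothetical per-TENT beta estimates around the response peak
# (illustrative values only, not real data).
betas_short = [0.8, 1.0, 0.9]            # 3 "peak" TENTs (6.4 s trials)
betas_long  = [0.8, 1.0, 0.9, 0.9, 0.8]  # 5 "peak" TENTs (12.8 s trials)

# Summed AUC scales with the number of TENTs included...
sum_short, sum_long = sum(betas_short), sum(betas_long)

# ...whereas the mean beta (GLT weights of 1/k rather than 1) does not.
mean_short = sum(betas_short) / len(betas_short)
mean_long  = sum(betas_long) / len(betas_long)

print(sum_short, sum_long)    # sums differ largely because of k
print(mean_short, mean_long)  # means sit on a comparable scale
```

Averaging (i.e., dividing the GLT weights by the number of TENTs) is one simple way to remove the dependence on how many TENTs each study's GLT includes.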
By "general comparisons", do you mean showing that both tasks generate similar response patterns in the brain, or identifying which task induces stronger responses in some brain regions? Regarding the trial durations of 6.4 s and 12.8 s, do you anticipate that they will result in (1) equivalent response magnitudes with different response durations, or (2) varying magnitudes and durations? The AUC approach poses the challenge of choosing a duration for the estimated BOLD response, which can be somewhat arbitrary.
Generally, I'd like to show that both tasks generate similar response patterns. My current approach would be to create an overlap map between the significant whole-brain statistical maps from both studies. Would you recommend anything more sophisticated?
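For the overlap map, I was picturing something like a Dice coefficient on the binarized significance maps. A toy sketch (the maps below are hypothetical; real ones would come from thresholded whole-brain statistical volumes):

```python
# Toy binarized significance maps (1 = voxel significant), hypothetical data.
map_a = [1, 1, 0, 1, 0, 0, 1, 0]
map_b = [1, 0, 0, 1, 1, 0, 1, 0]

# Dice coefficient: 2 * |A intersect B| / (|A| + |B|)
overlap = sum(a and b for a, b in zip(map_a, map_b))
dice = 2 * overlap / (sum(map_a) + sum(map_b))
print(f"Dice overlap: {dice:.2f}")
```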
I would anticipate that the studies would have varying magnitudes and durations, so I'm unsure whether it's possible to directly compare activation between the studies.
Typically, statistical methods are designed to evaluate differences between effects rather than similarities. If your objective is to demonstrate the similarity of the two results, I can suggest two approaches.
The first approach is intuitive rather than sophisticated, but visually compelling: present the two results side by side as a series of slices, and let the visual comparison convey your findings.
Alternatively, you could consider adopting a test-retest reliability approach and quantifying the similarity with the 3dICC program.
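For intuition about what such a reliability measure captures, here is a rough pure-Python sketch of ICC(3,1) (consistency, two-way mixed model), one common variant of the intraclass correlation; the per-subject beta values are hypothetical, and 3dICC itself works voxel-wise on whole datasets:

```python
# Hypothetical per-subject betas from the two studies: (study 1, study 2).
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (0.5, 0.4)]
n, k = len(data), len(data[0])

grand = sum(x for row in data for x in row) / (n * k)
row_means = [sum(row) / k for row in data]
col_means = [sum(row[j] for row in data) / n for j in range(k)]

# Mean squares from the two-way ANOVA decomposition
ms_rows = k * sum((rm - grand) ** 2 for rm in row_means) / (n - 1)
sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
          for i in range(n) for j in range(k))
ms_err = sse / ((n - 1) * (k - 1))

# ICC(3,1): how consistent subjects' betas are across the two studies
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
print(f"ICC(3,1) = {icc:.3f}")
```

A high ICC would indicate that subjects' response magnitudes are consistent across the two studies, which is one way to quantify "similar response patterns".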