I know this is an old and frequently discussed topic, but:
I have 3 naming tasks presented in different sensory modalities (audio prompt, visual prompt, description). I am interested in which regions are most active across all three tasks. Further, for a given ROI, I would like to know which hemisphere shows more activity across all three tasks, a la a laterality index (LI). All three tasks have been run through 3dDeconvolve separately.
I was wondering if the following approach sounds sane:
I was thinking of following the old conjunction-analysis approach: simply counting up, within the ROI, the voxels that are active across all three tasks at a given threshold, separately for each hemisphere. To make this more robust, I could iterate across several thousand thresholds and take a trimmed average of the voxel counts, a la bootstrapping. The final result would be a "bootstrapped" LI (in the usual sense of LI = (L - R) / (L + R) over the left- and right-hemisphere voxel counts) telling me which hemisphere was more active across all three tasks for that ROI.
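To make that concrete, here is a rough sketch of what I have in mind (Python/numpy, not tested). The filenames, ROI masks, and threshold range are just placeholders, and I'm assuming the per-task t-stat sub-bricks have been exported to NIfTI (e.g. with 3dAFNItoNIFTI) on a common grid. Here I compute the LI at each threshold and take a trimmed mean of those values, which I think amounts to the same idea as averaging the counts:

import numpy as np
import nibabel as nib
from scipy.stats import trim_mean

# per-task t-stat maps, all on the same grid (placeholder filenames)
tmaps = [nib.load(f).get_fdata() for f in
         ("naming_audio_tstat.nii.gz",
          "naming_visual_tstat.nii.gz",
          "naming_description_tstat.nii.gz")]

# left/right halves of the ROI (placeholder filenames)
lh = nib.load("roi_left.nii.gz").get_fdata() > 0
rh = nib.load("roi_right.nii.gz").get_fdata() > 0

def li_at_threshold(thr):
    # conjunction = voxels above threshold in ALL three tasks
    conj = np.logical_and.reduce([t > thr for t in tmaps])
    L = np.count_nonzero(conj & lh)
    R = np.count_nonzero(conj & rh)
    return (L - R) / (L + R) if (L + R) > 0 else np.nan

# sweep a dense grid of thresholds, then take a trimmed mean of the LIs
thresholds = np.linspace(2.0, 8.0, 2000)
lis = np.array([li_at_threshold(t) for t in thresholds])
li_robust = trim_mean(lis[~np.isnan(lis)], proportiontocut=0.1)
print("trimmed-mean LI across thresholds: %.3f" % li_robust)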
Does this sound like a reasonable approach to my research question? Or would it be better to run all three tasks through a single 3dDeconvolve and explicitly model a conjunction effect? If so, how could I calculate a robust laterality statistic for that conjunction effect?
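For concreteness, when I say "conjunction effect" I'm imagining something along the lines of a minimum-statistic conjunction (the smallest t-value across the three tasks at each voxel), though I don't know whether that is the right way to set it up, or whether it belongs inside 3dDeconvolve at all. Reusing the arrays from the snippet above, that would look roughly like:

# minimum-statistic conjunction: smallest t across the three tasks
tconj = np.minimum.reduce(tmaps)
conj_mask = tconj > 3.0                  # placeholder threshold
L = np.count_nonzero(conj_mask & lh)
R = np.count_nonzero(conj_mask & rh)
li_conj = (L - R) / (L + R) if (L + R) > 0 else float("nan")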
Any expertise would be greatly appreciated!