Working with voxel-wise weights

Greetings,

I'm becoming interested in small ROIs (hippocampal subfields, midbrain) for which I have, let's say, a probabilistic map. I'm interested in the possibility of incorporating a probabilistic map into a standard t-test type analysis, and was wondering whether there is any tool in the AFNI suite that can weight voxels according to a probabilistic map?

I believe that weighting voxels could help me avoid arbitrary thresholds and give a cleaner basis for arguing that a systematic signal change is likely happening in an anatomical ROI.

Hi-

I'm a little unsure whether you want to do an ROI-based analysis or a voxelwise one with some focal masks. I think it is the latter, but I just want to be sure.

What if you do a standard voxelwise t-test everywhere and then threshold it transparently? You could then zoom in on the regions defined by your probabilistic maps, perhaps outlining them in some way. That is essentially what was done here, for example:
https://www.sciencedirect.com/science/article/pii/S1053811923002896#fig0003

--pt

Thanks!
Yes, just using focal masks is definitely an option and might be the way forward when dealing with small nodes...

In terms of an ROI analysis, what would you suggest?

ROI-based analysis would mean averaging signals within each of your ROIs of interest and then, potentially, doing t-tests between those. Alternatively, there is the RBA program, also used in that same paper, in "3.4. Regionwise (ROI-based) analysis" here:
https://www.sciencedirect.com/science/article/pii/S1053811923002896#sec0003
The benefit of RBA is its data pooling, which adjusts the estimated measures as part of the analysis. The trickier parts are that it needs ROI maps you feel quite good about (though you mention you do have some specific ones), and that, depending on the number of subjects and ROIs, it can be computationally intensive (though if you have a few focal ROIs, it shouldn't be burdensome).
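To make the "average within each ROI, then t-test" idea concrete, here is a minimal pure-Python sketch of a paired t-statistic computed on per-subject ROI means. All numbers are hypothetical, and this is not AFNI code, just an illustration of the arithmetic:

```python
import math

# Hypothetical ROI mean per subject, under two conditions A and B.
roi_means_A = [1.2, 0.8, 1.5, 1.1, 0.9]
roi_means_B = [0.9, 1.0, 1.1, 0.8, 0.7]

# Paired design: work on the per-subject differences.
diffs = [a - b for a, b in zip(roi_means_A, roi_means_B)]
n = len(diffs)
mean_d = sum(diffs) / n
# Sample variance of the differences (n - 1 in the denominator).
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
# Paired t-statistic: mean difference over its standard error.
t_stat = mean_d / math.sqrt(var_d / n)
print(t_stat)
```

In practice one would extract the per-subject ROI means from the fitted data and use a standard statistics package for the test; the sketch only shows what quantity is being tested.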

--pt

Right - I guess my question would be: could it be argued that voxels within my ROI should be reweighted in line with my probabilistic map before the averaging takes place? Has anyone ever done that, or could it be justified?

The RBA approach seems very reasonable! I will try it out on some dummy data, but if I understand correctly the input is again a table of mean activations, which leads back to my previous question about the potential and pitfalls of weighting voxels. A related question from the RBA docs that would be relevant for me:

For within-subject variables, try to formulate the data as a contrast
between two factor levels or as a linear combination of multiple levels.

For a two-level factor, this just means inserting the difference of the two values, right?

I also realize that with the probabilistic maps one arrives at when doing "subfield segmentation", there is an additional question of overlap between the probabilities, which would somehow need to be accounted for in one's analysis. From the little I know of Bayesian analysis, this should be doable? The aim would then be to arrive at a probability not just for a model but also for the region the signal is most likely originating from. I realize that RBA was designed with fixed ROIs in mind, but I would appreciate some advice on whether this would be feasible in "brms" more broadly?

When performing ROI-based analyses, I guess you could use your ROI-definition probabilities to weight the signals. In general, I think people often just threshold the probabilities to define firm ROI boundaries, and then average within those. In most cases the results should be similar, and increasingly so the sharper the boundaries are. If the boundaries are long and tapering, well, life is difficult, because the ROIs aren't strongly localized in space. In such cases, a voxelwise analysis followed by additional "highlighting" with the ROI probability would make sense.
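The contrast between the two averaging schemes can be sketched in a few lines of pure Python (hypothetical voxel values, not AFNI code): the weighted mean uses the ROI probability at each voxel as its weight, while the thresholded mean keeps only voxels above a cutoff and averages them equally.

```python
def weighted_roi_mean(signal, prob):
    """Average signal using the ROI probabilities as weights."""
    return sum(p * s for p, s in zip(prob, signal)) / sum(prob)

def thresholded_roi_mean(signal, prob, thr=0.5):
    """Average signal over voxels whose ROI probability exceeds thr."""
    kept = [s for p, s in zip(prob, signal) if p > thr]
    return sum(kept) / len(kept)

# Hypothetical per-voxel signal and ROI probability.
signal = [1.0, 2.0, 3.0, 4.0]
prob   = [0.9, 0.8, 0.3, 0.1]

print(weighted_roi_mean(signal, prob))     # low-probability voxels still contribute a little
print(thresholded_roi_mean(signal, prob))  # only the first two voxels survive the 0.5 cutoff
```

With sharp boundaries (probabilities near 0 or 1), the two estimates converge; with long, tapering boundaries, the weighted version down-weights uncertain voxels rather than making an all-or-nothing call.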

Re. the RBA details, I will leave those aspects to @Gang .

--pt


For a two-level factor, does this simply involve inserting the difference of the two values?

It's not mandatory, but it is computationally more efficient to operate directly with the difference.

I understand that RBA was primarily designed with a fixed ROI in mind. However, I would appreciate some guidance on whether this approach could be applied more broadly using "brms"?

If the variability in ROI-level estimates can be represented using standard error, it can indeed be integrated into the model.

Gang
