I have data from an experiment with four conditions and eight runs, designed such that each run contained only one condition (vs. baseline, of course). Setting aside the fact that it’s good practice to include every condition in every run: is there any reason to think that scaling relative to each voxel’s mean within each run will present a problem for contrasting conditions, given that condition is somewhat confounded with run? I’m arguing no, since each condition is contrasted against its own baseline, but my argument hasn’t been very persuasive.
In the event that I’m wrong, is there any precedent for deriving the scaling factor from the baseline periods only rather than the mean of the entire run? Or are there other alternative ways of scaling the data in this situation?
is there any precedent for deriving the scaling factor from the baseline periods only rather than the mean of the entire run?
Ideally the data should be scaled by the true baseline. However, we don’t really know what the true baseline is; the “baseline” value estimated by the model (e.g., 3dDeconvolve) is not necessarily the true baseline: how do we even define a baseline when a drift effect is present? So, I think that scaling by the voxel-wise mean is a reasonable approximation in real practice.
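To make the voxel-wise mean scaling concrete, here is a minimal sketch in Python/NumPy on toy data (the array shapes and values are hypothetical, not from the poster’s dataset): each voxel’s time series within a run is divided by that voxel’s own run mean and multiplied by 100, so the values become percent of the voxel’s mean signal.

```python
import numpy as np

# Toy data: 5 voxels x 200 time points within one run (hypothetical values)
rng = np.random.default_rng(0)
data = 1000.0 + 50.0 * rng.standard_normal((5, 200))

# Voxel-wise mean scaling: divide each voxel's time series by its own
# mean over the run, then multiply by 100, so the scaled series is in
# units of percent of that voxel's mean signal.
voxel_means = data.mean(axis=1, keepdims=True)   # one mean per voxel
scaled = 100.0 * data / voxel_means

# After scaling, every voxel's time series has a mean of exactly 100
print(scaled.mean(axis=1))
```

Because each voxel is scaled by its own run mean, subsequent regression coefficients can be read as approximate percent signal change relative to that mean, which is why condition-vs-baseline contrasts remain interpretable even when each run contains a single condition.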
Or are there other alternative ways of scaling the data in this situation?
Two popular alternative approaches exist in neuroimaging: scaling by the global mean (averaging across the brain at each time point) and scaling by the grand mean (averaging across the brain and across all time points). The latter seems to be more widely adopted in FMRI.
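The difference between the two approaches can be sketched as follows (toy data again; shapes and values are illustrative assumptions): global-mean scaling applies a separate factor at each time point, while grand-mean scaling applies a single factor to the whole run and so leaves the temporal dynamics untouched.

```python
import numpy as np

# Toy data: 100 voxels x 50 time points (hypothetical values)
rng = np.random.default_rng(1)
data = 800.0 + 30.0 * rng.standard_normal((100, 50))

# Global-mean scaling: at each time point, divide by the spatial mean
# over the brain at that time point (one factor per time point).
global_means = data.mean(axis=0, keepdims=True)   # shape (1, n_timepoints)
global_scaled = 100.0 * data / global_means

# Grand-mean scaling: a single factor from the mean over all voxels and
# all time points, applied uniformly to the entire run.
grand_mean = data.mean()                          # one scalar
grand_scaled = 100.0 * data / grand_mean
```

Note that global-mean scaling forces the spatial mean at every time point to the same value, which can remove (or distort) genuine whole-brain signal fluctuations; grand-mean scaling only changes the overall units, which is one reason it is the more common choice.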