Forgive me for a potentially naive question. I am preprocessing multiple runs that should be scaled so they are comparable across subjects and runs. Per convention, percent signal change will do. I've run all of the steps: slice time correction, deobliquing, and motion correction. I then input this time series into 3dDeconvolve to regress out the motion parameters, as well as the CSF and WM signals.
I then submit the -errts output to 3dmerge for smoothing/blurring.
The resulting time series seems reasonable, but it is not scaled to a mean of 100 and thus not in PSC units. When I then scale it to 100, the resulting data is very messy and it seems as though something has gone wrong. (see attached)
Could the problem be the order in which I scaled and regressed out the nuisances?
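For what it's worth, here is a minimal numpy sketch (not AFNI code, just simulated data) of why that order matters: the -errts residuals from 3dDeconvolve have had the baseline regressed out, so their mean is near zero, and there is no longer a baseline to scale to 100. Scaling to PSC is conventionally done on the preprocessed EPI data before the nuisance regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated voxel time series: arbitrary scanner-unit baseline plus noise
ts = 1000 + rng.normal(0, 10, size=200)

# Scaling BEFORE regression: divide by the voxel's own mean, times 100,
# giving a series with mean ~100 whose fluctuations are in percent units
scaled = ts / ts.mean() * 100

# Stand-in for the -errts residuals: the regression removes the baseline
# (and nuisance fits), leaving a roughly zero-mean series
errts = ts - ts.mean()

print(scaled.mean())  # ~100: a meaningful baseline to scale against
print(errts.mean())   # ~0: nothing left to scale to 100
```

Dividing a near-zero-mean residual series by its own mean to force it to 100 amplifies noise enormously, which would produce exactly the kind of messy result described above.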