I would suspect that removing the trend would actually be useful: you are looking to check for outliers before any processing, and there are baseline fluctuations (low/slow-moving trends) in the data, so getting rid of those seems necessary before looking for outliers. Also note that with 3dToutcount, you can choose whether or not to remove trends by setting the “-polort …” option. The default 3dToutcount command that afni_proc.py creates to count outliers within the brain mask (followed by the 1deval command to turn the counts into a censoring list wherever the outlier fraction exceeds 5%) is:
3dToutcount -automask -fraction -polort 3 -legendre \
DSET_EPI > FILE_TOUT.1D
1deval -a FILE_TOUT.1D \
-expr "1-step(a-0.05)" > FILE_CENSOR.1D
To me, it would probably make sense to leave in the “-polort …” option, but it could be removed. I am not sure whether you plan to binarize the output, which is what the 1deval command does here: any time point with an outlier fraction greater than 5% becomes a time point to censor.
Because outlier checking is inherently tied to the scale of “typical” fluctuations in the time series, having a quantitative measure of those fluctuations would also be useful. For example, afni_proc.py has the “scale” block, whereby each voxel’s time series is scaled by its own mean (baseline), so that fluctuations are in units of “BOLD % signal change”. You could borrow that idea up front for your unprocessed data as well. Then you would see what the typical fluctuations in your data are in terms of BOLD % signal change. So, having a couple of outliers while the typical fluctuations are minuscule would be quite different from having no outliers but a 50% BOLD signal change on average, say. The code for that looks like this (with a mask, optionally):
# ================================= scale ==================================
# scale each voxel time series to have a mean of 100
# (be sure no negatives creep in)
# (subject to a range of [0,200])
3dTstat -mean -prefix DSET_MEAN DSET_EPI
3dcalc -a DSET_EPI -b DSET_MEAN \
-c DSET_MASK \
-expr 'c * min(200, a/b*100)*step(a)*step(b)' \
-prefix DSET_EPI_SCALED
And before estimating local standard deviation or variance, you might also want to detrend DSET_EPI_SCALED.
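For example, a light detrending pass could be done with 3dTproject (a sketch; the output name DSET_EPI_SCALED_DT is just a placeholder here, and the polynomial order should be chosen to suit your run length, matching the 3dToutcount polort above):

```shell
# remove slow baseline drift from the scaled dataset, so that
# subsequent stdev/variance estimates reflect fluctuations only
3dTproject -input DSET_EPI_SCALED \
           -polort 3 \
           -prefix DSET_EPI_SCALED_DT
```

Note that 3dTproject also demeans the data, so the output fluctuates around zero rather than around 100.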
3dReHo is a program that calculates Kendall’s Coefficient of Concordance (KCC), which has been dubbed “regional homogeneity” (ReHo) in the FMRI world. Conceptually, it states how similar a set of time series are, based on their relative fluctuations, where the set can contain more than two series (typically it is a local neighborhood, but it can also be an ROI). That might be useful for this question, though there will be degeneracies of interpretation for “high” or “low” values. But it would add another dimension to your investigation.
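A basic 3dReHo call could look something like the following (a sketch; the output prefix is a placeholder, and the neighborhood size of 27 voxels is just one of the available choices):

```shell
# compute ReHo (KCC) in a local neighborhood of 27 voxels
# (facewise + edgewise + cornerwise neighbors), within a mask
3dReHo -inset DSET_EPI \
       -mask DSET_MASK \
       -nneigh 27 \
       -prefix DSET_REHO
```

You could run this on the unprocessed data and again after detrending/scaling, and compare, keeping in mind the interpretational degeneracies mentioned above.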