Dear AFNI experts,
In my study, participants view short video clips (36 in total, average duration 32 seconds), pseudo-randomised across three runs (i.e. 12 clips per run), and are asked to rate the clips they have seen. I am currently planning the preprocessing (based on https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/main_det_2016_ChenEtal.html) and have a couple of questions I would highly appreciate your thoughts on.
Firstly, if my a priori ROIs are subcortical structures, should I omit the surface reconstruction and run FreeSurfer's recon-all with -autorecon1 and -autorecon2 instead of recon-all -all?
When specifying the “-regress_apply_mot_types demean deriv” option in afni_proc.py, does demean apply only to the motion parameters, or also to the BOLD time series itself? And do the derivatives account for scanner drift over time, so that the data are de facto detrended after preprocessing? If I use this preprocessing pipeline, can I then use the “-polort -1” option of the 3dTcorrelate program?
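To make sure I am asking the question clearly, here is a toy sketch (plain Python, invented numbers) of what I assume demean and deriv produce for a single motion parameter; this is only my reading of the option, not AFNI's actual implementation:

```python
# My assumption about -regress_apply_mot_types (toy illustration only):
# "demean" subtracts the parameter's mean over the run;
# "deriv" is the backward first difference, with the first
# time point set to 0 since it has no predecessor.

def demean(params):
    m = sum(params) / len(params)
    return [p - m for p in params]

def deriv(params):
    # backward difference; first TR padded with 0
    return [0.0] + [b - a for a, b in zip(params, params[1:])]

roll = [0.0, 0.1, 0.3, 0.2]   # one motion parameter across 4 TRs
print(demean(roll))            # mean-centred regressor (sums to ~0)
print(deriv(roll))             # temporal-derivative regressor
```

Is that per-run picture correct, and is it applied only to the six motion regressors, not to the BOLD data?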
I would like to run 3dTcorrelate on the time course corresponding to the display of the video clips only (that is, with the volumes related to the ratings removed after preprocessing). Should I demean the time course for each video clip, using the mean of that clip's own time series, before concatenating the data for all 36 clips? Note that the clips are presented in a different pseudo-random order across the three runs for each participant, so the per-clip segments have to be reordered so that the final BOLD time series reflects the same clip order across participants.
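As a concrete sketch of the demeaning-and-reordering step I have in mind (plain Python on toy numbers; the clip IDs, segment lengths, and values are all invented for illustration):

```python
# Toy sketch of my planned post-processing step:
# 1) demean each clip's segment using that clip's own mean,
# 2) reassemble the segments in a fixed canonical clip order,
#    so the concatenated series is comparable across participants.

def demean_segment(seg):
    m = sum(seg) / len(seg)
    return [v - m for v in seg]

# (clip_id, BOLD values for that clip) in this participant's viewing order
viewed = [
    (3, [10.0, 12.0, 14.0]),
    (1, [5.0, 7.0]),
    (2, [9.0, 9.0, 12.0]),
]

canonical_order = [1, 2, 3]    # identical for every participant
segments = {cid: demean_segment(vals) for cid, vals in viewed}

concatenated = []
for cid in canonical_order:
    concatenated.extend(segments[cid])

print(concatenated)
# → [-1.0, 1.0, -1.0, -1.0, 2.0, -2.0, 0.0, 2.0]
```

Would this per-clip demeaning be appropriate before 3dTcorrelate, or is there a recommended AFNI tool for this step?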
Lastly, I have been wondering whether I should also include band-pass filtering and, if so, at which stage of the pipeline it would be advisable.
Many thanks for your help and best regards,
Stef