Hi
I’m using uber_subject.py to preprocess my resting-state data.
There is an option in the expected options section called ‘motion censor limit’, which is 0.2 by default. I can’t fully understand this option. If I’m not wrong, 3dvolreg is used for head motion correction. So if head motion is handled by 3dvolreg, what is the purpose of the motion censor limit?
And my other problem is that when ‘motion censor limit’ is set to 0.2, I lose almost all (above 90%) of my subjects. When I set it to 0.3 it gets better, but lots of subjects are still discarded. For my data, 0.4 seems to work fine, but I don’t know if 0.4 is a good limit in general.
My second question is: what if I set the ‘motion censor limit’ to a high value in order to keep all my TRs? What would the consequences be?
3dvolreg estimates the amount of movement. But since image registration is not perfect, it’s not possible to take an MR image (3D volume) with a large amount of movement and make it look the same as if the subject had never moved.
So afni_proc.py will cast out (censor) volumes (time points) when there was too much movement relative to the previous volume.
By default, we have set this amount of differential movement to 0.3 mm (per TR). If you are losing so much data that many subjects are useless, you are in trouble. You can try the following, but these probably won’t help you too much:
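To make the mechanism concrete, here is a rough sketch (in Python, with made-up motion numbers) of how per-TR censoring works: take the frame-to-frame differences of the six motion parameters from 3dvolreg, compute their Euclidean norm, and flag any TR whose norm exceeds the limit. This approximates what afni_proc.py's -regress_censor_motion does via 1d_tool.py; the real pipeline has extra details (e.g. it can also censor the neighboring TR).

```python
import numpy as np

# Hypothetical motion parameters: one row per TR, six columns
# (3 rotations, 3 translations), as 3dvolreg writes them.
motion = np.array([
    [0.00, 0.01, 0.00, 0.00, 0.02, 0.01],
    [0.05, 0.02, 0.01, 0.10, 0.30, 0.05],
    [0.06, 0.02, 0.01, 0.12, 0.31, 0.06],
    [0.50, 0.10, 0.05, 0.40, 0.90, 0.20],
])

def censor_fraction(motion, limit):
    """Fraction of TRs whose differential motion exceeds `limit`.

    Uses the Euclidean norm of the frame-to-frame differences of the
    motion parameters (a simplification of AFNI's enorm censoring).
    """
    deriv = np.diff(motion, axis=0)            # differential motion per TR
    enorm = np.sqrt((deriv ** 2).sum(axis=1))  # Euclidean norm per TR
    censored = enorm > limit                   # TRs over the limit
    return censored.sum() / len(motion)

for limit in (0.2, 0.3, 0.4):
    print(limit, censor_fraction(motion, limit))
```

Running this on real 3dvolreg output at a few candidate limits is a quick way to see how much data each threshold would cost you before committing to one.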
- raise the movement threshold for censoring to 0.4 or even 0.5
- add the option -regress_apply_mot_types demean deriv to the afni_proc.py command, to include the derivatives of the motion parameters in the regression model, which might reduce the motion artifacts
- add the option -regress_anaticor_fast to the afni_proc.py command, to use tissue-based regressors that might also be sensitive to motion
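Put together, the suggestions above would look something like this in an afni_proc.py call; the subject ID is a placeholder and the ... stands for the rest of your existing command (input datasets, blocks, etc.):

```shell
# Sketch only: combine a raised censor limit with motion-derivative
# regressors and fast ANATICOR. Keep the rest of your command as-is.
afni_proc.py \
    -subj_id sub001 \
    ... \
    -regress_censor_motion 0.4 \
    -regress_apply_mot_types demean deriv \
    -regress_anaticor_fast
```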
But, if you have so much motion, it will be hard to get good results.