Thank you very much for the very clear answer.
As you suggested, we will run 3dFWHMx on a per-subject basis, save each subject's estimates, and then average across subjects each of the three ACF parameters needed to generate the autocorrelation function; a sketch of what we have in mind is just below.
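Concretely, we were planning something along these lines (the subject list and the errts.* residual file names are just placeholders for our actual per-subject error time series, and we would run this on the residuals so no extra detrending should be needed):

# per-subject ACF estimation on the residual time series
# (errts.${subj}.nii and the subject list are placeholders for our actual files)
rm -f acf_all.1D
for subj in sub01 sub02 sub03 ; do
    # with -acf, the last line printed to stdout is: a  b  c  ACF-FWHM
    # (-acf NULL should skip writing the ACF profile file; drop NULL to keep it)
    3dFWHMx -mask Bin_combined_mask_fsl.nii -acf NULL \
            -input errts.${subj}.nii | tail -n 1 >> acf_all.1D
done
# average the three ACF parameters (columns 1-3) across subjects;
# the three averages would then go into 3dClustSim -acf a b c
awk '{a+=$1; b+=$2; c+=$3} END {printf "%g %g %g\n", a/NR, b/NR, c/NR}' acf_all.1D

Does that match what you had in mind?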
We also decided that, since we are using AFNI's cluster correction, it is perhaps better to do everything in AFNI and run 3dLMEr. Any chance you would have time to tell us whether the following model we created is correct?
For context, we have a dataset of 40 healthy controls anaesthetised with propofol; while they are anaesthetised and unresponsive, we deliver sounds (an oddball paradigm). Then we wake them up and ask them whether they heard the sounds: if they reply "yes" they are labelled 'connected' participants, otherwise 'disconnected'. Participants go through this procedure twice (two sessions), if their vital parameters allow; as a result, for some subjects we collected only one session (the experiment had to be stopped due to problems with the anaesthesia). Our goal is to investigate differences in sound processing between connected and disconnected participants.
Given the design, the same subject can be connected in session 2 but disconnected in session 1, so group (connected or disconnected) varies both between and within subjects.
In summary, we have the following contrasts (variables) of interest:
- propofol concentration (varying across sessions and subjects)
- session number (one or two)
- group (connected (CC) or disconnected (DC))
- gender
- stimulus type (deviant or standard)
We thought to define the model as follows:
3dLMEr -prefix LME -jobs 8 \
-mask Bin_combined_mask_fsl.nii \
-model 'Group*Propofol_Concentration*Stimulus_type + Gender + Session + (1|Subject)' \
-SS_type 3 \
-bounds -2 2 \
-gltCode CC 'Group : 1*CC_impure' \
-gltCode DC 'Group : 1*DC_impure' \
-gltCode CC-DC 'Group : 1*CC_impure -1*DC_impure' \
-gltCode DC-CC 'Group : 1*DC_impure -1*CC_impure' \
-gltCode CC_prop 'Group : 1*CC_impure Propofol_Concentration :' \
-gltCode DC_prop 'Group : 1*DC_impure Propofol_Concentration :' \
-gltCode CC_dev 'Group : 1*CC_impure Stimulus_type : 1*Deviant' \
-gltCode DC_dev 'Group : 1*DC_impure Stimulus_type : 1*Deviant' \
-gltCode CC_std 'Group : 1*CC_impure Stimulus_type : 1*Standard' \
-gltCode DC_std 'Group : 1*DC_impure Stimulus_type : 1*Standard' \
-dataTable \
We also wanted to specify a contrast that computes the difference between standard and deviant sounds and tests how this difference itself differs between CC and DC, i.e. (CC: std - dev) - (DC: std - dev). This would allow us to see whether the difference in processing between these two types of sounds increases as a function of connectedness, but we are unsure how to specify it; our tentative attempt is just below.
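Our best guess, based on the interaction contrast examples in the 3dMVM/3dLMEr help and assuming the Stimulus_type levels in our data table are literally named Standard and Deviant (the GLT label here is just a name we made up), would be to add something like this alongside the other -gltCode lines:

-gltCode CC-DC_std-dev 'Group : 1*CC_impure -1*DC_impure Stimulus_type : 1*Standard -1*Deviant' \

Is that the right way to express (CC: std - dev) - (DC: std - dev), or does it need to be set up differently?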
If you could help us improve the current model, we would be beyond grateful.
Thanks a lot in advance for your time and help :)