I am running 3dLMEr on my data and I am interested in whether "symptom improvement" is significant in my fMRI data. We have 2 symptom scores per individual: one from baseline and one from endpoint. Participants completed a task with 2 within-subject conditions (PrevTrial and CurrTrial, each of which could be either incongruent or congruent; a total of 4 sub-briks per participant). From some digging, it seems the most appropriate way to include a change in symptom scores in the model is to use 2 variables, one for the baseline score and one for the endpoint score, rather than calculating the difference score (i.e., endpoint − baseline) and entering that as a single variable. As I understand it, using a difference score assumes that baseline and endpoint have equal and opposite-signed coefficients, which we cannot assume. The baseline and endpoint scores in my model are the variables CAPS_wk0 and CAPS_wk12; we also have between-subjects variables for Motion and Sex as covariates of no interest.
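The equal-and-opposite constraint implicit in a difference score can be seen in a quick simulation (a NumPy sketch with made-up scores, not the actual CAPS data): fitting baseline and endpoint as separate predictors recovers two distinct slopes, while the difference-score model forces a single coefficient to serve as both −β_baseline and β_endpoint, and therefore misestimates both when they are not equal and opposite.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
wk0 = rng.normal(50, 10, n)   # hypothetical baseline scores
wk12 = rng.normal(40, 10, n)  # hypothetical endpoint scores

# True effects deliberately NOT equal-and-opposite:
y = 1.0 + 0.8 * wk0 - 0.2 * wk12 + rng.normal(0, 1, n)

# Model A: baseline and endpoint as separate predictors.
Xa = np.column_stack([np.ones(n), wk0, wk12])
beta_a, *_ = np.linalg.lstsq(Xa, y, rcond=None)

# Model B: single difference score (endpoint - baseline).
# This is Model A with the constraint beta_wk12 = -beta_wk0 imposed.
Xb = np.column_stack([np.ones(n), wk12 - wk0])
beta_b, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(beta_a)  # slopes come out near 0.8 and -0.2
print(beta_b)  # one slope, a compromise between -(-0.8) and -0.2
```

With these simulated effects the two-predictor model recovers both slopes, while the difference-score coefficient lands roughly halfway between the two constrained values, matching neither true effect.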
Is there a way to look at whether the change in symptom scores is significant in my model? I tried to do this by setting CAPS_wk0 as -1 and CAPS_wk12 as +1 in a post-hoc test (see '-gltCode CAPS_change'). I was wondering if this was the appropriate way to code this contrast in my model? Or am I off base? When I looked at the results of the contrast, I was quite surprised as there were a lot of clusters that came up as significant (like 35 or something). When I look at CAPS_wk0 or CAPS_wk12 alone, there is only one significant cluster for each (both for the F-test and the -gltCode t-test that I specified in the first two -gltCode lines), so I suspect something is going awry here.
Did each participant undergo scanning at two distinct time points (PrevTrial and CurrTrial), with each time point corresponding to two task conditions (congruent and incongruent)? Does Motion vary across conditions and times? Additionally, why are the task conditions not considered in your model specification?
Your 3dLMEr script has a couple of issues. First, the model is not properly specified. Second, within the conventional framework there isn’t an effective way to directly compare the slopes of two quantitative variables, so the specification ‘CAPS_wk0 : -1 CAPS_wk12 : 1’ may not be interpreted appropriately by 3dLMEr.
We only have one scan per participant. During the scan we had them do what is essentially a Stroop task (with emotional content instead — see: Resolving emotional conflict: a role for the rostral anterior cingulate cortex in modulating activity in the amygdala - PubMed). We had 4 regressors (task conditions) in our first-level model: congruent trials immediately following a congruent trial; congruent trials immediately following an incongruent trial; incongruent trials immediately following a congruent trial; and incongruent trials immediately following an incongruent trial. Here are a few lines of the datatable in case it helps:
“Motion” represents the number of volumes that were censored per condition, so yes, it does vary.
What about the model is specified wrong? I tried to base it on Example 4 on the 3dLMEr documentation page. I thought it would be the closest example to our situation, as we have 2 within-subject factors (PrevTrial, CurrTrial), one between-subjects factor (Sex), and a few quantitative variables (CAPS_wk0, CAPS_wk12, Motion).
One way to assess comparisons like the specification ‘CAPS_wk0 : -1 CAPS_wk12 : 1’ is through the Bayesian modeling framework (e.g., at the region level).
Thanks Gang! I'll give that model a try and see what happens.
On a related note, I had a question about how to select the best random effects to go into the model. To keep things as consistent as possible, when I was analyzing the behavioural data from the task (RT and accuracy data), I used lme4 in R, and similar models to the ones I wrote for 3dLMEr (just without the Motion variable). For example, for RT the model was RT ~ PrevTrial*CurrTrial + Sex + (1|Subj) + (1|Subj:PrevTrial) + (1|Subj:CurrTrial).
When I ran that model, I got the warning: boundary (singular) fit: see help('isSingular'), which seems to come up when the random effects terms aren't really adding anything to the model. Once I took away the (1|Subj:PrevTrial) and (1|Subj:CurrTrial) terms, I stopped getting the warning (final formula: RT ~ PrevTrial*CurrTrial + Sex + (1|Subj)).
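For intuition about what the singular-fit warning is flagging: when a by-subject variance component is truly near zero, its REML estimate lands on the boundary of the parameter space, and the term contributes nothing. The sketch below (pure NumPy, not lme4, with made-up RT numbers) uses classical expected-mean-squares estimators in a balanced design simulated with subject intercept variability but no subject-by-condition variability; the interaction component comes out near zero (possibly negative), which is exactly the situation where dropping a term like (1|Subj:CurrTrial) is justified.

```python
import numpy as np

rng = np.random.default_rng(1)
S, C, r = 30, 2, 20           # subjects, conditions, trials per cell
sd_subj, sd_err = 50.0, 30.0  # true SDs: subject intercepts, trial noise

# Simulate RTs with a subject random intercept but NO subject-by-condition
# effect (the kind of term lme4 would flag as singular).
u = rng.normal(0, sd_subj, S)                         # per-subject intercepts
rt = 500 + u[:, None, None] + rng.normal(0, sd_err, (S, C, r))

cell = rt.mean(axis=2)     # per-subject, per-condition means
subj = cell.mean(axis=1)   # per-subject means
cond = cell.mean(axis=0)   # per-condition means
grand = cell.mean()

# Expected-mean-squares estimators for a balanced two-way mixed design:
ms_err = rt.var(axis=2, ddof=1).mean()                # E = sd_err^2
ss_int = r * ((cell - subj[:, None] - cond[None, :] + grand) ** 2).sum()
ms_int = ss_int / ((S - 1) * (C - 1))                 # E = sd_err^2 + r*var_int
ss_subj = C * r * ((subj - grand) ** 2).sum()
ms_subj = ss_subj / (S - 1)                           # E = sd_err^2 + r*var_int + C*r*var_subj

var_subj_hat = (ms_subj - ms_err) / (C * r)  # clearly positive here
var_int_hat = (ms_int - ms_err) / r          # near zero (may even be negative)

print(var_subj_hat, var_int_hat)
```

A moment-based estimate of a variance component can go negative; REML instead pins it at zero and reports the boundary fit, which is the warning you saw.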
I was wondering if there was an equivalent warning in 3dLMEr? Or a way to be able to tell which random effects you should include in your model?
One way to assess comparisons like the specification ‘CAPS_wk0 : -1 CAPS_wk12 : 1’ is through the Bayesian modeling framework (e.g., at the region level).
I'm interested in looking into this -- would this be with the program RBA (AFNI program: RBA)?
I tried that model and it did not work. I changed nothing about my script except for putting Motion into the random effects. Same datatable, so no mistakes there, and the variable name was spelled and capitalized correctly. This was the error I got:
~~~~~~~~~~~~~~~~~~~ Model test failed ~~~~~~~~~~~~~~~~~~~
Possible reasons:
0) Make sure that R package lmerTest has been installed. See the 3dLME
help documentation for more details.
1) Inappropriate model specification with options -model, or -qVars.
2) Incorrect specification for random effects with -ranEff.
3) Mistakes in data table. Check the data structure shown above, and verify
whether there are any inconsistencies.
4) Inconsistent variable names, which are case sensitive. For example, a factor
named Scanner in the model specification that is then listed as scanner in the table header
would cause grief for 3dLMEr.
** Error:
The failure of the model I suggested is likely due to the small number of data points. Given this situation, it would be fine sticking with your original model.
Comparing two slopes would indeed involve Bayesian modeling. The AFNI program RBA would be an option, but its interface is not designed for this purpose. Feel free to reach out via email if you’d like to discuss this topic in more detail.
Gang Chen