Bayesian multilevel modeling using brms

Hi Gang Chen!

I have a specific question about the population-level analysis performed in your paper.

I have a similar within-subject design (2 Reward × 3 Emotion), and I want to perform trial-level modeling using brms.

I have performed the participant-level analysis by modifying the script provided here ([tlm/participant-level analysis at master · afni-gangc/tlm · GitHub]).

I am now trying to perform the population-level analysis and need help understanding how to do the trial-level modeling (without RT). I went through your code, but I am not familiar with R and am unable to work out the input data structure.

Can you help me understand the input data structure for the task, including how the header details need to be provided?

Also, would it be possible for you to upload the input files used in the population-level analysis of the above-mentioned paper (file name: 'RT-task-data.txt' or 'task-data.txt')?

Thanks, and regards,
Sahithyan

Sahithyan,

The paper you referred to aims to explore the extent and impact of cross-trial variability in a typical fMRI dataset. To better understand your study, could you clarify the following:

  1. Are you interested in cross-trial variability?
  2. What is your research hypothesis for the 2 × 3 experimental design?
  3. How many trials are there per condition?
  4. Are you planning to perform the population-level analysis at the voxel level or the region level?

Gang Chen

Hi Gang!

I am working with Srikanth Padmala to understand the cross-trial variability in our dataset.

To answer your questions:

  1. Yes, we feel cross-trial variability is worth examining in our dataset.
  2. We are interested in understanding the interaction in our 2 × 3 experimental design: 2 Reward levels (High, Low) × 3 Emotion levels (Positive, Neutral, Negative).
  3. We have 24 trials per condition. Is that too low? We do have data from 60 subjects, though.
  4. For now, I plan to perform the population-level analysis at the ROI level.

Thanks
Sahithyan

Sahithyan,

The header for the ROI data should look like this:

subjID  ROI  Reward  Emotion  trial  beta  SE
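
For instance, such a table can be read into R as follows (the file name below is only a placeholder; use whatever you name your file):

# read the whitespace-delimited trial-level table: one row per trial
dat <- read.table('ROI-data.txt', header = TRUE)
dat$subjID  <- factor(dat$subjID)    # subject labels
dat$ROI     <- factor(dat$ROI)       # region labels
dat$Reward  <- factor(dat$Reward)    # High / Low
dat$Emotion <- factor(dat$Emotion)   # Positive / Neutral / Negative
str(dat)                             # verify the columns: beta and SE should be numeric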

Please note that the BOLD response estimates might be substantially biased (e.g., underestimated) because the adopted HRF may not adequately accommodate the response shape in some regions.

Gang Chen

Thank you! Could you also clarify a few questions I have:

How do I get the SE values that you mentioned in the above header?

I understand I can get the beta values using the -Rbeta option.

How will I get the SE using -Rbuck?

Second question: can I perform the same 3dREMLfit analysis at the voxel level for the whole brain?

Sahithyan

Sahithyan,

How will I get the SE using -Rbuck?

You can add -tout in your 3dREMLfit script to obtain the t-statistics that are associated with the estimated regression coefficients from -Rbuck. Then the standard errors can be obtained through the following formula:
SE = \frac{\widehat{\beta}}{t}.
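
For example, once the beta and t-statistic values have been extracted into the trial-level table, the SE column can be computed in R (the column name 'tstat' below is only illustrative):

# SE = beta / t, applied row-wise to the trial-level table
dat$SE <- dat$beta / dat$tstat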

Can I perform the same 3dREMLfit analysis at the voxel level for the whole brain?

Sure. However, it would be computationally challenging if you plan to implement the Bayesian multilevel modeling approach at the voxel level.

Gang Chen

Hi @Gang!

I have a question regarding implementing brms. I am new to R and Bayesian modeling, so it would be helpful if you could help me figure out the R code.

I have compiled the ROI data in the format you mentioned in your previous message, but I still don't understand how to implement the following:

  1. How do I include the standard error in the brm function?
  2. I am interested in understanding the interactions in our 2 × 2 × 3 experimental design. Is it possible to check the three-way interaction?
  3. After that, can I run the 2 × 3 analysis separately for each level of the two-level contingency factor?
  4. What does the warning "Warning: There were XXXX divergent transitions after warmup." mean?

Here is the link to the input ROI data and the R code (data).

Thanks,
Sahithyan

How do I include the standard error in the brm function?

Try

brm(Beta | se(SE, sigma = TRUE) ~ Reward_Value*Emotion_Value + ...

Here se(SE) supplies the known trial-level standard error for each beta estimate, and sigma = TRUE tells brm to still estimate a residual standard deviation on top of it.

I am interested in understanding the interactions in our 2 × 2 × 3 experimental design. Is it possible to check the three-way interaction? After that, can I run the 2 × 3 analysis separately for each level of the two-level contingency factor?

You may check various contrasts among the combinations of those three factors.
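
For example, once the model has been fitted, specific contrasts can be examined with the hypothesis() function in brms. The coefficient name below is only an illustration; the exact names depend on your factor coding and can be listed with fixef(). Assuming the fitted object is called fm:

fixef(fm)                                                       # list the population-level coefficient names
hypothesis(fm, "Reward_ValueHigh:Emotion_ValueNegative = 0")    # one example interaction contrast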

What does the warning "Warning: There were XXXX divergent transitions after warmup." mean?

Divergences indicate that the sampler may have taken a leap that was too large, causing it to miss important regions of the posterior distribution. This often means that the geometry of the posterior is problematic, typically due to strong curvature or non-identifiability.
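
If only a few divergences remain, a common mitigation is to let the sampler take smaller steps by raising adapt_delta above its default of 0.8. This only treats the symptom; a large number of divergences usually calls for rethinking the model or the priors. For instance:

fm <- brm(..., control = list(adapt_delta = 0.95))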

In your R code, did you really mean the following?

chains = 1000, iter=4

It should be

chains = 4, iter=1000 

Gang Chen

Hi!

brm(Beta | se(SE, sigma = TRUE) ~ Reward_Value*Emotion_Value + ...

Thanks, this works.

chains = 4, iter=1000

Thanks, I was wondering the same. I think there is a typo in the code you shared on your GitHub page, because I got it from there.

@Gang
Another issue I am facing is that this code runs for days, and an IDE like RStudio just freezes while trying to run it. Do you have any suggestions for that?

Thanks
Sahithyan

I think there is a typo in the code you shared on your GitHub page.

Thanks for notifying me of the typo. It has been fixed now.

Another issue I am facing is that this code runs for days, and an IDE like RStudio just freezes while trying to run it. Do you have any suggestions for that?

Don't run it in RStudio. Instead, save your R code into a file and call it, for example, Sahithyan.R. If you have many CPUs, you may take advantage of the within-chain parallelization functionality through the R package 'cmdstanr'. For instance, if you have 32 CPUs, try this:

library('cmdstanr')     # backend that enables within-chain parallelization
fm <- brm(..., chains=4, iter=1000, backend="cmdstanr", threads=threading(8))  # 4 chains x 8 threads = 32 CPUs
save.image(file="Sahithyan.RData")   # save the fitted object for later inspection

Then run it at the terminal:

nohup R CMD BATCH --no-save Sahithyan.R Sahithyan.diary &

Gang Chen

Hi @Gang!

Sorry for my late reply.
Thank you very much for suggesting parallelization using 'cmdstanr'.
I ran the code and got the output (L_Amyg_tlm_output).

However, the summary statistics are a bit confusing, and I am unable to interpret the results in the output file. Could you recommend some material that could help a beginner, especially for reporting the results in a paper?

Second, I still see a warning:

Warning message:
Parts of the model have not converged (some Rhats are > 1.05). Be careful when analysing the results! We recommend running more iterations and/or setting stronger priors.

So, I guess I need to increase the number of iterations, but a more fundamental question is: how do I choose or set a stronger prior?

Thanks

The script from our original paper was designed for a list of ROIs. Are you analyzing data for only one ROI? If so, the constructed model may not be appropriate.

Gang Chen

Hi @Gang

Apologies for my late reply.

Yes, I previously analyzed just one ROI.

Here is the data for all the ROIs (All_ROI_Results).

Why is the constructed model not appropriate when one ROI is analyzed?

Why is the constructed model not appropriate when one ROI is analyzed?

When analyzing a single region, there is no cross-region variability. Therefore, including ROI as a variable in your model is unnecessary. In this case, you should remove ROI from your model specification.
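
Roughly, the distinction looks like this. The formulas below are only a sketch using the variable names from the brm call quoted earlier in this thread, not the exact model from the paper:

# multiple ROIs: cross-region variability is modeled through the ROI terms
fm <- brm(Beta | se(SE, sigma = TRUE) ~ Reward_Value*Emotion_Value +
            (Reward_Value*Emotion_Value | subjID) + (Reward_Value*Emotion_Value | ROI),
          data = dat, chains = 4, iter = 1000)
# a single ROI: drop the ROI terms and keep only the subject-level terms
fm1 <- brm(Beta | se(SE, sigma = TRUE) ~ Reward_Value*Emotion_Value +
             (Reward_Value*Emotion_Value | subjID),
           data = subset(dat, ROI == "L_Amyg"), chains = 4, iter = 1000)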

For multiple ROIs, you can extract information for each ROI using a similar approach to the sumROI function found in the RBA.R code within the AFNI binary directory.
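
As a rough illustration in the spirit of sumROI (not its actual code), per-ROI summaries can be pulled from a multi-ROI brms fit in which ROI is a grouping factor:

ce <- coef(fm, summary = TRUE)$ROI   # per-ROI estimates combining population- and ROI-level effects
ce[, , "Intercept"]                  # posterior mean, error, and credible interval for each ROI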

Gang Chen