I am trying to perform the population-level analysis and need help understanding how to do the trial-level modeling (without RT). I went through your code, but I am not familiar with R and am unable to understand the input data structure.

Can you help me understand the input data structure for the task, including how the header details need to be provided?

Also, would it be possible for you to upload the input files used in the population analysis for the above-mentioned paper (file names: 'RT-task-data.txt' or 'task-data.txt')?

The paper you referred to aims to explore the extent and impact of cross-trial variability in a typical fMRI dataset. To better understand your study, could you clarify the following:

Are you interested in cross-trial variability?

What is your research hypothesis for the 2 × 3 experimental design?

How many trials are there per condition?

Are you planning to perform the population-level analysis at the voxel level or the region level?

I am working with Srikanth Padmala to understand the cross-trial variability in our dataset.

To answer your question:

Yes, we feel cross-trial variability is worth examining in our dataset.

So, we are interested in understanding the interaction in our 2 × 3 experimental design: 2 Reward levels (High, Low) × 3 Emotion levels (Positive, Neutral, Negative).

I have 24 trials per condition. Is that too low? However, I have data from 60 subjects.

For now, I am looking at performing the population-level analysis at the ROI level.

The header for the ROI data should look like this:

subjID ROI Reward Emotion trial beta SE
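For illustration, here is a minimal sketch of what such a long-format file might look like, with one row per trial. The subject, ROI, and numeric values below are made up; only the header matches the description above:

```r
# Hypothetical rows of the ROI input file (values are illustrative only);
# the data are in long format: one row per trial
txt <- "subjID ROI Reward Emotion trial beta SE
s01 L_Amyg High Positive 1 0.52 0.11
s01 L_Amyg High Positive 2 0.37 0.10
s01 L_Amyg Low Negative 1 -0.14 0.12"

dat <- read.table(text = txt, header = TRUE)
str(dat)   # 7 columns: subjID, ROI, Reward, Emotion, trial, beta, SE
```

In practice you would read your own file with `read.table("your-file.txt", header = TRUE)`.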

Please note that the estimates for the BOLD response might be substantially biased (e.g., underestimated) because the adopted HRF may not be well suited for some regions.

You can add -tout to your 3dREMLfit script to obtain the t-statistics associated with the regression coefficients estimated via -Rbuck. The standard errors can then be obtained through the formula SE = \frac{\widehat\beta}{t}.
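For example, once the effect estimates and their t-statistics have been extracted into a data frame, the division is straightforward (the column names beta and tstat below are assumptions, not outputs of any AFNI program):

```r
# Hypothetical effect estimates and t-statistics extracted from the
# 3dREMLfit output (values and column names are illustrative only)
dat <- data.frame(beta  = c(0.82, -0.35, 1.10),
                  tstat = c(4.10, -1.40, 2.20))

# Recover the standard error from the estimate and its t-statistic:
# SE = beta / t
dat$SE <- dat$beta / dat$tstat
print(dat)
```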

Can I perform the same 3dREMLfit analysis at the voxel level for the whole brain?

Sure. However, it would be computationally challenging if you plan to implement the Bayesian multilevel modeling approach at the voxel level.

I am interested in understanding the interaction in our 2 × 2 × 3 experimental design. So, is it possible to check the three-way interaction? After that, could I run a 2 × 3 analysis separately for each of the two levels of the contingency factor?

You may check various contrasts among the combinations of those three factors.

What does it mean when I get a warning saying "Warning: There were XXXX divergent transitions after warmup."?

Divergences indicate that the sampler may have taken a leap that was too large, causing it to miss important regions of the posterior distribution. This often means that the geometry of the posterior is problematic, typically due to strong curvature or non-identifiability.
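One common mitigation is to make the sampler take smaller steps by raising the target acceptance rate. A sketch of the relevant brms arguments, assuming your model is fitted with brm() (the formula below is a placeholder, not your actual specification):

```r
library('brms')

# Raising adapt_delta forces smaller sampler steps, which often removes
# divergent transitions at the cost of slower sampling; max_treedepth
# can also be raised if the sampler hits the treedepth limit.
fm <- brm(beta ~ Reward * Emotion + (1 | subjID),
          data = dat,
          control = list(adapt_delta = 0.99, max_treedepth = 12))
```

If divergences persist even with adapt_delta close to 1, that usually points to a problem with the model specification itself rather than the sampler settings.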

In your R code, did you really mean the following?

Thanks, I was wondering the same. I think there is a typo in the code you shared on your GitHub page, because that is where I got it from.

@Gang
Another issue I am facing is that this code runs for days, and an IDE like RStudio just freezes while trying to run it. Do you have any suggestions?

I think there is a typo in the code you shared on your GitHub page.

Thanks for notifying me of the typo. It has been fixed now.

Another issue I am facing is that this code runs for days, and an IDE like RStudio just freezes while trying to run it. Do you have any suggestions?

Don't run it in RStudio. Instead, save your R code into a file, for example Sahithyan.R, and run it from the terminal. If you have many CPUs, you can take advantage of the within-chain parallelization functionality through the R package 'cmdstanr'. For instance, if you have 32 CPUs, try this:

library('brms')       # brm()
library('cmdstanr')   # backend providing within-chain parallelization

# 4 chains x 8 threads per chain = 32 CPUs
fm <- brm(..., chains=4, iter=1000, backend="cmdstanr", threads=threading(8))
save.image(file="Sahithyan.RData")

Then run it at the terminal:

nohup R CMD BATCH --no-save Sahithyan.R Sahithyan.diary &

Sorry for my late reply.
Thank you very much for suggesting parallelization using 'cmdstanr'.
I ran the code and got the output (L_Amyg_tlm_output).

But the summary statistics are a bit confusing, and I am unable to interpret the results from the output file. Could you recommend some material that could help a beginner, especially for reporting the results in a paper?

Second, I still see a warning:

Warning message:
Parts of the model have not converged (some Rhats are > 1.05). Be careful when analysing the results! We recommend running more iterations and/or setting stronger priors.

So, I guess I need to increase the number of iterations, but the more fundamental question is: how do I choose or set a stronger prior?

The script from our original paper was designed for a list of ROIs. Are you analyzing data for only one ROI? If so, the constructed model may not be appropriate.

Why is the constructed model not appropriate when one ROI is analyzed?

When analyzing a single region, there is no cross-region variability. Therefore, including ROI as a variable in your model is unnecessary. In this case, you should remove ROI from your model specification.
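To illustrate the difference, here is a hypothetical sketch of the two model specifications (the formula and variable names are assumptions based on the header described earlier, not taken from the actual script):

```r
library('brms')

# Multiple ROIs: cross-region variability is captured by a
# varying-intercept term for ROI
fm_multi  <- brm(beta ~ Reward * Emotion + (1 | subjID) + (1 | ROI),
                 data = dat)

# Single ROI: there is no cross-region variability to model,
# so the ROI term is dropped
fm_single <- brm(beta ~ Reward * Emotion + (1 | subjID),
                 data = dat)
```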

For multiple ROIs, you can extract information for each ROI using a similar approach to the sumROI function found in the RBA.R code within the AFNI binary directory.
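As a rough sketch of the idea (this is not the actual sumROI function; the parameter names below follow brms's default naming convention and assume the multi-ROI model above), each ROI's posterior can be summarized by combining the overall effect with that ROI's deviation:

```r
library('brms')
library('posterior')

# Posterior draws from a fitted multi-ROI brms model `fm`
dr <- as_draws_df(fm)

# Columns named r_ROI[...] hold each ROI's deviation from the
# population-level intercept b_Intercept
rois <- grep("^r_ROI\\[", names(dr), value = TRUE)
for (v in rois) {
  s <- dr$b_Intercept + dr[[v]]   # per-ROI posterior samples
  cat(v, ": mean =", round(mean(s), 3),
      ", 95% interval = [", round(quantile(s, 0.025), 3), ",",
      round(quantile(s, 0.975), 3), "]\n")
}
```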

Gang Chen

The National Institute of Mental Health (NIMH) is part of the National Institutes of Health (NIH), a component of the U.S. Department of Health and Human Services.