I recently used 3dMEMA for a simple group analysis looking at activation differences between two conditions within one group (e.g. “StimA-StimB”), following the pipeline described here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/codex/main_det_2018_ChenEtal.html. Things worked as expected (thank you for publishing this script!). However, in the output statistical maps, in areas with strong effects, the estimated beta values appear to be capped at -100 and 100 (i.e. the data in the effect estimate sub-brick, “StimA-StimB:b”). When I compare to the same group analysis run with 3dttest++, the t-maps from the two programs are very similar, but the mean effect estimate maps from 3dttest++ have no such cap. Is this the intended behavior of 3dMEMA?
I preprocessed this data using afni_proc.py and am providing 3dMEMA with the results from 3dREMLfit. One thing to note is that I did not use the “scale” block in afni_proc, so the individual-subject beta coefficient maps are on a scale of roughly -3000:3000. Is this problematic for 3dMEMA?
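For context, the afni_proc.py “scale” block converts each voxel’s time series to percent of its own mean, capped at 200 (its 3dcalc expression is 'min(200, a/b*100)*step(a)*step(b)'). A minimal numpy sketch of that transformation (the array values are made up for illustration):

```python
import numpy as np

def scale_to_percent(ts):
    """Scale a voxel time series to percent of its mean, capped at 200,
    mimicking afni_proc.py's 'scale' block:
    3dcalc -expr 'min(200, a/b*100)*step(a)*step(b)'."""
    mean = ts.mean()
    if mean <= 0:                      # step(b): zero out non-positive means
        return np.zeros_like(ts, dtype=float)
    scaled = np.minimum(200.0, ts / mean * 100.0)
    return np.where(ts > 0, scaled, 0.0)   # step(a)

# A raw time series on an arbitrary scanner scale (~3000 units)
raw = np.array([2950.0, 3050.0, 3000.0, 3100.0, 2900.0])
pct = scale_to_percent(raw)
print(pct)  # fluctuates around 100, i.e., percent of the run mean
```

After this step, a regression beta is directly interpretable as percent signal change rather than arbitrary scanner units.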
Several things could cause this. But first, did you include the option “-missing_data 0” in your 3dMEMA script? For the voxels where 3dMEMA outputs -100 or 100, check each subject: do you see a huge beta value or a t-statistic of 0? Also, why didn’t you consider scaling during preprocessing?
I did include the option “-missing_data 0” in the 3dMEMA script. And, after checking some of the values, I don’t see any strong outliers. The attached png shows boxplots of the subject-level coefficients for the contrast of interest, extracted from three voxels (two from motor cortices and one from the fusiform area); these voxels are capped in the 3dMEMA output. Also attached is a screenshot of the 3dMEMA effect estimate map after cluster correction. You can see large areas of values capped at -100:100.
As for not including scaling, this was mostly an oversight; I saw it was an “optional” block in afni_proc and assumed it didn’t affect the final statistic estimates. I plan to include it in future preprocessing, both to make the effect sizes more interpretable and to be able to plot/report results in terms of percent signal change. I am considering reprocessing the current dataset with scaling if you think that will yield usable effect estimates from 3dMEMA. Also, for others’ reference, there is some helpful discussion of scaling in AFNI here: https://sscc.nimh.nih.gov/sscc/gangc/TempNorm.html and here: https://afni.nimh.nih.gov/pub/dist/edu/data/CD.expanded/AFNI_data6/FT_analysis/tutorial/t14_scale.txt
It’s hard to diagnose the situation without access to the data. I do suggest that you reprocess the data with scaling added during preprocessing. In addition to the reasons you mentioned, group analysis without proper scaling can be problematic because the effect estimates are not meaningfully comparable across subjects. See a more thorough discussion here: https://www.ncbi.nlm.nih.gov/pubmed/27729277
Let me know if the problem persists after you scale the data.
Ok, I understand. I will reprocess with scaling and report back. Thanks for the additional reference and for your support!
After scaling, as expected, the subject-level t-maps remained nearly identical (seemingly within rounding error). The effect estimate map from the 3dMEMA group-level analysis now looks correct (see attached screenshot). The group-level t-values and clusters change a bit, but are presumably more accurate now that the subject-level betas are meaningfully comparable. Thanks Gang!
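The observation that subject-level t-maps barely change is expected: a t-statistic is scale-free, so dividing all of a subject’s data by one constant rescales the beta and its standard error together and leaves t untouched, while group-level statistics do shift because each subject is divided by a different factor. A toy numpy illustration (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sample_t(x):
    """Ordinary one-sample t-statistic: mean over its standard error."""
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

# Hypothetical per-subject betas in raw scanner units, plus each
# subject's own mean signal level (the scaling factors differ):
betas = rng.normal(50.0, 10.0, size=20)
subj_means = rng.normal(3000.0, 300.0, size=20)

# Dividing everyone by the SAME constant leaves the t-statistic identical:
t_raw = one_sample_t(betas)
t_uniform = one_sample_t(betas / 3000.0)
assert np.isclose(t_raw, t_uniform)

# But scaling each subject by their OWN mean (percent signal change)
# changes the group-level t slightly, as seen in the reprocessed maps:
t_scaled = one_sample_t(100.0 * betas / subj_means)
print(t_raw, t_scaled)  # similar, but not identical
```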
Ben, thanks for the update! Nice to hear that the issue has been resolved.
I have a new, though potentially related, issue with 3dMEMA output (if you all think this should be a separate post, happy to do that). This time the group-level t-stat maps from 3dMEMA are showing several clusters with a value of 100.
These areas are locations with the highest group-level coefficients, but they should not be t-values of 100.
These maps are derived from scaled data, so I’m not sure whether this relates to the issue I raised previously about capped effect estimates (which was fixed after adding the scaling step). And I am not noticing anything strange about these data in particular.
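For what it’s worth, a t-value of exactly 100 looks like a stored-value cap rather than a real statistic: at a voxel where the cross-subject variance is tiny relative to the mean, the t-statistic can become astronomically large, and a program may then clip it. A toy illustration (the ±100 clip here only mimics the plateau seen in the output; it is not 3dMEMA’s actual code):

```python
import numpy as np

def one_sample_t(x):
    """Ordinary one-sample t-statistic: mean over its standard error."""
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

# 49 subjects whose betas at one voxel are nearly identical:
betas = 1.0 + 1e-6 * np.arange(49)

t = one_sample_t(betas)
print(t)           # enormous, far beyond 100

# Clipping to the range seen in the output reproduces the plateau at 100:
t_capped = np.clip(t, -100.0, 100.0)
print(t_capped)    # 100.0
```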
I have prepared a folder that contains subject files with the t-stat and coefficient bricks of interest, as well as a ready-to-go 3dMEMA script that can be run within the unzipped folder (total size = 109 MB, 49 subjects). Could I send this to you and have you take a look?
Ben, is it possible that some subjects have missing data at those voxels where you see strange results? If so, consider using the option “-missing_data 0” and see if that fixes the problem.
This is not the case. These voxels have data for all subjects. No masking has been applied at the subject level (aside from the EPI extents mask). Regardless, I have been using the option -missing_data 0 when running 3dMEMA, so I don’t think that’s the issue.