We preprocessed task fMRI images and fit the GLM at the individual level. When we check the statistical maps for each condition at the individual level (p < .001), some statistical maps do not seem to align with the anatomical images very well. For example, in the attached files, which are from the visual condition (children just passively view the characters presented on the screen), a large proportion of “activated” voxels lie outside of the brain. We also tried the -giant_move option, but it did not help much. Do you have any ideas about how to fix this?
There should be an output QC HTML in your afni_proc.py results directory; it will be the “simpler” one, and in the future you might want to add “-html_review_style pythonic” to your afni_proc.py command.
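The option just gets appended to whatever afni_proc.py command you already ran; here is a minimal sketch, where -subj_id and the elided options are placeholders standing in for your own:
# hypothetical afni_proc.py fragment: -html_review_style is the only
# addition here; "..." stands for the rest of your existing options
afni_proc.py                                  \
    -subj_id             s0                   \
    ...                                       \
    -html_review_style   pythonic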
But anyway, based on your subject ID, there should be a directory called “QC_s0” in the results directory; can you open the “index.html” file there, either by navigating to it and clicking on it, or just by using a browser from the command line? For example, on Linux with Firefox installed, you could type:
firefox QC_s0/index.html
from the results directory.
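Or on macOS, the built-in open command should do the same thing:
# open the QC HTML in the default browser (macOS)
open QC_s0/index.html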
More details about this are provided here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/tutorials/apqc_html/main_toc.html
… but the top 2 sections there are the ve2a (volumetric EPI to anatomical) and va2t (volumetric anatomical to template) QC blocks. If you could post those images here, we could assess each of those individual alignment steps.
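(If it is easier to grab the underlying image files directly, they should sit inside the QC directory itself; a hedged guess at a search, assuming the block names appear in the image file names:)
# list any saved QC images whose names mention the two alignment blocks
find QC_s0 \( -name "*ve2a*" -o -name "*va2t*" \)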
If you would like to re-run the QC with the nicer Pythonic form (for motion and other plots), assuming you have Python and its matplotlib module installed, you could do the following:
# get a script to redo the QC part that happens at the end of afni_proc.py,
# which will run it in Pythonic form
wget https://raw.githubusercontent.com/afni/afni/master/src/ptaylor/supplement/redo_apqc.tcsh
# execute the script: with no arguments, it runs in the current directory;
# or you can pass a list of one or more afni_proc.py results directories,
# and it will run in each one
tcsh redo_apqc.tcsh
This will move any existing APQC (afni_proc.py QC directory) to a backup old_QC_*, and then create a new QC_${subj} again in the same spot, where ${subj} is the subject ID.
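For example, to regenerate the QC in several results directories at once (the directory names here are just hypothetical):
# run the QC regeneration in two (hypothetical) results directories
tcsh redo_apqc.tcsh s0.results s1.results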
Thanks for sharing the ve2a and va2t images. In both cases, the alignment looks good to me, so I expect the overall alignment (which is just the concatenation of those two transforms) should also be good.
To check the modeling a bit, could you also please share the vstat image, so we can see what your F-stat patterns from the modeling look like? We should see places where the F-stat is large, i.e., where the regression model explains a lot of the variance in the input time series.
Thanks for the detailed explanation! Please see the attached F map. The whole experiment contains 4 conditions: auditory (A), visual (V), auditory-visual consistent (AVC), and auditory-visual inconsistent (AVI).
I just realized that a lot of TRs (10 out of 24) were censored for the visual condition; might it just be a data quality problem?
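(For reference, here is one way to list the censored time points with 1d_tool.py; this is a sketch assuming the default censor file name that afni_proc.py writes, with s0 as the subject ID:)
# show which TRs were censored, in a compact encoded form
1d_tool.py -infile censor_s0_combined_2.1D -show_trs_censored encoded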