Hello AFNI expert,
I am trying to interpret the quality-control output of afni_proc.py, to see whether one or more metrics could help me understand the quality of my resting-state fMRI results in dogs. I tested 4 sequences: 1 SE EPI and 3 GE EPI. I co-registered all the BOLD runs to a template and performed a seed-based analysis.
Then it appeared that the SE EPI gave me the "best" DMN statistical map (this is very subjective, but I get strong correlations with parietal and temporal areas that I do not see with the other sequences). For this reason, I am trying to see whether some of the quality-control parameters could help me understand what happened.
Motion has been checked and is lowest for the SE EPI and one of the GE EPI sequences.
The SE EPI has the lowest TSNR (28); the others are around 40.
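For reference, the basic definition behind the TSNR numbers is just the temporal mean divided by the temporal standard deviation at each voxel (AFNI computes its version with 3dTstat -cvarinv, typically on detrended data, so the APQC value is not identical to this raw form). A toy numpy sketch of the definition, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy data: 3 "voxels" x 200 timepoints, each with mean signal 100
# and a different temporal noise standard deviation
signal = 100.0
noise_sd = np.array([[2.0], [2.5], [4.0]])
data = signal + noise_sd * rng.standard_normal((3, 200))

# TSNR per voxel = temporal mean / temporal standard deviation
tsnr = data.mean(axis=1) / data.std(axis=1, ddof=1)
print(tsnr)  # roughly [50, 40, 25]
```

This makes the trade-off concrete: a lower TSNR means more thermal (unstructured) noise per voxel, which is consistent with SE EPI's lower signal compared with GE EPI at the same voxel size.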
The grayplot looks much more homogeneous for the SE EPI, even when I compare it with the GE EPI sequence that has less motion.
So that is probably the beginning of an explanation: less thermal noise, or unexplained noise?
Then, the most important difference is probably on the global correlation (GCOR).
SE EPI: 0.00862326
GE EPI, all around 0.2
How can I interpret this difference?
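As a side note on what GCOR measures: it is the average of every entry of the voxel-by-voxel correlation matrix over the brain mask (AFNI computes it with @compute_gcor; the efficient trick is that this average equals the squared norm of the mean unit-variance time series). A toy numpy sketch with random data, where GCOR should be near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 100 "voxels" x 200 timepoints (uncorrelated noise,
# so GCOR should come out small)
data = rng.standard_normal((100, 200))

# demean each voxel's time series and scale it to unit length
d = data - data.mean(axis=1, keepdims=True)
u = d / np.linalg.norm(d, axis=1, keepdims=True)

# GCOR = average of all pairwise correlations, which equals the
# squared L2 norm of the mean unit time series
g = u.mean(axis=0)
gcor = float(g @ g)

# brute-force check: average the full correlation matrix directly
gcor_brute = float(np.corrcoef(data).mean())
print(gcor, gcor_brute)
```

So a GCOR near 0.009 means the time series across the brain share almost no common fluctuation, while 0.2 means a substantial global component (motion, respiration, or real widespread signal) is present.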
In corr_brain, I get a more scattered statistical map for the SE EPI, whereas for some of the GE EPI sequences I get clusters of aggregated voxels.
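(For context, the corr_brain image in the APQC HTML shows the Pearson correlation of each voxel's residual time series with the average time series over the brain mask. A toy numpy sketch of that definition:)

```python
import numpy as np

rng = np.random.default_rng(2)
# toy data: 100 "voxels" x 200 timepoints
data = rng.standard_normal((100, 200))

# demean each voxel's time series
d = data - data.mean(axis=1, keepdims=True)
# "global" mean time series, averaged over all voxels
g = d.mean(axis=0)

# Pearson correlation of each voxel with the global mean signal
corr = (d @ g) / (np.linalg.norm(d, axis=1) * np.linalg.norm(g))
print(corr.min(), corr.max())
```

A scattered corr_brain map therefore goes hand in hand with a low GCOR: no single region dominates the correlation with the global average.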
Are there any other metrics that I might want to look at?
I had a deep look at https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/tutorials/apqc_html/apqc_ex1.html, but I am still struggling to know which metrics to use for my purposes and how to interpret them.
If you have any teaching documentation available, like the nice PDF you produced for afni_proc.py, please don't hesitate to share the link.
Of course, this is preliminary work, and I would need statistics to fully validate any interpretation.
I continue to be in awe of the interesting datasets you work on… I’m curious where you got the template for this?
It sounds like you have done a good and careful job of looking at lots of things here. There can be many subtle differences between sequences; sometimes even having the same voxel size can effectively mean different things, for example.
Before discussing this more, do you think you would be able to share the APQC HTML outputs of each subject, so I could take a look myself? This is a new type of dset for me, too, to think about. I would also like to see the afni_proc.py commands that go along with these, if possible.
I will email you for this.
The QC results have been sent.
Here is the command:
@animal_warper -input dset_anat_deob -base studytemplatebrain -atlas Brain_label.nii \
-outdir ouput_dir_a_id -ok_to_exist -align_centers_meth OFF -cost nmi -skullstrip mask_brain.nii.gz
afni_proc.py -subj_id a_id -script ouput_dir_a_id + /proc. + a_id -scr_overwrite \
    -out_dir ouput_dir_a_id + / + a_id + .results \
    -blocks despike tshift align tlrc volreg blur mask regress \
    -dsets dset_bold_deob_RECEN \
    -copy_anat dset_anat_deob_RECEN \
    -anat_has_skull no \
    -tcat_remove_first_trs 5 \
    -blip_reverse_dset dset_bold_deob_RECEN_inv \
    -align_opts_aea -epi_strip 3dAutomask -cost nmi -giant_move -check_flip \
    -volreg_align_to MIN_OUTLIER -volreg_align_e2a -volreg_tlrc_warp \
    -mask_epi_anat yes \
    -tlrc_base studytemplatebrain \
    -tlrc_NL_warp \
    -tlrc_NL_warped_dsets ouput_dir_a_id1 + /anatT1_deob_warp2std_nsu.nii.gz \
                          ouput_dir_a_id1 + /anatT1_deob_shft_al2std_mat.aff12.1D \
                          ouput_dir_a_id1 + /anatT1_deob_shft_WARP.nii.gz \
    -regress_censor_motion 2 -regress_censor_outliers 0.05 \
    -regress_motion_per_run \
    -regress_apply_mot_types demean deriv \
    -blur_size 2 \
    -regress_run_clustsim no \
    -regress_est_blur_errts \
    -html_review_style pythonic -execute
The functional and anatomical datasets were centered prior to these 2 commands.
The other sequences did not have -blip_reverse_dset.
And lpc+zz was also used, depending on the result of the co-registration (sometimes it just didn't work with lpc+zz, and sometimes it didn't work with nmi...).
The template was:
and I am currently trying: