Is it appropriate to compare the final lpc cost from different subjects/groups to show comparable data preprocessing quality?
# record final registration costs
3dAllineate -base final_epi_vr_base_min_outlier+orig -allcostX \
-input anat_final.$subj+orig |& tee out.allcostX.txt
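If one did want to tabulate those numbers, the text output could be collected with a small script. This is a hedged sketch: the exact layout of `3dAllineate -allcostX` output may differ across versions, so the embedded sample text here is only illustrative, and the parser assumes nothing more than `name = value` lines.

```python
import re

# Hypothetical excerpt of out.allcostX.txt -- illustrative only; the
# parser assumes lines of the form "<cost name> = <value>".
sample = """\
++ allcost output: init #0
   ls   = 0.483917
   sp   = 0.272945
   mi   = 0.143519
   lpc  = -0.583162
   lpa  = 0.391624
"""

def parse_costs(text):
    """Collect '<name> = <float>' pairs into a dict of cost values."""
    costs = {}
    for m in re.finditer(r"^\s*(\w+)\s*=\s*([-+]?\d*\.?\d+)", text, re.M):
        costs[m.group(1)] = float(m.group(2))
    return costs

costs = parse_costs(sample)
print(costs["lpc"])  # the local Pearson correlation cost from the sample
```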
It’s a good idea, but my vote would be “no”: there is too much potential difference between subjects (brain volume, mask size, weighting values, etc.). For such QC, I like flipping through the images created by afni_proc.py’s QC HTML; while not a number, this is an entirely reasonable way to judge alignment (and, in fact, probably the best available at present).
One place we do compare cost function values is within a subject’s own dataset, to find relative left-right flipping as described in Daniel’s work here:
Since we did the analysis within the same std.141 surface ROI for all subjects, would it make sense to Surf2Vol the ROI mask for each subject, calculate the lpc value within the ROI mask in the volume, and then compare that value across subjects? Is that too complicated for a supplemental analysis to show comparable preprocessing? Or should we just show overlay images for every subject, like the QC HTML as you suggested? I just want to quantify this…
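To make the idea concrete, here is a toy sketch of computing a Pearson correlation between two volumes restricted to an ROI mask. All data here are synthetic stand-ins, and this is a plain global correlation within the mask, not AFNI's actual lpc functional (which averages local correlations over small weighted neighborhoods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a 3D "anat" volume, an "epi" volume with inverted
# tissue contrast, and a binary ROI mask (e.g. as 3dSurf2Vol might produce).
anat = rng.normal(size=(16, 16, 16))
epi = -0.7 * anat + 0.3 * rng.normal(size=anat.shape)
mask = np.zeros(anat.shape, dtype=bool)
mask[4:12, 4:12, 4:12] = True

def roi_pearson(a, b, roi):
    """Pearson correlation of two volumes restricted to a boolean ROI.
    NOTE: a single global correlation inside the mask, NOT AFNI's lpc,
    which sums *local* correlations over small weighted patches."""
    x, y = a[roi], b[roi]
    return np.corrcoef(x, y)[0, 1]

r = roi_pearson(anat, epi, mask)
print(round(r, 3))  # strongly negative here, since the contrast is inverted
```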
I don’t quite see that being better than visualization. It is a number, sure, but what scale does it have? What value is “good” or “worrisome”? The way that would be determined is by… visualization. Also, if there were some ROIs that looked bad but had a good-seeming number, which evaluation would take precedence? Again, I think the visualization.
The images of overlap from warping could be provided in a supplement, either as a stack of montages or as a flip-book movie, perhaps?
Thanks for your suggestions.
I think I did not fully understand the Local Pearson Correlation (LPC) score.
For the same ROI (the standard surface ROI mapped to each subject's volume) across subjects:
a. Does the lpc value reflect the alignment within this ROI, and does a larger absolute lpc value mean better alignment?
b. Can the lpc value at this ROI be compared between subjects?
For different ROIs, since size/curvature affects the correlation score, is it not suitable to compare them?
Does the signal change of the EPI influence the lpc value? Even if two EPI volumes align perfectly with the anat, they may have different signal and thus produce different lpc values, so that comparing lpc (or other cost function) values makes no sense?
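One piece of this can be checked directly: Pearson correlation is invariant to a positive affine rescaling of either image, so a uniform global signal scale change alone would not alter it, while a spatially varying intensity change would. A minimal numpy demonstration (synthetic 1D data standing in for voxel intensities):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                 # stand-in for anat intensities
y = 0.5 * x + rng.normal(size=1000)       # correlated stand-in for EPI

r0 = np.corrcoef(x, y)[0, 1]
# A positive affine rescaling (e.g. a global signal scale change)
# leaves the Pearson correlation unchanged...
r1 = np.corrcoef(x, 3.0 * y + 10.0)[0, 1]
# ...but a spatially varying intensity modulation does alter it.
r2 = np.corrcoef(x, y * np.linspace(0.1, 2.0, x.size))[0, 1]

print(round(r0, 6) == round(r1, 6))  # True: invariant to affine rescaling
print(round(r0, 3), round(r2, 3))    # generally not equal
```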
When we found a BOLD difference between Group-A/B at ROI-A but not at ROI-B, we wanted to show that there is no difference in data acquisition (measures like motion parameters, tSNR, or ?) or data preprocessing (alignment or ?) between ROIs or subjects. Visualization is good, but for publishing, images of each subject may not be presented at large enough sizes to easily pick out alignment differences, and it is even harder for different ROIs in every subject. So a quantitative method is what I am seeking.
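Of the quantitative measures mentioned, tSNR within an ROI is straightforward to compute and compare. A minimal sketch on synthetic 4D data (voxelwise temporal mean divided by temporal standard deviation, then averaged inside the mask; in practice the volumes would come from the preprocessed EPI time series):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 4D EPI stand-in: (x, y, z, time), baseline 1000, noise sd 10.
epi4d = 1000.0 + 10.0 * rng.normal(size=(8, 8, 8, 200))
mask = np.zeros(epi4d.shape[:3], dtype=bool)
mask[2:6, 2:6, 2:6] = True

def roi_tsnr(ts4d, roi):
    """Voxelwise temporal SNR (mean/std over time), averaged in the ROI."""
    mean_t = ts4d.mean(axis=-1)
    std_t = ts4d.std(axis=-1)
    tsnr = np.where(std_t > 0, mean_t / std_t, 0.0)
    return tsnr[roi].mean()

val = roi_tsnr(epi4d, mask)
print(round(val, 1))  # close to 100 for baseline 1000 and noise sd 10
```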
Thank you again,