Dear AFNI experts,
regarding 3dNetCorr

Is there a way to round the output values to fewer decimals? (e.g., 0.65286124125 → 0.653)

There seem to be two ways to calculate ROI-based functional connectivity:
[ul]
[li] Estimating the average time series across voxels in each ROI, and then calculating their correlation;
[/li][li] Estimating the correlations between all pairs of voxels across two ROIs, and then calculating their average;
[/li][/ul]
As far as I see, 3dNetCorr only accounts for the first one. Is that right? Are these two approaches equivalent?
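To make the distinction concrete, here is a toy NumPy sketch of the two orders of operations (random data and made-up ROI sizes, purely for illustration; this is not what 3dNetCorr itself does internally):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two "ROIs" with 5 and 8 voxels, 100 time points each
roi_a = rng.standard_normal((5, 100))
roi_b = rng.standard_normal((8, 100))

# Approach 1: average the time series within each ROI, then correlate the averages
r_avg_then_corr = np.corrcoef(roi_a.mean(axis=0), roi_b.mean(axis=0))[0, 1]

# Approach 2: correlate every cross-ROI voxel pair, then average those r values
all_r = np.corrcoef(np.vstack([roi_a, roi_b]))[:5, 5:]  # 5 x 8 block of cross-correlations
r_corr_then_avg = all_r.mean()

print(r_avg_then_corr, r_corr_then_avg)  # generally not equal
```

The two numbers generally differ, because averaging time series first changes the relative weighting of each voxel's variance before the correlation is taken.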
Thanks in advance,
Simone
Hi, Simone
Re. #1: no, that flexibility does not exist. One could write a short Python script to do that, I suppose. Can I ask why 3 decimals would be better than 4? Is this for reporting purposes?
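Such a script could be as minimal as the sketch below. It assumes a plain whitespace-separated text matrix; real .netcc output also contains header and label lines, which this happens to leave alone by only touching tokens that parse as numbers:

```python
# Minimal sketch: round every numeric token in a text line to 3 decimals.
# Non-numeric tokens (labels, comment markers) are passed through unchanged.

def round_tokens(line, ndec=3):
    out = []
    for tok in line.split():
        try:
            out.append(f"{float(tok):.{ndec}f}")
        except ValueError:
            out.append(tok)  # leave labels/comments as-is
    return " ".join(out)

print(round_tokens("0.65286124125 ROI_1"))  # → "0.653 ROI_1"
```

Applied line by line over a file, this would produce the 3-decimal output asked about above.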
Re. #2: I’m curious, was there a big difference between the average of the voxelwise correlations (presumably as Fisher Z-transforms of Pearson r?) vs. the average-time-series correlations?
–pt
Hi Paul,
regarding your question #2.
Well, I will tell you the difference once I get the results. Anyway, I’d like to have your opinion here.
It is important to note that I am using Glasser’s parcellation (360 parcels, from the Nature paper). Each parcel is an ROI.
By default, one can expect a decrease in correlation values, especially for intra-parcel connectivity, when averaging correlations. However, I am worried about the non-homogeneity of the parcels (their size varies from 30-40 to 400-500 voxels), which may lead to huge differences between the two approaches (averaging then correlating vs. correlating then averaging).
My opinion is that the choice between the two approaches depends on how much the user trusts the actual parcellation (i.e., how much the average time course reflects the biological entity ‘parcel’).
By default, I would use the ‘average, then correlate’ approach, because (i) I trust the parcellation, and (ii) it keeps the computational time acceptable.
What do you think about this dilemma?
Best,
Simone
Well, this comparison strikes me as a pretty empirical question, so I don’t know how to theorize about it, unfortunately…
Having non-homogeneous time series in an ROI, regardless of size, should drive the averaged correlation toward zero, regardless of the order of operations. I’m not sure why that would depend on ROI size.
Sure, “how much one trusts” a parcellation does matter; I guess that depends on how much variability there is among the time series in each ROI. ReHo is one way to quantify that, I suppose, though I don’t know how sensitive it would be, or what scale exists to judge how much homogeneity is “enough” across a region. You could also make a covariance matrix of the time series within an ROI and investigate it that way.
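As a rough illustration of that idea, one could summarize within-ROI agreement as the mean off-diagonal correlation (a hypothetical helper on toy data, not an AFNI tool):

```python
import numpy as np

def roi_homogeneity(ts):
    """Mean pairwise correlation among voxel time series within one ROI.

    ts: (n_voxels, n_timepoints) array. Near 1 -> the voxels move together,
    so the mean time series is a fair summary of the ROI; near 0 -> it is not.
    """
    r = np.corrcoef(ts)                        # n_voxels x n_voxels correlation matrix
    off_diag = r[~np.eye(len(r), dtype=bool)]  # drop the diagonal of 1s
    return off_diag.mean()

# Toy check: four identical voxel time series are perfectly homogeneous
ts = np.tile(np.sin(np.linspace(0, 6, 50)), (4, 1))
print(roi_homogeneity(ts))
```

A low value for a parcel would be a warning sign that the averaged time series there is a poor summary, which bears on the trust question below.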
–pt
Dear Paul,
I realize the problem is quite deep here.
Anyway, as they may be useful, I attach the two functional connectivity matrices obtained using the two methods (one subject, two concatenated 7.5-minute resting-state runs).
You may want to know that the two approaches produce similar, but not identical, outputs (the 2D correlation coefficient between the two matrices is 0.83).
There is a medium-low correlation (r=0.32) between the parcels’ size (in voxels) and the change in the parcels’ mean connectivity between the two methods. Parcel size is also correlated with the shift in degree centrality (r=0.47) and betweenness centrality (r=0.31).
Thus parcel size may have an impact. Perhaps the “driving the averaged correlation toward zero” effect is stronger when more correlations are averaged (i.e., more voxels).
Just to be clear: by “trust a parcellation” I didn’t mean something like “trust by faith”. Parcellation ≠ Revelation.
Some parcellations are closer to an undersampling, identifying circular-shaped, equally-sized parcels. In that case, I think the two approaches may be more or less equivalent. Other parcellations aim to identify areas encompassing voxels that are likely to share common functional signatures. In that case, in my opinion, the ‘average, then correlate’ approach makes more sense.
https://imgur.com/BP8X1Ja
Are the brightness scales in those matrices really the same? And are they really Pearson scores? I ask because the max value in one case was 1.2, based on the color scale, which would be out of range for Pearson r.
–pt
The same scale is used for both images ([0.4, 1.2]), and the brightness represents the Fisher z-transformed value of Pearson’s r.
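That explains values above 1: the Fisher z-transform, z = arctanh(r), is unbounded, unlike Pearson r. A quick sketch:

```python
import numpy as np

# Fisher z-transform: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r)).
# Pearson r is bounded by [-1, 1], but z grows without bound as |r| -> 1,
# so a color scale topping out at 1.2 is perfectly plausible for z values.
r = np.array([0.4, 0.8, 0.9])
z = np.arctanh(r)
print(z)  # arctanh(0.9) ≈ 1.47, already past the 1.2 end of that color scale
```

This is also why the transform is used before averaging correlations: z values are closer to normally distributed, making their mean better behaved.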