Question(s) on 3dNetCorr

Dear AFNI experts,

regarding 3dNetCorr

  1. Is there a way to round the output values? (e.g., 0.65286124125 → 0.653)

  2. There seem to be two ways to calculate ROI-based functional connectivity:
    - estimating the average time series across the voxels in each ROI, and then calculating the correlations between those averages;
    - estimating the correlations between the individual voxels of each pair of ROIs, and then averaging them.
    As far as I can see, 3dNetCorr only implements the first one. Is that right? Are the two approaches equivalent?

Thanks in advance,

Hi, Simone-

  1. The values are output with a precision of 4 decimal places; I don’t see how you would get one with 11. What is your version of AFNI? (That is, what is the output of “afni -ver”?)

  2. While averaging is a linear operation (just summing and dividing by a constant), calculating a correlation is not. Therefore, averaging and then correlating is not the same as correlating and then averaging. I suspect that the results would be pretty similar for the ranges of values typical of FMRI, but that is purely a guess. Using the latter approach, one might also get a sense of the variability in the estimate, which might be useful. At present, though, 3dNetCorr doesn’t have an option for this.
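As an illustration of that non-equivalence (not AFNI code; just a NumPy sketch on synthetic data, with made-up ROI sizes and a shared signal added so that the voxels have nonzero correlation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tr = 200                                  # number of time points
shared = rng.standard_normal(n_tr)          # signal common to both ROIs
roi_a = shared + rng.standard_normal((30, n_tr))   # 30 voxels in ROI A
roi_b = shared + rng.standard_normal((50, n_tr))   # 50 voxels in ROI B

# Approach 1: average the time series within each ROI, then correlate.
r_avg_first = np.corrcoef(roi_a.mean(axis=0), roi_b.mean(axis=0))[0, 1]

# Approach 2: correlate every cross-ROI voxel pair, then average.
n_a = roi_a.shape[0]
c = np.corrcoef(np.vstack([roi_a, roi_b]))  # full voxel-by-voxel matrix
r_corr_first = c[:n_a, n_a:].mean()         # cross-ROI block only

# Averaging first suppresses independent voxel noise before the
# correlation is computed, so it generally yields a larger value.
print(r_avg_first, r_corr_first)
```

Averaging first cancels much of the independent voxel noise before the correlation is taken, which is one concrete reason the two orders of operations give different answers.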


Thanks for the answer.

  1. You are right. I was looking at the output of 3dmaskdump, which returns values with a precision of 6 decimal places (mine was just an example, sorry for being imprecise).
    Is there a way to output only the first, say, 3 decimal places? That would be useful…

  2. Got it. So I suspect that the procedure I implemented - cycling through the voxels - is the only choice (although quite slow).
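For what it’s worth, if the voxel time series can be pulled into NumPy (e.g., dumped to text with 3dmaskdump and loaded), the per-voxel-pair loop can be replaced by a single matrix product. A sketch, where `ts_a` and `ts_b` are hypothetical (voxels × time) arrays and every voxel is assumed to have nonzero variance:

```python
import numpy as np

def mean_voxelwise_corr(ts_a, ts_b):
    """Average of all cross-ROI voxel-pair Pearson correlations,
    computed without an explicit loop over voxel pairs."""
    # z-score each voxel's time series (population std, ddof=0)
    za = (ts_a - ts_a.mean(1, keepdims=True)) / ts_a.std(1, keepdims=True)
    zb = (ts_b - ts_b.mean(1, keepdims=True)) / ts_b.std(1, keepdims=True)
    n_t = ts_a.shape[1]
    r = za @ zb.T / n_t      # (n_a x n_b) matrix of Pearson r values
    return r.mean()
```

This gives the same numbers as looping over voxel pairs with np.corrcoef, just in one vectorized pass.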


Hi, Simone-

Re. #1-- no, that flexibility does not exist. One could write a short Python script to do that, I suppose. Can I ask why 3 decimals would be better than 4? Is this for reporting purposes?
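In case it helps, a minimal sketch of such a script, assuming the matrix has been written out as plain whitespace-separated text (the file names are illustrative; note that any '#' header lines are skipped on read and therefore dropped from the output):

```python
import numpy as np

def round_matrix_file(fname_in, fname_out, decimals=3):
    """Rewrite a whitespace-separated numeric matrix file with a
    fixed number of decimal places."""
    mat = np.loadtxt(fname_in)      # lines starting with '#' are skipped
    np.savetxt(fname_out, mat, fmt=f"%.{decimals}f")
```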

Re. #2-- I’m curious-- was there a big difference between the average of the voxelwise correlations (presumably as Fisher z-transforms of Pearson r?) and the average-time-series correlations?


Hi Paul,

regarding your question #2.
Well, I will report the difference once I get the results. In the meantime, I’d like to hear your opinion on a specific point.

It is important to note that I am using the Glasser parcellation (360 parcels, from the Nature paper). Each parcel is an ROI.

In general, one can expect a decrease in correlation values, especially for intra-parcel connectivity, when averaging correlations. However, I am worried about the non-homogeneity of the parcels (their size varies from 30-40 to 400-500 voxels), which may lead to large differences between the two approaches (averaging then correlating vs. correlating then averaging).

My opinion is that the choice between the two approaches depends on how much the user trusts the parcellation (i.e., how much the average timecourse reflects the biological entity ‘parcel’).
By default, I would use the ‘average then correlate’ approach, because (i) I trust the parcellation, and (ii) its computational time is acceptable.

What do you think about this dilemma?


Well, this comparison strikes me as a pretty empirical question, so I don’t know how to theorize about it, unfortunately…

Having nonhomogeneous time series in an ROI should drive the averaged correlation toward zero, regardless of the order of operations. I’m not sure why that would depend on ROI size.

“How much one trusts” a parcellation does matter, sure-- I guess that depends on how much variability there is among the time series within each ROI. ReHo is one way to quantify that, I suppose, though I don’t know how sensitive it would be, or what scale exists to judge how much homogeneity is “enough” across a region. You could also make a covariance matrix of the time series within an ROI and investigate it that way?
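As a concrete version of that last suggestion, one crude summary of such a within-ROI matrix is its mean off-diagonal value (a sketch; here using the correlation rather than the covariance matrix so the index is scale-free, with `ts` a hypothetical (voxels × time) array):

```python
import numpy as np

def roi_homogeneity(ts):
    """Mean off-diagonal correlation among the voxel time series of
    one ROI: a crude homogeneity index (1 = perfectly coherent)."""
    c = np.corrcoef(ts)                    # voxel-by-voxel correlation matrix
    mask = ~np.eye(c.shape[0], dtype=bool)
    return c[mask].mean()                  # ignore the diagonal of ones
```

A value near 1 means the voxels largely share one timecourse (the ROI average is representative); a value near 0 means averaging mixes unrelated signals.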


Dear Paul,

I realize the problem here is quite deep.
Anyway, as they may be useful, I attach the two functional connectivity matrices obtained with the two methods (one subject, two concatenated 7.5-minute resting-state runs).
You may want to know that the two approaches produce similar - but not identical - outputs (the 2D correlation coefficient between the two matrices is 0.83).
There is a medium-low correlation (r=0.32) between the parcels’ size (in voxels) and the change in the parcels’ mean connectivity between the two methods. Parcel size is also correlated with the shift in degree centrality (r=0.47) and betweenness centrality (r=0.31).
Thus parcel size may have an impact. Probably the “driving toward zero” effect becomes stronger as more correlations are averaged (i.e., more voxels).

Just to be clear: with “trust a parcellation” I didn’t mean something like “trust by faith”. Parcellation ~= Revelation.
Some parcellations are closer to a spatial undersampling, identifying roughly circular, equally sized parcels. In that case, I think the two approaches may be more or less equivalent. Other parcellations aim to identify areas encompassing voxels that are likely to share common functional signatures. In that case, in my opinion, the average-then-correlate approach makes more sense.


Are the brightness scales in those two matrices really the same? And are they really Pearson r values? I ask because the max value in one case was 1.2, based on the color scale, which would be out of range for a Pearson correlation.


The same scale is used for both images ([-0.4, 1.2]), and the brightness represents the Fisher z-transformed value of Pearson’s r.
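For readers following along: the Fisher z-transform is z = atanh(r), so the 1.2 maximum on the color scale corresponds to a Pearson r of tanh(1.2) ≈ 0.83. In NumPy:

```python
import numpy as np

r = 0.6
z = np.arctanh(r)      # Fisher z-transform of a Pearson r
r_back = np.tanh(z)    # inverse transform recovers r
r_max = np.tanh(1.2)   # color-scale max of z = 1.2  ->  r ≈ 0.83
```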