3dNetCorr - handling censored TRs

I am running an analysis of preprocessed rs-fMRI data and want to measure correlations between the Glasser ROIs (then use these as inputs for a classification analysis).

I am using 3dNetCorr, with -inset of errts.SUBJECT.fanaticor+tlrc, and -in_rois with the Glasser atlas (but resampled to the same grid as the errts).

I want to make sure I am not making a large error, so two questions:

(1) I decided to resample the Glasser atlas (1mm) to the errts (3.5mm) resolution grid; should I have resampled the other way (errts to Glasser)?

(2) I’m worried about censored TRs. In the errts file, it looks like TRs are censored by zeroing all the voxels in that sub-brick. Does 3dNetCorr take that into account and properly exclude the zeroed/censored TRs from its correlation analysis? Is there anything else I need to do to get correct correlations between these ROIs, given these zeroed, censored volumes?

My general command for reference if desired:

  3dNetCorr                                                   \
    -inset errts.${sid}_${series}.${naming}.fanaticor+tlrc.   \
    -in_rois ${glass_resam}                                   \
    -ts_out                                                   \
    -prefix "roi_corr"



Re. 1: Sure, resampling in that direction makes sense. Note that if there are tiny ROIs in the Glasser atlas, some might disappear.

Re. 2: if your time series have zero mean (which should be the case for residuals from resting state processing, say), then the extra zeros should make no difference to the correlation values.
Note that you can also input a weight vector with "-weight_ts …", which could be 1 for non-censored time points and 0 for censored ones; such a beast would be created by afni_proc.py, and is likely called censor_${subj}_combined_2.1D in the *.results/ directory.
… and you can compare the results with and without the weight vector, to verify that there is no difference (again, assuming your time series all have zero mean).
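To see why zero-mean matters, here is an illustrative sketch in plain Python (toy data, not AFNI code): for series with exactly zero mean, zero-filled censored TRs contribute nothing to the covariance or variance sums, so the Pearson r over the full (zero-padded) series equals the r over just the kept time points.

```python
# Toy demonstration (hypothetical data, not the AFNI pipeline):
# Pearson r is unchanged by appending zeros to zero-mean series.
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

random.seed(0)
n_keep, n_cens = 80, 20
x = [random.gauss(0, 1) for _ in range(n_keep)]
y = [0.5 * a + random.gauss(0, 1) for a in x]

# force exactly zero mean, like residuals from a model with a constant
mx, my = sum(x) / n_keep, sum(y) / n_keep
x = [a - mx for a in x]
y = [b - my for b in y]

r_censored_only = pearson(x, y)             # only the kept TRs
r_with_zeros = pearson(x + [0.0] * n_cens,  # censored TRs zero-filled
                       y + [0.0] * n_cens)
print(abs(r_with_zeros - r_censored_only))  # tiny (float error only)
```

If the series did *not* have zero mean, the appended zeros would shift the mean and the two correlations would differ, which is exactly why the weight-vector comparison above is a useful sanity check.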

Note that in assessing the statistical significance of the Pearson r, you would want to use the degrees of freedom of the time series, which is often different from the number of time points.


Thank you. I guess I don’t quite follow “time series has zero mean” or how to check that. It is residuals from resting state processing.

I will look into finding the weight vector in the results directory. We did use afni_proc.py for this.


Hi Paul,

Unrelated but related... does 3dNetCorr allow for assessing the significance of the Pearson correlations? Or is that another afni function?



You can output a matrix of Fisher Z-transformed values (in addition to the Pearson r ones), using:

    -fish_z          :switch to also output a matrix of Fisher Z-transform
                      values for the corr coefs (r):
                          Z = atanh(r) ,
                      (with Z=4 being output along matrix diagonals where
                      r=1, as the r-to-Z conversion is ceilinged at
                      Z = atanh(r=0.999329) = 4, which is still *quite* a
                      high Pearson-r value).

The Fisher Z values are approximately normally distributed, and hence can be mapped directly to p-values, say.
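As a sketch of that mapping (plain Python, not part of 3dNetCorr): under the null, atanh(r) is approximately normal with standard error 1/sqrt(df - 3), so a two-sided p-value follows from the standard normal CDF. As noted earlier in the thread, df here should be the effective degrees of freedom of the time series, which (with censoring, bandpassing, etc.) is often less than the number of TRs.

```python
# Hypothetical helper (not an AFNI function): Fisher Z -> p-value.
import math

def fisher_z_pvalue(r, df):
    """Two-sided p-value for Pearson r via the Fisher Z approximation.
    df is the effective degrees of freedom of the time series."""
    z = math.atanh(r)                 # Fisher Z-transform
    se = 1.0 / math.sqrt(df - 3)      # standard error under the null
    zscore = abs(z) / se
    # standard normal CDF via erf; two-sided tail probability
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(zscore / math.sqrt(2.0))))

print(fisher_z_pvalue(0.3, 100))
```

The practical point: feeding in df = number of TRs when the effective df is smaller will make the p-values anti-conservative.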


Regarding having zero mean: note that a residual time series output by 3dDeconvolve or 3dTproject (as yours presumably is) should have a mean of zero, since it is orthogonal to all of the regressors, and the polort 0 regressors are constants that explicitly model the mean of each run.

You can test this with something like:

3dTstat -mean -prefix mean.errts errts.${sid}_${series}.${naming}.fanaticor+tlrc
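For intuition on why that mean comes out (essentially) zero, here is a toy sketch in plain Python (made-up data, not the AFNI pipeline): residuals of a least-squares fit that includes a constant regressor are orthogonal to that constant, so their mean vanishes up to floating-point error.

```python
# Toy demonstration: OLS residuals have zero mean when the model
# includes a constant (intercept) regressor. Data are made up.
import random

random.seed(1)
n = 50
t = list(range(n))
y = [2.0 + 0.1 * ti + random.gauss(0, 1) for ti in t]

# fit y = b0 + b1*t by ordinary least squares (closed form)
mt, my = sum(t) / n, sum(y) / n
b1 = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
     sum((ti - mt) ** 2 for ti in t)
b0 = my - b1 * mt

resid = [yi - (b0 + b1 * ti) for ti, yi in zip(t, y)]
print(sum(resid) / n)   # ~0, matching what 3dTstat -mean should show
```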

And just for kicks, you could even extract the non-censored time points (as done in the proc script) from the errts and run 3dNetCorr on that, again, just to see that the result is the same, e.g.:

set ktrs = `1d_tool.py -infile censor_${subj}_combined_2.1D -show_trs_uncensored space`
3dTcat -prefix errts.nocensor.nii.gz "errts.${sid}_${series}.${naming}.fanaticor+tlrc[$ktrs]"

The numbers won't be EXACTLY the same, due to truncation errors in the datasets and computations, but they should be very close. The mean won't be reported as exactly zero for the same reason, but it should be tiny.

  • rick