Ha, OK (or, as they say in Italy, OK).
Re. the correlation conundrum:
Ah, I see. Indeed, that is a bit tricky. So, you want region-to-voxelwise correlation, not within-network-of-regions correlation? Well, at least 3dNetCorr can output all the time series at once:
   -ts_out      :switch to output the mean time series of the ROIs that
                 have been used to generate the correlation matrices.
                 Output filenames mirror those of the correlation
                 matrix files, with a '.netts' postfix.
   -ts_label    :additional switch when using '-ts_out'. Using this
                 option will insert the integer ROI label at the start
                 of each line of the *.netts file created. Thus, for
                 a time series of length N, each line will have N+1
                 numbers, where the first is the integer ROI label
                 and the subsequent N are scientific notation values.
   -ts_indiv    :switch to create a directory for each network that
                 contains the average time series for each ROI in
                 individual files (each file has one line).
                 The directories are labelled PREFIX_000_INDIV/,
                 PREFIX_001_INDIV/, etc. (one per network). Within each
                 directory, the files are labelled ROI_001.netts,
                 ROI_002.netts, etc., with the numbers given by the
                 actual ROI integer labels.
… and it can even do the set of region-to-voxelwise correlation map calculations for you:
   -ts_wb_corr  :switch to perform whole brain correlation for each
                 ROI's average time series; this will automatically
                 create a directory for each network that contains the
                 set of whole brain correlation maps (Pearson 'r's).
                 The directories are labelled as above for '-ts_indiv'.
                 Within each directory, the files are labelled
                 WB_CORR_ROI_001+orig, WB_CORR_ROI_002+orig, etc., with
                 the numbers given by the actual ROI integer labels.
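Putting those options together, a single 3dNetCorr call could produce both the per-ROI time series and the whole-brain maps; something like the following sketch, where the input EPI and ROI datasets are placeholder names:

```shell
# Hypothetical dataset names; the options are those quoted above.
3dNetCorr                    \
    -inset   rest_epi+orig   \
    -in_rois my_rois+orig    \
    -prefix  NETCORR         \
    -fish_z                  \
    -ts_out  -ts_label       \
    -ts_wb_corr
```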
but I take your point about the role of smoothing in effectively reducing some distortion effects.
Part of me wonders about doing the region-to-voxelwise calculations first and then applying a blur to the resulting maps. Correlation is not a linear operation, so it doesn’t commute with the linear operation of blurring, but the result would probably be a fairly close approximation to blurring a separate copy of the dataset and calculating the correlation maps “properly”. Unfortunately, verifying that hypothesis requires doing exactly the dual processing one would hope to avoid.
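The hypothesis could at least be spot-checked cheaply on synthetic data before committing to dual processing of the real thing. A minimal numpy sketch (everything here is hypothetical: a 1-D “brain” of voxels, a crude moving average standing in for a Gaussian blur) compares blur-then-correlate against correlate-then-blur and reports the discrepancy between the two maps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_t = 200, 150

# Shared signal plus voxel-wise noise, so neighboring voxels cohere.
signal = rng.standard_normal(n_t)
data = 0.5 * signal + rng.standard_normal((n_vox, n_t))

seed = data[:10].mean(axis=0)  # "ROI average" time series


def corr_map(ts, seed):
    """Pearson r of the seed against every voxel's time series."""
    ts_z = (ts - ts.mean(1, keepdims=True)) / ts.std(1, keepdims=True)
    sd_z = (seed - seed.mean()) / seed.std()
    return ts_z @ sd_z / ts.shape[1]


def blur(x, k=5):
    """Spatial moving average along the voxel axis (stand-in for a blur)."""
    kern = np.ones(k) / k
    return np.apply_along_axis(
        lambda v: np.convolve(v, kern, mode="same"), 0, x
    )


properly = corr_map(blur(data), seed)   # blur the data, then correlate
shortcut = blur(corr_map(data, seed))   # correlate, then blur the map

print("max |difference|:", np.abs(properly - shortcut).max())
```

Note that on toy data like this, blurring the time series first also boosts SNR before the correlation, so how closely the two maps agree is exactly the empirical question; the script just quantifies the gap for one configuration.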
Will have to ponder a bit…