Blurring and 3dNetCorr

Dear Colleagues,

Stop me if you’ve heard this one. A mechanical, chemical, and software engineer get into a car. It won’t start. The mechanical engineer says, “Hmm, we should take apart the starter.” The chemical engineer says “What?! No! We should analyze the composition of the battery acid.” The software engineer says, “Let’s get out and get back in again.”

How might you recommend that I do spatial smoothing in conjunction with 3dNetCorr? The mean time series of each region of interest should be calculated before blurring, whereas the correlations themselves should be calculated after blurring.

Sincerely,

Dante

Ciao, Dante-

As always, a pleasure seeing you post here!

Re. 3dNetCorr and blurring:
I would say that you should not blur the data during processing. Let 3dNetCorr do the averaging of time series within each ROI for you (which it does internally). If you blur the data prior to that, you will likely artificially boost the correlations among neighboring ROIs, because they now share time series information.
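To make the mechanism concrete, here is a toy numpy sketch (my illustration, not AFNI code): two adjacent ROIs are given completely independent voxel time series, yet a spatial blur applied before ROI averaging leaks signal across their shared border and inflates the apparent correlation. All names and sizes here are arbitrary.

```python
# Toy demonstration (not AFNI code): blurring voxel time series before
# ROI averaging leaks signal between adjacent ROIs and inflates their
# apparent correlation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
n_t = 200                                  # time points
line = rng.standard_normal((40, n_t))      # 40 voxels along one spatial line

# Two adjacent 10-voxel ROIs with independent underlying signals
roi_a, roi_b = slice(0, 10), slice(10, 20)

def roi_corr(data):
    """Correlate the two ROI-mean time series."""
    a = data[roi_a].mean(axis=0)
    b = data[roi_b].mean(axis=0)
    return np.corrcoef(a, b)[0, 1]

r_unblurred = roi_corr(line)
# Spatial blur along the voxel axis (axis=0) applied BEFORE averaging
r_blurred = roi_corr(gaussian_filter1d(line, sigma=3.0, axis=0))

print(f"r without blur:      {r_unblurred:.3f}")   # near zero
print(f"r with pre-ROI blur: {r_blurred:.3f}")     # noticeably larger
```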

–pt


Ciao, Paulo!

We must now switch to English, or the conversation will end, because I shamefully do not speak Italian. One day I have to ask you where you learned Italian.

Exactly. When I do seed-based functional correlations for one region:

  1. I take a region and calculate its mean time series
  2. I plop that out to a separate file for later use
  3. Only then do I perform spatial smoothing as the final “preprocessing” step
  4. Then I take my outputted file and use it to do the correlations

That is my conundrum: I must not smooth spatially before calculating the mean time series, yet I must smooth afterwards, immediately before using that mean time series in the correlations. However, the sublime convenience of 3dNetCorr creates a paradox, because it performs everything in a single step. Maybe that is just the reality: the convenience of 3dNetCorr may simply preclude spatial smoothing.
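For what it is worth, the ordering in steps 1–4 can be sketched in a few lines of numpy (a toy illustration with made-up arrays, not an AFNI pipeline): the seed time series comes from the unblurred data, and only the voxel data used in the final correlations gets smoothed.

```python
# Minimal numpy sketch of the ordering above (hypothetical arrays, not
# an AFNI pipeline): (1) average the ROI on unblurred data, (2) keep it,
# (3) blur the voxel data, (4) correlate the saved seed against the
# blurred voxels.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
n_vox, n_t = 60, 150
data = rng.standard_normal((n_vox, n_t))   # voxels x time
roi_mask = np.zeros(n_vox, dtype=bool)
roi_mask[5:15] = True                      # a 10-voxel ROI

# Steps 1-2: mean time series from the UNBLURRED data
seed_ts = data[roi_mask].mean(axis=0)

# Step 3: spatial smoothing as the last "preprocessing" step
blurred = gaussian_filter1d(data, sigma=2.0, axis=0)

# Step 4: seed-to-voxel Pearson correlations against the blurred data
z = (blurred - blurred.mean(axis=1, keepdims=True)) / blurred.std(axis=1, keepdims=True)
s = (seed_ts - seed_ts.mean()) / seed_ts.std()
corr_map = (z @ s) / n_t                   # one r value per voxel
```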

This has been gnawing at me a lot lately, because spatial smoothing not only gives us some critical statistical niceness. It also sidesteps the residual geometric distortion present in any fMRI study where no phase image was collected for later use with epi_b0_correct.py (as in my case), provided the smoothing you planned to do anyway is larger than the distortion.

Sincerely,

Dante

Ha, OK (or, as they say in Italy, OK).

Re. the correlation conundrum:

Ah, I see. Indeed, that is a bit tricky. So, you want region-to-voxelwise correlation, not within-network-of-regions correlation? Well, at least 3dNetCorr can output all the time series at once:


    -ts_out          :switch to output the mean time series of the ROIs that
                      have been used to generate the correlation matrices.
                      Output filenames mirror those of the correlation
                      matrix files, with a '.netts' postfix.

    -ts_label        :additional switch when using '-ts_out'. Using this
                      option will insert the integer ROI label at the start
                      of each line of the *.netts file created. Thus, for
                      a time series of length N, each line will have N+1
                      numbers, where the first is the integer ROI label
                      and the subsequent N are scientific notation values.

    -ts_indiv        :switch to create a directory for each network that
                      contains the average time series for each ROI in
                      individual files (each file has one line).
                      The directories are labelled PREFIX_000_INDIV/,
                      PREFIX_001_INDIV/, etc. (one per network). Within each
                      directory, the files are labelled ROI_001.netts,
                      ROI_002.netts, etc., with the numbers given by the
                      actual ROI integer labels.

… and it can even calculate the set of region-to-voxelwise correlation maps for you:


    -ts_wb_corr      :switch to perform whole brain correlation for each
                      ROI's average time series; this will automatically
                      create a directory for each network that contains the
                      set of whole brain correlation maps (Pearson 'r's).
                      The directories are labelled as above for '-ts_indiv'
                      Within each directory, the files are labelled
                      WB_CORR_ROI_001+orig, WB_CORR_ROI_002+orig, etc., with
                      the numbers given by the actual ROI integer labels.

but I take your point about the role of smoothing in effectively reducing some distortion effects.

Part of me wonders about doing the region-to-voxelwise calculations first, and then applying a blur to those maps. Correlation is not a linear process, so it is not exchangeable with the linear process of blurring, but that would probably fairly closely approximate having a separate blurred dataset and calculating the correlation maps “properly”. Unfortunately, verifying that hypothesis requires exactly the dual processing one would hope to avoid.
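A cheap synthetic check of that hunch (toy random data, not a validation on real fMRI): compute blur-then-correlate and correlate-then-blur maps on the same data. The two maps differ in scale, but when voxel variances are similar they track each other closely.

```python
# Toy comparison (synthetic data, not a validation): correlate each voxel
# with a seed and then blur the r map, versus blurring the data and then
# correlating. The maps differ in magnitude but agree closely in pattern.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
n_vox, n_t = 80, 200
data = rng.standard_normal((n_vox, n_t))
seed = data[:10].mean(axis=0)              # seed from the first 10 voxels

def corr_with_seed(d):
    """Pearson r of each voxel's time series with the seed."""
    dz = (d - d.mean(axis=1, keepdims=True)) / d.std(axis=1, keepdims=True)
    sz = (seed - seed.mean()) / seed.std()
    return (dz @ sz) / d.shape[1]

map_blur_then_corr = corr_with_seed(gaussian_filter1d(data, sigma=2.0, axis=0))
map_corr_then_blur = gaussian_filter1d(corr_with_seed(data), sigma=2.0)

# Spatial agreement between the two orderings
agreement = np.corrcoef(map_blur_then_corr, map_corr_then_blur)[0, 1]
print(f"spatial correlation of the two maps: {agreement:.3f}")
```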

Will have to ponder a bit…

–pt


Paul,

Ah, yes. You’re right. I forgot that 3dNetCorr wonderfully gives me correlation matrices based on every region of interest’s average time series. Thank you for patiently reawakening those circuits in my brain.

Yes, my main result will be the plot generated by 3dLME from the region-to-region correlation matrices that 3dNetCorr produces, fed into 3dLME via the fat_lme_prep.py program. Hmm, that does make spatial smoothing moot in a way. I will need to ponder that a bit as well.

Sincerely,

Dante

P.S.: Although the region-to-region correlations will be my main result, it does seem to me at present that the region-to-voxelwise correlations that I am getting via the -ts_wb_Z option must also be used to show whole-brain maps in the publication for a region or two, so it’s not totally moot.