3dNetCorr on Event-Related Task FMRI

Hi AFNI gurus,

I’d like to run a whole-brain connectivity analysis on event-related data. I’ve read that 3dNetCorr can be used for this, but so far I have not found any documentation or forum threads about it.

Basically, I have a template of over 200 ROIs, and I’d like to correlate the average time series of each ROI with one another, separately for each condition.

Since resting-state connectivity uses the errts, I am guessing I should maybe use the fitts time series output by 3dDeconvolve. But how might I go about obtaining, say, the portions of the fitts time series that are specific only to ‘Condition A’? And even if this is possible, is it logically sound?

Alternatively, I could try modeling single-trial betas using -stim_times_IM and concatenating the resulting Coef sub-bricks corresponding to Condition A (for instance) into a time series. However, I am currently running a few afni_proc.py scripts with the IM condition, and they have been running for two days now. (There are several hundred trials, so this may be very taxing, and I’m not sure whether the programs are stalling or it is normal for them to take this long.)

The end goal is to have matrices for each condition and to run graph-theory analyses on them.

Please help! I’ve been racking my brain on this for the last few days and would love help finding a solution.

Howdy-

Well, 3dNetCorr is not built for that per se. One typically inputs an ROI map and a 4D time series dataset in the same space; the program then calculates the average time series in each ROI and builds a correlation matrix out of those (with extra functionality such as applying the Fisher Z transform if you want, as well as calculating whole-brain correlation maps for each average ROI time series, and probably some other things I am forgetting…).
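For reference, a minimal command is something like the following (the dataset names here are made up):

    # ROI-average time series -> correlation matrix, with the Fisher Z
    # version included; input names are placeholders
    3dNetCorr                          \
        -inset   errts.subj+tlrc       \
        -in_rois rois_200+tlrc         \
        -fish_z                        \
        -prefix  subj_netcorr

The correlation matrices are written out as a *.netcc text file, which other programs can read in.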

It is certainly possible to select subsets/intervals of a time series for calculating the correlations, but I don’t know that it is practical to calculate the correlation of just specific event responses (or, depending on the study design, even whether it is possible). Firstly, most event-related responses are quite short: by definition, event-related designs have a very brief stimulus, and the actual blood response will probably be something like 8-15 s long in total (depending on what kind of hemodynamic response is appropriate for modeling). For the TR used in most FMRI studies, that would be something in the range of 2-7 time points, and calculating a correlation from that few time points is not meaningful (the uncertainty would be sooo large, and it would likely be dominated by noise). Event-related HRFs tend to be quite noisy (which is why people tend to have many repeated events in a stimulus paradigm, for improved statistics), so I would think that a single HRF would be pretty meaningless to correlate, purely due to the noise present (FMRI is very noisy). Finally, in most event-related designs I have seen, there is expected overlap of HRFs across stimuli-- that is, the events are not typically spaced far enough apart for one brain response to start and finish before another event has begun.

Another consideration, even if you want to take the whole time series (not just one stimulus) and make correlation matrices: event-related designs often have timings randomized across subjects, so I am not sure that comparing correlation matrices across subjects would be meaningful.

Please let me know your thoughts about these thoughts.

–pt

Hello,

Similar to Paul’s excellent recommendations, I think the most (probably the only) reasonable way is through beta correlations, i.e. using -stim_times_IM, as you also propose. However, it is not normal for the 3dDeconvolve command to take so long. Could you please post it? Alternatively, what are the dimensions of the dataset(s), the number of trials per condition, and the TR?
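Just for reference, the IM setup for one condition looks roughly like this (the file names are placeholders, and the basis function is only an example):

    # per-trial betas for one condition via -stim_times_IM;
    # names and the BLOCK basis here are just examples
    3dDeconvolve                                      \
        -input      pb04.subj.r*.scale+tlrc.HEAD      \
        -polort     A                                 \
        -num_stimts 1                                 \
        -stim_times_IM 1 times.CondA.1D 'BLOCK(3,1)'  \
        -stim_label 1 CondA                           \
        -bucket     stats.subj_IM

Each trial then gets its own Coef sub-brick (labeled CondA#0_Coef, CondA#1_Coef, …).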

Thanks,
Cesar

Thank you both so much for your recommendations.

@Paul - your points make it clear why taking subsets of the fitts series wouldn’t work… My events are only 3 seconds long and are randomly distributed across each run, so I can’t see a good way of isolating them from fitts…

@Cesar and Paul - I’m glad to hear that the beta-series correlations might be a reasonable approach. I have read of it being done, but not in an ROI-to-ROI analysis. That is, the initial paper that proposed the method created a beta series within seed ROIs and looked for areas correlated with the seed series (‘Measuring functional connectivity during distinct stages of a cognitive task,’ Rissman, Gazzaley, & D’Esposito, 2004). Because I’m trying to take a more data-driven approach, I’d like to avoid seeded correlations.

In terms of the dimensionality, I’d say it’s high… However, upon a closer look at my output files, I now see that the problem is not with 3dDeconvolve, which has already completed, but rather with 3dREMLfit. Is 3dREMLfit perhaps less well suited to models with many regressors? I have 18 conditions, 16 of which are being modeled with single-trial betas. Looking at the output file for one subject, there are about 330 regressors across all the IM conditions (the number of regressors per condition varies between 9 and 33). In addition, the TR is 1.5 s, and there are many TRs per run: with runs concatenated, there are over 1,300 time points.

I’ve always used 3dREMLfit, but I do not have a strong conceptual grounding for why, or whether it is even optimal/preferable to 3dDeconvolve for a beta-series correlation. Any thoughts?

Thank you guys so much for the helpful advice so far!!!

Hello

I would say that 3dDeconvolve and 3dREMLfit will give you very similar beta values, because the main difference between the two programs is in the modeling of the residuals. Therefore, if 3dREMLfit is very slow, I would use 3dDeconvolve.
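If you do stay with the REML route, note that 3dREMLfit simply reuses the design matrix that 3dDeconvolve writes out; only the noise model differs. Something like (names are placeholders):

    # same X matrix from 3dDeconvolve, but with temporally
    # correlated residuals (ARMA(1,1)) in the fit
    3dREMLfit                            \
        -matrix X.xmat.1D                \
        -input  all_runs.subj+tlrc       \
        -Rbuck  stats.subj_REML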

Since the number of trials is comparable to the number of time points, you might also have a look at 3dLSS, which likewise estimates beta amplitudes at the trial level.
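A sketch of the 3dLSS call, which starts from the same IM design matrix (names are placeholders):

    # one least-squares fit per trial, using the -stim_times_IM matrix
    3dLSS                                \
        -matrix X.xmat.1D                \
        -input  all_runs.subj+tlrc       \
        -prefix LSS.subj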

Hope this helps,
Cesar

Hi Cesar, thank you for the input! I have considered 3dLSS and discussed it elsewhere on the message board, though for a different type of analysis.

Just to clarify, I have about 333 trials and over 1,300 1.5 s TRs (stimuli are repeated every 4.5 seconds), so there are actually about 2.5 TRs between each stimulus presentation.

Given these numbers, would you recommend sticking with 3dDeconvolve unless there is high collinearity across regressors? For instance, in the two subjects I’ve run so far there are (coincidentally) six pairs of regressors with medium-severity collinearity. (In the first subject, the collinearity is between six pairs of stimuli, but in the second subject three of the six collinearity warnings correspond to correlations between a stimulus and motion regressors.)

Considering that there are 333 total trials, I imagine this is not super concerning. Would you agree, or does this make you think 3dLSS might be preferable?

Given these numbers, would you recommend sticking with 3dDeconvolve unless there is high collinearity across regressors?

3dDeconvolve is preferable to 3dLSS unless 3dDeconvolve really shouts out loud about severe collinearity. Later on, when you compute the correlations among the regions, consider censoring out large beta values by, for example, setting [-2, 2] as your interval.

Hi Gang,
Thanks for this tip!
I just want to make sure I am understanding correctly, and that this data shouldn’t be ringing any alarm bells, so I have a couple of additional questions:

  1. As my data is currently processed, I have not used the ‘scale’ block in afni_proc.py. Do you think that for a beta-series correlation, since I am concatenating betas together, the data should have been normalized/scaled prior to 3dDeconvolve? Or is it okay not to use normalized betas? I’m thinking the data should be normalized, but I’d love some feedback.

  2. As a test I have created a beta series (with unscaled betas) for one of my conditions, which has 18 individual trials; the beta series therefore has 18 sub-bricks, one for each stimulus presentation. There is quite a LARGE range of beta values across these sub-bricks, with one brick having a voxel with a beta as low as -67 and another brick having a beta as high as +70. When I calculate the whole-brain average beta for each stimulus (i.e., each sub-brick), I get a much more normal story: the average betas are all between -1 and +1, and the standard deviations range between 1 and 2.6.
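(In case it matters, I computed those per-sub-brick averages roughly like this, with the file names changed:

    # one whole-brain average per sub-brick (trial)
    3dmaskave -quiet -mask full_mask.subj+tlrc \
        betaseries_CondA+tlrc > brick_means.1D

)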

Are these GIANT voxel-specific betas alarming? Or do you think it is not a concern, since the average betas are within the normally expected range?

Do you think that for a beta-series correlation, since I am concatenating betas together, the data should have been normalized/scaled prior to 3dDeconvolve?

I assume that you’re computing correlations among regions within each subject. If you have multiple runs/sessions, I would suggest a scaling step to calibrate the baseline fluctuations across runs/sessions. Otherwise, you could skip scaling.
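For reference, the ‘scale’ block in afni_proc.py amounts to the following per-run computation (this is the standard idiom; the file names here are placeholders):

    # scale each run to percent of the voxelwise run mean, capped at 200
    3dTstat -prefix rm.mean_r01 pb03.subj.r01.volreg+tlrc
    3dcalc  -a pb03.subj.r01.volreg+tlrc -b rm.mean_r01+tlrc  \
            -expr 'min(200, a/b*100)*step(a)*step(b)'         \
            -prefix pb04.subj.r01.scale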

Are these GIANT voxel-specific betas alarming?

Not necessarily. You may want to take a closer look and find out why it occurred. Sometimes censoring can cause such a problem; try it without censoring and see what happens.

Ok thanks!

So to clarify, you would test masking out voxels that have any of those large betas (i.e., greater than ±2)?

And I’ll look into the censoring thing. Wouldn’t adding back in censored stimuli potentially add spurious correlations, given the widespread sensitivity of voxelwise correlations to motion? Is the point just to see whether that gets rid of the large betas, not necessarily to analyze the final data that way?

you would test masking out voxels that have any of those large betas (i.e., greater than ±2)?

There are more principled ways to handle outliers, but censoring them is more economical and straightforward.

Wouldn’t adding back in censored stimuli potentially add spurious correlations, given the widespread sensitivity of voxelwise correlations to motion? Is the point just to see whether that gets rid of the large betas, not necessarily to analyze the final data that way?

I’m not aware of any systematic exploration of this. As I mentioned, censoring due to head motion is likely just one of the possible sources of outlying betas. Exploring even this seemingly small question would be quite an undertaking. If you can afford it, you can simply remove those beta outliers when computing the correlations among regions.

Also, Paul raises a potential concern and suggests that scaling should always be performed, because of brightness inhomogeneity across voxels within an ROI.

Hi gurus! I’m trying to figure out the best way to implement Gang’s suggestion:

Later on, when you compute the correlations among the regions, consider censoring out large beta values by, for example, setting [-2, 2] as your interval.

I’m not sure if I’m understanding this correctly - does this mean creating some sort of mask that zeroes out outlier voxels, which should then be input to 3dNetCorr along with the beta-series files?

If so, can someone help me with the syntax? I am having some trouble making this mask - I tried using 3dcalc on the beta-series file with the logical ‘within(a,-2,2)’, but that must not be right, as I’m not getting a normal-looking mask out of it. (In fact, my GUI even gets glitchy when I try to look at it!)
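For reference, here is roughly the command I tried (file names changed):

    # my attempt: this outputs one 0/1 sub-brick per trial rather than
    # a single 3D mask, which may be why it looks so strange to me
    3dcalc -a betaseries_CondA+tlrc -expr 'within(a,-2,2)' -prefix test_mask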

Furthermore, I’m having conceptual trouble with whether Gang is recommending zeroing out a voxel if it EVER has a beta larger than ±2, or zeroing it out if its AVERAGE across all time points is greater than ±2. OR maybe I’m thinking about this in entirely the wrong way, and Gang was not recommending a mask at all!

AAAAAAH my brain.

Thank you in advance!

It’s not clear to me what you’re planning to do in the next step. Are you computing the correlations among region pairs using those betas? Or are you doing something else?

Hi Gang, yes - that is exactly what I would like to do. Here is the specific outline.

For each subject:
1. I create artificial ‘time courses’ for each condition of interest. Each beta series will contain the Coef sub-bricks corresponding to the individual trials of a specific condition; so if I have two conditions, I will have two separate time-course files.
2. For each condition’s beta series, I run 3dNetCorr with an ROI atlas and output the Pearson and Fisher Z correlation matrices (a sketch of steps 1-2 is below).
3. Then I will run additional analyses on the subject-specific matrices (potentially network analyses), as well as group analyses on the matrices.
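For concreteness, here is roughly how I am implementing steps 1-2 for a hypothetical ‘CondA’ condition (all labels and file names are made up):

    # step 1: gather the per-trial CondA betas (labeled CondA#k_Coef by
    # -stim_times_IM) into a single beta series
    labels=$(3dinfo -label stats.subj_IM+tlrc | tr '|' '\n' \
               | grep 'CondA#.*_Coef' | paste -sd, -)
    3dTcat -prefix betaseries_CondA stats.subj_IM+tlrc"[$labels]"

    # step 2: ROI-by-ROI correlation (and Fisher Z) matrices
    3dNetCorr -inset betaseries_CondA+tlrc -in_rois rois_200+tlrc \
              -fish_z -prefix netcorr_CondA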
(BTW I just read your MBA paper, very cool!!)

Here are some of my questions:

  1. My main need is to figure out the best way to handle beta values that fall outside the normal range (you suggested [-2, +2]). When I did a visual inspection of one subject’s beta series, there were two obviously spurious/giant betas, corresponding to trials that had over half of their TRs censored, so those were easy to remove. However, there are other voxel time series that sometimes have beta values outside the ±2 interval but not so extreme - values of 3, 4, 5, 6, etc. Since you suggested masking out larger beta values, I am trying to figure out exactly what you mean. For instance, a voxel might have a large beta value for one stimulus (“time point”) while its values at all other time points are fine. Would you still get rid of such a voxel?
    If so, what is the correct command to create such a mask?

  2. How exactly do you interpret scaled betas that are greater than ±2? The goal of beta-series analysis is to exploit the variability in one beta series to see whether there are other series throughout the brain that it correlates, i.e., covaries, with. So is there an argument for not masking out those beta values? (I apologize; I still don’t have a strong enough understanding of collinearity and beta magnitudes to know whether this is a ridiculous thought.)

  3. An alternative analysis approach I would be interested in running is gPPI (rather than, or in addition to, the beta-series method) to assess condition-specific connectivity. However, I need to be able to run gPPI with a brain ROI/parcellation scheme (i.e., ROI-to-ROI connectivity) rather than from a seed ROI to all other voxels. Is there a way to do this in AFNI?

Thank you guys!

For the outliers, it is probably better to deal with them at the ROI level, after you average the voxels within each ROI. As for computing the correlations, I don’t have a perfect solution, but maybe censor out the trials with outlying values for such an ROI (and its pairing ones)? I’m not sure there is an easy mechanism in 3dNetCorr to achieve this.
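As a rough sketch of that kind of per-pair censoring, done outside of 3dNetCorr (the ±2 threshold and all names are just examples):

    # ROI-average beta series: one row per trial, one column per ROI
    3dROIstats -quiet -mask rois_200+tlrc betaseries_CondA+tlrc > roi_betas.1D

    # for one ROI pair (the first two columns, as an example): keep only
    # the trials where BOTH values fall inside [-2, 2], then correlate
    awk '($1 >= -2 && $1 <= 2 && $2 >= -2 && $2 <= 2)' roi_betas.1D > pair_censored.1D
    1dCorrelate pair_censored.1D'[0]' pair_censored.1D'[1]'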

there are other voxel time series that sometimes have beta values outside the ±2 interval but not so extreme - values of 3, 4, 5, 6, etc.

I was just using [-2, 2] as an example; you can decide what range would be empirically reasonable.

is there an argument for not masking out those beta values?

At the moment I cannot come up with a decent solution for the correlation matrix computation. As a modeler, I would definitely prefer a more principled way of handling this than hard thresholding, but that would be a much larger undertaking.

I need to be able to run gPPI with a brain ROI/parcellation scheme (i.e., ROI-to-ROI connectivity) rather than from a seed ROI to all other voxels. Is there a way to do this in AFNI?

How about running the conventional seed-based PPI at the whole-brain level, and then extracting the ROIs you are interested in?
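For the extraction step, something along these lines might work (the sub-brick label and file names are hypothetical):

    # average the whole-brain PPI interaction beta within each ROI
    # of the parcellation
    3dROIstats -quiet -mask rois_200+tlrc \
        stats.subj_PPI+tlrc'[PPI.CondA#0_Coef]' > ppi_CondA_roi_betas.1D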