Task-Based Functional Connectivity - 3dDeconvolve & 3dSynthesize questions

Hi AFNI wizards,

I am returning to an old dataset on which I previously ran an activation analysis, in order to run some ROI-to-ROI FC network analyses, and I have a few questions (further task context below if needed):

1a. How does 3dDeconvolve determine the duration (number of TRs) of its modeled HDR function for a -stim_times_AM1 'dmBLOCK(1)' set of stimulus times in the output .xmat stimulus columns? E.g., is it a function, a set number of TRs, or something else that accounts for the long tail of the HDR?

2a. Is it possible to use a variation of 3dDeconvolve's -fitts output to isolate a single condition's relevant BOLD signal from one or more other task conditions, or is 3dSynthesize + 3dcalc (per Gang's Simple Correlation Analysis | afni.nimh.nih.gov), used to remove the effects/conditions of no interest from the original signal, the correct approach? In other words: can you select the best-fit estimated BOLD data of a single condition out of the full model, or can you only model the effects of no interest, remove them, and use what is left behind?

2b. I recall reading somewhere in a help file or PowerPoint that the 3dSynthesize + 3dcalc script approach might be "out of date". Is there an updated process, or does this merely refer to a newer Python composite script?

Further context: the task is a variable-duration event-related auditory discrimination task with 3 randomly interleaved conditions (Strong, Weak, Silence). I would like to separate these and compare the various ROI-to-ROI correlations within the network, across the group, between the conditions. For example: is FC stronger or weaker between ROI-1 and ROI-5 during strong stimuli versus weak stimuli? My current 3dDeconvolve script and an example stim-times file are attached.

Thank you for tackling several different questions. You guys really provide a wonderful resource and support system for the community here.


Just re-broadcasting… Any help appreciated



Sorry for the slow response!

To model varying durations across trials, I suggest that you use dmUBLOCK(-X) in 3dDeconvolve, where X is the duration with which you want to explicitly associate the effect estimate (beta). Consider adopting the same X value across all subjects so that the effects are consistent, comparable, and interpretable at the group level.
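For concreteness, a minimal sketch of what that call could look like (dataset names, timing-file names, labels, and the choice of 9 s are all placeholders, not your actual script):

```shell
# Sketch only: filenames and the 9 s value are placeholders.
# dmUBLOCK(-9) scales the duration-modulated response so that each beta
# is interpretable as the response to a 9-second stimulus; using the same
# value for every subject keeps the betas comparable at the group level.
3dDeconvolve                                                              \
    -input epi_run1+tlrc epi_run2+tlrc                                    \
    -polort A                                                             \
    -num_stimts 3                                                         \
    -stim_times_AM1 1 stim_strong.1D  'dmUBLOCK(-9)' -stim_label 1 Strong  \
    -stim_times_AM1 2 stim_weak.1D    'dmUBLOCK(-9)' -stim_label 2 Weak    \
    -stim_times_AM1 3 stim_silence.1D 'dmUBLOCK(-9)' -stim_label 3 Silence \
    -x1D X.xmat.1D                                                        \
    -fitts fitts -errts errts -cbucket cbucket -bucket stats
```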

As far as I know, there is no ideal way to remove some effects/conditions of no interest from the original signal. The reason is that the estimated/modeled effects are very crude, and there is a huge amount of variation across trials. In addition, the assumed regressors are just a poor man’s best approximation at the moment. Thus, you can only remove the major components of an effect. If you’re determined to try, 3dSynthesize combined with 3dcalc is probably the way to go.
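If you do go down that road, the usual pattern is roughly the following (a hedged sketch on placeholder names; it assumes 3dDeconvolve was run with -cbucket and -x1D, and the labels must match the -stim_label values from that run):

```shell
# Rebuild only the modeled effects of no interest from the saved betas
# and design matrix, then subtract them from the original time series.
3dSynthesize -cbucket cbucket+tlrc -matrix X.xmat.1D \
             -select baseline Weak Silence           \
             -prefix synth_no_interest

# What remains is (approximately) the Strong-related signal plus noise.
3dcalc -a epi_all_runs+tlrc -b synth_no_interest+tlrc \
       -expr 'a-b' -prefix signal_strong
```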

Thank you for swinging back around, and especially for your patience and perspective with the follow-up questions below.


Regarding dmUBLOCK: my understanding was that this was a variant used for PPI analyses, where you have a psychological measure/performance value that varies and can be applied as a potential moderating factor. Unfortunately, while our task does have "strong" vs. "weak" stimuli, the performance data were so good that we had a ceiling effect and insufficient variance to use such an approach.

Re dmUBLOCK(-X): our stimuli do not have a consistent duration, as mentioned. The nature of the audio stimuli means they vary between 8 and 10 seconds yet contain the same number of auditory tones (an intended control), depending on the time between tones. Is there a significant reason/benefit to treat all stimuli as a single length (potentially ignoring TRs of stimuli), rather than using dmBLOCK with -stim_times_AM1 and instructing AFNI to fit the duration of each event using the "onset:duration" formatting?
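For reference, the married "onset:duration" timing format for -stim_times_AM1 with dmBLOCK looks like the following (the onsets and durations here are made-up illustrative values, one row per run):

```shell
# Write a hypothetical per-run timing file: each entry is onset:duration
# in seconds, with one line per run. All values are illustrative only.
cat > stim_strong.1D <<'EOF'
12.5:9.0 47.5:8.5 80.0:10.0
15.0:8.5 52.5:9.5 90.0:8.0
EOF
```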

Re the original 3dDeconvolve HDR-length inquiry: in the attached image you can see an Excel visualization of TRs (rows) broken down by stimulus condition alongside the 3 HDR models from 3dDeconvolve's .xmat.1D output columns. My mentor noticed that the columnar models tend both to combine trials that are close together and to append 7 or 8 TRs to the end of each stimulus period, and he wanted to know how these are computed, since the length varies so much.

Re 3dSynthesize: thank you for the warning and recommendation. It is noted and will be discussed. If we decide to proceed, then I believe our process would be the pipeline below. Please let me know if you see any holes or issues, and whether the '-cenfill none' option sounds correct.

A. Remove effects of no interest via 3dSynthesize "-cenfill none" + 3dcalc, for each condition separately (e.g., [Strong] = original signal minus [Weak + Silence + 6 motion parameters + censored TRs])
B. 3dmaskave our ROIs
C. Use the .xmat.1D column TR periods for each condition (as in the Excel sheet above) to create 0/1 condition time masks for trimming the .1D signals
D. Remove all 0s to combine condition periods into contiguous blocks.
E. Compare the ROI-to-ROI correlations between conditions' blocks.
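For what it's worth, a minimal sketch of steps A-E might look like the following (all dataset, mask, and file names are placeholders, and the 0/1 condition mask strong_mask.1D is assumed to have been exported separately, e.g. from the Excel sheet or with 1d_tool.py). One caveat: if I read the 3dSynthesize help correctly, "-cenfill none" drops the censored TRs from the synthesized output, so its length will no longer match the original EPI for the 3dcalc subtraction; "-cenfill zero" (used below) keeps the lengths aligned.

```shell
# A. Reconstruct the effects of no interest and subtract them
#    (shown for the Strong condition; repeat per condition).
3dSynthesize -cbucket cbucket+tlrc -matrix X.xmat.1D \
             -select baseline Weak Silence           \
             -cenfill zero -prefix synth_noint_strong
3dcalc -a epi_all+tlrc -b synth_noint_strong+tlrc    \
       -expr 'a-b' -prefix resid_strong

# B. Extract the mean time series for each ROI.
3dmaskave -mask ROI1+tlrc -quiet resid_strong+tlrc > roi1_strong.1D

# C/D. Keep only the TRs flagged 1 in the 0/1 condition mask,
#      concatenating the surviving TRs into one contiguous series.
paste strong_mask.1D roi1_strong.1D | awk '$1 == 1 {print $2}' > roi1_strong_trim.1D

# E. Correlate the trimmed ROI series, e.g. with AFNI's 1dCorrelate.
1dCorrelate roi1_strong_trim.1D roi5_strong_trim.1D
```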


Duration modulation is usually used for situations where the duration varies substantially from trial to trial. I don't fully know the nature and details of your experimental design, so I cannot directly offer concrete suggestions for handling your scenario. In what sense is the trial duration varying? When the duration varies, two aspects of the BOLD response are potentially affected: the length and the magnitude of the hemodynamic response. The response length is roughly linear in the stimulus duration, while the magnitude has a nonlinear relationship with the duration (initially increasing but gradually saturating). The several options available in 3dDeconvolve are quite confusing because each of them may have different underlying assumptions.

As for isolating the effects associated with one particular condition by removing all other modeled effects, I don't find the 3dSynthesize+3dcalc method compelling. As there is a substantial amount of effects not properly modeled, the removal might be largely ineffective.

Just to throw in an alternative idea here (some people call it beta-series correlation):

  1. model each trial separately;
  2. formulate your correlation matrix based on the beta series across the trials of the same condition.

With this approach, you don't need to worry about the duration variability, and you obtain the correlation matrix directly from the modeled effects instead of jumping through the tangled hoops.
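A rough sketch of what the beta-series route could look like in AFNI (a sketch under assumptions: file names, mask names, and sub-brick indices are placeholders, and whether -stim_times_IM accepts the married onset:duration format with dmUBLOCK should be checked against the 3dDeconvolve help):

```shell
# 1. One regressor per trial: -stim_times_IM gives each trial its own beta.
3dDeconvolve -input epi_all+tlrc -polort A -num_stimts 3                  \
    -stim_times_IM 1 stim_strong.1D  'dmUBLOCK(-9)' -stim_label 1 Strong  \
    -stim_times_IM 2 stim_weak.1D    'dmUBLOCK(-9)' -stim_label 2 Weak    \
    -stim_times_IM 3 stim_silence.1D 'dmUBLOCK(-9)' -stim_label 3 Silence \
    -x1D X_IM.xmat.1D -bucket stats_IM -cbucket betas_IM

# 2. Collect one condition's per-trial betas into their own dataset; the
#    index range below is a placeholder (find the real sub-brick indices
#    with: 3dinfo -label betas_IM+tlrc).
3dbucket -prefix betas_strong 'betas_IM+tlrc[3..62]'

# 3. Average within each ROI to get one beta series per ROI per condition,
#    then correlate the beta series across ROIs.
3dmaskave -mask ROI1+tlrc -quiet betas_strong+tlrc > roi1_betas_strong.1D
3dmaskave -mask ROI5+tlrc -quiet betas_strong+tlrc > roi5_betas_strong.1D
1dCorrelate roi1_betas_strong.1D roi5_betas_strong.1D
```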

Appreciate your patience with the response. I will try to touch on each topic below:


Duration modulation: the auditory stimuli (both "Strong" and "Weak") involved the same number of identical tones, which varied only in their patterns and inter-beat intervals (IBI; hundreds of milliseconds between tone onsets). Practically speaking, these differences meant the stimulus duration ranged from 8.5 to 10.5 seconds (with a 2.5 s TR), depending on which patterns and IBIs were used in creating the stimuli. In other words, stimuli varied by as much as an entire TR (and by as much as 3 TRs in instances where movement censoring shortened a given stimulus presentation). The attached infographic should also help visualize this.

3dDeconvolve options: yes, they can be pretty confusing. Please let me know if any specifics of the protocol indicate that a different option is warranted, and what that may be. We want to be sure we are approaching this correctly, and we are happy to answer questions. As mentioned, we landed on dmBLOCK given that we had randomly ordered conditions whose stimuli varied by 1-3 TRs. The -stim_times_AM1 version was selected since the Strong vs. Weak patterns aren't quantifiably different (rather, they are merely classified by past experimental studies on the perception of rhythm) and discrimination performance was above 95%. Thoughts?

-stim_times_IM: if we were to employ the IM option rather than AM1, as you mentioned in your email, would 3dDeconvolve be able to handle modeling 180 different stimulus events (i.e., 60 stimulus events x 3 conditions: Strong, Weak, Silence) per subject?

HDR model length: from what you are saying, it sounds like it is a linear equation based on the duration of each stimulus presentation as provided in the stim_times file, correct? Can you provide any other information on what that equation is, or perhaps how to find it (both to better understand the mechanics and for citation in publication)?

Beta series: this does sound like a safer approach for the correlation matrix. Appreciate you mentioning it. Unfortunately, we are also doing an effective connectivity analysis (specifically a uSEM approach via the GIMME software; Gates et al., 2010). Correct me if I misunderstand beta-series analysis: it outputs something akin to a Z-score measure of how ROI BOLD signals predict one another rather than an r value, and we would not have the isolated BOLD signals from each condition to feed into an effective connectivity analysis to estimate likely directional paths. That is, if we want to do effective connectivity, it would require something similar to 3dSynthesize+3dcalc to isolate each condition's BOLD signal to have the necessary input for the ecMRI predictions, correct?

3dSynthesize+3dcalc: do any other options beyond this come to mind for untangling the 3 interleaved conditions?

If your ultimate goal is to untangle the 3 interleaved conditions, I don't think that you could reasonably achieve that by modeling the BOLD response at the condition level (the issue of handling duration variation aside). The reason is that the BOLD response varies substantially across trials; therefore, simply modeling the BOLD responses at the condition level would not be good enough to untangle the effects at the trial level.

On the other hand, beta-series analysis is good for computing the inter-regional correlations with the estimated effects. If you're planning to provide the time series at each region as input for the "connectivity" analysis, I'm afraid I can't see an effective solution for that.