I had a question regarding the interpretation of the beta weights produced by 3dDeconvolve. For my experiment, I have analyzed each subject’s breathhold data in 3dDeconvolve using the following script:
3dDeconvolve -input 101.breathhold.BOLD.nophymotcorr.tcat.scale.nii \
    -num_stimts 1 \
    -stim_times 1 breathhold_stimulus.1D 'BLOCK(15,1)' \
    -stim_label 1 breathhold \
    -iresp 1 BH_test.101 \
    -fout -tout -rout -bout -x1D X_test.xmat.1D -xjpeg X_test.jpg
For the stim_times I have a breathhold_stimulus.1D file that has the breathhold onset times, plus a delay that represents the average amount of time it took for the BOLD signal to peak after each breathhold for that subject. For example, a participant might have a BOLD signal that peaks 8.7 seconds on average after each breathhold. Their stimulus times file would read 43.7 90.7 137.7 184.7 231.7.
Now I am extracting mean values from the breathhold coefficient sub-brick of the resulting stats file for different ROIs using 3dROIstats. What does this number represent? If I understand correctly, 3dDeconvolve runs a GLM at each voxel and estimates the beta weight. I also understand that a beta weight is the average amount by which the dependent variable changes when the independent variable increases by one unit, with the other independent variables held constant. But what does this mean in this context? What unit is this amount measured in? Your thoughts are much appreciated!
If I understand your experiment correctly, each stimulus (breath-holding) onset time should be the beginning of each breath-holding event. In other words, you should not manually add a delay of 8.7 seconds to the onset times because such delays are handled through the model.
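As a rough illustration of the built-in delay (a synthetic Python sketch using a generic gamma-variate HRF, not AFNI's exact BLOCK kernel), convolving a 15 s boxcar with a hemodynamic response function already pushes the modeled peak well past the stimulus onset, so shifting the onset times would add the hemodynamic delay a second time:

```python
import numpy as np

dt = 0.1                                 # time grid resolution (s)
t = np.arange(0, 60, dt)
hrf = t**4 * np.exp(-t)                  # gamma-variate HRF, peaks at t = 4 s
hrf /= hrf.max()

boxcar = ((t >= 0) & (t < 15)).astype(float)   # 15 s breath-hold starting at t = 0
reg = np.convolve(boxcar, hrf)[:t.size] * dt   # BLOCK-like regressor

peak = t[reg.argmax()]
print(peak)   # modeled response peaks well after the 0 s onset
```

Here the modeled response peaks many seconds after the unshifted onset time, which is the sense in which the convolution model already "handles" the delay.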
In this context, the beta value is a multiplier (or scaling factor) on the regressor. Since the regressor is scaled to have a peak value of 1, the beta can be interpreted as percent signal change relative to the voxel-wise mean, provided you scale the input data accordingly during preprocessing.
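To make the interpretation concrete, here is a small synthetic Python sketch (made-up numbers, not AFNI output): if the data are scaled so the baseline sits near 100 and the regressor peaks at 1, the fitted beta for that regressor reads directly as percent signal change:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical regressor, already convolved and scaled to a peak of 1
reg = np.zeros(n)
reg[40:60] = np.linspace(0.0, 1.0, 20)   # ramp up to the peak
reg[60:80] = np.linspace(1.0, 0.0, 20)   # ramp back down

true_psc = 2.5                            # a 2.5% signal change
y = 100.0 + true_psc * reg + rng.normal(0, 0.1, n)   # baseline scaled to ~100

# GLM with an intercept (the role of 3dDeconvolve's baseline polynomials)
X = np.column_stack([np.ones(n), reg])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta[1])   # close to 2.5, i.e. the percent signal change
```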
Many thanks, Gang. If I were to scale the data for this, would I set the mean to 0.5 with a range of 0-1? Perhaps with a script like the following?
# scale each voxel time series to have a mean of 0.5, with a range of [0,1]
3dTstat -prefix rm.mean.nii 101.breathhold.BOLD.nophymotcorr.tcat.nii
3dcalc -a 101.breathhold.BOLD.nophymotcorr.tcat.nii -b rm.mean.nii -c 101.brain_mask+orig -expr 'c * min(1, a/b*0.5)*step(a)*step(b)' -prefix 101.breathhold.BOLD.nophymotcorr.tcat.scale.nii
I don’t see the point of adding a scaling factor of 0.5 in your 3dcalc command. Could you elaborate on it a bit?
Perhaps I don’t understand the scaling process too well. I got this script from someone else in my lab where they scaled the data to have a range of 0-200 and a mean of 100. I tried to apply the same logic to my analysis using a range of 0-1 and a mean of 0.5. I just want to scale the data in a way that allows me to interpret the beta weights in the manner you described in your initial response:
“In this context, the beta value is a multiplier (or scaling factor) on the regressor. Since the regressor is scaled to have a peak value of 1, the beta can be interpreted as percent signal change relative to the voxel-wise mean, provided you scale the input data accordingly during preprocessing.”
If you scale your data like this,
3dcalc -a 101.breathhold.BOLD.nophymotcorr.tcat.nii -b rm.mean.nii -c 101.brain_mask+orig -expr 'c * min(200, a/b*100)*step(a)*step(b)' -prefix 101.breathhold.BOLD.nophymotcorr.tcat.scale.nii
the data will be scaled by the mean value at each voxel. This step results in values around 100. The cap of 200 guards against voxels where strange things happen (e.g., the mean is too small). Since the slow drift in the signal is modeled as an additive effect in the form of polynomials, the regression coefficients (beta values) can be directly interpreted as percent signal change, because the baseline will be close to 100 after the scaling.
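The arithmetic of that 3dcalc expression can be checked with a toy Python example (made-up numbers standing in for one voxel's time series and its 3dTstat mean):

```python
import numpy as np

ts = np.array([950.0, 1000.0, 1050.0, 1020.0, 980.0])  # raw BOLD values (made up)
mean = ts.mean()                                        # the 3dTstat mean (the -b input)
scaled = np.minimum(200.0, ts / mean * 100.0)           # min(200, a/b*100)

print(scaled)          # values hover around 100
print(scaled.mean())   # ~100, so a beta of 2 means a 2% signal change
```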
Many thanks, Gang. I think I have the scaling part sorted now.
I did have a question about adding a delay to the BH onset times. I’m wondering what sort of delay is built into 3dDeconvolve’s BLOCK model that makes it inappropriate to add my own. A built-in delay might make sense for behaviours like button-pressing tasks, but I have read in the literature about specific delay values (e.g., 9 s) being added to models to account for the delay in the BOLD response to the breathhold.
For example, from Bright & Murphy (2013), who if I understand correctly used AFNI for their analyses:
“The timing of BOLD response to respiratory challenges varies across the healthy brain (Bright et al., 2009), and this temporal mismatch must be taken into account prior to further analysis. The breath-hold regressors were shifted with respect to the data to account for delays between the breath-hold challenge, the physiological response, and the resulting BOLD signal changes.”
Our approach was, for each subject, to calculate the average time required for the BOLD response to peak after each breathhold, and then add this average to each BH onset time for that subject. The issue I find is that, without this individualized delay, the breathhold is often correlated with decreases in activation, which makes little sense given that the breathhold should increase cerebral blood flow. But I might be missing something, and I would like to better understand the delay built into AFNI’s 3dDeconvolve block waveform. Any thoughts would be welcome. You’ve been very helpful.
I have to admit that I don’t know anything about the BOLD response associated with breath-holding, so you’re definitely better positioned, with your experience, to make the judgment call than I am.
One thing I may add is that, instead of presuming the response delay (and shape), it might be worthwhile to estimate the response shape based on the data through basis functions such as TENTzero, if your situation permits.
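A minimal sketch of the idea behind FIR/tent-style estimation (synthetic Python; this illustrates the concept, not AFNI's actual TENTzero implementation): one regressor per lag after onset, so the fitted betas trace out the response shape and reveal the delay instead of presuming it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300                                   # time points, assuming TR = 1 s
onsets = [40, 90, 140, 190, 240]          # hypothetical breath-hold onsets

# Simulated "true" response shape peaking ~9 s after onset
lags = np.arange(25)
true_shape = np.exp(-0.5 * ((lags - 9) / 3.0) ** 2)

y = rng.normal(0, 0.2, n)
for o in onsets:
    y[o:o + lags.size] += true_shape

# FIR design matrix: one column per post-onset lag
X = np.zeros((n, lags.size))
for o in onsets:
    for j in lags:
        X[o + j, j] = 1.0

betas = np.linalg.lstsq(X, y, rcond=None)[0]
print(lags[betas.argmax()])   # recovered peak lag, near 9 s
```

The estimated betas form an empirical HRF per voxel, so the delay is an output of the fit rather than an assumption fed into it.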
Daniel Handwerker has done some work on breath-holding and the Valsalva effect, too. In a more recent paper, he used HRFs (computed with tent functions) to look at the effect.
Many thanks for the suggestions. I will look into these options.
Regardless of how I go about doing the adjustment to the regressor, do either of you have any suggestions about how to assess the model fit beyond this link from Gang’s page? https://afni.nimh.nih.gov/sscc/gangc/Fit.html
If using the second suggestion from that page (plotting the modeled signal), can I assume that if the peaks and valleys of the model and the data line up fairly well, I have a good model? Or should I be looking for something stronger or more quantitative than this?
For example, is there a way to plot the model fit against the time series at the peak voxel (a graph similar to FSL’s tsplot output)? Or is there a way to jump to a specific voxel, so I can compare that voxel for a subject across versions of the model (average delay, 9 s delay, no delay, etc.)?
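One quantitative option: since -rout was already in the 3dDeconvolve command, the full-model R² is available per voxel; and if the fitted time series is saved with -fitts and dumped to text (e.g., with 3dmaskdump), the data-versus-model comparison can be computed directly. A sketch with stand-in arrays (hypothetical values, not real output):

```python
import numpy as np

# Stand-ins for one voxel's measured and fitted (-fitts) time series
rng = np.random.default_rng(2)
n = 150
model = np.sin(np.linspace(0, 6 * np.pi, n))   # pretend fitted series
data = model + rng.normal(0, 0.5, n)           # pretend measured series

ss_res = np.sum((data - model) ** 2)
ss_tot = np.sum((data - data.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(r2)   # fraction of variance explained at this voxel
```

The same number can be compared at the same voxel across model variants (average delay, 9 s delay, no delay) to pick the best-fitting one.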