I am trying to conduct a trial-by-trial analysis using 3dDeconvolve’s -stim_times_IM option (to generate separate beta estimates for each trial in each condition). As a sanity check, I’m having 3dDeconvolve calculate an all vs. baseline contrast using the glt option (shown at the bottom of this post). The script completes without error, but it produces strange values for this contrast in only some runs. One run will have coefficient values within roughly +/- 100, but the next run will have all values more extreme than +/- 10^8 (so that even thresholding at the most extreme value does not block out any voxels). However, this does not happen for the beta estimates for the trials themselves, only the contrast. I’m wondering what might be causing these improbably extreme values. Any advice? Maybe this is not ultimately a problem, because it’s just an issue with the sanity check?
If it’s relevant, the onsets of each trial do not coincide with the onset of TRs. The duration I’m modeling always starts 200 ms before the TR and lasts until halfway through the TR (1000 ms). So one condition’s line is:
A minor point is that there is no reason to subtract
the baseline here. The beta weights are implicitly
against the baseline already.
A more important point is that if there are 200 events
for cond1, then that contrast will add up all 200 beta
weights. In such a case, it might be good to scale
them by 1/(# events), to get an average.
Another important point is that if there are too many
events for IM to separate them from the baseline,
then the betas could individually be on the order of
100 (assuming you have scaled the data). In such a
case, if there were 1000 total events, the contrast
values could easily hit 100,000, even in gray matter.
This might vary from subject to subject, as it is
determined by the data.
Longer rest periods are generally needed for IM to
separate the conditions from the baseline and from each other.
How many total events are there? And what does a
histogram of betas in the brain mask show?
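One quick way to look, if it helps, is with 3dhistog and
3dBrickStat inside the mask. A rough sketch (the dataset, mask,
and sub-brick names below are just placeholders to adapt):

   # histogram of one trial's beta values within the brain mask
   3dhistog -nbin 50 -mask mask_group+orig \
            stats.subjX+orig'[cond1#0_Coef]'

   # min/max of that same beta volume, within the mask
   3dBrickStat -slow -min -max -mask mask_group+orig \
               stats.subjX+orig'[cond1#0_Coef]'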
The runs are short, with only 20 events total, 5 in each condition. Histograms look normal overall; the histograms for the extreme-value runs are still centered on 0 but extend to extreme values. You mentioned, “In such a case, it might be good to scale them by 1/(# events), to get an average.” How do I do that? When defining the GLT? Or is that moot given the relatively small number of total events?
If there are 5 events per condition, then you should still divide by 5.
Actually if you are going to add up all events (across the 4 conditions),
then it would be good to divide by 20 (yes, in the GLT), getting an
average beta rather than a sum.
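For example, something along these lines in the 3dDeconvolve
command (the labels are whatever you gave to -stim_label, and
0.05 = 1/20):

   -num_glt 1                                                    \
   -gltsym 'SYM: +0.05*cond1 +0.05*cond2 +0.05*cond3 +0.05*cond4' \
   -glt_label 1 mean_all_vs_base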
But clearly, 20 events is not the issue here. Exactly what are those
histograms of, beta weights? Those values go over 24 billion!
It would be good to back up and review how you got there. Can you
show what was done since registration, say? Or even give a full
overview, plus pertinent commands?
It would also be good to try the analysis with afni_proc.py, even if
nothing looked strange.
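A bare-bones sketch of such an afni_proc.py command for an IM
analysis might look like this (subject ID, dataset names, and
timing files are placeholders to adapt):

   afni_proc.py -subj_id subjX                                  \
       -dsets run1+orig.HEAD run2+orig.HEAD                     \
       -blocks tshift volreg mask scale regress                 \
       -regress_stim_times cond1.1D cond2.1D cond3.1D cond4.1D  \
       -regress_stim_labels cond1 cond2 cond3 cond4             \
       -regress_stim_types IM IM IM IM                          \
       -regress_basis 'GAM'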
I’ve looked into this a bit more and identified when the error occurs. (It’s not related to preprocessing.) The average beta goes to those extreme values when a single trial has extreme values. This occurs whenever the trial is too close to the end of the run, basically one that starts 200 ms before the last TR (not all runs have this, because optseq sometimes introduced jittered null events between the last experimental event and the end of the run).
I tried dropping that event and the extreme values went away. Ultimately, though, I want to include the event, and I still thought the problem might be related to modeling the events 200 ms before the TR onset. So I updated the event onsets to coincide with the TRs and modified the 3dDeconvolve command to not specify parameters of the GAM function (-stim_times_IM 1 subject_cond1.1D 'GAM'). Now it produces no extreme values in any runs! Success.
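In other words, the change was roughly from the first form below to the second (the GAM parameters in the first line are only illustrative; the third argument is the stimulus duration in seconds):

   -stim_times_IM 1 subject_cond1.1D 'GAM(8.6,0.547,1.2)'   # before: onsets 200 ms before each TR
   -stim_times_IM 1 subject_cond1.1D 'GAM'                  # after: onsets aligned to the TRs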
…except ideally, I’d like to model the events starting 200 ms before and lasting only 1200 ms (the length of time stimuli are on the screen), rather than the full 2 s TR. Do you think I can still find a way to model the events like this?
This makes sense, though it helps reinforce that such events
should really be dropped.
The reason the betas would be huge is that the modeled HRF
is still tiny, and it is only non-zero at a single time point
for that event (before the run ends). The regressor will fit
the data perfectly, and will yield a huge beta.
This is akin to a division by (almost) zero situation.
There is no point in modeling such a response. It provides no
information because the BOLD response has not even started
before the run is over (hence the tiny value in the regressor).
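If you want to see this directly, you can have 3dDeconvolve
build just the X matrix (no data needed) and then plot the
per-trial regressors. A rough sketch, assuming a single run of
150 TRs of 2 s (adjust to your run length and timing files):

   # write the design matrix only, then stop
   3dDeconvolve -nodata 150 2.0                          \
       -polort A -num_stimts 1                           \
       -stim_times_IM 1 subject_cond1.1D 'GAM'           \
       -stim_label 1 cond1                               \
       -x1D X.nodata.xmat.1D -x1D_stop

   # plot all columns; the regressor for that final event
   # should be essentially zero everywhere
   1dplot -sepscl X.nodata.xmat.1D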
Drop the event, and try to avoid this in the future.
There is little point in stimulating a subject less than 2 or 3
seconds or so from the end of a run. The BOLD response is
sluggish.
rick