Strange values in trial-by-trial analysis using -stim_times_IM

Hi,

I am trying to conduct a trial-by-trial analysis using 3dDeconvolve's -stim_times_IM option (to generate separate beta estimates for each trial in each condition). As a sanity check, I'm having 3dDeconvolve compute an all vs. baseline contrast via the -gltsym option (shown at the bottom of this post). The script completes without error, but for some runs the contrast produces strange values. One run will have coefficient values within roughly +/- 100, while the next will have values all more extreme than +/- 10^8 (so that even thresholding at the most extreme value does not mask out any voxels). However, this happens only for the contrast, not for the beta estimates of the individual trials. I'm wondering what might be causing these improbably extreme values. Any advice? Or is this perhaps not a real problem, since it only affects the sanity check?

If it's relevant, the onsets of the trials do not coincide with TR onsets. The window I'm modeling always starts 200 ms before a TR and lasts until halfway through that TR (1000 ms into it, given the 2 s TR), for a total duration of 1200 ms. So one condition's line is:

-stim_times_IM 1 subject_cond1.1D 'GAM(8.6,.547,1.2)' -stim_label 1 cond1 \

where I'm explicitly setting the duration of the gamma variate to 1.2 seconds. Could this be causing the issue?
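For reference, one way I know to inspect what that basis function looks like is to generate the ideal regressor with 3dDeconvolve's -nodata mode and plot it (the 100 time points and 2 s TR here are just placeholder values):

3dDeconvolve -nodata 100 2.0 -polort -1 \
    -num_stimts 1 -stim_times 1 '1D: 0' 'GAM(8.6,.547,1.2)' \
    -x1D stdout: | 1dplot -stdin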

To create all vs. baseline:
-num_glt 1
-glt_label 1 allVSfix
-gltsym "SYM: +cond1 +cond2 +cond3 +cond4 -4*Ort[0]"

Thank you for your help!
Ben

Hi Ben,

A minor point is that there is no reason to subtract
the baseline here. The beta weights are already
implicitly measured against the baseline.

A more important point is that if there are 200 events
for cond1, then that contrast will add up all 200 beta
weights. In such a case, it might be good to scale
them by 1/(# events), to get an average.

Another important point is that if there are too many
events for IM to separate them from the baseline and
from each other, then the betas could individually be
on the order of 100 (assuming you have scaled the data).
In such a case, if there were 1000 total events, the
contrast values could easily hit 100,000, even in gray
matter.
This might vary from subject to subject, as it is
determined by the data.

Longer rest periods are generally needed for IM to
separate the conditions from the baseline and from
each other.

How many total events are there? And what does a
histogram of betas in the brain mask show?
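For example, something like the following, where
full_mask+orig stands for your mask dataset and the
sub-brick index [42] stands for wherever the allVSfix
coefficient lives (check with 3dinfo -verb):

3dhistog -nbin 100 -mask full_mask+orig stats+orig'[42]'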

  • rick

Hi Rick,

The runs are short, with 20 events in total, 5 in each condition. The histograms look normal overall; even for the extreme-value runs, the histogram is still centered on 0 but extends out to extreme values. You mentioned, "In such a case, it might be good to scale them by 1/(# events), to get an average." How do I do that? When defining the GLT? Or is that moot given the relatively small number of total events?

Best,
Ben

Example of the extreme values, from a histogram with 100 bins:
#Magnitude Freq Cum_Freq
-24905848832.000000 1 1
-24391172096.000000 1 2
-23876495360.000000 0 2
-23361818624.000000 3 5
-22847141888.000000 2 7
-22332465152.000000 3 10
-21817788416.000000 3 13
-21303111680.000000 3 16
-20788436992.000000 12 28
-20273758208.000000 11 39
-19759083520.000000 5 44
-19244406784.000000 12 56
-18729730048.000000 20 76
-18215053312.000000 22 98
-17700376576.000000 26 124
-17185699840.000000 36 160
-16671023104.000000 37 197
-16156346368.000000 48 245
-15641669632.000000 58 303
-15126992896.000000 64 367
-14612316160.000000 67 434
-14097640448.000000 111 545
-13582963712.000000 127 672
-13068286976.000000 158 830
-12553610240.000000 184 1014
-12038933504.000000 219 1233
-11524256768.000000 262 1495
-11009580032.000000 296 1791
-10494904320.000000 357 2148
-9980227584.000000 406 2554
-9465550848.000000 506 3060
-8950874112.000000 548 3608
-8436197376.000000 630 4238
-7921520640.000000 778 5016
-7406843904.000000 820 5836
-6892167168.000000 956 6792
-6377490432.000000 1057 7849
-5862813696.000000 1207 9056
-5348136960.000000 1426 10482
-4833460224.000000 1479 11961
-4318783488.000000 1641 13602
-3804108800.000000 1688 15290
-3289432064.000000 1844 17134
-2774755328.000000 2007 19141
-2260078592.000000 2142 21283
-1745401856.000000 2260 23543
-1230725120.000000 2335 25878
-716048384.000000 2541 28419
-201371648.000000 2513 30932
313305088.000000 2601 33533
827981824.000000 2648 36181
1342658560.000000 2601 38782
1857335296.000000 2633 41415
2372012032.000000 2647 44062
2886688768.000000 2606 46668
3401365504.000000 2545 49213
3916040192.000000 2491 51704
4430716928.000000 2411 54115
4945393664.000000 2246 56361
5460070400.000000 2193 58554
5974747136.000000 2016 60570
6489423872.000000 1896 62466
7004100608.000000 1771 64237
7518777344.000000 1649 65886
8033454080.000000 1413 67299
8548130816.000000 1330 68629
9062807552.000000 1237 69866
9577482240.000000 1092 70958
10092161024.000000 943 71901
10606835712.000000 853 72754
11121514496.000000 772 73526
11636189184.000000 630 74156
12150867968.000000 650 74806
12665542656.000000 483 75289
13180221440.000000 385 75674
13694896128.000000 348 76022
14209574912.000000 310 76332
14724249600.000000 263 76595
15238928384.000000 216 76811
15753603072.000000 214 77025
16268281856.000000 135 77160
16782956544.000000 120 77280
17297631232.000000 67 77347
17812310016.000000 62 77409
18326984704.000000 64 77473
18841663488.000000 28 77501
19356338176.000000 28 77529
19871016960.000000 21 77550
20385691648.000000 19 77569
20900370432.000000 16 77585
21415045120.000000 11 77596
21929723904.000000 6 77602
22444398592.000000 5 77607
22959077376.000000 6 77613
23473752064.000000 3 77616
23988430848.000000 1 77617
24503105536.000000 4 77621
25017780224.000000 3 77624
25532459008.000000 2 77626
26047133696.000000 1 77627

Hi Ben,

Even with only 5 events per condition, you would still divide.
If you are going to add up all events across the 4 conditions,
then it would be good to divide by 20 (yes, in the GLT),
getting an average beta rather than a sum.
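For example, with your 5 events per condition (20 total),
a weight of 1/20 = 0.05 on each condition would give the
average, and the baseline term can be dropped, as noted
earlier:

-num_glt 1 \
-glt_label 1 allVSfix \
-gltsym 'SYM: +0.05*cond1 +0.05*cond2 +0.05*cond3 +0.05*cond4'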

But clearly, 20 events is not the issue here. Exactly what
are those histograms of, beta weights? Those values go over
24 billion! It would be good to back up and review how you
got there. Can you show what was done since registration,
say? Or even give a full overview, plus the pertinent
commands?

It would also be good to try the analysis with afni_proc.py, even if
nothing looked strange.
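A minimal sketch of such a command, where the dataset
and timing file names are placeholders and the standard
processing blocks are left at their defaults:

afni_proc.py -subj_id subj1 \
    -dsets run1+orig.HEAD run2+orig.HEAD \
    -regress_stim_times subject_cond1.1D subject_cond2.1D \
                        subject_cond3.1D subject_cond4.1D \
    -regress_stim_labels cond1 cond2 cond3 cond4 \
    -regress_stim_types IM \
    -regress_basis GAM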

  • rick

I've looked into this a bit more and identified when the error occurs. (It's not related to preprocessing.) The average beta goes to those extreme values whenever a single trial has extreme values, and that happens whenever a trial falls too close to the end of the run, basically starting 200 ms before the last TR. (Not all runs have this, because optseq sometimes introduced jittered null events between the last experimental event and the end of the run.)
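One way to screen runs for such late events is timing_tool.py's ISI statistics, which report the rest remaining after the final stimulus in each run (the 300 s run length here is just a placeholder for the real value):

timing_tool.py -multi_timing subject_cond?.1D -run_len 300 -show_isi_stats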

I tried dropping that event, and the extreme values went away. Ultimately, though, I want to include the event, and I still suspected the problem might be related to modeling the events as starting 200 ms before the TR onset. So I updated the event onsets to coincide with the TRs and modified the 3dDeconvolve command to not specify parameters for the GAM function (-stim_times_IM 1 subject_cond1.1D 'GAM'). Now it produces no extreme values in any run! Success.

…except that, ideally, I'd like to model the events as starting 200 ms before the TR and lasting only 1200 ms (the length of time the stimuli are on the screen), rather than the full 2 s TR. Do you think I can still find a way to model the events like this?

Hi Ben,

This makes sense, though it helps reinforce that such events
should really be dropped.

The reason the betas are huge is that the modeled HRF for
that event is still tiny, non-zero at only a single time
point before the run ends. The regressor will fit the data
at that one point perfectly, and will yield a huge beta.

This is akin to a division by (almost) zero situation.
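To see it in least squares terms: for a single regressor
x, the solution is beta = sum(x*y) / sum(x*x). If x is
zero everywhere except for one tiny value epsilon at the
final time point, this reduces to roughly y_final/epsilon,
which blows up as epsilon shrinks toward zero.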

There is no point in modeling such a response. It provides
no information, because the BOLD response has barely started
before the run is over (hence the tiny value in the
regressor). Drop the event, and try to avoid this situation
in the future.

There is little point in presenting a stimulus less than
2 or 3 seconds from the end of a run. The BOLD response
is sluggish.

  • rick