TENT function - extreme values at trial start


I am trying to use the TENT function to run an FIR analysis. I did this using afni_proc:

afni_proc.py -subj_id s$sub \
    -script proc.s$sub.3conds_norming_tent \
    -out_dir s$sub.3conds_norming_tent.results \
    -dsets func/sub-s${sub}_ses-01_task-combo_run-0*_bold.nii.gz \
    -copy_anat anat/anatSS.s${sub}.nii \
    -anat_has_skull no \
    -blocks tshift align volreg mask blur scale regress \
    -tshift_opts_ts -tpattern alt+z \
    -volreg_align_to third \
    -volreg_zpad 4 \
    -volreg_interp -heptic \
    -mask_apply anat \
    -blur_to_fwhm -blur_size 6.0 \
    -regress_bandpass .008 99999 \
    -regress_stim_labels amb att rel \
    -regress_stim_types times times times \
    -regress_basis_multi 'TENT(0,18,10)' 'TENT(0,18,10)' 'TENT(0,18,10)' \
    -gltsym 'SYM: +0.5*att +0.5*rel -amb' \
    -gltsym 'SYM: +att -rel' \
    -glt_label 1 UNAMBvsAMB \
    -glt_label 2 ATTvsREL \
    -jobs 24 \
    -regress_est_blur_epits
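For context on what that basis choice implies here: TENT(0,18,10) spreads 10 tent (piecewise-linear) basis functions over 0-18 s post-stimulus, so the knots sit (18-0)/(10-1) = 2 s apart, i.e. exactly one per 2 s TR in this study. A minimal sketch of the arithmetic (the standard tent definition, not AFNI's actual code):

```python
def tent_knots(b, c, n):
    """Knot times for TENT(b,c,n): n knots evenly spaced over [b, c]."""
    spacing = (c - b) / (n - 1)
    return [b + i * spacing for i in range(n)]

def tent(t, center, half_width):
    """Piecewise-linear tent: 1 at its own knot, 0 at the neighboring knots."""
    return max(0.0, 1.0 - abs(t - center) / half_width)

knots = tent_knots(0, 18, 10)
print(knots)   # knots land on 0, 2, 4, ..., 18 s -- the 2 s TR grid

# With TR-locked onsets, the first tent is 1.0 exactly at stimulus onset
# and 0 at every other knot, so its regressor is sampled only at its peak.
print([tent(t, knots[0], 2.0) for t in knots])
```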

When I examined the betas in the stats output, I found that for all subjects, ROIs, and conditions, the beta at time 0 was extreme relative to the other time points. For example:

File Sub-brick Mean_1
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 0[amb#0_Coe] -112.195664
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 1[amb#1_Coe] 0.551796
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 2[amb#2_Coe] 0.063248
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 3[amb#3_Coe] 0.018582
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 4[amb#4_Coe] 0.149465
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 5[amb#5_Coe] 0.360442
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 6[amb#6_Coe] 0.363047
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 7[amb#7_Coe] 0.374584
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 8[amb#8_Coe] 0.316820
stats.s104+orig[1,3,5,7,9,11,13,15,17,19] 9[amb#9_Coe] 0.255430
File Sub-brick Mean_1
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 0[amb#0_Coe] 86.534339
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 1[amb#1_Coe] -0.072039
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 2[amb#2_Coe] 0.605378
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 3[amb#3_Coe] 0.593267
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 4[amb#4_Coe] 0.472463
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 5[amb#5_Coe] 0.326305
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 6[amb#6_Coe] 0.071705
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 7[amb#7_Coe] 0.268122
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 8[amb#8_Coe] 0.072327
stats.s105+orig[1,3,5,7,9,11,13,15,17,19] 9[amb#9_Coe] -0.000413
File Sub-brick Mean_1
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 0[amb#0_Coe] -18.552854
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 1[amb#1_Coe] -0.021274
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 2[amb#2_Coe] -0.044923
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 3[amb#3_Coe] -0.024933
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 4[amb#4_Coe] -0.114865
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 5[amb#5_Coe] -0.240969
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 6[amb#6_Coe] -0.208894
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 7[amb#7_Coe] -0.188989
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 8[amb#8_Coe] -0.184979
stats.s106+orig[1,3,5,7,9,11,13,15,17,19] 9[amb#9_Coe] -0.209463
File Sub-brick Mean_1
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 0[amb#0_Coe] 206.078016
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 1[amb#1_Coe] -0.542887
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 2[amb#2_Coe] 0.361305
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 3[amb#3_Coe] 0.384412
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 4[amb#4_Coe] 0.272852
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 5[amb#5_Coe] 0.322065
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 6[amb#6_Coe] 0.221116
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 7[amb#7_Coe] 0.054475
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 8[amb#8_Coe] 0.052189
stats.s108+orig[1,3,5,7,9,11,13,15,17,19] 9[amb#9_Coe] -0.064882

Am I right in thinking this is unusual? And, if so, is there any explanation as to why this might have happened or suggestions for ameliorating the situation?



Since the BOLD response is not expected to be nonzero at the moment of stimulus onset, it is very likely that the model picked up something unrelated to the real signal at the beginning of each event. I suggest that you replace TENT(0,18,10) with TENTzero(0,18,10).
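A note for later readers, in case it helps: as I understand the bases (see 3dDeconvolve's -help for the authoritative description), TENTzero(b,c,n) is TENT(b,c,n) with the response pinned to zero at times b and c, so it drops the first and last tents and estimates n-2 betas instead of n. A hypothetical sketch of the bookkeeping:

```python
def free_knots(b, c, n, zero_ends=False):
    """Knot times that carry free betas: all n knots for TENT,
    interior knots only for TENTzero (response fixed to 0 at b and c).
    Illustrative helper, not AFNI code."""
    spacing = (c - b) / (n - 1)
    knots = [b + i * spacing for i in range(n)]
    return knots[1:-1] if zero_ends else knots

print(len(free_knots(0, 18, 10)))                  # TENT(0,18,10): 10 betas
print(len(free_knots(0, 18, 10, zero_ends=True)))  # TENTzero(0,18,10): 8 betas
```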

Hi Heather,

Perhaps the stimulus events are consistently close to, but not exactly at, multiples of the TR. What is the output from this?

timing_tool.py -multi_timing stimuli/*tent.txt -tr 2.0 -warn_tr_stats
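The idea behind the per-run statistics this prints can be sketched roughly like so (a toy re-implementation of the concept, not timing_tool.py's code; the onsets below are made up): measure how far each event time falls from the nearest TR multiple. Small offsets across all events mean the stimuli are (almost) TR-locked.

```python
from statistics import mean, pstdev

def within_tr_offsets(times, tr):
    """Distance of each event time from the nearest multiple of the TR."""
    return [min(t % tr, tr - (t % tr)) for t in times]

onsets = [0.008, 18.011, 36.005, 54.013]   # invented onsets near a 2 s grid
offs = within_tr_offsets(onsets, 2.0)
print(mean(offs), max(offs), pstdev(offs))
# A small maximum offset here is what triggers the "(almost) TR-locked" warning.
```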
  • rick

This seems like the culprit. Here’s the output:

timing_tool.py -multi_timing stimuli/*tent.txt -tr 2.0 -warn_tr_stats

within-TR stimulus offset statistics (stimuli/sub104_imagine_amb_norming_tent.txt) :
per run
offset means 0.009 0.009 0.007
offset stdevs 0.002 0.003 0.003

overall:     mean = 0.008  maxoff = 0.017  stdev = 0.0028
fractional:  mean = 0.004  maxoff = 0.008  stdev = 0.0014

             ** WARNING: small maxoff suggests (almost) TR-locked stimuli
                consider: timing_tool.py -round_times (if basis = TENT)

within-TR stimulus offset statistics (stimuli/sub104_imagine_att_norming_tent.txt) :
per run
offset means 0.008 0.011 0.008
offset stdevs 0.003 0.003 0.005

overall:     mean = 0.009  maxoff = 0.018  stdev = 0.0038
fractional:  mean = 0.005  maxoff = 0.009  stdev = 0.0019

             ** WARNING: small maxoff suggests (almost) TR-locked stimuli
                consider: timing_tool.py -round_times (if basis = TENT)

within-TR stimulus offset statistics (stimuli/sub104_imagine_rel_norming_tent.txt) :
per run
offset means 0.011 0.011 0.007
offset stdevs 0.005 0.004 0.003

overall:     mean = 0.010  maxoff = 0.021  stdev = 0.0044
fractional:  mean = 0.005  maxoff = 0.011  stdev = 0.0022

             ** WARNING: small maxoff suggests (almost) TR-locked stimuli
                consider: timing_tool.py -round_times (if basis = TENT)

I re-ran the analysis using the TENTzero function, as Gang suggested. That analysis produced much more reasonable values, and the plot looked like an HRF. Do you recommend scrapping that and re-running the analysis with TENT and the rounded times, or, if I am content with fewer modeled time points, does the TENTzero output still make sense?


Hi Heather,

I would expect this to be problematic whether you use TENT or TENTzero. Since the times are basically TR-locked already, just go ahead and enforce that, regardless of which version of the TENTs you use: run "timing_tool.py -round_times" on each file and see how that affects the results.

By the way, did afni_proc.py warn about this? I think it is supposed to. It might show up in the APQC HTML report.
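The rounding itself just snaps each onset to the nearest TR multiple, something along the lines of "timing_tool.py -timing FILE -tr 2.0 -round_times 1 -write_timing NEWFILE" per file (check timing_tool.py -help for the exact usage). As a toy sketch of the operation (a hypothetical round_to_tr helper, not the program's code):

```python
def round_to_tr(times, tr):
    """Snap each event time to the nearest multiple of the TR."""
    return [round(t / tr) * tr for t in times]

print(round_to_tr([0.009, 18.017, 36.005], 2.0))   # -> [0.0, 18.0, 36.0]
```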

  • rick

Hi rick,

Thanks, I’ll give this a shot!

Yes, I do see the warnings now. See attached image.


Great, thanks!

  • rick