Strange TENT Betas

I am running a model with 11 predictors (a large number of TENTs for each predictor), and for all of the predictors I get this strange pattern: after the 14th TENT the betas take on abnormal values. It should be noted that for one predictor the pattern is reversed: things start off strange and the values enter the normal range after the 14th.

Any input regarding what might be behind this would be helpful.

Thanks again (an example of the average betas from a spherical ROI is below):

-0.157883
-0.179444
-0.141009
-0.0816818
0.0462152
-0.0257286
-0.0570135
-0.0109536
-0.00382085
0.156048
0.185519
0.360908
0.463101
0.626983
-4835.59
-5484.83
-9699.45
-6375.99
-4845.23
-2085.72
2884.77
14053.8
22374.1
24256.4
16543.6
5522.53
16.2079

What is the output of timing_tool.py -warn_tr_stats on your stimulus timing files? e.g.,

timing_tool.py -multi_timing stim*.txt -tr 2.0 -warn_tr_stats
  • rick

Note: I had to change the TR value in the command to reflect my scans.

First off, thanks again for your assistance. I ran timing_tool.py with -warn_tr_stats and nothing showed up in the terminal, so I assume there were no warnings. Next I ran timing_tool.py with the -show_tr_stats option, and the output was interesting. The mean offset was equal to my TR (stdev 0) for all of my stim conditions in all runs, with two exceptions: three of the runs in the condition that exhibited the reverse pattern (mentioned in my first post) had deviant offset means and nonzero stdevs, and another condition had a single run with a nonzero stdev and a deviant offset mean. In the deviant cases the offset means were lower.

Okay, I just found this in the output:

Warnings regarding Correlation Matrix: X.xmat.1D
severity correlation cosine regressor pair

but the correlations were all in the medium severity range (all in the 0.4s).

If you wouldn’t mind, please post the exact “timing_tool.py -show_tr_stats” command and output.

Thanks,

  • rick

Here it is:

within-TR stimulus offset statistics (corr1_3.txt) :
per run
------------------------------
offset means 0.880 1.100 1.100 1.100 1.100 1.100
offset stdevs 0.492 0.000 0.000 0.000 0.000 0.000

overall:     mean = 1.035  maxoff = 1.100  stdev = 0.2668
fractional:  mean = 0.941  maxoff = 1.000  stdev = 0.2425

within-TR stimulus offset statistics (corr4.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (corr5.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (corr6.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (corr7.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (error4.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (error5.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (error6.txt) :
per run
------------------------------
offset means 1.100 1.100
offset stdevs 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (error7.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (error_omission.txt) :
per run
------------------------------
offset means 1.100 1.100 1.100 1.100 1.100
offset stdevs 0.000 0.000 0.000 0.000 0.000

overall:     mean = 1.100  maxoff = 1.100  stdev = 0.0000
fractional:  mean = 1.000  maxoff = 1.000  stdev = 0.0000

within-TR stimulus offset statistics (voa.txt) :
per run
------------------------------
offset means 1.100 1.048 1.100 1.100 0.995 1.048
offset stdevs 0.000 0.240 0.000 0.000 0.331 0.240

overall:     mean = 1.065  maxoff = 1.100  stdev = 0.1936
fractional:  mean = 0.968  maxoff = 1.000  stdev = 0.1760

So your TR is 1.1 then? That output uses 3 places after the decimal. I expect this means that the values are not exactly at multiples of 1.1, but very close (note that binary numbers cannot even store 1.1 exactly). That is what “timing_tool.py -warn_tr_stats” is supposed to flag. But it is unusual to see it among the interior betas.
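As a minimal sketch of that floating-point point (the multiples below are arbitrary examples, not taken from your files): onsets that are exact multiples of 1.1 on paper land just off the grid in binary, so their remainders modulo the TR come out near 1.1 rather than 0, which would be consistent with the 1.100 offset means printed above.

from decimal import Decimal

print(Decimal(1.1))        # 1.1000000000000000888..., the value actually stored
for t in [3.3, 5.5, 47.3]:
    print(t, "->", t % 1.1)   # each prints ~1.09999999999..., just shy of the TR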

Would you please send me one such timing file? You should be able to click on my name for that. Or you can just post the contents of one such file.

Also, what is the exact basis function you are using? More information is generally helpful.

Thanks,

  • rick

Yes, the TR is 1.1, and when checking the timing files it seemed that all the values were divisible by 1.1 (hopefully I did not miss a simple erroneous onset value; I am checking again). The last time I used TENT I had to use timing_tool.py because of a TR-locking issue, but in that situation the extreme beta value was at the first point (you mentioned that it is strange to have extreme values among the interior betas). Maybe this has something to do with binary numbers not being able to represent 1.1 exactly.

I have posted the contents of one of the event files below (note that it is tab-delimited, but the tabs do not survive pasting into the posting window):

5.5 47.3 69.3 135.3 240.9
423.5
163.9 273.9
141.9 247.5 267.3 375.1 397.1
5.5 271.7 397.1
350.9
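For what it is worth, here is a minimal sketch of that divisibility check, using the onsets above and a rounding-based comparison that sidesteps the floating-point issue mentioned earlier:

onsets = [5.5, 47.3, 69.3, 135.3, 240.9, 423.5, 163.9, 273.9,
          141.9, 247.5, 267.3, 375.1, 397.1, 5.5, 271.7, 397.1, 350.9]
tr = 1.1
for t in onsets:
    n = round(t / tr)               # nearest TR index
    if abs(t - n * tr) > 1e-9:      # tolerate tiny binary representation error
        print("off-grid onset:", t)
# no output here: every posted onset is a multiple of 1.1 to within float precision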

Thanks again, and if you want the other files I can send them your way.

Could PSFB syndrome cause the strange betas? Two events were cut off because scans terminated prematurely. I can remove them from the event files, but I am not sure how that would lead to the abnormal betas.

I don’t think PSFB syndrome is the problem. It still seems likely to be a truncation issue.

Would you please mail me the file X.xmat.1D? It would help to have all of that information together.

Actually, to be complete, would you please just upload the X.xmat.1D file (or whatever you have called it), along with all of the stim timing files? I will PM you with instructions, if that is okay.

Thanks,

  • rick

Thanks for the X-matrix.

Perhaps this is not a truncation issue, which would have been bad; that seemed unlikely anyway, given that all the times were nice multiples of 1.1. A truncation issue might show up as some regressors having very small maximums, but that is not the case here: they are all basically 1.

There might be near-multicollinearity in the system, making the solution unstable. The main warning of this would come from 3dDeconvolve (and therefore from afni_proc.py → @ss_review_driver or the QC HTML pages). There should be screen output, and a file called 3dDeconvolve.err, complaining about the condition number of the matrix (the ratio of the largest to smallest eigenvalue).
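If you want to check both of those by hand, here is a minimal sketch (assuming the matrix file is named X.xmat.1D; the exact condition-number convention 3dDeconvolve uses may differ slightly):

import numpy as np

# lines starting with '#' in .xmat.1D files are header comments
X = np.loadtxt("X.xmat.1D", comments="#")

# truncation check: per-regressor maxima should all be ~1
print("maxima:", np.abs(X).max(axis=0))

# collinearity check: ratio of largest to smallest eigenvalue of X'X
ev = np.linalg.eigvalsh(X.T @ X)            # eigenvalues in ascending order
print("condition number:", ev[-1] / ev[0])  # a huge ratio flags near-collinearity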

One thing you could try is changing TENT to TENTzero. Instead of having 27 regressors per class, you would drop to 25, with the assumption that the endpoints are zero. If this changes the results a lot, then multicollinearity is a concern.
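To make the regressor counts concrete, here is a toy sketch of the two basis sets (window parameters assumed from the 27-regressors-per-class description; check them against your actual -stim_times arguments):

import numpy as np

def tent_basis(t, knots):
    # one piecewise-linear "hat" per knot: peak 1 at the knot, 0 one knot away
    dk = knots[1] - knots[0]
    return np.array([np.clip(1.0 - np.abs(t - k) / dk, 0.0, None) for k in knots])

knots = 1.1 * np.arange(27)               # e.g. TENT(0, 28.6, 27) at TR 1.1
t = np.linspace(0.0, 28.6, 261)
B = tent_basis(t, knots)                  # TENT: 27 regressors per class
B_zero = B[1:-1]                          # TENTzero pins the endpoints to 0: 25
print(B.shape[0], "vs", B_zero.shape[0])  # 27 vs 25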

Can you give that a try?

  • rick

I checked 3dDeconvolve.err and there were no multicollinearity warnings.

I posted the output below earlier, but this is what is found in the recorded output:

Warnings regarding Correlation Matrix: X.xmat.1D

severity correlation cosine regressor pair


medium: 0.439 0.447 (122 vs. 297) corr6#17 vs. voa#3
medium: 0.439 0.447 (123 vs. 298) corr6#18 vs. voa#4
medium: 0.439 0.447 (124 vs. 299) corr6#19 vs. voa#5
medium: 0.438 0.447 (146 vs. 294) corr7#14 vs. voa#0
medium: 0.437 0.445 (126 vs. 301) corr6#21 vs. voa#7
medium: 0.435 0.444 (125 vs. 300) corr6#20 vs. voa#6
medium: 0.435 0.444 (121 vs. 296) corr6#16 vs. voa#2
medium: 0.433 0.442 (129 vs. 304) corr6#24 vs. voa#10
medium: 0.433 0.442 (130 vs. 305) corr6#25 vs. voa#11
medium: 0.431 0.440 (147 vs. 295) corr7#15 vs. voa#1
medium: 0.431 0.440 (120 vs. 295) corr6#15 vs. voa#1
medium: 0.429 0.438 (119 vs. 294) corr6#14 vs. voa#0
medium: 0.429 0.438 (127 vs. 302) corr6#22 vs. voa#8
medium: 0.426 0.434 (155 vs. 303) corr7#23 vs. voa#9
medium: 0.426 0.434 (152 vs. 300) corr7#20 vs. voa#6
medium: 0.426 0.434 (131 vs. 306) corr6#26 vs. voa#12
medium: 0.426 0.434 (128 vs. 303) corr6#23 vs. voa#9
medium: 0.426 0.434 (158 vs. 306) corr7#26 vs. voa#12
medium: 0.424 0.432 (157 vs. 305) corr7#25 vs. voa#11
medium: 0.424 0.432 (156 vs. 304) corr7#24 vs. voa#10
medium: 0.420 0.428 (151 vs. 299) corr7#19 vs. voa#5
medium: 0.420 0.428 (150 vs. 298) corr7#18 vs. voa#4
medium: 0.420 0.428 (154 vs. 302) corr7#22 vs. voa#8
medium: 0.418 0.426 (153 vs. 301) corr7#21 vs. voa#7
medium: 0.416 0.425 (148 vs. 296) corr7#16 vs. voa#2
medium: 0.410 0.418 (149 vs. 297) corr7#17 vs. voa#3

I will give TENTzero a try tomorrow, since I am processing other things right now.

Update: I gave it a try yesterday and it does not solve the problem (still getting extreme values).

What does 3dDeconvolve say about the condition numbers? What are the contents of 3dDeconvolve.err?

  • rick

3dDeconvolve.err gives me the output posted below (I am aware that two events are outside the range of the scan, since the scans got cut short):

*+ WARNING: Input polort=3; Longest run=484.0 s; Recommended minimum polort=4
*+ WARNING: ‘-stim_times 6’ (LOCAL) run#1 has 1 times outside range 0 … 438.9 [PSFB syndrome]
*+ WARNING: ‘-stim_times 11’ (LOCAL) run#1 has 1 times outside range 0 … 438.9 [PSFB syndrome]
*+ WARNING: Smallest FDR q [2 corr1_3#0_Tstat] = 0.249566 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [30 corr1_3#14_Tstat] = 0.999888 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [50 corr1_3#24_Tstat] = 0.172477 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [81 corr4#14_Tstat] = 0.999894 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [101 corr4#24_Tstat] = 0.172472 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [104 corr5#0_Tstat] = 0.324792 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [132 corr5#14_Tstat] = 0.999888 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [152 corr5#24_Tstat] = 0.172488 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [183 corr6#14_Tstat] = 0.99989 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [203 corr6#24_Tstat] = 0.172477 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [234 corr7#14_Tstat] = 0.999888 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [254 corr7#24_Tstat] = 0.172492 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [285 error_omission#14_Tstat] = 0.999896 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [305 error_omission#24_Tstat] = 0.172492 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [336 error4#14_Tstat] = 0.999897 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [356 error4#24_Tstat] = 0.172473 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [387 error5#14_Tstat] = 0.999891 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [407 error5#24_Tstat] = 0.172496 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [438 error6#14_Tstat] = 0.999898 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [458 error6#24_Tstat] = 0.172464 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [489 error7#14_Tstat] = 0.999894 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [509 error7#24_Tstat] = 0.172474 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [512 voa#0_Tstat] = 0.999887 ==> few true single voxel detections
*+ WARNING: Smallest FDR q [532 voa#10_Tstat] = 0.172467 ==> few true single voxel detections

Regarding the condition numbers, where would I find this information in the output? I know you feed it the number of stims.

Interesting update: if I take voa out of the model, things look normal.

That is helpful; I had been looking for something like that. It can be seen by viewing the list of events:

timing_tool.py -multi_timing stimuli/* -multi_timing_to_event_list GE:ALL -

or even

timing_tool.py -multi_timing stimuli/* -multi_timing_to_event_list GE:ALL - | grep voa

The voa events are always exactly 15.4 seconds (14 TRs) after every other event. Since these are TR-locked events and tents, and since the tents last 26 TRs, the first 11 TRs of the voa class (the first 11 regressors) are exactly duplicated by a combination of the last 11 regressors from every other class. That is the multicollinearity in the model, and it is why the condition number is high. It also explains why TENTzero (which removes only the first and last regressors) did not help: there are 11 overlapping regressors, not just 2.
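To see that rank deficiency directly, here is a toy reconstruction (a sketch: the event times are invented, only the fixed 14-TR lag matters, and TR-locked tents are simplified to the shifted event indicators they reduce to on the TR grid). The last few lines also preview the jitter point below.

import numpy as np

n_tr, n_tents, lag = 400, 27, 14
rng = np.random.default_rng(0)
events = np.sort(rng.choice(np.arange(300), size=10, replace=False))

def tent_regressors(ev, n_tr, n_tents):
    # on a TR-locked grid, tent #k of a class is 1 at (event + k) TRs, else 0
    X = np.zeros((n_tr, n_tents))
    for e in ev:
        for k in range(n_tents):
            if e + k < n_tr:
                X[e + k, k] = 1.0
    return X

# voa always exactly 14 TRs after another event: early voa columns exactly
# duplicate late columns of the other class, so the matrix loses rank
X = np.hstack([tent_regressors(events, n_tr, n_tents),
               tent_regressors(events + lag, n_tr, n_tents)])
print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X))

# with even crude jitter in the lag, the exact duplication (typically) breaks
jitter = rng.integers(0, 3, size=events.size)
Xj = np.hstack([tent_regressors(events, n_tr, n_tents),
                tent_regressors(events + lag + jitter, n_tr, n_tents)])
print("with jitter -> rank:", np.linalg.matrix_rank(Xj))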

With fixed-shape basis functions, this model would be solvable, but not with all of the TENTs. Note that even sub-sampling the TENTs down to every 2 or 3 TRs might not be good enough, as there would still be the 12.1 seconds of consistent overlap. It would be helpful if there were some random jitter in the event timing.

  • rick

This makes total sense to me now. Thanks for your help.