Hi,
I've been reading Gang's recent paper on HRF modelling, and it inspired me to investigate what effect changing the HRF model might have on some task data I have.
It's a block-design task consisting of 60 s rest, {20 s stimulation, 30 s rest}*5, and 60 s rest. MB-ME EPI images were acquired with TR = 1.6 s.
Before re-running 3dDeconvolve, I've been graphing the output of the TENT functions. Since the 20 s block is not evenly divisible by the TR, I set the TENT duration to 19.2 s rather than have it bleed into the start of the following rest block.
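As a sanity check on that choice (assuming TENT(b,c,n) spreads its n knots evenly over [b,c], which is how I read the 3dDeconvolve help), the knot spacing works out quite differently for the two knot counts I tried:

```python
# TENT(b, c, n) places n knots evenly over [b, c], so adjacent knots
# are (c - b) / (n - 1) seconds apart (my reading of the 3dDeconvolve help).
def knot_spacing(b, c, n):
    return (c - b) / (n - 1)

print(round(knot_spacing(0, 19.2, 12), 4))  # 1.7455 s: knots fall off the 1.6 s TR grid
print(round(knot_spacing(0, 19.2, 13), 4))  # 1.6 s: knot spacing equals the TR
```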
3dDeconvolve -nodata 80 1.6 -num_stimts 1 -polort -1 -local_times -x1D stdout: -stim_times 1 '1D: 60' 'TENT(0,19.2,12)' | 1dplot -thick -one -stdin -xlabel Time -jpg tent_scale_chrysler.jpg
*+ WARNING: no -stim_label given for stim #1 ==> label = 'Stim#1'
++ 3dDeconvolve: AFNI version=AFNI_23.1.09 (Jun 25 2023) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ using TR=1.6 seconds for -stim_times and -nodata
++ using NT=80 time points for -nodata
++ -stim_times using TR=1.6 s for stimulus timing conversion
++ -stim_times using TR=1.6 s for any -iresp output datasets
++ [you can alter the -iresp TR via the -TR_times option]
++ -stim_times 1 using LOCAL times
++ Number of time points: 80 (no censoring)
+ Number of parameters: 12 [0 baseline ; 12 signal]
++ Wrote matrix values to file stdout:.xmat.1D
++ ----- Signal+Baseline matrix condition [X] (80x12): 1.58375 ++ VERY GOOD ++
++ ----- Signal-only matrix condition [X] (80x12): 1.58375 ++ VERY GOOD ++
++ 1dplot: AFNI version=AFNI_23.1.09 (Jun 25 2023) [64-bit]
++ Authored by: RWC et al.
With 12 knots, the graph of the suite of regressors looks something like the Chrysler Building, and the regressors do not scale uniformly to 1.
3dDeconvolve -nodata 80 1.6 -num_stimts 1 -polort -1 -local_times -x1D stdout: -stim_times 1 '1D: 60' 'TENT(0,19.2,13)' | 1dplot -thick -one -stdin -xlabel Time -jpg tent_scale_flat_top.jpg
*+ WARNING: no -stim_label given for stim #1 ==> label = 'Stim#1'
++ 3dDeconvolve: AFNI version=AFNI_23.1.09 (Jun 25 2023) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ using TR=1.6 seconds for -stim_times and -nodata
++ using NT=80 time points for -nodata
++ -stim_times using TR=1.6 s for stimulus timing conversion
++ -stim_times using TR=1.6 s for any -iresp output datasets
++ [you can alter the -iresp TR via the -TR_times option]
++ -stim_times 1 using LOCAL times
++ Number of time points: 80 (no censoring)
+ Number of parameters: 13 [0 baseline ; 13 signal]
++ Wrote matrix values to file stdout:.xmat.1D
++ ----- Signal+Baseline matrix condition [X] (80x13): 2.7679 ++ VERY GOOD ++
*+ WARNING: !! in Signal+Baseline matrix:
* Largest singular value=1.41421
* 1 singular value is less than cutoff=1.41421e-07
* Implies strong collinearity in the matrix columns!
++ Signal+Baseline matrix singular values:
0 0.184592 0.366025 0.541196 0.707107
0.860919 1 1.12197 1.22474 1.30656
1.36603 1.40211 1.41421
++ ----- Signal-only matrix condition [X] (80x13): 2.7679 ++ VERY GOOD ++
*+ WARNING: !! in Signal-only matrix:
* Largest singular value=1.41421
* 1 singular value is less than cutoff=1.41421e-07
* Implies strong collinearity in the matrix columns!
++ Signal-only matrix singular values:
0 0.184592 0.366025 0.541196 0.707107
0.860919 1 1.12197 1.22474 1.30656
1.36603 1.40211 1.41421
++ 1dplot: AFNI version=AFNI_23.1.09 (Jun 25 2023) [64-bit]
++ Authored by: RWC et al.
With 13 knots, I get a flat top in the graph of the suite of regressors, and the regressors again do not scale to 1.
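To see what 1dplot is actually being fed, I tried reconstructing the sampled tents myself. This is only a sketch under my assumptions (piecewise-linear tents with half-tents at the two ends, stimulus onset at 60 s, sampling on the TR grid), not AFNI's actual code:

```python
import numpy as np

TR, onset, dur = 1.6, 60.0, 19.2   # values from my design above

def tent_regressors(n_knots, n_vols=80):
    """Sample each TENT(0, dur, n_knots) basis function on the TR grid
    for a single stimulus at `onset` (my reading of the 3dDeconvolve
    help: evenly spaced knots, half-tents at the two ends)."""
    spacing = dur / (n_knots - 1)
    t = np.arange(n_vols) * TR - onset               # time relative to onset
    centers = np.arange(n_knots) * spacing
    # each tent is 1 at its knot, falling linearly to 0 one knot spacing away
    X = np.maximum(0.0, 1.0 - np.abs(t[:, None] - centers[None, :]) / spacing)
    X[(t < 0) | (t > dur), :] = 0.0                  # half-tents: nothing outside [0, dur]
    return X

for n in (12, 13):
    peaks = tent_regressors(n).max(axis=0)           # peak of each sampled regressor
    print(n, np.round(peaks, 3))
# with 12 knots the sampled peaks rise and then fall (the "Chrysler Building");
# with 13 knots every sampled peak is 0.5 (the flat top)
```

This reproduces both shapes, so I suspect it's a sampling effect of where the knots fall relative to the TR grid, but I'd welcome confirmation that I've understood the basis correctly.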
I thought TENT functions scaled to 1. Is that not correct?
Why does changing the number of knots by one have such a drastic effect on the presence or absence of a plateau in the graphs?
Thanks for your help.