Why do stim_times and stim_file give vastly different outputs?

Hey guys, I have a theoretical question.

I’m running 3dDeconvolve on simulated data to check out an experimental design. I’ve read the 3dDeconvolve manual, but it’s dated 2001, when -stim_file was the norm. I had emailed Doug Ward some questions, which he was awesome enough to answer, and he recommended that for my simulations I use binary stimulus timing files with the -stim_file option in 3dDeconvolve.

Long story short, I ran 3dDeconvolve on my data in 2 ways:

  1. With the -stim_times option, using 'BLOCK(12,1)' for the 12 s stimuli.
  2. With the -stim_file option, using binary stimulus files (a single 1 at each onset TR) with -stim_minlag 0 and -stim_maxlag 12.

The -stim_times option gives almost perfect results, right on top of my simulated data, but the -stim_file output looks vastly different. This isn’t a huge problem, since I’ll just use -stim_times in my actual analysis, but I’m curious why the results are so different. In both cases I am also inputting shock onsets (as either -stim_times or -stim_file, respectively), and in the -stim_file case the shock also appears to be modeled as part of the stimulus, which makes it look messed up; I’m just not sure why -stim_times would take care of that but -stim_file wouldn’t. I mostly want to know the difference between the two analyses. My two commands are below.

  1. 3dDeconvolve -input1D simulated_all_cs_all_shock.1D -force_TR 2 \
       -polort 0 -num_stimts 5 \
       -stim_times 1 stim_test_times_one_ones.01.1D 'BLOCK(12,1)' \
       -stim_times 2 stim_test_times_one_ones.02.1D 'BLOCK(12,1)' \
       -stim_times 3 stim_test_times_one_ones.03.1D 'BLOCK(12,1)' \
       -stim_times 4 shock_test_times.01.1D 'BLOCK(1,1)' \
       -stim_times 5 shock_test_times.02.1D 'BLOCK(1,1)' \
       -xout -iresp 1 CSp_sim_iresp_sts -iresp 2 CSm_sim_iresp_sts \
       -iresp 3 CST_sim_iresp_sts -fitts sim_fitts_sts -errts sim_errts_sts

  2. 3dDeconvolve -input1D simulated_all_cs_all_shock.1D -force_TR 2 \
       -polort 0 -num_stimts 5 \
       -stim_file 1 CS+_one_ones.1D -stim_file 2 CS-_one_ones.1D \
       -stim_file 3 CST_one_ones.1D -stim_file 4 CS+_shock.1D \
       -stim_file 5 CST_shock.1D \
       -stim_minlag 1 0 -stim_maxlag 1 12 \
       -stim_minlag 2 0 -stim_maxlag 2 12 \
       -stim_minlag 3 0 -stim_maxlag 3 12 \
       -stim_minlag 4 0 -stim_maxlag 4 1 \
       -stim_minlag 5 0 -stim_maxlag 5 1 \
       -xout -nfirst 0 -iresp 1 CSp_sim_iresp_sfile1 \
       -iresp 2 CSm_sim_iresp_sfile1 -iresp 3 CSt_sim_iresp_sfile1

Thanks for any insight!

(P.S. I have also tried using -stim_file where there is a 1 not just at the onset, but 6 1s in a row for each stimulus, representing that it’s on screen for 12 s (6 TRs), and it looks even worse.)
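For concreteness, the relationship between the two input styles can be sketched in Python; the TR, run length, and onset times below are made up for illustration, not taken from my actual files:

```python
import numpy as np

TR = 2.0                     # hypothetical TR in seconds
n_tr = 40                    # hypothetical run length in TRs
onset_times = [10.0, 40.0]   # stim_times style: onsets given in seconds

# stim_file style: one 0/1 value per TR, with a single 1 at each onset TR
stim_file = np.zeros(n_tr, dtype=int)
for t in onset_times:
    stim_file[int(round(t / TR))] = 1

print(stim_file.tolist())
```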

Do you recall the background on why Doug suggested lags?
He surely had a good reason for it, which might be separate
from the discussion raised in your post.

But those 2 methods should indeed be quite different. The
BLOCK(12,1) function will yield a single regressor that
would hopefully match the data. The onset + lag version
will actually create 13 regressors (1 for each lag, 0
through 12), where the result of the linear regression is
a 13-parameter curve showing the best fit of an “average
response” to the data.

Since you mention BLOCK(12,1) looking good on top of the
data, it seems likely to be plotted as an ideal (rather than
a fit - the fit should look good in either case, even better
with the lagged version). And the ideal regressors in the
lagged case are just offsets of the original, so none of
them would look particularly good against the data, and
even their sum would only be a boxcar.
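To sketch that point numerically (NumPy only; the run length and onset TRs are invented for illustration), the lagged regressors are just shifted copies of the binary onset vector, and their sum is a boxcar rather than anything hemodynamically shaped:

```python
import numpy as np

n_tr = 40                  # length of a hypothetical time series, in TRs
onsets = np.zeros(n_tr)
onsets[[5, 20]] = 1        # binary stim_file style: a 1 only at each onset TR

# -stim_minlag 0 -stim_maxlag 12 builds one regressor per lag (13 total),
# each just the onset vector shifted right by 'lag' TRs
lags = np.array([np.roll(onsets, lag) for lag in range(13)])
for lag in range(13):
    lags[lag, :lag] = 0    # np.roll wraps around; zero the wrapped samples

# no single lagged regressor resembles the response, and their sum is
# a boxcar: 1s over TRs 5..17 and 20..32, 0s elsewhere
boxcar = lags.sum(axis=0)
print(boxcar.tolist())
```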

In any case, if you want to see this with data, run the
regression and plot the fitts curve along with the input
data. For a quick example of such a plot, see page 18
of the old FT_analysis tutorial, t18_results_2_EPI.txt .
It is described at the first place ‘fitts’ is mentioned.

  • rick