strange beta values after 3dDeconvolve with a number of stim files

Hello,

I have questions about strange beta values that appear when I apply 3dDeconvolve to my dataset.

The conditions for the script I tried are as follows:

  1. To censor motion in the preprocessing step, I used 1d_tool.py with the option ‘-censor_motion 0.4’ (the “motion_threshold”) and then passed the output file to 3dDeconvolve via the ‘-censor’ option.
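In sketch form, the censoring step looks like the following (the input file and subject names are hypothetical placeholders for my own):

```shell
# Hypothetical file names; substitute your own motion-parameter file.
# Flag TRs where the Euclidean norm of the motion derivative exceeds
# 0.4, writing a 0/1 censor time series for 3dDeconvolve.
1d_tool.py -infile dfile_rall.1D      \
           -set_nruns 1               \
           -censor_motion 0.4 subj01
# -> writes subj01_censor.1D, later used as:
#    3dDeconvolve -censor subj01_censor.1D ...
```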

  2. The options used before ‘-num_stimts’ are ‘-censor’, ‘-mask’, ‘-polort A’, ‘-float’ and ‘-allzero_OK’.

  3. The number of stimulus files is 114, specified with ‘-num_stimts 114’.
    3-1. 108 of the 114 are text files containing stimulus onset times in seconds (e.g. the 1st file contains 4.03, the 2nd contains 10.04, and so on; our experiment has 108 trials). Statements such as ‘-stim_times 1 1st_file $basis -stim_label 1 Int1’ are therefore repeated from number 1 to number 108, where $basis is ‘BLOCK(2,1)’.
    3-2. The remaining 6 are the columns of the motion_demean.~.1D file: roll, pitch, yaw, dS, dL and dP.
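Putting steps 1-3 together, the command I am describing has roughly this shape (the data set and file names are placeholders, and the repeated lines are elided):

```shell
3dDeconvolve -input my_epi+tlrc                                      \
    -censor subj01_censor.1D -mask my_mask+tlrc                      \
    -polort A -float -allzero_OK                                     \
    -num_stimts 114                                                  \
    -stim_times 1 trial_001.1D 'BLOCK(2,1)' -stim_label 1 Int1       \
    ...                                                              \
    -stim_times 108 trial_108.1D 'BLOCK(2,1)' -stim_label 108 Int108 \
    -stim_file 109 motion_demean.1D'[0]' -stim_label 109 roll        \
    ...
```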

  4. Finally, the output options are ‘-fout’, ‘-tout’, ‘-x1D’, ‘-xjpeg’, ‘-x1D_uncensored’ and ‘-bucket’.

However, the beta values from this 3dDeconvolve run are strange: for some trials they are abnormally large.

For instance, the maximum beta values for trials 46-49 and 93-96 are about 10^6 and 10^7, respectively. The minimum beta values are of a similar order of magnitude.

I thought this might come from considerable subject movement and the censored TRs, so I reran 1d_tool.py with ‘-censor_motion 0.5’ and then ran 3dDeconvolve again with the new censor file. This time, the beta values for trials 46-49 dropped to about 10^5, but those for trials 93-96 stayed at about 10^7.
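To see how heavily the data are censored, and exactly which TRs were removed (and hence which trials they overlap), queries like the following may help; the censor file name is again a placeholder:

```shell
# Count the censored time points.
1d_tool.py -infile subj01_censor.1D -show_censor_count
# List the censored TR indices in compact form.
1d_tool.py -infile subj01_censor.1D -show_trs_censored encoded
```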

I don’t know why these strange beta values are produced. Could you suggest how to fix this problem? I would appreciate any explanation or suggestions. Thank you.

That is odd, but understanding such a thing tends to mean
looking closely at the data (though I suspect point 3, below,
addresses it). Some thoughts…

  1. It sounds like you are modeling one event at a time.
    That could probably be more easily done via -stim_times_IM.
    It would not affect the results (assuming no mistakes have
    been made).
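For example (timing file name hypothetical), a single -stim_times_IM line with one file holding all 108 onsets generates one regressor per event, replacing the 108 separate -stim_times lines:

```shell
3dDeconvolve ...                                                    \
    -num_stimts 7                                                   \
    -stim_times_IM 1 all_onsets.1D 'BLOCK(2,1)' -stim_label 1 trial \
    ...
```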

  2. IM on top of censoring might leave a few time points that
    are extremely noisy.

  3. If every BLOCK response time point were censored but
    the very last, the regressor might be all zero, except for a
    tiny value at that time point. This could lead to a reciprocally
    huge beta weight, as you are seeing.
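A toy single-regressor least-squares sketch (plain Python, not AFNI) shows the reciprocal effect: the fit is beta = (x·y)/(x·x), so if censoring leaves only one tiny nonzero value in the regressor, the fitted beta blows up.

```python
# Toy illustration: single-regressor least squares, beta = (x.y)/(x.x).
def beta(x, y):
    num = sum(xi * yi for xi, yi in zip(x, y))
    den = sum(xi * xi for xi in x)
    return num / den

# Full BLOCK-like regressor: several time points with amplitude ~1,
# and a signal that matches it exactly.
x_full = [0.0, 0.3, 1.0, 1.0, 0.3, 0.0]
y_full = [0.0, 0.3, 1.0, 1.0, 0.3, 0.0]
print(beta(x_full, y_full))   # 1.0: a sensible beta

# Now suppose censoring zeroed every response point except the very
# last, tiny tail of the BLOCK: the regressor is nearly all zero.
x_cens = [0.0, 0.0, 0.0, 0.0, 0.001, 0.0]
y_cens = [0.0, 0.0, 0.0, 0.0, 0.3,   0.0]
print(beta(x_cens, y_cens))   # 300.0: reciprocally huge
```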

  4. For comparison, I suggest you also run this through afni_proc.py.
    If that does not change the results (for one subject, say), you
    can be more confident there are no mistakes in your stream.
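A minimal afni_proc.py sketch for such a comparison might look like
this (all names are placeholders, to be adapted to your data):

```shell
afni_proc.py -subj_id subj01                       \
    -dsets my_epi+orig.HEAD                        \
    -blocks tshift volreg mask scale regress       \
    -regress_stim_times trial_*.1D                 \
    -regress_basis 'BLOCK(2,1)'                    \
    -regress_censor_motion 0.4
```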

  • rick