proc.py fatal error

Hi,

I have two separate problems when trying to run the proc.py file created by uber_subject.py

1.) I get the following error with one participant:
'-stim_times 1' file 'stimuli/ACL.letterA.final.92TR.1D' has 1 auxiliary values per time point [nopt=15]

ACL.letterA.final.92TR.1D is one of two stimulus timing files. There are 5 runs of 93 TRs for this task (with the first data point removed).

2.) For a different experiment, we keep getting an error saying:

  • warnings for local stim_times format of file /home/data/A110211/110211A/A110211_intn_BL.new.FIXED.1D
    • row 0 : time 134 exceeds run dur 0.182
    • row 1 : time 124 exceeds run dur 0.182
    • 9 row times exceed run dur 0.182 …

We get this for all 5 stimulus timing files. There are 9 runs for this experiment, each with 93 TRs (we remove 2 at the beginning for a total of 91 TRs). The runs are certainly longer than 0.182 seconds. Somehow the order of magnitude seems off, as 182 would be the number of seconds in each run after the first 2 TRs are removed.
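
(To spell out the arithmetic: 91 TRs × 2 s per TR = 182 s per run, so the reported run duration of 0.182 is exactly 1000 times too small, assuming our TR really is 2 s.)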

Much thanks!
Wendy

Hi Wendy,

  1. The first message is not an error. Are you using
    amplitude modulation? (See the note after this list.)

  2. Your datasets were probably created with a TR
    measured in ms rather than s. You should fix those
    datasets, and check them for all subjects. If there
    is slice timing, that would also need to be fixed.
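
Regarding 1: a timing file whose entries carry extra values
(e.g. the married 'time*amplitude' format used for amplitude
modulation) is normally passed to 3dDeconvolve via
-stim_times_AM1 or -stim_times_AM2 rather than plain
-stim_times, which may be why it complains - hence the
question.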

To see what that information is, consider:

3dinfo -slice_timing DSET+orig
3dinfo -tr DSET+orig
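
To check it across all subjects at once, a small loop works.
This is just a sketch, assuming datasets named something like
subj*/DSET+orig (adjust the pattern to your layout):

for dset in subj*/DSET+orig.HEAD ; do
    echo $dset
    3dinfo -tr -slice_timing $dset
done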

  • rick

Hi Rick,

Thanks for your response.

  1. Yes, we are using amplitude modulation. It did say later that 3dDeconvolve exited with a fatal error. I didn’t see any other red flag beyond what I sent. I’ll try running it again and look for a different error.

  2. Your answer here makes sense, given the order-of-magnitude problem. What is strange is that we’ve processed several other participants, collected the same way, without any problems. How do I go about ‘fixing’ these datasets if this is the problem?

Much thanks!
Wendy

Hi Wendy,

  1. Having the TR off by a factor of 1000 probably means
    that most stim classes do not have any events in the
    regression (as they would have to occur within the first
    0.182 seconds in a run - unlikely). So that would cause
    the errors. They are expected.

  2. Exactly how were the datasets created (what were the
    commands)? If things previously worked, then presumably
    either the commands changed, or the information in the
    DICOM files changed.
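
If the header TR does turn out to be the only problem, it
can be repaired in place with 3drefit, e.g. (assuming the
true TR is 2.0 s):

3drefit -TR 2.0 bad+orig

Slice timing, if present, would need a corresponding fix,
but let's look at the 3dinfo output first.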

Please either post or send me direct email with the output
from 3dinfo on both a working dataset and a failing one,
e.g. for datasets good+orig and bad+orig, please mail me
the output from:

3dinfo -VERB good+orig | head -n 25
3dinfo good+orig | tail -n 25
3dinfo -VERB bad+orig | head -n 25
3dinfo bad+orig | tail -n 25

Or, if you do send it via email, just send the full output:

3dinfo -VERB good+orig
3dinfo -VERB bad+orig

Click (or hover over) my name for the email address.

  • rick