'active' baseline task and make_random_timing.py

Hello,

I am using make_random_timing.py for the first time and can’t figure out how to get it to work with an ‘active’ baseline instead of a ‘passive’ baseline, such as rest.

Specifically, I have people do an odd/even digit judgment task as my baseline. I am allowing them 3 seconds to do the judgment. Thus, my baseline trials have a minimum duration of 3 seconds and must occur in 3-second chunks, i.e., 0 seconds (no baseline trials), 3 seconds (1 trial), 6 seconds (2 trials), 9 seconds (3 trials), etc.

How can I code for this in make_random_timing.py so I can figure out the best experimental design using @stim_analyze?

Regards,
Christine

Hi Christine,

It should be okay to picture one of the conditions as a baseline.
Then all of the event durations should add up to the total duration
of each run.

For example, if there are three 2-second event conditions and
one 3-second “baseline” condition, with 10 events of each type,
this command might apply:

make_random_timing.py -num_stim 4 -run_time 90 -stim_dur 2 2 2 3 \
    -num_reps 10 -prefix ss -stim_labels A B C D -num_runs 1 -show_timing_stats

With those numbers, the -show_timing_stats output should show
no rest time, which seems to be what you want.
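
For instance, a quick ccalc check (just arithmetic on the numbers
in that example) shows why there is no room left for rest:

ccalc -expr '3*10*2 + 10*3'

That comes to 90 seconds of events, which already fills the
-run_time of 90.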

To evaluate the full order of conditions, consider:

timing_tool.py -multi_timing ss* -multi_stim_dur 2 2 2 3 -multi_timing_to_event_list GE:ALL all.times.txt

What command are you trying now?

  • rick

Hi Rick,

I like your idea of pretending that the active baseline condition is just another type of stimulus condition.

How would the program know that this extra task is also part of the same condition as the “-post_stim_rest” TRs? Or is there a way to force each run to end with 9 seconds of my baseline ‘condition’ and set “-post_stim_rest” equal to zero?

Finally, for tests of my planned GLTs, I am interested in 1) all the conditions together vs. baseline and 2) the area under the curve for each of the 8 conditions of interest. Can you help me understand how to specify those GLTs so I can evaluate their fitness? I am interested in reducing the amount of run time devoted to the baseline condition and want to see whether these GLTs will suffer when I start eliminating baseline trials.

Below are my variables and command line.

Best,
Christine

Here are my variables:
set num_stim = 9
set num_runs = 5
set pre_rest = 12 # min rest before first stim (for magnet steady state)
set post_rest = 9 # min rest after last stim (for trailing BOLD response)
set min_rest = 0 # minimum rest after each stimulus
set tr = 0.75 # used in 3dDeconvolve, if not make_random_timing.py
set stim_durs = 15.5 15.5 15.5 15.5 15.5 15.5 15.5 15.5 3
set stim_reps = 4 4 4 4 4 4 4 4 50
set run_lengths = 890
set labels = "label1 label2 label3 label4 label5 label6 label7 label8 baseline"

Here is my [your :)] command line:
make_random_timing.py -num_stim $num_stim -stim_dur $stim_durs      \
    -num_runs $num_runs -run_time $run_lengths                      \
    -num_reps $stim_reps -prefix stimes.$iter                       \
    -pre_stim_rest $pre_rest -post_stim_rest $post_rest             \
    -min_rest $min_rest                                             \
    -stim_labels $labels                                            \
    -seed $seed                                                     \
    -tr $tr                                                         \
    -show_timing_stats                                              \
    -save_3dd_cmd cmd.3dd.$iter                                     \
    >& out.mrt.$iter

Hi Rick,

I get a weird error when I try to run your make_stim_times.txt script (see below), which has all the variables, the scripting, and the make_random_timing.py call.

The error says: "set: Variable name must begin with a letter."

In your code, all the set commands are followed by variable names that begin with a letter, so I am thinking that something else must be going on.

Can you offer any guidance?

Christine

#!/bin/tcsh

# try to find reasonable random event related timing given the experimental
# parameters
# ---------------------------------------------------------------------------

# some experiment parameters (most can be inserted directly into the
# make_random_timing.py command)

set num_stim = 9
set num_runs = 5
set pre_rest = 12 # min rest before first stim (for magnet steady state)
set post_rest = 9 # min rest after last stim (for trailing BOLD response)
set min_rest = 0 # minimum rest after each stimulus
set tr = 0.75 # used in 3dDeconvolve, if not make_random_timing.py
set stim_durs = 15.5 15.5 15.5 15.5 15.5 15.5 15.5 15.5 3
set stim_reps = 4 4 4 4 4 4 4 4 50
set run_lengths = 890
set labels = "label1 label2 label3 label4 label5 label6 label7 label8 baseline"

# ---------------------------------------------------------------------------
# execution parameters

set iterations = 100 # number of iterations to compare
set seed = 1234567 # initial random seed
set outdir = stim_results # directory that all results are under
set LCfile = NSD_sums # file to store norm. std. dev. sums in

set pattern = LC # search pattern for LC[0], say

set pattern = 'norm. std.'  # search pattern for normalized stdev vals

# ===========================================================================
# start the work
# ===========================================================================

# ------------------------------------------------------------
# recreate $outdir each time

if ( -d $outdir ) then
    echo "** removing output directory, $outdir ..."
    \rm -fr $outdir
endif

echo "++ creating output directory, $outdir ..."
mkdir $outdir
if ( $status ) then
    echo "failure, cannot create output directory, $outdir"
    exit
endif

# move into the output directory and begin work
cd $outdir

# create empty LC file
echo -n "" > $LCfile

echo -n "iteration (of $iterations): 0000"

# ------------------------------------------------------------
# run the test many times

foreach iter ( `count -digits 4 1 $iterations` )

    # make some other random seed

    @ seed = $seed + 1


    # create randomly ordered stimulus timing files
    # (consider: -tr_locked -save_3dd_cmd tempfile)

    make_random_timing.py -num_stim $num_stim -stim_dur $stim_durs  \
            -num_runs $num_runs -run_time $run_lengths              \
            -num_reps $stim_reps -prefix stimes.$iter               \
            -pre_stim_rest $pre_rest -post_stim_rest $post_rest     \
            -min_rest $min_rest                                     \
            -stim_labels $labels                                    \
            -seed $seed                                             \
            -tr $tr                                                 \
            -show_timing_stats                                      \
            -save_3dd_cmd cmd.3dd.$iter                             \
                    >& out.mrt.$iter

    # consider: sed 's/GAM/"TENT(0,15,7)"/' tempfile > cmd.3dd.$iter
    #           rm -f tempfile

    # now evaluate the stimulus timings

    tcsh cmd.3dd.$iter >& out.3dD.$iter

    # save the sum of the 3 LC values
    set nums = ( `awk -F= '/'"$pattern"'/ {print $2}' out.3dD.${iter}` )

    # make a quick ccalc command
    set sstr = $nums[1]
    foreach num ( $nums[2-] )
        set sstr = "$sstr + $num"
    end
    set num_sum = `ccalc -expr "$sstr"`

    echo -n "$num_sum = $sstr : " >> $LCfile
    echo    "iteration $iter, seed $seed"                  >> $LCfile

    echo -n "\b\b\b\b$iter"

end

echo ""
echo "done, results are in '$outdir', LC sums are in '$LCfile'"
echo consider the command: "sort -n $outdir/$LCfile | head -1"

# note that if iter 042 seems to be the best, consider these commands:
#
#   cd stim_results
#   set iter = 042
#
#   timing_tool.py -multi_timing stimes.${iter}_0*        \
#                  -run_len $run_lengths                  \
#                  -multi_stim_dur $stim_durs             \
#                  -multi_show_isi_stats
#
#   tcsh cmd.3dd.$iter
#
#   1dplot X.xmat.1D'[6..$]'
#   1dplot sum_ideal.1D
#
# - timing_tool.py will give useful statistics regarding ISI durations
#   (should be similar to what is seen in output file out.mrt.042)
#
# - run cmd.3dd.$iter to regenerate that X matrix (to create actual regressors)
#
# - the first 1dplot command will show the actual regressors
#   (note that 6 = 2*$num_runs)
#
# - the second will plot the sum of the regressors (an integrity check)
#   (note that sum_ideal.1D is produced by cmd.3dd.$iter, along with X.xmat.1D)

Hi Christine,

Lists such as stim_durs and stim_reps should be put
in parentheses, e.g.

set stim_reps = ( 4 4 4 4 4 4 4 4 50 )
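
Applied to the other list in your script, that would look like:

set stim_durs = ( 15.5 15.5 15.5 15.5 15.5 15.5 15.5 15.5 3 )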

Regarding the prior post, the simple usage of MRT.py does not
have an option to distinguish random rest after the last event
from post_stim_rest. The advanced usage can do that using
“-rand_post_stim_rest no”, but that is not the form you are
using. That is to say with the basic usage, one cannot force
exactly 9 seconds of post stim rest.

To be sure, what would the GLT look like for 1) “all the
conditions together vs baseline”? I want to distinguish that
from 2) the area under the curve.

  • rick

Hi Rick,

Wouldn’t “post_rest 9” give me the 9 seconds (i.e. 3 baseline trials) I need at the end of each run?

You had asked about what the GLT would look like for “all the conditions together vs baseline”.

When trying to create these GLTs I got confused, because I will need to code the baseline condition as ‘rest’ as far as 3dDeconvolve is concerned, that is, not code for it at all. This is different from how I am coding the baseline condition in MRT.py, where I have to code it as a ‘real’ condition. So any automated scripts that take the MRT.py output and pipe it into 3dDeconvolve -nodata will have to be modified to account for this.

Assuming we (:-)) can figure that trick out, I would code the 3dDeconvolve GLTs as follows:

Here I am trying to do the task (labels 1 through 8) vs baseline comparison.
-gltsym 'SYM: +label1 +label2 +label3 +label4 +label5 +label6 +label7 +label8' -glt_label 1 "Task_vs_Baseline"

For the area under the curve for each condition, it would be
-gltsym 'SYM: +label1' -glt_label label1AUC
etc., the same for each label.

Christine

Hi Christine,

Sorry for dropping this. Please feel free to “bump”
threads with reminders if we seem to forget…

In any case, post_rest 9 will indeed put 9 seconds of
rest at the end of each run, but “-rand_post_stim_rest no”
will make sure that it is exactly 9 seconds, not more
(i.e. it blocks random rest from being added at the end).

3dDeconvolve should not see any “rest” condition coded.
That condition is implicit.

Your Task_vs_Baseline gltsym is really the sum of betas
(it might be better to average them, i.e. multiply each
by 0.125). There is no “area under the curve” for each
condition, as each is just the single beta itself. And
there is no reason to ask for single betas, since they
are output by default.
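
For example, the averaged version of that GLT might be written
as something like this (the same contrast, just scaled by 1/8):

-gltsym 'SYM: +0.125*label1 +0.125*label2 +0.125*label3 +0.125*label4 +0.125*label5 +0.125*label6 +0.125*label7 +0.125*label8' \
-glt_label 1 Task_vs_Baseline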

Does that seem reasonable?

  • rick

Hi Rick,

Thanks for the reminder to bump up forgotten posts. I always thought that failures to respond meant you all were not willing to continue working on things. It is good to know that is not the case.

For my code, I added your suggestion of -rand_post_stim_rest no. I pasted in the relevant parts of the code below in the section titled “THIS IS THE make_stim_times code:”.

Also, I am working on modifying the 3dDeconvolve command that is generated by the make_stim_times script (see the 3dDeconvolve call at the bottom of this message). For this one, I removed the regressor for the baseline condition from the auto-generated code. I also added the new GLT according to your recommendation. The GLT should give the average response to all the conditions of interest (label1 through label8) relative to baseline.

Can you take a look at this code to see if it is doing what I think it is supposed to be doing?

Also, directly below is the output from the 3dDeconvolve call. How do I figure out if I need to add more baseline trials or if I can remove some and save some experiment time?

Regards,
Christine

3dDeconvolve: AFNI version=AFNI_17.0.15 (Sep 4 2009) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ using TR=0.75 seconds for -stim_times and -nodata
++ using NT=4500 time points for -nodata
++ Input polort=5; Longest run=675.0 s; Recommended minimum polort=5 ++ OK ++
++ -stim_times using TR=0.75 s for stimulus timing conversion
++ -stim_times using TR=0.75 s for any -iresp output datasets
++ [you can alter the -iresp TR via the -TR_times option]
++ ** -stim_times NOTE ** guessing GLOBAL times if 1 time per line; LOCAL otherwise
++ ** GUESSED ** -stim_times 1 using LOCAL times
++ ** GUESSED ** -stim_times 2 using LOCAL times
++ ** GUESSED ** -stim_times 3 using LOCAL times
++ ** GUESSED ** -stim_times 4 using LOCAL times
++ ** GUESSED ** -stim_times 5 using LOCAL times
++ ** GUESSED ** -stim_times 6 using LOCAL times
++ ** GUESSED ** -stim_times 7 using LOCAL times
++ ** GUESSED ** -stim_times 8 using LOCAL times

GLT matrix from 'SYM: +.125*label1 +.125*label2 +.125*label3 +.125*label4 +.125*label5 +.125*label6 +.125*label7 +.125*label8':
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125

++ Number of time points: 4500 (no censoring)

+ Number of parameters: 38 [30 baseline ; 8 signal]
++ Wrote matrix values to file X.xmat.1D
++ ----- Signal+Baseline matrix condition [X] (4500x38): 2.18301 ++ VERY GOOD ++
++ ----- Signal-only matrix condition [X] (4500x8): 1.0306 ++ VERY GOOD ++
++ ----- Baseline-only matrix condition [X] (4500x30): 1.00605 ++ VERY GOOD ++
++ ----- polort-only matrix condition [X] (4500x30): 1.00605 ++ VERY GOOD ++
++ Wrote matrix values to file X_XtXinv.xmat.1D
++ +++++ Matrix inverse average error = 3.61072e-16 ++ VERY GOOD ++
++ Matrix setup time = 0.61 s

Stimulus: label1
h[ 0] norm. std. dev. = 0.0686

Stimulus: label2
h[ 0] norm. std. dev. = 0.0706

Stimulus: label3
h[ 0] norm. std. dev. = 0.0688

Stimulus: label4
h[ 0] norm. std. dev. = 0.0701

Stimulus: label5
h[ 0] norm. std. dev. = 0.0701

Stimulus: label6
h[ 0] norm. std. dev. = 0.0703

Stimulus: label7
h[ 0] norm. std. dev. = 0.0688

Stimulus: label8
h[ 0] norm. std. dev. = 0.0679

General Linear Test: Task_vs_Baseline
LC[0] norm. std. dev. = 0.0448

THIS IS THE make_stim_times code:
set num_stim = 9
set num_runs = 5
set pre_rest = 12 # min rest before first stim (for magnet steady state)
set post_rest = 9 # min rest after last stim (for trailing BOLD response)
set min_rest = 0 # minimum rest after each stimulus
set tr = 0.75 # used in 3dDeconvolve, if not make_random_timing.py
set stim_durs = (15.75 15.75 15.75 15.75 15.75 15.75 15.75 15.75 3)
set stim_reps = (4 4 4 4 4 4 4 4 50)
set run_lengths = 675
set labels = "label1 label2 label3 label4 label5 label6 label7 label8 baseline"

# ---------------------------------------------------------------------------
# execution parameters

set iterations = 100 # number of iterations to compare
set seed = 1234567 # initial random seed
set outdir = stim_results # directory that all results are under
set LCfile = NSD_sums # file to store norm. std. dev. sums in

set pattern = LC # search pattern for LC[0], say

set pattern = 'norm. std.'  # search pattern for normalized stdev vals

make_random_timing.py -num_stim $num_stim -stim_dur $stim_durs      \
    -num_runs $num_runs -run_time $run_lengths                      \
    -num_reps $stim_reps -prefix stimes.$iter                       \
    -pre_stim_rest $pre_rest -post_stim_rest $post_rest             \
    -min_rest $min_rest                                             \
    -rand_post_stim_rest no                                         \
    -stim_labels $labels                                            \
    -seed $seed                                                     \
    -tr_locked                                                      \
    -tr $tr                                                         \
    -show_timing_stats                                              \
    -save_3dd_cmd cmd.3dd.$iter                                     \
    >& out.mrt.$iter

THIS IS THE 3dDeconvolve code:
3dDeconvolve                                                        \
    -nodata 4500 0.750                                              \
    -polort 5                                                       \
    -concat '1D: 0 900 1800 2700 3600'                              \
    -num_stimts 8                                                   \
    -stim_times 1 stimes.0044_01_label1.1D 'BLOCK(15.75,1)'         \
    -stim_label 1 label1                                            \
    -stim_times 2 stimes.0044_02_label2.1D 'BLOCK(15.75,1)'         \
    -stim_label 2 label2                                            \
    -stim_times 3 stimes.0044_03_label3.1D 'BLOCK(15.75,1)'         \
    -stim_label 3 label3                                            \
    -stim_times 4 stimes.0044_04_label4.1D 'BLOCK(15.75,1)'         \
    -stim_label 4 label4                                            \
    -stim_times 5 stimes.0044_05_label5.1D 'BLOCK(15.75,1)'         \
    -stim_label 5 label5                                            \
    -stim_times 6 stimes.0044_06_label6.1D 'BLOCK(15.75,1)'         \
    -stim_label 6 label6                                            \
    -stim_times 7 stimes.0044_07_label7.1D 'BLOCK(15.75,1)'         \
    -stim_label 7 label7                                            \
    -stim_times 8 stimes.0044_08_label8.1D 'BLOCK(15.75,1)'         \
    -stim_label 8 label8                                            \
    -gltsym 'SYM: +.125*label1 +.125*label2 +.125*label3 +.125*label4 +.125*label5 +.125*label6 +.125*label7 +.125*label8' \
    -glt_label 1 "Task_vs_Baseline"                                 \
    -x1D X.xmat.1D

Hi Christine,

That option (-rand_post_stim_rest no) only applies to the
advanced usage. You are currently applying the simple
usage. It is okay, but the -rand_post_stim_rest option
will not work in this basic case.

Regarding your question of whether you need more baseline
trials, that is not actually something this method is
designed to evaluate. This method evaluates different
randomizations with the given timing durations. If you
change the overall experiment duration, that might not
be directly comparable.

Currently you have 150s of rest, distributed in 3s
multiples (so 0 or 3 or 6, etc). Is that really how
you want it? Would you prefer a range, maybe 2..10,
say? It looks like the mean ISI should be about 5s,
the way you have set it up.

  • rick

Hi Rick,

You mentioned: "Currently you have 150s of rest, distributed in 3s multiples (so 0 or 3 or 6, etc). Is that really how you want it? Would you prefer a range, maybe 2..10, say? It looks like the mean ISI should be about 5s, the way you have set it up."

For my experiment, I do not have true rest. Instead, I have a baseline ‘task’ that takes 3 seconds to complete. Based on our earlier conversations, I have been coding this as a separate task and not as rest. Thus, I cannot have the baseline ‘task’ range from 2 to 10 seconds, because then people would have too little time (2 sec) or too much time (anything > 3 sec) to do the task.

I am having trouble figuring out what the mean ISI would be, because the output metrics are not set up for a design with no real ‘rest’.
QUESTION: How did you arrive at the ~5 sec ISI?

Also, I am noticing another artefact of coding the baseline task as a condition. The best randomization had the baseline condition occurring as the first stimulus in one of the runs. In my case, the baseline condition should create the jitter between my conditions of interest and should not occur at the beginning or end of a run. Note that I already have 12 sec of the baseline task at the beginning of each run (to reach steady state) and 9 sec at the end to catch the end of the HRF. I want the first stimulus in each run to be a condition of interest (labels 1-8), not the baseline condition.
QUESTION: Is there any way to prevent the baseline ‘condition’ from occurring first or last in a run?

Regards,
Christine

Hi Christine,

The ~5 s ISI comes from total ISI = 150 = 675 - 21 - 32*15.75,
where 675 = total time per run, 32*15.75 = the 32 event times,
and 21 = pre/post rest time, all in seconds per run.

So 150 s of ISI time divided by 31 (32-1 ISI periods) is 4.839 s,
the average ISI (assuming none at the start or end of a run).
To put it another way, that 150 s was for your pseudo-rest
events: 50*3 s events.
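
If you want to verify that arithmetic, ccalc will do it:

ccalc -expr '675 - 21 - 32*15.75'
ccalc -expr '(675 - 21 - 32*15.75)/31'

The first returns 150, the second about 4.839.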

There is no current way to force one condition not to occur
first or last in a run when creating an output timing file
(it might be possible to make the ISIs a multiple of 3 s
instead, but that has its own difficulty, since the other
task events are not multiples of 3 s; still, it would be
doable).

Maybe it would not be too difficult to add this (though it
would have to apply to both the basic and advanced cases,
and maybe I would not try to do it with the max consec or
ordered stim cases). We’ll see…

  • rick

Hi Christine,

Okay, I just added -not_first and -not_last options that work
in the basic mode. So try your MRT script (posted earlier in
this thread) with the additional options, -not_first baseline
-not_last baseline, and see what you think.
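
For example, the call you posted earlier might become something
like this (dropping -rand_post_stim_rest no, since that option
does not apply to the basic usage):

make_random_timing.py -num_stim $num_stim -stim_dur $stim_durs      \
    -num_runs $num_runs -run_time $run_lengths                      \
    -num_reps $stim_reps -prefix stimes.$iter                       \
    -pre_stim_rest $pre_rest -post_stim_rest $post_rest             \
    -min_rest $min_rest                                             \
    -not_first baseline -not_last baseline                          \
    -stim_labels $labels                                            \
    -seed $seed                                                     \
    -tr_locked                                                      \
    -tr $tr                                                         \
    -show_timing_stats                                              \
    -save_3dd_cmd cmd.3dd.$iter                                     \
    >& out.mrt.$iter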

You should be able to update your binaries via:

@update.afni.binaries -d

Please let me know how it goes.

  • rick

Perfect solution!

I’m off to the races now.

Thanks,

Christine

Great, thanks!

  • rick