Question of Efficiency of Stim Schedule

Hello,

I’m using make_random_timing.py to generate stimulus schedules for a Stroop task and then testing their efficiency with 3dDeconvolve -nodata, specifying the contrast of congruent - incongruent trials.
A few months ago I ran it with just a hundred iterations and picked the most efficient schedule (one with an efficiency of 0.185 for the contrast of interest). That schedule had a nice alternation of congruent and incongruent trials: it looked random, with no long stretches of the same stim type in a row.

Recently, I ran the scripts with thousands of iterations and picked the schedule that was most efficient for the same contrast (its efficiency was on the order of 0.09). This schedule was much ‘clumpier’ than the one I had generated previously with the lower efficiency: it often had several congruent trials in a row followed by several incongruent trials in a row, including one stretch of 12 incongruent trials.
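To make sure I understand what is being compared, here is a toy numpy version of the efficiency calculation as I understand it: the contrast’s normalized standard deviation as sqrt(c' (X'X)^-1 c). Everything here (TR, HRF shape, schedule lengths) is a made-up stand-in, not my actual schedules or AFNI’s exact model; it just contrasts a perfectly alternating schedule with a blocked (“clumpy”) one:

```python
import numpy as np

TR = 2.0                  # seconds per sample (assumed)
n_trials = 64             # one trial per TR, two conditions
t = np.arange(0, 32, TR)
hrf = t**5 * np.exp(-t)   # crude gamma-shaped HRF, a stand-in only
hrf /= hrf.sum()

def contrast_stdev(labels):
    """sqrt(c' (X'X)^-1 c) for c = congruent - incongruent."""
    n = len(labels)
    sticks = np.zeros((n, 2))
    sticks[np.arange(n), labels] = 1.0          # one column per condition
    X = np.column_stack([
        np.convolve(sticks[:, 0], hrf)[:n],     # congruent regressor
        np.convolve(sticks[:, 1], hrf)[:n],     # incongruent regressor
        np.ones(n),                             # baseline
    ])
    c = np.array([1.0, -1.0, 0.0])
    return float(np.sqrt(c @ np.linalg.inv(X.T @ X) @ c))

alternating = np.arange(n_trials) % 2           # C I C I C I ...
blocked = (np.arange(n_trials) // 8) % 2        # 8 C's, then 8 I's, ...

print("alternating:", contrast_stdev(alternating))
print("blocked:    ", contrast_stdev(blocked))
```

In this toy setup the blocked schedule comes out with the smaller standard deviation for the contrast, i.e. the more efficient estimate, because rapid alternation gets smoothed away by the slow HRF while the blocked difference survives.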

It goes against my intuition that the ‘clumpier’ stimulus schedule would be more efficient, so I wanted to make sure my understanding of the efficiency values is correct, and to ask whether you have an explanation for why that might be the case.

Thank you!

It’s not clear to me whether the two runs used the same experiment parameters other than the number of iterations. As for clumpiness, check out this piece:

https://www.wired.com/2012/12/what-does-randomness-look-like/
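The gist of that article is that genuinely random sequences contain much longer same-symbol runs than intuition expects. A quick simulation in plain Python (trial counts here are arbitrary, just for illustration):

```python
import random

def longest_run(seq):
    """Length of the longest stretch of identical consecutive items."""
    best = cur = 1
    for prev, item in zip(seq, seq[1:]):
        cur = cur + 1 if item == prev else 1
        best = max(best, cur)
    return best

random.seed(0)
n_trials, n_sims = 100, 2000
runs = [longest_run([random.randint(0, 1) for _ in range(n_trials)])
        for _ in range(n_sims)]
print("average longest run in 100 random trials:", sum(runs) / n_sims)
```

The average longest run lands around 6-7 for 100 two-condition trials, and runs of 8 or more of the same type show up in roughly half of the sequences, so a stretch like your 12-in-a-row is not surprising when you search over thousands of randomly generated schedules.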