Our lab is setting up a few fMRI tasks, and we have a question about an X/Y Go/NoGo task in which participants see a series of X's and Y's, press a button every time the letter alternates, and withhold the response when it repeats. Our plan is a design with 120 NoGo trials out of 720 total (over 4 runs, TR = 2 s, SOA = 1400 ± 100 ms), with no null trials. As you might guess, we primarily want to model the NoGo-Go difference overall, as well as NoGo errors versus correct NoGo trials (and we plan to analyze the data with the TENT function).

We also want to guarantee that a Go always follows a Go, and that a Go always follows a NoGo. To do this, instead of randomizing 30 NoGo and 150 Go trials per run, we randomized 30 NoGo and 60 Go trials and then manually added a Go after each NoGo in the printed stimulus list. However, when we did this, the optimizer (following the HowTo, checking for the four most optimal lists with RSFgen) clumped many of the NoGo trials back to back, three to five in a row, not counting the Go trials we inserted between them. From a behavioral standpoint, we would rather have NoGo trials separated by at least two Go trials to maintain the prepotency to respond; this is how our lab has done it in the past with other modalities (EEG, MEG).

So the real question is: how do we randomize the trials so that the NoGo trials aren't clustered, but are spread out more evenly?
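In case it helps frame the question, here is one way we have thought about enforcing the spacing ourselves before handing the sequence to the optimizer. This is only a sketch, not anything from the AFNI HowTo: the function name `make_run` and the defaults (150 Go, 30 NoGo per run, minimum of 2 Go trials between NoGos) are our own assumptions. Naive shuffle-and-reject almost never succeeds at these trial counts, so this instead distributes the Go trials into the slots around the NoGos: each slot gets its required minimum, and the leftover Gos are spread across slots uniformly at random (a stars-and-bars construction).

```python
import random

def make_run(n_go=150, n_nogo=30, min_gap=2, seed=None):
    """Build one run as a list of 'go'/'nogo' labels in which every
    pair of NoGo trials is separated by at least `min_gap` Go trials,
    the run starts with a Go, and a Go follows the last NoGo.

    (Illustrative sketch; counts and constraints are assumptions.)"""
    rng = random.Random(seed)
    n_slots = n_nogo + 1                 # Go slots: before, between, after NoGos
    # minimum Gos per slot: 1 before the first NoGo, min_gap between
    # consecutive NoGos, 1 after the last NoGo
    mins = [1] + [min_gap] * (n_nogo - 1) + [1]
    extra = n_go - sum(mins)
    if extra < 0:
        raise ValueError("not enough Go trials for these constraints")
    # uniform random composition of `extra` into n_slots parts
    # (stars and bars: choose n_slots-1 bar positions)
    cuts = sorted(rng.sample(range(extra + n_slots - 1), n_slots - 1))
    bounds = [-1] + cuts + [extra + n_slots - 1]
    parts = [b - a - 1 for a, b in zip(bounds, bounds[1:])]
    trials = []
    for i in range(n_nogo):
        trials += ['go'] * (mins[i] + parts[i]) + ['nogo']
    trials += ['go'] * (mins[-1] + parts[-1])
    return trials
```

The idea would be to generate many sequences this way, convert each to stimulus timing, and then let the AFNI tools score them for design efficiency, rather than asking RSFgen to do the randomization itself. We are not sure whether that is the recommended route, or whether something like make_random_timing.py already supports this kind of ordering constraint directly, which is partly why we are asking.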
If anyone has any suggestions, we’d greatly appreciate it. Thanks!