I have a decision-making experiment where I present a stimulus (for 500 ms) and then subjects make a binary decision by pressing a button (1 or 2) within 2 s. Afterwards there are other stimuli that are irrelevant for my analysis, accounting for ~13 s, and then the next trial starts. I acquire data continuously throughout the experiment (TR = 2.2 s).
I am trying to analyze the fMRI data associated with two conditions: whether the decision is correct or incorrect. My design has to be built per subject, as the order of trials (correct vs. incorrect) depends on the subjects’ responses. However, I couldn’t find tutorials on how to perform this kind of analysis in AFNI. The afni_proc.py script I put together (below) gives very random results (significant areas mostly outside the brain). The stim files are text files with one row per TR, containing 1 if the stimulus leading to a correct/incorrect decision has its onset in that TR, and 0 otherwise.
Do you have any suggestions on how to model this analysis in AFNI? I have been banging my head against this problem for months!
One basic point is that the correct and incorrect files should contain stimulus onset times, rather than just 0/1 flags at the onset time points. The 0/1 files, used as regressors directly, will not model any BOLD response.
The 0/1 stim files can be converted to timing using make_stim_times.py, which afni_proc.py would do automatically if the -regress_use_stim_files option were not included.
So consider removing -regress_use_stim_files (and “-regress_stim_types file”) and trying again.
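To make the conversion concrete, here is a minimal Python sketch of what make_stim_times.py does for a single run and stimulus class (the TR of 2.2 s comes from your description; in practice, just let afni_proc.py run make_stim_times.py for you):

```python
# Sketch: convert a 0/1-per-TR stim file (one value per TR) into
# stimulus onset times in seconds, as make_stim_times.py would.
# TR = 2.2 s is taken from the experiment description above.

def stim_file_to_onsets(zeros_ones, tr=2.2):
    """Return onset times (s) for every TR marked with a 1."""
    return [i * tr for i, v in enumerate(zeros_ones) if v == 1]

# Example: events at TR indices 0 and 4 -> onsets at 0.0 s and 8.8 s
print(stim_file_to_onsets([1, 0, 0, 0, 1]))   # -> [0.0, 8.8]
```

The resulting onset times (one row of times per run) are what -regress_stim_times expects, and they are what gets convolved with the basis function.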
The second basic point concerns the other stimuli (the ones irrelevant for your analysis). If other stimuli are presented, they should be modeled. If those regressors were completely orthogonal to your regressors of interest, omitting them would not make much difference; but it sounds like that will not be the case here.
This brings up a third point of concern: if the stimuli of no interest are included, exactly how much non-stimulus time is there between stimulus conditions and across trials?
Thanks for your answer. I will remove the -regress_use_stim_files and -regress_stim_types parameters.
Each trial starts with a fixation cross for 500 ms, then there is a stimulus composed of two images presented one after the other for 500 ms each, and then the subject has 2 s to make a decision. After that come the stimuli that are “irrelevant” for this correct/incorrect analysis: the subject is asked to rate their confidence by pressing a button (1-4) within 2 s, then there is a random delay of 2.5-3 s, then the subject sees a feedback stimulus about that decision for 2 s, is asked to make a second decision within another 2 s, and finally there is another random delay of 2.5-3 s. It would be great to model these other stimuli as well, as they are related to the main decision-making process. How can I model multiple conditions at once? I have stim files indicating which trials are low vs. high confidence, and in which trials the second decision differs from the first.
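For reference, here is the within-trial timeline as I understand it, sketched in Python (the sub-event names are mine, and the random delays are shown at their 2.5 s minimum; the actual per-trial delays from the stimulus program would be used in practice):

```python
# Within-trial sub-events and their durations in seconds, per the
# description above.  Delays marked "random" use the 2.5 s minimum here.
events = [
    ("cross",      0.5),
    ("image_1",    0.5),
    ("image_2",    0.5),
    ("decision",   2.0),
    ("confidence", 2.0),
    ("delay_1",    2.5),   # random 2.5-3 s
    ("feedback",   2.0),
    ("decision_2", 2.0),
    ("delay_2",    2.5),   # random 2.5-3 s
]

# Accumulate durations to get each sub-event's offset from trial start.
t = 0.0
onsets = {}
for name, dur in events:
    onsets[name] = t
    t += dur

print(onsets["image_1"], onsets["feedback"])   # -> 0.5 8.0
```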
For each stimulus type, you can specify onset times in a text file. You could make one for each of the 2 image onsets, along with each of the “irrelevant” conditions. The duration for each event can be specified as part of the basis function (e.g., BLOCK(2), for a 2-second event), or can be specified attached to each event onset time (using dmUBLOCK, for example).
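For the attached-duration case, AFNI timing files use the “married” onset:duration format (e.g., with dmUBLOCK and -stim_times_AM1). A small sketch of formatting one run’s events that way, with made-up onset and duration values:

```python
# Sketch: format one run's events as AFNI "married" timing,
# i.e. onset:duration pairs on a single line (one line per run).
# The onsets/durations below are hypothetical illustration values.

def married_timing_line(onsets, durations):
    """Format one run's events as 'onset:duration' pairs."""
    return " ".join(f"{o:g}:{d:g}" for o, d in zip(onsets, durations))

print(married_timing_line([17.1, 43.3], [2, 2.7]))   # -> 17.1:2 43.3:2.7
```

With plain BLOCK(d) timing files, by contrast, only the onset times appear in the file and the duration d is fixed in the basis function.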
I think the two images can be treated as one video, so I’ll use the onset time of the first image with a 1-second block function. So, I have to generate stimulus timing files for each condition that I want to compare, correct? Basically, one file (correct) with the onsets of the first image when the subject makes a correct decision, and another file (incorrect) with the onsets when the subject is incorrect. Then I can do the same for the other two “irrelevant” factors, namely confidence (split into low/high confidence) and trust (split into “subject makes a different decision” vs. “subject makes the same decision” after feedback). Does this make sense? See code below.
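For example, this hypothetical sketch is how I plan to split the first-image onsets by accuracy (the onset and accuracy values below are made up; each output list would become one timing file):

```python
# Sketch: split per-trial onsets into correct/incorrect timing lists,
# based on each trial's response accuracy.  Values are hypothetical.

def split_by_accuracy(onsets, correct_flags):
    """Partition onset times by whether the trial's decision was correct."""
    correct   = [o for o, ok in zip(onsets, correct_flags) if ok]
    incorrect = [o for o, ok in zip(onsets, correct_flags) if not ok]
    return correct, incorrect

onsets = [0.0, 17.6, 35.2, 52.8]          # first-image onsets (s)
flags  = [True, False, True, True]        # trial-by-trial accuracy
correct_onsets, incorrect_onsets = split_by_accuracy(onsets, flags)
print(correct_onsets, incorrect_onsets)   # -> [0.0, 35.2, 52.8] [17.6]
```

The same split function would be reused for low/high confidence and same/different second decision.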
Yes, this looks like a good way to encode the event timing.
One small point is that as specified, you would not want to directly compare correct/incorrect vs. any of the other 4 conditions, since their stimulus durations are different. If you do wish to compare them, the basis functions can be altered to make that more appropriate.
I have rerun the script, but still, when I overlay the correct-incorrect T stat and apply a threshold of p = 0.001, I get most clusters outside the brain. A few things I noted:
I tried making censor parameters more stringent, but got the same results
I noticed a warning about high correlation between the correct and confident conditions, but didn’t worry about it since I am not directly comparing them (I am only interested in correct vs. incorrect and, separately, confident vs. non-confident). Is that OK?
When I use the T stats for the other comparisons (confident-nonconfident, or trust-distrust), the results look even more random
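Regarding the correlation warning in the second point above, one can check such a correlation directly, e.g., between two regressor columns extracted from the X-matrix that 3dDeconvolve writes out. A minimal pure-Python sketch, with short hypothetical 0/1 vectors standing in for real regressors:

```python
import math

# Sketch: Pearson correlation between two regressor columns, e.g. taken
# from the X-matrix written by 3dDeconvolve.  The short 0/1 vectors here
# are hypothetical stand-ins for real (convolved) regressors.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

correct   = [0, 1, 1, 0, 1, 0, 1, 0]   # hypothetical event indicators
confident = [0, 1, 1, 0, 1, 0, 0, 0]
print(round(pearson_r(correct, confident), 3))   # -> 0.775
```

Even without a direct contrast between the two conditions, a strongly correlated pair of regressors inflates the variance of both estimates, so the magnitude is worth knowing.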
Do you have any other suggestions on what to look at or check?
Getting clusters outside the brain does not necessarily mean the model is bad. It might mean the design allowed for too much stimulus-correlated motion, or even that there is ghosting in the EPI. Often, such patterns are okay once we understand where they are coming from. Sometimes they are not okay.
Look at the patterns on the edge of and outside of the brain. Where are you seeing false activation?
In most subjects, I see false activations at the edges of the image, but in some of them the false activations are also closer to the brain edge. See the two examples attached.
Davide