I was trying to look at the timecourses in a memory experiment at encoding. I ran the analysis using TENT(0, 15.4, 15) as the basis function (TR = 1.1 s) and found that the values for the first time point (t=0) were always way off (either extremely high or extremely low). This was true across brain regions and participants. I tried moving the first time point to t = -3.3 s, but the first time points were still off.
I then ran the same analysis but used TENTzero rather than TENT, and the timecourses look more reasonable.
I have uploaded some screenshots of the timecourses here, corresponding to analyses with TENT(0, 15.4, 15) and TENTzero(0, 15.4, 15).
And here’s the code I used to run the analysis with TENT: https://pastebin.com/U1kx1xCh
Do you happen to know what could have gone wrong in the TENT analysis (or maybe both)?
TENTzero(0, 15.4, 15) is meant to capture the hemodynamic response after stimulus onset, with the response constrained to be zero at the onset itself. In contrast, TENT(0, 15.4, 15) also estimates the hemodynamic response at the moment each trial is initiated. TENTzero(0, 15.4, 15) offers a more accurate characterization of the HDR provided that 1) there was no subject anticipation effect, and 2) the stimulus timing was coded correctly.
I don’t know the details of the experimental design, so it is hard for me to guess why there is a big dip/jump in the estimated hemodynamic response at the onset time with TENT(0, 15.4, 15). Such a dip/jump is most likely unrelated to the hemodynamic response you intended to capture. In that sense, the results from TENTzero(0, 15.4, 15) are more trustworthy.
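To make the contrast concrete, here is a minimal sketch of the two basis choices in a 3dDeconvolve call. The input and stimulus-timing file names are placeholders, not taken from the original analysis:

```shell
# TENT: estimates the response at every knot, including t=0
# (dataset and timing-file names are assumed for illustration).
3dDeconvolve -input run1+orig run2+orig                    \
    -polort A -num_stimts 1                                \
    -stim_times 1 stim_remember.1D 'TENT(0,15.4,15)'       \
    -stim_label 1 rem                                      \
    -iresp 1 irf_rem_tent -bucket stats_tent

# TENTzero: same 0-15.4 s window, but the response is pinned to zero
# at both ends, so the t=0 amplitude is not estimated at all.
3dDeconvolve -input run1+orig run2+orig                    \
    -polort A -num_stimts 1                                \
    -stim_times 1 stim_remember.1D 'TENTzero(0,15.4,15)'   \
    -stim_label 1 rem                                      \
    -iresp 1 irf_rem_tentzero -bucket stats_tentzero
```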
Thanks for the quick response. Could you elaborate on what kinds of stimulus timing errors could lead to such a difference between TENTzero and TENT?
As for the design, this is a levels-of-processing experiment. Participants make semantic, phonological, or orthographic judgments about words while being scanned. We use a mixed block/event-related design. Each trial lasts 4 seconds and is followed by a fixation of 0.4 to 4.8 seconds. Participants make the same kind of judgment in “blocks” (e.g., 10 phonological trials in a row), and the order of these blocks is counterbalanced. We categorized the trials based on later memory performance: subsequent hits (remember), subsequent hits (know), and subsequent misses. The timecourses shown are for subsequent hits (remember).
Could you elaborate on what kinds of stimulus timing errors could lead to such a difference between TENTzero and TENT?
For example, the correct stimulus onset time might be 23.5 seconds, but you fed 3dDeconvolve 26 seconds instead.
3dDeconvolve has an option called -fitts, which outputs the modeled signal. You can use the AFNI GUI to plot the original data against the modeled signal at each voxel and see whether anything looks abnormal at the stimulus onset time points.
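As a hedged sketch (placeholder file names again), the rerun would just add -fitts to the existing command so the full fitted model is written out alongside the stats:

```shell
# Rerun the same model with -fitts; fitts_subj will contain the
# model's predicted time series at every voxel.
3dDeconvolve -input run1+orig run2+orig                    \
    -polort A -num_stimts 1                                \
    -stim_times 1 stim_remember.1D 'TENT(0,15.4,15)'       \
    -stim_label 1 rem                                      \
    -fitts fitts_subj -bucket stats_subj
```

Then, in the AFNI GUI, graph the input dataset and compare it against fitts_subj at the same voxels, paying attention to the stimulus onset times.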
Hi, I would like to revive this thread and provide an update.
As mentioned previously, using TENT (0, 15.4, 15) led to first time points that were way off (across the entire brain and across participants).
However, TENTzero(0, 15.4, 15) did not have this issue.
I saw a previous thread on TENT and thought that maybe reducing the number of time points in the model would help. https://afni.nimh.nih.gov/afni/community/board/read.php?1,83030,83065#msg-83065
I re-ran the analysis using TENT(0, 19.8, 10) and TENT(-2.2, 17.6, 10) (basically placing one knot every other TR), and the hemodynamic response shapes look fairly reasonable (at least the first time points are no longer showing values such as -17).
Is the degrees of freedom in the model the issue here?
If that is the case, how do I check the degrees of freedom?
Also, why doesn’t the analysis using TENTzero suffer from a similar issue? (I’ve tried TENTzero with 19 time points, such as 0, 19.8, 19, but the first time point still looks normal.)
Now that I am modeling every other TR, does this influence the interpretability of the percent signal change?
Is the degrees of freedom in the model the issue here?
I would not worry too much about the degrees of freedom unless you get a warning from 3dDeconvolve about possible high correlations among regressors.
If that is the case, how do I check the degrees of freedom?
You can check the degrees of freedom in the header of the output dataset with 3dinfo.
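Two quick ways to do this, sketched with assumed dataset and matrix names (stats_subj and X.xmat.1D are placeholders; X.xmat.1D is whatever you passed to -x1D):

```shell
# The t-stat sub-bricks in the bucket store the DOF as a stat parameter;
# 3dinfo -verb prints it for each sub-brick.
3dinfo -verb stats_subj+orig | grep -i statpar

# Or derive it from the design matrix: DOF = (number of time points)
# minus (number of regressor columns).
1d_tool.py -infile X.xmat.1D -show_rows_cols
```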
Also, why doesn’t the analysis using TENTzero suffer from a similar issue? (I’ve tried TENTzero with 19 time points, such as 0, 19.8, 19, but the first time point still looks normal.)
TENTzero assumes that there is no response at the moment the stimulus starts (and at the end of the window), so the amplitudes of the first and last tents are fixed at zero rather than estimated.
Now that I am modeling every other TR, does this influence the interpretability of the percent signal change?
No, not really. With fewer tents you are using fewer regressors, which lowers the likelihood of multicollinearity.
Thank you so much for the prompt response again. I think multicollinearity is indeed the culprit here.
It seems that the first time point (t = 0) and the second time point (t = 1.1) have extremely high correlations (around .96) for the three conditions I model.
Reducing the number of time points by half in TENT either got rid of the warning or dropped the correlation to a much lower level (around .44).
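For reference, those pairwise regressor correlations can be inspected directly from the design matrix that 3dDeconvolve writes out. A minimal sketch, assuming the matrix file is named X.xmat.1D:

```shell
# List any regressor pairs whose correlation exceeds the warning
# threshold (the same check 3dDeconvolve warns about).
1d_tool.py -infile X.xmat.1D -show_cormat_warnings
```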
In this case, rounding is basically irrelevant. Your maximum offsets are never even 3.5% into a TR, so truncation is what will happen.

It is good to verify whether this is necessary, though. If the event times are modestly well distributed throughout the TRs, then no rounding or truncating is advisable. It is only because they are close to the TR times, but not actually on them, that it is of particular importance.

It is okay to use rounding, just to be safer. But optimally, only apply this when it is needed.
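If you do decide to round, a generic one-liner can snap each onset to the nearest multiple of the TR. This is an illustrative sketch, not an AFNI tool (the timing file name is assumed; AFNI's timing_tool.py also offers timing manipulation):

```shell
# Round every onset in stim_times.txt to the nearest multiple of
# TR = 1.1 s, preserving one line of onsets per run.
awk -v tr=1.1 '{ for (i = 1; i <= NF; i++) printf "%s%.1f", (i > 1 ? " " : ""), int($i/tr + 0.5)*tr; print "" }' stim_times.txt > stim_times_rounded.txt
```

For example, an onset of 23.5 s would become 23.1 s (21 TRs) and 26.1 s would become 26.4 s (24 TRs).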
rick