Hello afni experts,
I was running the @ss_review_driver script for quality control, and I have a question about how to read the sum_ideal.1D plot.
From the afni_proc.py documentation I got the description that it shows the sum of all non-baseline regressors, but I still don't really understand how to interpret those graphs and, consequently, how to detect mistakes in the stimulus timing.
I attached a plot that I got. (This is from a fast task-based fMRI: stimulus display for 3 s with a 1 s inter-stimulus interval, 3 runs of 84 TRs each, and 5 regressors of interest: 2 modeled with TENTzero(0,15,7) and 3 modeled with GAM.)
-What does a number like 1.75 for a given TR mean?
-And should the value go down to zero in between runs? In my plot it does not at the end. Does that mean there is something wrong?
Thank you very much in advance
The sum_ideal.1D time series is literally the sum of non-baseline regressors from the regression matrix. It can help detect timing mistakes, as well as give a sense of how “busy” the stimulus presentations are (i.e. how much overlap). But it is highly dependent on the model (basis functions, how they convolve, TENTs, modulators…).
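To make "sum of non-baseline regressors" concrete, here is a toy sketch (plain NumPy, not AFNI code; the matrix layout and column roles are made up for illustration). The idea is simply: drop the baseline columns of the regression matrix and add what remains, TR by TR:

```python
import numpy as np

# Toy regression matrix: rows = TRs, columns = regressors.
# Column 0 is a baseline (constant) term; columns 1-2 are task regressors.
n_trs = 10
baseline = np.ones(n_trs)
task_a = np.array([0, 0, 1, 1, 0, 0, 0, 0, 0, 0], float)
task_b = np.array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0], float)
X = np.column_stack([baseline, task_a, task_b])

# sum_ideal: sum only the non-baseline columns at each TR
sum_ideal = X[:, 1:].sum(axis=1)
print(sum_ideal)  # -> [0. 0. 1. 2. 1. 0. 0. 0. 0. 0.]
```

Where the two task responses overlap (TR 3 here), the sum exceeds the height of any single regressor; that overlap is exactly what the y-axis of the plot reports.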
Since you are modeling with TENT and GAM, all individual regressors will have a height of approximately 1. Therefore, the magnitude of the y-axis shows the overlap of BOLD response events.
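To see why overlap pushes the sum above 1, here is a small sketch using AFNI's default GAM shape, (t/(b*c))^b * exp(b - t/c) with b=8.6, c=0.547, which peaks at exactly 1 at t = b*c (about 4.7 s). The 2 s event spacing below is arbitrary, chosen only to force partial overlap:

```python
import numpy as np

def gam(t, b=8.6, c=0.547):
    """AFNI-style GAM response, normalized so its peak height is 1."""
    t = np.asarray(t, float)
    h = np.zeros_like(t)
    pos = t > 0
    h[pos] = (t[pos] / (b * c)) ** b * np.exp(b - t[pos] / c)
    return h

t = np.arange(0, 40, 0.1)          # time in seconds, fine grid
single = gam(t)                     # one event: peak height ~1
pair = gam(t) + gam(t - 2.0)        # two events 2 s apart: responses overlap

# The summed peak lands between 1 and 2: more than one unit-height
# response is "active" at once, but the overlap is only partial.
print(single.max(), pair.max())
```

So a sum_ideal value like 1.75 at a given TR just means the unit-height responses active at that moment add up to about 1.75 units of expected BOLD signal.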
With 3s displays and 1s ISIs, hopefully there is more baseline time. So TR=2.5s here? That is what your TENT model suggests.
Yes, the sum will likely go to 0 at the start of each run, since the BOLD responses do not cross run breaks, and since response models usually start at 0 (using TENT, it is possible they could start at 1, but using TENTzero, it is not).
Things that look good to me here:
There is a lot of variability in the sum. That helps. If the sum went up and flat-lined, a voxel responding to all conditions could look very similar to a voxel from a can of sardines (the can cannot go into a scanner).
It does not go to zero for long periods. If the sum went to zero in a surprising way, or for a long time, it would suggest timing file problems, since you would likely not leave subjects idle very long in the scanner.
It does not go way up anywhere. Again, mistakes with timing could put events at overlapping times, making the sum ideal much higher than typical.
The sum seems to hover around 1.5 or 1.75. If events were too closely packed together, the sum would be larger.
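The checks above can be scripted if you want a quick numeric summary instead of eyeballing the plot. A sketch (the thresholds 3.0 and 20 TRs are arbitrary examples, not AFNI defaults; for a real dataset you would load the file with something like np.loadtxt("sum_ideal.1D")):

```python
import numpy as np

def summarize_sum_ideal(s, high=3.0, max_zero_run=20):
    """Flag suspicious features in a sum-of-regressors time series."""
    s = np.asarray(s, float)
    # longest consecutive stretch of (near-)zero values, in TRs
    longest = run = 0
    for z in (s < 1e-6):
        run = run + 1 if z else 0
        longest = max(longest, run)
    return {
        "min": float(s.min()),
        "max": float(s.max()),
        "longest_zero_run": longest,
        "too_high": bool(s.max() > high),          # events piled up?
        "long_idle": bool(longest > max_zero_run),  # long gap with no modeled response?
    }

# Demo on a made-up series; a real one would come from sum_ideal.1D.
demo = np.array([0, 0, 1.2, 1.7, 1.5, 0.4, 0, 0, 0.9, 1.8])
report = summarize_sum_ideal(demo)
print(report)
```

A sum that peaks far above the typical overlap level, or that sits at zero for many TRs mid-run, would be worth tracing back to the timing files.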
It is hard to be very concrete with what to look for, since that depends so highly on the model. With your parameters, nothing jumps out as problematic.