As suggested by my AFNI readings, I have been using the @ss_review_driver to check the accuracy of the pre-processing outputs. I have gone through these outputs and the related readings, and I have a few questions regarding what I should be specifically checking at each stage. I am attempting to ensure my outputs are correct and accurate, so I appreciate any clarifications and further reading suggestions.
SS_review_basic:
GCOR: It has been suggested that larger values indicate more global coherence, which is likely artifactual. Is there a threshold or a value I should look out for?
Average outlier fraction: I have read that, “some outliers are expected, but if a large fraction of voxels in a volume are outliers, the data should be investigated more closely”. Is there a threshold or a certain value I should flag? Relatedly, is there a threshold I should use for the number of outliers detected if I am using 0.1 as the cut-off value?
Degrees of freedom: How is this number generated? I have 16 regressors included in my 3dDeconvolve, and I also included motion parameters in my regression model.
Non-baseline Regressors in X-matrix
X.stim.xmat.1D: In this pop-up, should I just be checking to make sure all my regressors of interest are present? What does the Y-axis represent?
Sum_ideal.1D: Are these curves representing the sum of all my regressors of interest across the entire scan? It has been suggested that this output may help find mistakes in the stimulus timing files. Is there something specific I should be keeping an eye out for?
As well, I read that the @ss_review_driver is the minimal checking that should be done on each participant. Are there other “checks” that should be completed?
There are no particular guidelines for GCOR, though indeed, higher values are usually not good. I do not know that anyone has applied a GCOR threshold for the purposes of dropping subjects, if that is what you are thinking about.
For a subject with a high GCOR value, consider using @radial_correlate for further investigation. That check can even be added to the proc script using -radial_correlate yes.
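To make the GCOR number less mysterious: it is the average correlation over all pairs of in-mask voxel time series. A minimal numpy sketch of that quantity (the data array and its shape here are illustrative assumptions, not AFNI's actual implementation, though AFNI's @compute_gcor uses the same averaging trick to avoid forming the full correlation matrix):

```python
import numpy as np

def gcor(data):
    """Global correlation: average of all voxel-pair correlations.
    data: (n_voxels, n_timepoints) array of in-mask time series.
    Uses the identity GCOR = ||mean of unit-length, demeaned series||^2,
    so the full n_voxels x n_voxels correlation matrix is never built."""
    d = data - data.mean(axis=1, keepdims=True)     # demean each voxel
    u = d / np.linalg.norm(d, axis=1, keepdims=True)  # unit length rows
    g = u.mean(axis=0)                              # average unit series
    return float(g @ g)                             # squared norm = GCOR

# sanity check: perfectly correlated voxels give GCOR = 1
rng = np.random.default_rng(0)
base = rng.standard_normal(100)
data = np.vstack([2.0 * base + 1.0, 0.5 * base - 3.0])
print(round(gcor(data), 6))  # -> 1.0
```

Uncorrelated data yields a small positive value (the self-correlation terms keep it above zero), while widespread shared signal pushes it toward 1.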
Using 0.1 as an outlier threshold means censoring any time point where more than 10 percent of the brain is considered an outlier. You might even consider using 0.05. In any case, that value is applied as a fraction of the masked brain.
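The censoring step above is simple thresholding. A small sketch, assuming a made-up per-TR outlier-fraction series like the ones afni_proc.py writes out (the values below are invented for illustration; the threshold plays the role of the value passed to -regress_censor_outliers):

```python
import numpy as np

# hypothetical per-TR outlier fractions: the fraction of mask voxels
# flagged as outliers at each time point
out_frac = np.array([0.01, 0.02, 0.15, 0.03, 0.30, 0.02])

threshold = 0.1                                 # the 0.1 cut-off value
censor = (out_frac < threshold).astype(int)     # 1 = keep, 0 = censor
print(censor.tolist())    # -> [1, 1, 0, 1, 0, 1]
print(int(censor.sum()))  # TRs kept -> 4
```

So with 0.1 as the cut-off, there is no separate "number of outliers" threshold to pick: any TR whose masked-brain outlier fraction reaches 0.1 is simply censored.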
The original degrees of freedom available is the number of time points in the data (after any censoring), or “TRs total”. The “degrees of freedom used” is basically the number of regressors in the x-matrix. So the “degrees of freedom left” is their difference.
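The bookkeeping is just subtraction. A sketch with invented numbers (300 TRs, 12 censored, and your 16 task regressors plus 6 motion parameters; baseline/polort terms would also count toward regressors in a real x-matrix):

```python
# degrees-of-freedom arithmetic as reported by ss_review_basic
# (all numbers here are made up for illustration)
trs_total    = 300        # time points acquired across runs
trs_censored = 12         # removed by motion/outlier censoring
n_regressors = 16 + 6     # 16 task regressors + 6 motion parameters
                          # (plus any polort baseline terms per run)

dof_available = trs_total - trs_censored   # "degrees of freedom" start
dof_used      = n_regressors               # "degrees of freedom used"
dof_left      = dof_available - dof_used   # "degrees of freedom left"
print(dof_left)   # -> 266
```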
For X.stim.xmat.1D, review it to see whether it seems correct. There are any number of mistakes one could make when generating timing, and the software will only look for issues that we decide to make it look for. A researcher's eyeballs are important.
Yes, sum_ideal.1D has the sum of the X.stim.xmat.1D regressors. Again, there is no rule here, particularly because the shape varies wildly across studies. Look for anything strange. Does it hover around one number? Does it go to zero during the run breaks? Does it shoot up high at just one or two points? Evaluating these takes practice.
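Since sum_ideal.1D is just a column sum, you can reproduce it yourself, which is a quick cross-check on the timing files. A sketch with a toy matrix (the values are invented; a real X.stim.xmat.1D would be loaded from disk, e.g. with np.loadtxt after stripping comment lines):

```python
import numpy as np

# toy X.stim.xmat.1D: rows are time points, columns are the
# regressors of interest (values invented for illustration)
x_stim = np.array([
    [0.0, 0.0],
    [1.0, 0.0],
    [0.5, 1.0],
    [0.0, 0.9],
])

sum_ideal = x_stim.sum(axis=1)   # per-TR sum, as in sum_ideal.1D
print(sum_ideal.tolist())        # -> [0.0, 1.0, 1.5, 0.9]
```

A TR where this sum is unexpectedly zero during a task block, or stacks far above unit height, is the kind of timing-file mistake worth chasing down.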
There are many checks that can be performed, but computers can only be programmed to do particular ones. If we could itemize all important problems, we would just write software to look for them. But new issues keep coming up. Just practice looking, that is the most important thing. Also, @radial_correlate can help find particular issues within the data itself.
The x-axis is time points, which should match the EPI data (though some matrices, like X.xmat.1D, have censored time points removed, which is why X.nocensor.xmat.1D is there). The y-axis is determined by the regressors. For X.stim.xmat.1D, it contains the actual regressors of interest, most of which are probably close to unit height (though motion parameters and modulators do not need to be).
X.sum.xmat.1D is just the sum of regressors in X.stim.xmat.1D.
rick