I have a question regarding the TSNR image output by the afni_proc.py program. The explanation provided is the following:
By default, a temporal signal to noise (TSNR) dataset is created at
the end of the regress block. The “signal” is the all_runs dataset
(input to 3dDeconvolve), and the “noise” is the errts dataset (the
residuals from 3dDeconvolve). TSNR is computed (per voxel) as the
mean signal divided by the standard deviation of the noise.
TSNR = average(signal) / stdev(noise)
I am a little confused, since the errts dataset is, as far as I know, the dataset we use after all nuisance signals have been regressed out of the data, so why is it referred to as the “noise” dataset? From my understanding of the above, TSNR is then mean(all_runs)./std(errts). Is that correct?
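For reference, the per-voxel computation described in the quoted documentation can be sketched in Python/NumPy. This is purely illustrative toy data, not AFNI code; the array names mirror the dataset names from the quote:

```python
import numpy as np

# Toy 4D data with shape (x, y, z, time). all_runs plays the role of
# the "signal" (input to 3dDeconvolve), errts the residual "noise".
rng = np.random.default_rng(0)
all_runs = 100 + rng.standard_normal((4, 4, 4, 50))
errts = rng.standard_normal((4, 4, 4, 50))

# TSNR per voxel = temporal mean of the signal / temporal stdev of the noise
tsnr = all_runs.mean(axis=-1) / errts.std(axis=-1)
print(tsnr.shape)  # one TSNR value per voxel
```

With a signal mean near 100 and residual stdev near 1, the toy TSNR values land around 100, the same order of magnitude discussed later in this thread.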
Second, what would be considered a good TSNR?
Thanks a lot,
The errts dataset is the noise from a task-based first-level regression, and nothing special is changed for resting-state data (I guess that is what you have). But for rest it is not clear what else you would use as the noise term; the non-drift regressors of no interest? In any case, this method should still provide a reasonable idea of data quality, which is the purpose.
Thank you Rick. That makes sense.
What would be considered a reasonably good TSNR?
“Reasonably good TSNR” depends on the scan parameters.
For 3 Tesla BOLD EPI data with 2-3 mm voxels, TR about 2 s, flip angle 50 degrees or more, and echo time about 30 ms, a TSNR of about 200 is common. But the real thing to check for is whether some subjects have TSNR very different from the others. In our recent Shenzhen bootcamp, two guys from Chongqing ran afni_proc.py overnight on 100+ subjects (after they learned how on Tuesday) and got TSNR in the 180-210 range, except for one subject with TSNR = 90. There was a serious problem with that data, and they needed to throw it out of their study.
Is the value of 200 from Robert’s message the mean across the whole brain or just from some ROIs?
Our scanner parameters are roughly the same (3 Tesla, voxel size 3x3 mm, TR 2 s, TE 30 ms, flip angle 90 deg).
Our TSNR results are:
min = 0, max = 800-900, mean = 70-90, stdev = 110-180 (across the whole brain, runs, and subjects)
The mean seems low. Do you think that is a problem?
Thanks a lot,
That is the average over the masked brain.
Are your subjects prone to motion? That has a big effect on TSNR. What is the average for a couple of the low-motion subjects? And what is the “average censored motion” for them?
Thanks for your reply, Rick.
We used TBV to keep track of motion in real time in our experiment; I remember that in general we had very little motion. We already excluded a few subjects that seemed to move too much.
I did a quick comparison of TSNR between a few “good” and “not so good” subjects. I saw a difference: the lowest was ~65, the highest ~89 (still kind of low compared with what Robert said).
In afni_proc, we used the default differential-motion censor limit of 0.3, and in most runs we were able to keep 99-100% of the time points.
Could the low TSNR be because our task did not generate high functional contrast?
No, it is more likely there is another reason for the low TSNR. It might be good to run some phantoms on the scanner and measure the TSNR from that. You can also compute it on the unprocessed EPI data, or even on the output of volreg (see -volreg_compute_tsnr).
It looks like the low average TSNR is due to zero voxels. The ‘full_mask’ dataset contains ‘0’ and ‘1’ voxels, and TSNR = full_mask*signalMean/noiseStdev. So when I used 3dBrickStat without the -non-zero option, it computed the mean across all voxels, zero and non-zero alike.
With zero voxels excluded, my TSNR mean is much higher, in the range 350-400 (min is ~20, max is ~900).
Am I right?
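The effect described here is easy to demonstrate: because voxels outside full_mask are exactly zero, they drag the whole-volume mean far below the within-mask mean. A toy illustration (made-up numbers, not actual AFNI output):

```python
import numpy as np

rng = np.random.default_rng(1)
tsnr_in_brain = rng.uniform(200, 600, size=2000)  # voxels inside full_mask
zeros_outside = np.zeros(6000)                    # masked-out voxels are 0

whole_volume = np.concatenate([tsnr_in_brain, zeros_outside])

whole_mean = whole_volume.mean()                      # dragged down by the zeros
masked_mean = whole_volume[whole_volume != 0].mean()  # like using -non-zero
print(whole_mean, masked_mean)
```

With the brain occupying a quarter of the toy volume, the whole-volume mean is exactly a quarter of the within-mask mean, which matches the roughly 4x jump (70-90 to 350-400) reported above.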
Are you using afni_proc.py for this? It should report the average masked TSNR at the end of the analysis. If you don’t have that, look near the bottom of the out.ss_review text file in the results directory.
Thank you, Rick.
So I think I am good. From out.ss_review, my average TSNR values are all >200, except for one subject (193).
By the way, what is the difference between the value reported in out.ss_review and the one computed by 3dBrickStat with -non-zero?
I would have to see the actual commands, certainly the 3dBrickStat one, since it does not come from afni_proc.py. Perhaps it goes outside the brain.
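One plausible source of the discrepancy, along the lines Rick suggests: a -non-zero mean keeps every nonzero voxel, including low-TSNR nonzero voxels outside the brain mask (scalp, ghosting), while the out.ss_review value is restricted to the mask. A toy sketch with made-up numbers, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
tsnr = np.zeros(8000)
mask = np.zeros(8000, dtype=bool)
mask[:2000] = True

tsnr[mask] = rng.uniform(200, 600, 2000)   # brain voxels
tsnr[2000:2500] = rng.uniform(1, 50, 500)  # nonzero voxels outside the mask

masked_mean = tsnr[mask].mean()            # mask-restricted average
nonzero_mean = tsnr[tsnr != 0].mean()      # every nonzero voxel counts
print(masked_mean, nonzero_mean)
```

In this sketch the nonzero mean sits below the masked mean because the out-of-mask voxels are nonzero but low; whether that explains the real discrepancy would depend on the actual commands, as Rick says.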