Smallest FDR q error when running 3dttest++

Hello everyone!

I keep getting the below error when running 3dttest++:

Warning: Smallest FDR q [1 task_Tstat] = 0.5656 ==> few true single voxel detections

Is there a specific reason I would be getting this error?



That’s a warning, not an error. It simply means that the t-statistic values within the mask are mostly too small (or, equivalently, their p-values too large), so that no voxel will survive FDR correction (at a q-value of 0.05, for example).
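To see why uniformly weak statistics push the smallest q above any useful threshold, here is a minimal sketch of the Benjamini-Hochberg FDR adjustment (this is an illustration, not AFNI's actual code):

```python
# Minimal sketch of Benjamini-Hochberg FDR adjustment (not AFNI's code),
# illustrating why uniformly weak p-values give a large "smallest q".
def bh_qvalues(pvals):
    """Return BH-adjusted p-values (q-values) for a list of p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices by ascending p
    q = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):          # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        q[i] = running_min
    return q

# Weak statistics everywhere: the smallest q ends up far above 0.05,
# which is the situation the 3dttest++ warning is describing.
weak_p = [0.02 + 0.009 * k for k in range(100)]   # p-values from 0.02 to ~0.91
print(min(bh_qvalues(weak_p)))                    # smallest q is about 0.91
```

With even a modest set of genuinely small p-values mixed in, the smallest q drops below 0.05 and the warning would not appear.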

Everyone please note!

A WARNING message is not an ERROR – it is something that you should be aware of, and evaluate in the context of your particular data analysis.

Hi Gang and Bob –

I should have phrased my question a little better.

I am getting the FDR q warning with my data when running 3dttest++, and I haven’t received this warning before. Because of this, I am assuming there is an error somewhere in my pipeline. I have checked my stim times, and removed subjects who I believe inaccurately performed the task of interest. I’ve also looked at head motion and I don’t believe that is an issue.

What steps could I take to diagnose the error in my analysis?

This message was added to the program only a few months ago, so if you updated your AFNI recently, you may now see it in situations where it did not appear before, simply because the check did not exist in older versions.

Do you see results that make sense? The warning says that at any individual voxel, the statistics are weak. That does not mean there are no true multi-voxel (cluster) detections: you can have valid clusters when many weakly significant voxels sit right next to one another. That is what the various cluster-thresholding methods are for. With 3dttest++, you can use the new -Clustsim option to have the program do the cluster-threshold calculations for you.
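For reference, a hedged sketch of how that might look on the command line (the input, mask, and prefix names here are placeholders, not taken from this thread):

```shell
# Hypothetical command sketch; dataset/mask/prefix names are placeholders.
# -Clustsim runs the simulation-based cluster-size calculations and stores
# the resulting cluster-extent thresholds with the output dataset.
3dttest++ -setA   subj*.results/stats.*+tlrc.HEAD \
          -mask   group_mask+tlrc                 \
          -prefix ttest_task                      \
          -Clustsim
```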

Hi Bob –

I have completed cluster thresholding, and do not get any significant task-based activity for any of my tasks, which is why I thought I was getting the FDR q warning. I am thinking there is something wrong with my data analysis, and that is why I’m not getting any significant activity anywhere. So far, I have checked the stim time files and head motion, and there don’t seem to be any issues there. Additionally, I’ve done an outlier analysis to see if any subjects had weird data, and there are no outliers.

Would it make sense to add more blur to the analysis? I’m currently at 4 mm, but I can increase that if you think it could help.


Do you see individual subject “activations” that make sense? In a lot of subjects? In overlapping places?

If the answers are “yes, yes, yes” (or even “maybe, maybe, maybe”), then a little more blur – 6 or 8 mm – might help. An additional possibility is to re-run the individual subject analyses with the addition of ANATICOR de-noising and nonlinear anatomical registration. The latter will line up the subjects’ anatomies somewhat better, and can bring out group activations that are otherwise not found – since they wouldn’t have lined up across subjects.
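If you use afni_proc.py for the single-subject analyses, these suggestions correspond (as far as I know) to options like the following; this is only a fragment, with the rest of the command omitted:

```shell
# Fragment only; '...' stands for the rest of the afni_proc.py command.
afni_proc.py ...          \
    -blur_size 6          \
    -tlrc_NL_warp         \
    -regress_anaticor
# -blur_size 6      : raise the blur from 4 mm to 6 mm
# -tlrc_NL_warp     : nonlinear registration of the anatomy to the template
# -regress_anaticor : ANATICOR de-noising in the regression model
```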