Issue with stats in afni_proc.py results


I am running through an analysis with afni_proc.py and keep getting a similar problem with all of the subjects that I run. The command is the following:

afni_proc.py -subj_id subj -dsets verbs.nii -copy_anat T1_ss_orig_vol.nii
    -anat_has_skull no -blocks despike tshift align tlrc volreg blur mask
    scale regress -volreg_align_to MIN_OUTLIER -volreg_align_e2a
    -volreg_tlrc_warp -align_opts_aea -cost lpc+ZZ -tlrc_base
    MNI152_T1_2009c+tlrc -regress_stim_times …/noise_times.1D
    …/verbs_times.1D -regress_stim_labels verbs noise -regress_basis
    'BLOCK(32,1)' -regress_apply_mot_types demean deriv -regress_reml_exec
    -regress_censor_motion 0.3 -regress_censor_outliers 0.1
    -regress_anaticor_fast -regress_opts_3dD -gltsym 'SYM: +verbs -noise'
    -glt_label 1 verbs_minus_noise -regress_est_blur_epits

And the text output is:

subject ID : subj
TRs removed (per run) : 0
num stim classes provided : 2
final anatomy dset : anat_final.subj+tlrc.HEAD
final stats dset : stats.subj_REML+tlrc.HEAD
final voxel resolution : 2.500000 2.500000 2.500000

motion limit : 0.3
num TRs above mot limit : 1
average motion (per TR) : 0.041809
average censored motion : 0.0391933
max motion displacement : 0.705158
max censored displacement : 0.465089
outlier limit : 0.1
average outlier frac (TR) : 0.00625512
num TRs above out limit : 2

num runs found : 1
num TRs per run : 211
num TRs per run (applied) : 208
num TRs per run (censored): 3
fraction censored per run : 0.014218
TRs total (uncensored) : 211
TRs total : 208
degrees of freedom used : 18
degrees of freedom left : 190

TRs censored : 3
censor fraction : 0.014218
num regs of interest : 2
num TRs per stim (orig) : 110 138
num TRs censored per stim : 0 3
fraction TRs censored : 0.000 0.022
ave mot per sresp (orig) : 0.034695 0.045865
ave mot per sresp (cens) : 0.034695 0.041945

TSNR average : 113.985
global correlation (GCOR) : 0.0921368
anat/EPI mask Dice coef : 0.916151
maximum F-stat (masked) : 18.4122
blur estimates (ACF) : 0.631012 3.635 10.2446
blur estimates (FWHM) : 0 0 0

All of my results more or less look like the attached image, with the most significant voxels from the deconvolution falling outside of the brain. I have checked everything I can think of and cannot find where this error is coming from. Any ideas as to why this might be happening? My colleagues have analyzed the same dataset with SPM (with similar parameters, as far as I can tell) and their results look as expected.

Thank you very much for any assistance,

Could be a number of things. I’ll say that it’s not THAT unusual to have activation outside of the brain. SPM tends to mask this by default. They also used to mask the cerebellum, not sure if that’s ever changed. When you adjust the threshold slider to a suitable p-value, do you still see activation in areas that you would expect? Have you tried to plot the activity of a particular voxel in activated areas against the ideal or design?

It looks like your afni_proc.py command has options that are usually recommended for resting-state processing (e.g. despike, motion derivatives), and less so for processing of task-based designs. You might take a look at example 6 in the afni_proc.py help as a guideline.

Perhaps post some images of your thresholded maps vs. the SPM ones? A copy of your X.jpg would also be useful.


You don’t say what that image is, but it looks like one of beta weights, not a significance volume (t-/F-stat). But what is actually wrong with this? Note that afni_proc.py does not mask by default, since we would rather see what is happening outside the brain than simply hide it.

If you display the Full F-stat and apply some threshold, how does that look?

  • rick

Hi, Brady-

It is hard to view a dset and understand it when no colorbar info is provided.

Looking at Figs. 8, 12 and 13, your data looks pretty normal:
… and I see Rick has replied already with more comments about this-- indeed, are you looking at the stat or the beta? We generally recommend looking at the beta (=coefficient) and thresholding with the stat when you want to.

In your data, regions outside the brain may have large betas, but they probably also have laaaaarge confidence intervals and smaller stat values. For more on the benefits of viewing results this way, please check out Gang’s compelling commentary here:


Hi all,

I am so sorry for neglecting to include which image this is! I thought I had noted it but seem to have overlooked it. This is a result from a GLT of a verb generation condition minus a noise condition (beta as overlay and t-stat as threshold), with blue (low) to red (high) coloring. I realize I sounded like a bit of a novice in the post, and I apologize; I have experience with this type of analysis, but it has been about 6 years, so I am trying to catch back up. In my old scripts I never used afni_proc.py, but I saw the option for automatic TR rejection for motion and outliers. Is there a way to implement this without afni_proc.py?

As for the original issue, I am now getting what I expect after all of your wonderful advice. I think the despiking and additional motion parameters were causing truncation of our effect and overfitting of irrelevant voxels, respectively. When I reran without these options and looked at the full F-stat for the contrast, everything looks great. It does still seem like the largest betas are outside of the brain, but this makes complete sense now with Paul’s explanation and the respective paper. Everything within the brain is right in the middle of the color bar (close to 0) compared to the betas outside. Additionally, I was originally told that there was no masking in the SPM script that was used but after digging into it myself, I see that the default was never changed so it indeed was applied. I do like the idea of plotting a voxel’s activity against the design. Is there an interface for this type of summary figure?

Thank you again for all of your help,

Hi, Brady-

For any FMRI processing, using afni_proc.py will reeeeaaaalllly help you: you can do all the detailed specifying of processing steps and options that you want, and afni_proc.py will take care of lots of annoying details for you (e.g., concatenating transforms, building 3dDeconvolve commands, etc.). Processing in this way is very understandable, compact, sharable, and tailorable-- you have a compact command distilling your processing, and this generates your several hundred++ line processing script for you.

To answer your question of processing without afni_proc.py: firstly, again, I strongly recommend you consider processing with it. But note that afni_proc.py generates a script for you, which then gets executed to do the processing. If you want to emulate some features that afni_proc.py provides, you can use the generated script as an educational tool-- note that Rick has made the generated script commented (!) and organized into block sections, so it is very readable.

As to plotting “a voxel’s activity against the design” -- I’m not sure what that means. You don’t want to plot the betas? Note also that when you plot the betas, if you have used the “scale” block in afni_proc.py (or just scaled your data with AFNI), then you can set the colorbar scale to something meaningful: very strong betas in task experiments are about 2-3% BOLD percent signal change. You can threshold with the stat value as you wish (p=0.001, for example; AFNI can do the conversion between stat and p-value internally for you, taking DOFs etc. into account).
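To make the “percent signal change” point concrete: scaling divides each voxel’s time series by its mean and multiplies by 100 (AFNI caps the result at 200). Here is a toy stand-in using only standard shell tools -- the numbers and file names are made up, and this is an illustration of the arithmetic, not an AFNI command:

```shell
# Toy illustration of per-voxel scaling (not an AFNI command):
#   scaled(t) = min(200, 100 * signal(t) / mean(signal))
# Afterwards, a beta of 2 reads as roughly 2% BOLD signal change.
printf '98\n100\n102\n100\n' > ts.1D        # made-up voxel time series
mean=$(awk '{ s += $1; n++ } END { print s/n }' ts.1D)
awk -v m="$mean" '{ x = 100*$1/m; if (x > 200) x = 200; print x }' \
    ts.1D > ts_scaled.1D
cat ts_scaled.1D
```

Since the mean here is 100, the scaled values read directly as percent of the mean (98, 100, 102, 100), i.e. fluctuations of about ±2%.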


Thank you again for your time. That all makes sense. I looked through the script and may have missed the point at which motion outliers are identified, based on a specified threshold, and noted in the outcount text file. I will go through it again, though.

For the second point, I was referring to Peter’s comment on visualizing the activity of an active voxel versus the design. Maybe I misunderstood what he was saying. Sorry for any confusion.


Hi, Brady-

Re. censor things:
I was just looking through a proc* script that afni_proc.py generated for a recent run. I think you want to look at the start of the regress block in the script-- in my proc* script, there were lines like this:

# ================================ regress =================================

# compute de-meaned motion parameters (for use in regression)
1d_tool.py -infile dfile_rall.1D -set_nruns 1                            \
           -demean -write motion_demean.1D

# compute motion parameter derivatives (just to have)
1d_tool.py -infile dfile_rall.1D -set_nruns 1                            \
           -derivative -demean -write motion_deriv.1D

# create censor file motion_${subj}_censor.1D, for censoring motion
1d_tool.py -infile dfile_rall.1D -set_nruns 1                            \
    -show_censor_count -censor_prev_TR                                   \
    -censor_motion 0.2 motion_${subj}

# combine multiple censor files
1deval -a motion_${subj}_censor.1D -b outcount_${subj}_censor.1D         \
       -expr "a*b" > censor_${subj}_combined_2.1D
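The -censor_motion step writes a 0/1 time series (1 = keep, 0 = censor) by thresholding the per-TR motion values, and -censor_prev_TR additionally censors the TR just before each supra-threshold one. A toy stand-in for that logic using awk -- made-up numbers and file names, not AFNI itself:

```shell
# Toy stand-in for "-censor_motion LIMIT ... -censor_prev_TR":
# enorm.1D holds made-up per-TR motion values; output is 1=keep, 0=censor.
limit=0.3
printf '0.05\n0.10\n0.45\n0.08\n0.06\n' > enorm.1D
awk -v lim="$limit" '
    { keep[NR] = ($1 + 0 > lim + 0) ? 0 : 1 }
    END { for (i = 1; i <= NR; i++) {
            out = keep[i]
            if (i < NR && keep[i+1] == 0) out = 0   # censor previous TR too
            print out } }' enorm.1D > motion_toy_censor.1D
cat motion_toy_censor.1D
```

TR 3 exceeds the 0.3 limit, so both TR 3 and the preceding TR 2 are censored (output: 1 0 0 1 1).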

The complete censor file also depends on the outlier calc from earlier, in the “outcount” section, where I had something like:

# catenate outlier censor files into a single time series
cat rm.out.cen.r*.1D > outcount_${subj}_censor.1D
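The 1deval combination is just an elementwise product of the two 0/1 columns: a TR survives only if both the motion censor and the outlier censor keep it. A toy stand-in with standard tools (made-up censor columns, not AFNI):

```shell
# Toy stand-in for: 1deval -a motion.1D -b outcount.1D -expr "a*b"
printf '1\n1\n0\n1\n' > motion_toy.1D      # made-up motion censor column
printf '1\n0\n1\n1\n' > outcount_toy.1D    # made-up outlier censor column
paste motion_toy.1D outcount_toy.1D \
    | awk '{ print $1 * $2 }' > combined_toy.1D
cat combined_toy.1D
```

A zero in either input zeroes that TR in the combined file (here: 1 0 0 1).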

Hope that’s useful, and please ping back with any follow-up questions.


To visualize the time course:

  1. You can create an ROI in AFNI
  2. Extract the time series (3dmaskave) from your full preprocessed data (usually the all_runs)
  3. Plot the time series compared to your design (1dplot)

You can also set the underlay to your functional dataset, click on an active voxel, and bring up the graph viewer; under one of the dropdown menus you can plot the ideal for a condition on top of the graph of the voxel activity. Let me know if you have follow-up questions!
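As a rough numerical companion to the plotting above: the eyeball comparison of a voxel’s time series against the ideal can be summarized by their Pearson correlation. A toy sketch with awk -- both columns below are made up, whereas real inputs would come from your extracted time series and ideal regressor files:

```shell
# Toy Pearson correlation of an "ideal" regressor (column 1) against a
# voxel time series (column 2); both columns here are made-up numbers.
printf '0 0.1\n1 1.2\n1 0.9\n0 0.2\n' > pair.1D
awk '{ n++; sx += $1; sy += $2
       sxx += $1*$1; syy += $2*$2; sxy += $1*$2 }
     END { num = n*sxy - sx*sy
           den = sqrt(n*sxx - sx*sx) * sqrt(n*syy - sy*sy)
           printf "r = %.3f\n", num/den }' pair.1D > r.txt
cat r.txt
```

A strongly task-driven voxel gives r near 1; in this made-up example the time series tracks the ideal closely, so r comes out high.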

There are some tidbits of this in one of Andy’s YouTube videos.