I have two related questions on normalization of EPI images and would appreciate your insights:
When warping EPI images to the standard space, is it preferable to use the original voxel size of the images (e.g., 1.8mm isotropic) or the typically smaller voxel size of the template (e.g., 1mm isotropic)?
I have EPI images acquired with different scanning parameters resulting in different voxel sizes (e.g., 1.8mm isotropic and 1.5x1.5x2mm). Is it better to normalize these two sets of images to a smaller voxel size (e.g., 1mm isotropic) or to a larger voxel size (e.g., 2mm isotropic)?
My short answer to both Qs: how about 1.5mm iso, then? In general, heavy upsampling seems undesirable: upsampled data doesn't really contain high-resolution information (even if the final maps look smoother, due to blurring/interpolation), and upsampling a lot drastically increases disk space and memory usage.
The final resolution can be controlled in afni_proc.py with -volreg_warp_dxyz ..; by default, the program takes the minimum voxel dimension of the input and truncates it to 3 significant bits, so the output grid will either match the minimum input dimension or be slightly upsampled from it. An excerpt from the help for this opt:
-volreg_warp_dxyz DXYZ : grid dimensions for _align_e2a or _tlrc_warp
e.g. -volreg_warp_dxyz 3.5
default: min dim truncated to 3 significant bits
(see description, below)
This option allows the user to specify the grid size for output
datasets from the -volreg_tlrc_warp and -volreg_align_e2a options.
In either case, the output grid will be isotropic voxels (cubes).
By default, DXYZ is the minimum input dimension, truncated to
3 significant bits (for integers, starts affecting them at 9, as
9 requires 4 bits to represent).
...
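To make that concrete, here is a minimal sketch of how one might set the output grid to 1.5mm isotropic explicitly; note that the subject ID, dataset names, template, and block list here are just placeholders for illustration, not a complete or recommended command:

# sketch: explicitly set the final grid for the standard-space warp
# (filenames and block list are illustrative placeholders)
afni_proc.py                                                            \
    -subj_id           sub001                                           \
    -copy_anat         anat.nii.gz                                      \
    -dsets             epi_run1.nii.gz epi_run2.nii.gz                  \
    -blocks            tshift align tlrc volreg mask blur scale regress \
    -tlrc_base         MNI152_2009_template_SSW.nii.gz                  \
    -volreg_tlrc_warp                                                   \
    -volreg_warp_dxyz  1.5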
And, if you are combining two different acquisitions for voxelwise analysis and hence blurring your data, consider adding -blur_to_fwhm in your afni_proc.py command to help harmonize some of the smoothness in those datasets:
-blur_to_fwhm : blur TO the blur size (not add a blur size)
This option changes the program used to blur the data. Instead of
using 3dmerge, this applies 3dBlurToFWHM. So instead of adding a
blur of size -blur_size (with 3dmerge), the data is blurred TO the
FWHM of the -blur_size.
Note that 3dBlurToFWHM should be run with a mask. So either:
o put the 'mask' block before the 'blur' block, or
o use -blur_in_automask
It is not appropriate to include non-brain in the blur estimate.
Note that extra options can be added via -blur_opts_B2FW.
Please see '3dBlurToFWHM -help' for more information.
See also -blur_size, -blur_in_automask, -blur_opts_B2FW.
Note that in doing this you specify the blur size TO which the final data should be blurred, rather than the amount of blur that gets applied; the former is typically a larger number.
Note: if you are going to be doing ROI-based analyses, you would not want to blur, so you would not add this opt (or even include the blur block itself).
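As a hedged sketch of how those pieces might fit together (the ellipsis stands in for the rest of the command, omitted here), note the 'mask' block placed before 'blur' so that 3dBlurToFWHM has a mask to work within:

# sketch: blur the data TO an assumed target of 6 mm FWHM
# (the target value and block list are illustrative, not prescriptive)
afni_proc.py                                                            \
    ...                                                                 \
    -blocks        tshift align tlrc volreg mask blur scale regress     \
    -blur_size     6                                                    \
    -blur_to_fwhm

Here, the 6 mm target FWHM is just an assumed value for illustration: the data would be blurred TO roughly that smoothness, rather than having 6 mm of additional blur applied.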
Thank you for your suggestion and explanations! I have a couple of follow-up questions:
Is it fair to conclude that it’s generally preferable to upsample data (slightly) based on the minimum voxel dimension rather than downsample, in order to avoid the loss of information?
I’ve read that smaller voxel sizes can lead to inflated false positive rates (as mentioned here). However, I also noticed that the cluster size threshold calculated by 3dClustSim increases with smaller voxel sizes. Does this threshold adjustment in AFNI mitigate the issue of inflated false positive rates, or is it still a concern when upsampling data?
In general, a wee bit of upsampling/rounding makes sense. Any regridding and interpolation of data will introduce some blurring, so be aware that upsampling (one case of that) not only doesn't create information but will typically degrade it slightly in practice. But if data has to go to a standard space, that is a necessary cost to pay. When you have anisotropic voxels to start, having them be isotropic at the end makes sense. Note, though, that especially when the voxels are heavily anisotropic, you will have some weird features lingering/frozen into your data as they get warped around; they will look a bit like a Vincent van Gogh painting if you look at the final results (which you always should!). But such is life, and it is good to be aware of these practical realities rather than pretend they are not the case.
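If you want to see this regridding effect directly, one way (using placeholder filenames here) is to resample a copy of a dataset and compare it side by side with the original; 3dresample's -rmode picks the interpolation kernel:

# regrid a copy of an EPI volume to 1.5 mm isotropic voxels with cubic
# interpolation; 'epi.nii.gz' and the output prefix are placeholders
3dresample -dxyz 1.5 1.5 1.5 -rmode Cu -prefix epi_regrid -input epi.nii.gz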
Re. that FPR note/webpage: that topic was actually written up in a more detailed, brief commentary here by those authors. Please note that those results, and that particular concern that upsampling changes FPR, are software-specific, and the software tested there was not AFNI. We actually saw that work back in the day, and we checked whether a similar issue occurs in AFNI: we found that it does not.
Re. clustering and FPR, a topic which always brings a touch of warm nostalgia: I think what is actually far more important is having more complete results reporting, such as using transparent thresholding. We discuss this approach of using thresholds/clustering to "highlight, not hide" results in this article. Based on the whole clustering discussion this might initially seem counterintuitive, but as we describe in the paper, it is important in many ways and gives a much better representation of results, helping everyone (writer and readers alike) understand things better.
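For reference, one hedged sketch of making an image with transparent thresholding uses @chauffeur_afni's alpha and boxed overlay settings; the filenames, sub-brick indices, and threshold value below are illustrative placeholders, not values from any particular analysis:

# display stats with translucent subthreshold values and boxed clusters
# (ulay/olay/thr sub-bricks and threshold are assumed, for illustration)
@chauffeur_afni                                       \
    -ulay            anat_in_std_space.nii.gz         \
    -olay            stats.nii.gz                     \
    -set_subbricks   0 0 1                            \
    -thr_olay        3.3                              \
    -olay_alpha      Yes                              \
    -olay_boxed      Yes                              \
    -prefix          img_alpha_boxed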
Thank you so much for your thorough explanations. I really appreciate it!