Is there any precedent for smoothing more than once? I have fMRI data acquired with 2mm isotropic voxels. Prior to the individual subjects’ GLMs, I smoothed with a modest 3mm Gaussian kernel to preserve spatial resolution. The results of the individual subject analyses and the group-level t-tests appeared reasonably smooth.
But results of beta series connectivity analyses look pixelated, as if the data hadn’t been smoothed at all. The two attached images depict group-level t-test results. glm.PNG shows 3dttest++ results with betas generated by 3dDeconvolve as the dependent variable. beta_series.PNG shows 3dttest++ results with Fisher z transformed correlation coefficients as the dependent variable. I can’t think of a good reason for the beta series analysis data being less smooth than the conventional GLM data. Would it make sense to smooth the beta series data prior to computing the correlations even though the data that generated the beta series had already been smoothed?
In the second image, what values are you showing? And what threshold is adopted? What would it look like if you set a low threshold? Is this some kind of seed-based correlation analysis based on trial-level effect estimates at the subject level?
For the second image, the values are the mean of the Fisher z transformed r’s, thresholded at p < .001. The correlation was between the seed (average beta series within an ROI) and the rest of the brain.
No, it’s still that way with liberal thresholds. Someone has suggested that the beta series data are just noisier. That sounds reasonable, and possibly a justification for smoothing those data again prior to computing the correlations. What do you think?
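One reason the beta series maps could look rougher even with identical smoothing: a correlation estimated from a modest number of trials has substantial sampling variability of its own, on top of the voxel noise. A toy numpy simulation (all numbers here are assumptions for illustration, not taken from these data) shows that the sampling std of the Fisher z values is about 1/sqrt(n_trials - 3), independent of any spatial smoothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 30   # assumed number of trials in the beta series
n_sims = 5000   # number of simulated seed/voxel pairs
rho = 0.3       # assumed true seed-voxel correlation

# Draw correlated (seed, voxel) beta series and Fisher-z the sample correlation.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
zs = np.empty(n_sims)
for i in range(n_sims):
    x = L @ rng.standard_normal((2, n_trials))
    r = np.corrcoef(x[0], x[1])[0, 1]
    zs[i] = np.arctanh(r)

# The spread of z across simulated "voxels" is ~1/sqrt(n_trials - 3),
# i.e. about 0.19 here, which is sizable relative to typical z values.
print(zs.std(), 1.0 / np.sqrt(n_trials - 3))
```

Much of this trial-sampling noise varies from voxel to voxel, which could plausibly contribute to the speckled look even after the input time series were blurred.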
“Pixelation” can be a result of lots of different things, such as the discreteness of the actual data or the interpolation method used for storage and display, but here it might be the result of how the color is mapped to the image. I can get images very similar to this if I change the colorscale to the 10-pane version by clicking below the colorbar in the overlay panel and choosing 10. If I switch back to the “**” choice, I get a more smoothly varying image, because that uses a continuous colorscale. Also see the effect of the display resampling interpolation types in the Define Datamode menu, under the “OLay Resam mode” and “Stat Resam mode” choices.
I’ve played around with the stat and olay resampling methods and can make the data look smoother that way. But one of the reasons I’m concerned about this is that there are some tiny clusters in the vicinity of each other, each too small to be significant on its own, that I think might combine into larger clusters if the data were a little smoother.
Changing to a continuous pane also helps the appearance some, but that’s not going to help my cluster sizes.
Edit to add: What I’m showing you are stats for my control subjects only, where there are obviously significant clusters. I chose this to demonstrate the problem because that’s where it’s most obvious. The issue arises when I want to compare these controls to patients: that’s when I get several small clusters in reasonable locations that aren’t significant.
A couple quick questions here. Did you process this with afni_proc.py, and if so, could you please post the afni_proc.py command here (that will help us understand the subject-level processing choices).
I’m a little confused about what the group-level tests are. Could you unpack those a bit, preferably with the exact 3dttest++ commands you used? You referred to the images as 3dttest++ results, and then stated that the images are for ‘control’ subjects only, so I don’t understand what is actually being shown. Also, in glm.PNG and beta_series.PNG, it is difficult to interpret the outputs without knowing what the colorbar and colorbar range are, as well as the threshold. We also typically find it easier to start with either translucent thresholding (so we see more of what is happening throughout the brain) or with lightly or even non-thresholded data (for the same reason).
Having those details will help; my initial thought is that a second round of smoothing at this point would be undesirable. If we understand the commands leading up to this point, then maybe we will see something else going on that can be resolved.
Then the GLM was rerun with -stim_times_IM to get the beta series. Correlations with the seed were computed for two conditions and Fisher z transformed.
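In numpy terms, that seed correlation step looks roughly like the following (the dimensions, the ROI indices, and the random stand-in betas are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_vox = 40, 500                        # hypothetical dimensions
betas = rng.standard_normal((n_trials, n_vox))   # stand-in per-trial betas
roi = np.arange(10)                              # hypothetical ROI voxel indices

# Seed = average beta series within the ROI.
seed = betas[:, roi].mean(axis=1)

# Pearson correlation of the seed series with every voxel's beta series.
bc = betas - betas.mean(axis=0)
sc = seed - seed.mean()
r = (bc * sc[:, None]).sum(axis=0) / np.sqrt((bc**2).sum(axis=0) * (sc**2).sum())

# Fisher z transform, to make the values more nearly normal for the t-tests.
z = np.arctanh(r)
```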
The t-tests compared patients and controls on difference scores.
foreach subj ( ... )    # subject list elided in the original post
    cd ${subj}.results.pcb
    echo $subj
    # difference of the Fisher z seed-correlation maps: expect2 - expect0
    3dcalc -overwrite -prefix expect2_minus_expect0 \
        -a caudate_expect2_fisherz+tlrc -b caudate_expect0_fisherz+tlrc \
        -expr 'a-b'
    cd ../
end
There was much more activation for each group separately than for the difference between groups, so it was easier to convey the apparent lack of smoothness if I just showed the activation for the controls.
A couple things (and some of these come from taking your command and running it with: -compare_opts “example 6b”, which highlights differences between a given command and a particular afni_proc.py example):
I would always use “-html_review_style pythonic”, so you have the best QC HTML output—the 1D plots (regressors, stimuli, etc.) are much clearer.
You are aligning to a template or standard space (with the ‘tlrc’ block), but it looks like you have not selected nonlinear warping. That means your subject EPIs won’t overlap as nicely in the final space. This situation would be like having blurrier data—the specificity of spatial localization will be reduced.
Probably “-volreg_align_to MIN_OUTLIER” would be preferable to using “first”—if motion occurs during the first time point, all bets might be off for motion regressor estimation; though, sometimes people might want “first” because they have pre-steady state time points that have better tissue contrast, so EPI-anatomical alignment might be better—but since you have “-tcat_remove_first_trs 0”, I don’t think this is the case here.
I would add “-radial_correlate_blocks tcat volreg” for more QC related to motion/artifacts.
That blur size is quite small—what voxel size do you have? That would be doing almost no blurring on standard voxels (that have 3-4 mm edges); of course, you might have higher-res data, but I just thought I would ask.
The motion limit above is 0.5—that is pretty large. For task data, that might be OK, but that might leave in more motion in the dataset (esp. if you have small voxels).
You don’t have an “-anat_has_skull …” option. It is often useful to include this explicitly, to avoid making a mistake about whether the dset still has a skull on or not.
Related to #2 and #7: we typically recommend running @SSwarper before afni_proc.py, to both estimate a nonlinear warp to a standard template (which could be MNI152_2009_template_SSW.nii.gz, since you want MNI space, based on the above example) and perform skull-stripping. The help file for this program shows how the results of both the skull-stripping and the nonlinear warp estimation get passed into afni_proc.py.
You don’t have any “-align_opts_aea …” to control EPI-anatomical alignment. Often we would specify the “lpc+ZZ” cost function, as well as some other checks, with: “-align_opts_aea -cost lpc+ZZ -check_flip”. That might be useful here. Are you sure the alignment above was good? Looking at the QC HTML will quickly show you more about that for each subject. The reason for the “-check_flip” is described here: https://pubmed.ncbi.nlm.nih.gov/32528270/
I’m surprised not to see “-regress_motion_per_run” since you have multiple runs; I guess you split the output time series later, after processing—is that the reason it isn’t included?
Often we include “-regress_censor_outliers 0.05” or a similar value to help catch extra motion or other effects in volumes.
Including “-regress_compute_fitts” would help with memory usage in the 3dDeconvolve step.
Have you considered using 3dREMLfit to account for temporal autocorrelation in the noise during the regression modeling, by including -regress_3dD_stop and -regress_reml_exec?
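For intuition on that last point, here is a minimal sketch of what prewhitening does for the regression estimates. Note that 3dREMLfit actually fits an ARMA(1,1) noise model per voxel; this toy example uses AR(1) noise with a known coefficient, and the design, coefficient values, and sizes are all made up:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Toy design: intercept + one slow sinusoidal "task" regressor.
X = np.column_stack([np.ones(n), np.sin(np.linspace(0, 16 * np.pi, n))])
beta_true = np.array([1.0, 0.5])

# Generate AR(1) noise with a known coefficient (assumed for this sketch).
a = 0.4
e = rng.standard_normal(n)
noise = np.empty(n)
noise[0] = e[0]
for t in range(1, n):
    noise[t] = a * noise[t - 1] + e[t]
y = X @ beta_true + noise

# Prewhiten: for AR(1), filter both sides as v_t - a * v_{t-1}, then run OLS
# on the whitened data, whose residuals are now (approximately) white.
yw = y[1:] - a * y[:-1]
Xw = X[1:] - a * X[:-1]
beta_hat, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print(beta_hat)   # should land near [1.0, 0.5]
```

In practice the AR/ARMA parameters are estimated from the residuals rather than known, which is what 3dREMLfit handles for you.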
Thanks for these suggestions. I was not aware of some of these options and will incorporate them into my pipeline for future studies.
I’m not blurring much in order to preserve spatial resolution. The original voxel size was 2x2x2 mm. This gets back to the original question: the 3mm FWHM kernel seemed fine until I started doing beta series connectivity analyses. It’s not the appearance I’m concerned with. Rather, there are small, nonsignificant clusters in the vicinity of each other that might form a larger cluster if the beta series data were smoother. It’s not clear to me why the beta series analysis would appear to be less smooth than the conventional GLM analysis.
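For what it’s worth, if a second blur were added, Gaussian blurs combine in quadrature: two sequential passes at 3 mm FWHM behave like a single blur of sqrt(3^2 + 3^2) ≈ 4.24 mm. A quick 1D numpy check of that rule (purely illustrative, not these data):

```python
import numpy as np

def fwhm_to_sigma(fwhm):
    # FWHM = 2*sqrt(2*ln(2)) * sigma ≈ 2.3548 * sigma
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

f1, f2 = 3.0, 3.0                      # two sequential blur sizes, mm FWHM
combined = np.sqrt(f1**2 + f2**2)      # effective single-blur FWHM
print(combined)                        # ~4.24 mm

# Numerical check: convolve two sampled Gaussians and measure the std of
# the result -- variances add under convolution.
x = np.linspace(-50.0, 50.0, 1001)     # mm grid, 0.1 mm steps
g = lambda s: np.exp(-x**2 / (2 * s**2))
conv = np.convolve(g(fwhm_to_sigma(f1)), g(fwhm_to_sigma(f2)), mode="same")
sigma_meas = np.sqrt((x**2 * conv).sum() / conv.sum())
print(sigma_meas, fwhm_to_sigma(combined))   # these should nearly match
```

So a second 3 mm pass would not just be “a little more” smoothing of already-smoothed data; it would push the effective kernel well past 4 mm on 2 mm voxels.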
We were liberal with our motion limit because the patients tend to move and we were losing too much data otherwise.