I typically use afni_proc.py when analyzing resting state data (Example 9 when I don’t have physio data, as is the case here), but I’m losing too many degrees of freedom due to band-pass filtering (e.g., losing 487 degrees of freedom out of 554 solely due to band-pass filtering). My resting state scan only consists of an approx. 6-minute-long run with a TR of 650 ms and a multi-band accel. factor of 8. I’ve never analyzed multi-band data before, so I’m not sure whether I should be analyzing these data differently than I normally would (e.g., skip band-pass filtering). Any ideas?
That is no surprise.
Please see my earlier rant on band passing…
Band passing gets more expensive as
acquisition gets faster.
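To put numbers on it, here is a back-of-the-envelope count of the regressors a band-pass costs, using the values quoted above (554 time points, TR = 0.65 s). The 0.01-0.1 Hz band edges are my assumption, and the bin counting is approximate:

```shell
# Approximate DoF cost of band-passing to 0.01-0.1 Hz with
# 554 time points at TR = 0.65 s: every frequency bin outside
# the pass band consumes a sine/cosine regressor pair.
dof_removed=$(awk 'BEGIN {
  n = 554; tr = 0.65
  df = 1.0 / (n * tr)                 # frequency resolution, Hz
  kept = 0
  for (k = 0; k <= int(n / 2); k++) {
    f = k * df
    if (f >= 0.01 && f <= 0.1)
      kept += 2                       # sine + cosine per kept bin
  }
  print n - kept
}')
echo "approx. DoF removed by band-pass: $dof_removed"
```

That lands right around the 487 quoted above: at a fast TR the Nyquist range is large, so nearly all of it lies outside 0.01-0.1 Hz.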
Thanks for your quick reply! So if I decided to use Example 11b, I just want to be clear which options are necessary vs. those that are just suggestions. If I resample the template_ventricle mask to 2 mm isotropic instead of 2.5, I assume that I would get rid of the -volreg_warp_dxyz 2.5 since the default output is 2 mm. I assume that I don’t necessarily need to include “-volreg_align_to MIN_OUTLIER” or “-tlrc_NL_warp” (although the latter may be recommended).
What do you mean by “Make WMe and Svent correlation volumes, which are just for entertainment purposes anyway?” I just want to regress out the white matter (eroded white matter averages? I’m not sure why this is preferable. Can you speak more to this?) and CSF restricted to the ventricles. I assume that I don’t necessarily need to run the cluster simulation but that this would be useful if I was going to use FWE correction in later analyses.
The “-volreg_warp_dxyz 2.5” option is both to force
the output to be 2.5 mm^3 and to make it clear what
it will be. If your data generates a default of 2,
sure, either remove the option or change it to 2.
Many people like to have the final dimensions stated explicitly.
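As a sketch, this is the one relevant line in the afni_proc.py command (the surrounding options are omitted here):

```shell
# force (and document) the final voxel grid; drop this line, or
# set it to 2, if the 2 mm default grid is what is wanted
-volreg_warp_dxyz 2.5
```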
The MIN_OUTLIER and NL_warp options are not required,
but are recommended. It is up to you.
Look up the regress_make_corr_vols option, and see
how it is applied in your script. By “just for
entertainment”, I mean the option could be removed
without affecting the analysis. It is just to
compute maps of correlation against the average WMe and
Svent time series.
The cluster simulation is mostly a demonstration of how
it could be done at the group level.
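For reference, the option in question looks something like this (the ROI labels here are illustrative; use whatever labels your script defines):

```shell
# optional: per-voxel correlation maps against the average WMe
# and Svent time series; removing this line does not change the
# regression itself
-regress_make_corr_vols WMe Svent
```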
Thanks so much for your response, Rick! Sorry to bother you again, but I just tried running it, and I keep getting the following error: ** error: option -regress_ROI_PC follows unknown arg #29 (-mask_import). I’m running the most current version of afni available on the cluster (December, 2015). Was the -mask_import option not available then or is there something strange going on with my script (basically the same as Example 11b, so not sure where I went wrong there)?
That is right, -mask_import is more recent than that.
It is from about 6 weeks ago, Oct 9, 2016, as is
Example 11b, which should not be in your version.
Thanks so much for all of your helpful responses! Just wanted to check in on a few final things (sorry). I only have one run, so I deleted the “-regress_ROI_PC_per_run” line. Just wanted to confirm that that’s correct.
For the “-regress_ROI_PC” line, I used 3 as the number of principal components because that’s the number used in the example. Is there a better justification for selecting the first 3 PCs vs. some other number?
Lastly, it’s taking about 8 hours per subject to run. Is this to be expected? I chose 11b over 11 because I thought it might be a bit faster to skip Freesurfer, but I wanted to regress out WMe and CSFe restricted to the ventricles. Is there a way to add something to take advantage of parallel processing (like -jobs in the 3dD options)?
Yes, there is no need for any _per_run options when
there is only 1 run.
Sure, 3 is a good number of PCs to remove, but nothing
is written in stone. Some studies have used larger
numbers (though I do not know about via afni_proc.py),
but from people I know who have done such things, 3
seems like a good number.
Yes, FreeSurfer would take much longer. But 3dDeconvolve
is not slowing down your processing; it is the non-linear
registration. Just adding -tlrc_NL_warp vastly increases
the processing time. And if your binaries support it,
the non-linear registration uses parallel processing to
speed it up already.
It is also possible that your computer is running out of
RAM during the registration step, which would greatly
slow it down. How much RAM do you have, and how large
are your datasets?
Thanks for your quick response! I’m running the analysis on a computing cluster, so I’m setting the memory limit. I’ve set it to 5,500 MB, but it looks like it only used about 3,638 MB on the last participant, so it might be okay. The datasets are about 512 MB in size. I’m going to try using affine registration to see how much it speeds things up. If it’s not significantly faster, I’ll try parallel processing again. I tried it before and it was only about 15 minutes faster.
There is nothing else in single subject analysis that
remotely compares to the time used by non-linear
registration, though blip-up/blip-down distortion
correction can add a bit, too, if you do it.
On my system, an analysis with affine registration
might take 15 minutes. With non-linear registration
it might take 4 hours.
Sorry to respond to this thread so many months later, but I was wondering how I can modify this script to do a simple GLM-based approach of removing WMe and CSFe (in the ventricles). The current script (regressing out the principal components) has been great for my univariate analyses using 3dGroupInCorr, but the resulting data are “too clean” for the multivariate analysis that I’m conducting using PLS (McIntosh et al., 1996).
To be sure, you are currently using -regress_ROI_PC and you
would instead like to simply regress the ROI average, is that
correct? If so, replace -regress_ROI_PC with just -regress_ROI.
For example, if you currently remove 3 PCs from the ventricles via:
-regress_ROI_PC FSvent 3
to remove just the average time series, change it to:
-regress_ROI FSvent
Does that seem like what you want?
Thanks! Yes, that’s what I want. I see where to change this in the script (Example 11b) for the CSFe in the ventricles (-regress_ROI SVent), but what about for the WMe? Is that just using a simple GLM-based approach, or is it using a principal components approach in the background somewhere?
In that example, WMe should be the default tissue
class for ANATICOR, which is something a little
different. It is just a simple regression, but it
varies per voxel and is the Gaussian-weighted
(by relative distance) “average” of white matter.
But it is still just one regressor (per voxel).
To remove it, remove the -regress_anaticor_fast option.
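So the swap might look like this in the script (a sketch; I am assuming the WMe/Svent mask labels used elsewhere in this thread):

```shell
# remove the voxel-wise ANATICOR regressor:
#   -regress_anaticor_fast      <- delete this line
# and regress out the plain ROI average time series instead:
-regress_ROI WMe Svent
```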
Sorry to follow up with another question, Rick, but I’ve been assuming that I should be using the errts file as my output for the resting state analysis, but it still seems a bit too sparse for the multivariate analysis I’m trying to run. I’m wondering if I should actually be using 3dSynthesize to pull out the regressors of no interest and then subtracting that from my all_runs file using 3dcalc. I really want to do minimal pre-processing on these data for the multivariate analysis, so it’d be helpful to basically have the all_runs data with just the WMe and SVent mean time series regressed out (and possibly motion params). Would 3dSynthesize be the best way to do this or would you suggest something else?
If you only want to remove WMe, SVent and the motion
parameters, then just use -regress_ROI options and no
band passing.
I assume this means you do not want censoring, either.
Censoring might cause trouble in the subsequent analysis.
Using 3dSynthesize would not be quite appropriate. It
is better to directly regress out what you want to in
the first place.
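A sketch of that direct projection in one step (all file names here are illustrative, and ROI_plus_motion.1D is a hypothetical 1D file holding only the motion parameters and the WMe/Svent averages):

```shell
# project out ONLY the chosen nuisance series, with a single
# mean/linear drift term and no censoring or band-pass
3dTproject -polort 1                                 \
           -input pb04.$subj.task.r01.blur+tlrc.HEAD \
           -ort ROI_plus_motion.1D                   \
           -prefix errts.minimal.$subj
```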
Thanks so much for your response! I guess I’m just confused about which output to use for my multivariate analysis then since I’d ideally like one file that just has motion and WMe/Svent regressed out with no (or minimal) detrending (like an allruns file just with the motion and nuisance signals regressed out). It seems like no matter how lenient I’m being in the preprocessing, the errts dataset is still too sparse. I even tried polort -1 and regress_no_motion with the assumption that the only thing in the fitts file would be the nuisance signals I was modeling out and the rest would be in the errts. I’m probably just missing something obvious conceptually. Sorry that I’m a bit slow on this!
This seems very strange to me. Note that using polort -1
is unusual, as that makes the regression very difficult.
For that reason, my suspicion is that the errts has (or is
close to having) a mean of zero, and perhaps that is what
is causing the trouble.
How about just adding 100 to the errts and trying the result?
3dcalc -a errts.SUBJ+tlrc -expr 'a+100' -prefix errts.100.SUBJ
Thanks so much for your suggestion! I’ve tried it to no avail, unfortunately. It’s strange. When I enter the data from just before the regression step (e.g., pb04.$subj.task.r01.blur.nii) into the multivariate analysis software, it’s able to pick up on all of the hypothesized networks in the different components that it outputs (after a few components clearly linked to wm/csf there’s a clear bilateral temporal component, clear visual component, etc.), but once I run the regression analysis in AFNI, seemingly no matter what specifications I use (even if I only regress out WMe and SVent), the output is too sparse for the multivariate software to pick up on any meaningful signal in the group analyses. For instance, if I take the same file and run 3dTproject -polort 0 -input pb04.$subj.task.r01.blur+tlrc.HEAD -ort X.xmat.1D -prefix pb04.$subj.task.tproject, as is the last step in afni_proc.py, I can no longer use the output in the multivariate analysis.
Ah!!! ...when I grep the X.xmat.1D file, the column labels are “Run#1Pol#0 ; Run#1Pol#1 ; Run#1Pol#2 ; Run#1Pol#3 ; ROI.Svent#0 ; ROI.WMe#0”. Do you think that the issue I’m having is due to the detrending, since a polort > 2 is roughly equivalent to a high-pass filter (which I especially don’t want given the multi-band nature of the data; see my first few posts in this thread)? What polort should I specify instead to rectify the problem (since you said not to use -1)?
Should I also keep this in mind when analyzing my event-related data that will also be acquired using a multi-band sequence?
Sorry for all of the messages! Changing the polort still didn’t work, so I’m wondering if you could let me know why using 3dSynthesize is wrong (in what ways) since that seems to work the best so far.