We have seen some improvements when including bandpass filtering on task data as well as resting-state data. With a TR of 0.901 s, we figured we would essentially use a low-pass filter by setting the first argument to a very low number (0.001) and the second argument to 1 / (3 * TR) = 0.37, i.e.:

-regress_bandpass 0.001 0.37

(For a TR of 2, this would be 0.001 0.17.)

What I have realized is that this setting makes the voxel loop stage (3dDeconvolve) and the GLSQ loop (when using ANATICOR) extremely slow. Basically it advances only one increment every other minute. I know the X-matrix becomes enormous and very complicated when including bandpass filtering, since the filtered frequencies are handled in the regression.

Is this expected, or are we doing something wrong? (It is almost as slow on a network-mounted drive as on a local SSD.) This is two runs with a total of 1000 dynamics (500/run). After "Wrote matrix values to file X.nocensor.xmat.1D" it takes a very long time (matrix setup time = 287.31 s), and then the even slower voxel loop starts, with a couple of minutes between each tick of 0123456789...

Edit: All CPU cores are at 100% during the voxel loop (matrix setup is single-core at 100%).

So you intend to apply exactly 1/3 of the degrees of
freedom to bandpassing? Note that a frequency of 0.37
would not cover the main components of heart rate or
breathing, for what little that is worth.

If afni_proc.py is running the bandpassing through
3dDeconvolve, then your version of afni_proc.py (or
perhaps just the resulting script) is old. 3dDeconvolve
is now used just to create the regression matrix, while
the actual projection is done quickly using 3dTproject.

The reason 3dDeconvolve is slow here is that it
has to compute lots of partial statistics (betas and t-
and F-stats). 3dTproject does not care about any of
that and just does the projection, so it is much faster.
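To illustrate the difference (a toy NumPy sketch, not AFNI code): removing a set of nuisance regressors by projection is a single least-squares solve per voxel, with no per-regressor betas or t-/F-statistics computed along the way.

```python
import numpy as np

# Toy sizes: time points and nuisance regressors (drift, bandpass, ...)
rng = np.random.default_rng(0)
n_time, n_reg = 200, 20

X = rng.standard_normal((n_time, n_reg))  # nuisance regressor matrix
y = rng.standard_normal(n_time)           # one voxel's time series

# Orthogonal projection: resid = y - X beta, with beta from one lstsq solve
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# The residual is orthogonal to every regressor that was projected out
print(np.max(np.abs(X.T @ resid)))  # ~0 (numerical noise)
```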

Hmm, that’s true. In the afni_proc.py examples, 0.01 and 0.1 are used. I know you don’t recommend BP overall, but if we want to use it with a TR of 0.901, what limits would you use? 0.01 1/(TR*2)? What do you mean by saying it won’t cover heart rate or respiration? That the band actually covers heart/respiration frequencies, so that they are not regressed out?

This is task-based data with stim files, so it would not run 3dTproject (per our old discussion, right?). Per our old discussion, resting state generates the script via 3dD and runs it via 3dTproject, while task-based is still run via 3dD since it has to put out statistics. But even so, the program runs 3dREMLfit (or similar) since we run ANATICOR, and that GLSQ loop is equally slow.

I had not realized this was for task analysis. Given that,
you are right, it will have to be run via 3dREMLfit (for
ANATICOR), and it will be slow. So it does not imply the
software is old.

Yes, by not covering heart rate or respiration, I meant
that the main frequencies for them are above your pass
band limit. However, since this is for task analysis,
that is not such a big concern.

I am not sure what to suggest for BP limits here. At this
level, presumably you will still have sufficient data for
beta estimation, meaning you could defend it, if asked to.

OK, so the newest version, when using ANATICOR, runs both 3dD and 3dREMLfit? That is what our version does; I get both the stats file and the stats.REML file.

So the question is whether it is worth the extra time and the loss of DOF to run BP filtering. You have me pretty convinced; I’m not a huge fan of BP either. But we did see improvements a while back when we ran some optimization/QA with different settings.

Do I have this right? The Nyquist frequency is 1 / (2*TR) = 0.55, which means that our upper limit would be 0.01 0.55. We use (for a reason I don’t really remember) 1 / (3*TR), which is 0.37. So we could increase it even more (and use up even more DOF)? I think the logic we used is that we can’t detect changes faster than 3 * TR, but perhaps that is wrong and the true maximum frequency is 0.55?

In this case it would indeed make sense for 3dDeconvolve
not to run the actual analysis, but afni_proc.py does not
default to that. Add the -regress_3dD_stop option.

While I do not like BP filtering, I stop short of really
trying to persuade anyone. The cost of filtering is very
high. But deciding whether it seems appropriate for the
specific case of one particular analysis is better left
to those actually involved in it. So if you did see
improvements that seemed appropriate, maybe it is indeed
worth consideration. That is your call.

Yes, Nyquist would be 1/(2TR) = 0.55 Hz, which is the
upper limit of frequencies you can consider (anything
faster cannot be detected). Increasing your 1/(3TR)
limit means removing fewer frequencies. The common 0.1
(or 0.08) limit is applied because that is roughly the
frequency of a BOLD response, a 12+ second duration
event (though a real BOLD signal will be composed of
many frequencies, certainly going higher than 0.1,
which is part of why such filtering does not seem
appropriate to me).

If you raised the limit, what would you raise it to,
and why?
Go with what you feel is appropriate (i.e. defensible).
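For concreteness, the limits discussed in this thread follow from simple arithmetic. A plain Python check, using the TR values from above:

```python
# Frequency limits discussed in this thread, for TR = 0.901 s
TR = 0.901
nyquist = 1.0 / (2 * TR)  # highest detectable frequency, 1/(2*TR)
third = 1.0 / (3 * TR)    # the 1/(3*TR) limit used in -regress_bandpass

print(f"Nyquist 1/(2*TR)  = {nyquist:.3f} Hz")  # ~0.555 Hz
print(f"1/(3*TR)          = {third:.3f} Hz")    # ~0.370 Hz
print(f"TR=2 -> 1/(3*TR)  = {1.0 / (3 * 2):.3f} Hz")  # ~0.167 Hz
```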

This made me think of a follow up question related to the relationship between 3dDeconvolve and 3dTproject:

When running a typical resting-state pre-processing with ANATICOR, the X-matrix is generated by 3dD, usually with a polort of 2 or 3. I.e., the X.mat contains these drift regressors.

But when running the 3dTproject command with this X.mat (and ANATICOR regressors), the polort option is set (in the proc script) to -polort 0. Would this not mean that two 0th-order regressors are used? One from the X.mat and one from the option in 3dTproject?

We are running some stand-alone regressions and want to clean volreg data with ANATICOR, and were confused when it came to the -polort settings on these two programs. It would feel best if one had, e.g., -polort 3 and the other -1 (off).

It would likely not matter to include the constant twice,
assuming it applies a pseudo-inverse. However that is
actually a moot point, since 3dTproject demeans the ort
terms (which zeros out the constant, removing it from
the projection entirely). I expect this is done to make
the computation more well behaved.

That is to say, using -polort 0 is generally advisable,
unless the input already has mean = 0.
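As a small sanity check (a plain NumPy sketch, not 3dTproject itself): after demeaning, a constant ort column becomes all zeros, so a duplicated constant cannot affect the projection.

```python
import numpy as np

# Ort matrix with a constant column plus a linear drift column
n = 10
orts = np.column_stack([np.ones(n), np.arange(n, dtype=float)])

# Demean each column, as 3dTproject does with its ort terms
demeaned = orts - orts.mean(axis=0)

print(demeaned[:, 0])  # the constant column is now all zeros
```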

Thanks! I guess it must be OK, since the standard afni_proc.py output for resting-state data is to create an X.mat with all drift regressors (3dD with -polort A) and then run 3dTproject with that matrix, but also with -polort 0.


Nice answer, thanks!
