AFNI proc -regress_polort option for very long runs

Dear all,

by default, AFNI proc automatically sets the value for -regress_polort based on the run length.

My question is: does the automatic calculation of the polort degree still work reasonably when the runs are very long? By very long I mean runs with ~2400 time points (TRs).
For example, I am currently running AFNI proc for such a long run length and it set the polort degree to 34.

Input polort=34; Longest run=4485.3 s; Recommended minimum polort=34 ++ OK ++

I have a new MacBook M1 with 32 GB of RAM, but it seems to take forever (more than two hours for one subject now) at

++ detrending starts: 153 baseline funs; 2268 time points

This is a resting-state (sleep) run. Is it possible and reasonable to decrease the “-regress_polort” option? I didn’t apply bandpassing, and I am interested in the frequency range of 0.01 - 0.25 Hz.


Hi Philipp,

To be sure, are you really acquiring a single run that is more than an hour long, without any breaks in the scanning?

In such a case, it might be better not to use polort detrending, but rather sinusoids. It might involve a similar number of regressors, but be better behaved.

That would effectively be bandpassing, but used only as a high-pass filter. Since bandpassing does not really account for quadratic trends, also include polort 2. For example, consider something like:

-regress_polort 2         \
-regress_bandpass 0.01 1

Here the upper frequency of 1 could be anything at least as large as your Nyquist frequency of 0.25 Hz.

You would have to verify how many regressors this would end up corresponding to; it might still be similar.
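As a rough sanity check, the number of sinusoid regressors such a setup projects out can be counted from the DFT frequency grid. The sketch below is only an illustration, not a reproduction of AFNI's 1dBport logic (which makes its own choices about edge frequencies and rounding); the values N = 2268 and TR ≈ 2 s are inferred from the log lines in this thread.

```python
def count_stop_band_regressors(n_tp, tr, f_low, f_high):
    """Count sine/cosine regressors at DFT frequencies outside [f_low, f_high].

    Illustration only: AFNI's 1dBport handles the edge frequencies
    (0 and Nyquist) and rounding in its own way.
    """
    n_reg = 0
    for k in range(n_tp // 2 + 1):
        f = k / (n_tp * tr)             # DFT frequency in Hz
        if f < f_low or f > f_high:
            # k = 0 (constant) and k = N/2 (Nyquist) give one regressor each;
            # every other frequency gives both a sine and a cosine
            n_reg += 1 if k in (0, n_tp // 2) else 2
    return n_reg

# Values inferred from the log lines above: ~2268 time points, TR ~ 2 s
print(count_stop_band_regressors(2268, 2.0, 0.01, 1.0))   # -> 91
```

With these numbers the 0.01 Hz high-pass side alone accounts for roughly 90 regressors; polort, motion, and other nuisance terms would come on top of that, which puts the total in the same ballpark as the baseline-function counts quoted in this thread.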

On a separate note, if you want to decrease the RAM needs for this, consider adding -regress_compute_fitts .

  • rick

Hi Rick,

yes, some of the runs indeed go up to two hours. These are sleep recordings, and subjects were allowed to sleep in the scanner for up to two hours.
I will now try running my script again adding

-regress_polort 2         \
-regress_bandpass 0.01 1

as you suggested, thank you. The option -regress_compute_fitts does not work or make sense here, because I use a resting-state script.

Update: AFNI proc ends up with 147 baseline funcs for over 2100 time points when using polort 2 and bandpass 0.01-1.

Meanwhile, I would like to ask another question. When I am interested in the BOLD signal’s power in the frequency domain, say at around 0.01 Hz, would you recommend always using a polort degree of 2 in combination with a bandpass filter (instead of the automatic calculation of polort based on run length)?

I read through the message board, and I also read some AFNI .pdf files, but I don’t really understand how the polort option affects the BOLD signal (comparing before/after preprocessing). Is it possible to explain this theoretically in simple terms?

So, if I am interested in the frequency spectrum of the signal, and not so much in statistical parametric maps, is there a better way to set the -regress_polort option than leaving it at the default auto-computation? I assume there is not one correct answer here, and the appropriate polort option depends on many factors?

Hi Philipp,

Actually, the point of -regress_compute_fitts is simply to save RAM; it has nothing to do with the analysis type. The fitts dataset is computed in general, and that option means to compute it after running the regression, rather than during.

Thanks for the update on the number of baseline terms. I did expect them to be close. But the sinusoids should be better behaved for these long runs.

Using polort 2 with that high-pass filter is intended to do the same thing as the high-degree polort: model slow fluctuations in the time series, due to things like coil temperature.

There is no intended theoretical difference between the two methods. It is just that at such a high degree, polynomials are not so well behaved. Notably, polynomials can drift into the higher frequency spectrum, while sinusoids cannot. On the flip side, not all such artifacts are necessarily confined to the lower spectrum. Such is life.
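Rick's point that high-degree polynomials leak into higher frequencies, while band-limited sinusoids do not, can be illustrated numerically. This is just a sketch under assumed values (N = 2268 time points and TR = 2 s, as in this thread; the 0.005 Hz test frequency is made up), comparing the fraction of spectral power above the 0.01 Hz cutoff for a degree-34 Legendre polynomial versus a slow sinusoid:

```python
import numpy as np

n, tr = 2268, 2.0                       # approximate run length from this thread
freqs = np.fft.rfftfreq(n, d=tr)

def frac_power_above(x, f_cut):
    """Fraction of non-DC spectral power above f_cut (in Hz)."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p[0] = 0.0                          # ignore the mean
    return p[freqs > f_cut].sum() / p.sum()

# Degree-34 Legendre polynomial, similar in spirit to the highest
# baseline term of a polort-34 model
t = np.linspace(-1.0, 1.0, n)
poly = np.polynomial.legendre.Legendre.basis(34)(t)

# A slow sinusoid at 0.005 Hz, safely below the 0.01 Hz cutoff
slow = np.sin(2 * np.pi * 0.005 * np.arange(n) * tr)

print(frac_power_above(poly, 0.01))     # noticeable power above 0.01 Hz
print(frac_power_above(slow, 0.01))     # essentially none
```

Under these assumptions a substantial share of the polynomial's power lands above the 0.01 Hz cutoff (high-degree polynomials oscillate quickly near the ends of the run), while the sinusoid's power stays almost entirely below it, which is one way to see why sinusoids are "better behaved" for very long runs.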

  • rick

Thanks for the explanation.

When adding

-regress_compute_fitts \

to my AFNI proc script, it fails with the following message.

** cannot compute fitts, have 3dD_stop but no reml_exec
** script creation failure for block 'regress'

Any clue what is wrong here? Is something in my script wrong?

... \
-subj_id ${subject}_Run \
-out_dir $directory_run/Results \
-dsets \
	$directory_deoblique/Run_deoblique+orig \
-blocks despike tshift align tlrc volreg mask blur regress \
-copy_anat $directory_sswarper/anatSS.$subject.nii \
-anat_has_skull no \
-tcat_remove_first_trs 4 \
-align_opts_aea -cost lpc+ZZ \
-volreg_align_e2a \
-volreg_align_to MIN_OUTLIER \
-volreg_tlrc_warp -tlrc_base MNI152_2009_template_SSW.nii.gz \
-tlrc_NL_warp \
-tlrc_NL_warped_dsets \
	$directory_sswarper/anatQQ.$subject.nii \
	$directory_sswarper/anatQQ.$subject.aff12.1D \
	$directory_sswarper/anatQQ.${subject}_WARP.nii \
-volreg_post_vr_allin yes \
-volreg_pvra_base_index MIN_OUTLIER \
-mask_segment_anat yes \
-mask_segment_erode yes \
-regress_compute_fitts \
-regress_polort 2 \
-regress_bandpass 0.01 1 \
-regress_anaticor \
-regress_ROI WMe CSFe \
-regress_apply_mot_types demean deriv \
-regress_motion_per_run \
-regress_censor_motion 0.3 \
-regress_censor_outliers 0.1 \
-blur_size 8.0 \
-regress_est_blur_epits \
-regress_est_blur_errts \
-html_review_style pythonic \

Update: ok, I see that I had to add

-regress_reml_exec \

when using

-regress_compute_fitts \
Another update:
The processing step

++ detrending starts: 153 baseline funs; 2268 time points

is now finally done after around an hour for one subject. Before, even after two or three hours, this step had still not finished (when using the default polort of 34).
So it appears that AFNI is much faster using

-regress_polort 2         \
-regress_bandpass 0.01 1

Thanks Rick!

Oh my. No, that is on me, sorry. Take out the option.

The initial discussion of the polort degree and running time threw me off. I had pictured that it was the regression that was taking a long time, but that should not be the issue here, as the projection is done with 3dTproject.

Getting back to the time it takes, then: your command includes a non-linear warp of the time series (even just applying the existing warp computed previously by @SSwarper). That it takes more than two hours is expected; warping the time series might take that long on its own, depending on the computer's resources.

  • rick

You mean that I should take out the following parts of the script again?

-regress_reml_exec \
-regress_compute_fitts \

Yes. I had not even seen the reml_exec part. Computing the fitts separately is to save RAM when running 3dDeconvolve or 3dREMLfit. I had gathered that that was where the script was being slow.

  • rick

Ok, thanks. Anyway, as I wrote before, using a polort degree of 2 in combination with bandpassing runs much faster, even though the number of baseline funcs in the detrending step remains more or less the same.

My laptop uses ~15 GB of its 32 GB when running AFNI proc for one subject. I think the amount of RAM needed was the same with polort 34; however, it seemed to never finish. I will therefore stick with the polort degree of 2 for this dataset.

If it uses half the RAM on your laptop, you might have to be careful about running other programs, like web browsers or office programs. If you “run out” of RAM for everything, the system can start thrashing (constantly swapping RAM to disk), at which point everything will crawl.

  • rick

Hi Rick,

no no, what I meant was the following: the only thing running was my terminal with AFNI proc. Everything together, the system plus AFNI, used around 15 GB. I close everything else when running preprocessing.

Thanks for your help so far, your suggestion with the manual polort option works great!


Cool, thanks.

  • rick