3dPeriodogram/3dLombScargle - Preprocessing

Dear all,

this is going to be a longer post, but it is necessary to explain the problem and the individual steps that probably led to it.

I am interested in the power-law exponent (PLE). What is the PLE? The PLE, basically, is the slope of a linear regression applied to the log-log transformation of the power spectrum (the frequency-domain representation of the time-series). Without going into detail here, the general steps to compute the PLE are as follows:

  1. Starting with the preprocessed time-series (e.g., the .errts file), one transforms the time-series into the frequency-domain via 3dPeriodogram or 3dLombScargle.
  2. If desired, smoothing can be applied via 3dTsmooth. 3dcalc is then applied for a log-log transformation of the x- and y-axis of the frequency-domain.
  3. A linear regression between log(F) and log(P) is applied via 3dfim+.

The output of this computation is a single PLE value for every voxel. One can then use ROIs to compute the average PLE over brain areas. Reported PLE values in fMRI depend on many factors, of course, but are often in the range of ~0.4-1.5.
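To make the computation concrete, here is a minimal Python sketch of steps 2-3 on a toy spectrum (purely illustrative, not part of my AFNI pipeline):

import numpy as np

# Toy power spectrum of one voxel: P(f) ~ 1/f^0.8, so the true PLE is 0.8.
f = np.linspace(0.01, 0.25, 246)              # frequency grid in Hz
p = 1.0 / f**0.8

# Linear regression of log(P) on log(f); for P ~ 1/f^PLE the fitted slope
# is -PLE, so the exponent is the negative of the slope.
slope, intercept = np.polyfit(np.log(f), np.log(p), 1)
print("estimated PLE:", -slope)               # ~0.8 for this toy spectrum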

Now I am working on a dataset that yields far too low PLE values for many subjects, e.g., 0.0068, which is an unrealistic PLE value for an awake subject. A while ago, I worked with the same subjects, but with a different preprocessing script. That script resulted in realistic values for all subjects, e.g., PLE values of 0.4 to 0.8 for the same ROIs.

The script that produced the “realistic” PLE values is shown below. It is not an ordinary “SSwarper first, afni_proc.py second” script, since I faced massive alignment problems with these clinical subjects; a colleague then offered me this script.


# 3dWarp
3dWarp \
-deoblique \
-prefix Warp_Anatomical_${subject} \
$directory_raw/anat1to3d+orig

3dWarp \
-deoblique \
-prefix Warp_RestingState_${subject} \
$directory_raw/rest1to3d+orig

3dWarp \
-deoblique \
-prefix Warp_Task_${subject} \
$directory_raw/task1to3d+orig

# Align_epi_anat
align_epi_anat.py \
-epi2anat \
-anat Warp_Anatomical_${subject}+orig \
-epi Warp_RestingState_${subject}+orig \
-child_epi Warp_Task_${subject}+orig \
-epi_base 10 \
-edge \
-partial_axial \
-suffix _Align_epi_anat

# Auto warp - nonlinear registration
auto_warp.py \
-base /applications/afni/abin/MNI152_2009_template_SSW.nii.gz \
-input Warp_Anatomical_${subject}+orig \
-skull_strip_input yes

# 3dbucket
3dbucket \
-DAFNI_NIFTI_VIEW=tlrc \
-prefix anat_ns awpy/Warp_Anatomical_${subject}.aw.nii

3dbucket \
-DAFNI_NIFTI_VIEW=tlrc \
-prefix anat_final awpy/anat.un.aff.qw.nii

# copy the affine and nonlinear warp files (destination assumed: $directory)
cp awpy/anat.un.aff.Xat.1D $directory/
cp awpy/anat.un.aff.qw_WARP.nii $directory/
cd $directory/awpy

# 3dNwarpApply - Apply a nonlinear 3D warp from 3dQwarp etc. to a 3D dataset
3dNwarpApply \
-master $directory/anat_ns+tlrc -dxyz 3 \
-source $directory/Warp_RestingState_${subject}_Align_epi_anat+orig \
-nwarp "anat.un.aff.qw_WARP.nii anat.un.aff.Xat.1D" \
-prefix $directory/Warp_Align_Nwarp_RestingState_${subject}

3dNwarpApply \
-master $directory/anat_ns+tlrc -dxyz 3 \
-source $directory/Warp_Task_${subject}_Align_epi_anat+orig \
-nwarp "anat.un.aff.qw_WARP.nii anat.un.aff.Xat.1D" \
-prefix $directory/Warp_Align_Nwarp_Task_${subject}

# AFNI proc
afni_proc.py \
-subj_id ${subject}_Rest \
-out_dir $directory/Results \
-dsets \
	$directory_warp/Warp_Align_Nwarp_RestingState_${subject}+tlrc \
-blocks despike tshift volreg mask blur regress \
-copy_anat $directory_warp/anat_final+tlrc \
-anat_has_skull no \
-tcat_remove_first_trs 4 \
-mask_segment_anat yes \
-mask_segment_erode yes \
-regress_anaticor \
-regress_ROI WMe \
-regress_apply_mot_types demean deriv \
-regress_motion_per_run \
-regress_censor_motion 0.3 \
-regress_censor_outliers 0.05 \
-blur_size 6.0 \
-regress_est_blur_epits \
-regress_est_blur_errts \
-html_review_style pythonic \
-execute


While this script works and resulted in the realistic PLE values named above, Paul Taylor explained to me that it has some potential flaws, and that it is better to return to the original AFNI way of handling things: “SSwarper first, afni_proc.py second”. B-) I am happy with that, since it makes things easier anyway. Paul Taylor then helped me a lot, and we managed to preprocess all subjects “the AFNI way” with very good alignment results. Consequently, the old unorthodox script was no longer needed.

The new approach, first of all, deobliqued all anatomical and functional scans via 3drefit, since the data were very oblique. Second, I ran SSwarper without any extra options; the results were very nice. These results were then forwarded to the following afni_proc.py script:


afni_proc.py \
-subj_id ${subject}_Rest \
-out_dir $directory_rest/Results \
-dsets \
	$directory_deoblique/Rest1_deoblique+orig \
-blocks despike tshift align tlrc volreg mask blur regress \
-copy_anat $directory_sswarper_1/anatSS.$subject.nii \
-anat_has_skull no \
-tcat_remove_first_trs 4 \
-align_opts_aea -cost lpc+ZZ \
-volreg_align_e2a \
-volreg_align_to MIN_OUTLIER \
-volreg_tlrc_warp -tlrc_base MNI152_2009_template_SSW.nii.gz \
-tlrc_NL_warp \
-tlrc_NL_warped_dsets \
	$directory_sswarper_1/anatQQ.$subject.nii \
	$directory_sswarper_1/anatQQ.$subject.aff12.1D \
	$directory_sswarper_1/anatQQ.${subject}_WARP.nii \
-volreg_post_vr_allin yes \
-volreg_pvra_base_index MIN_OUTLIER \
-mask_segment_anat yes \
-mask_segment_erode yes \
-regress_anaticor \
-regress_ROI WMe CSFe \
-regress_apply_mot_types demean deriv \
-regress_motion_per_run \
-regress_censor_motion 0.35 \
-regress_censor_outliers 0.15 \
-blur_size 8.0 \
-regress_est_blur_epits \
-regress_est_blur_errts \
-html_review_style pythonic \
-execute

The anatomical-EPI alignment was very nice again, nothing to complain about here. However, applying the usual 3dPeriodogram (or 3dLombScargle) scripts after the above steps results in meaningless PLE values for many subjects. The frequency range I chose was 0.01-0.25 Hz (my TR = 2 s).

Here are some things I tried in order to solve the problem:

  • Using the same censor levels as before (0.3 and 0.05) instead of 0.35 and 0.15. → No success; the results remain almost identical.
  • Using -cenmode NTRP for interpolation of censored TRs. → No success; the results remain almost identical.
  • Bandpassing the data (0.01-0.2 Hz). → Using 3dPeriodogram, the PLE for subject 1 increases from 0.006 to 122 (not a typo). 122 is incredibly high (and incredibly unrealistic; now we have the other extreme). With 3dLombScargle I even reached a PLE of 742 (long-range temporal correlations back to the dinosaurs?). This is really strange: either we have a dead subject (= PLE values of 0.0x or 0.00x), or the subjects project themselves back to times when AFNI was a prophecy told by old and wise men, long before the computer existed (= PLE values of 122 or 742). :)-D
  • Removing CSFe from -regress_ROI WMe CSFe, since I read on the AFNI program help that regressing CSFe is not really recommended. → No success; the results remain almost identical.

It seems as if the two different preprocessing approaches are responsible for the observed PLE differences in some subjects (e.g., 0.006 vs. 0.45). How is this possible? What exactly drives the PLE values in the first (and rather not recommended) preprocessing script to reasonable values? Conversely, what is happening in the second and recommended preprocessing that drastically diminishes the PLE values for many subjects?

Thanks for any input,
Philipp

Did you visualize some of your power spectra, both before and after taking the log (or log-log, in your case) of them? Are you sure you aren’t including any super-low freqs that are essentially baseline, which might skew your results? A super high exponent for a power spectrum suggests that baseline might be left in your data.

I would try visualizing your data (here, power spectra, etc.) as a first means for QCing it.

I am also not sure what a statistically appropriate way to average PLEs across voxels would be.

–pt

Hi Paul,

let’s start with 3dPeriodogram (and leave 3dLombScargle aside for the moment, just to make things easier). Some facts first:

  • I don’t bandpass the data (via the bandpassing option in afni_proc.py); I avoid direct bandpassing to keep the DOF high.
  • However, once I have transformed the preprocessed time-series into the frequency-domain via 3dPeriodogram, I cut the frequency range via 3dTcat to 0.01-0.25 Hz.
    The lower frequency limit is to avoid scanner noise; the upper frequency limit is the Nyquist frequency. A side question here: the result of this is not the same as directly bandpassing the data via afni_proc.py, correct?
  • This results in artificially low PLE values for a couple of subjects. However, using precisely this method, I previously achieved reasonable results both in another dataset and in the same dataset I am dealing with now, albeit preprocessed with the unorthodox script.

Here is my 3dPeriodogram script. The length of the time-series is 240 time points with a TR of 2 s; therefore, an -nfft value of 512 should be fine.


3dPeriodogram \
-nfft 512 \
-prefix $directory_results_1/restingstate/$subject/PD_$subject \
errts.${subject}_Rest.anaticor+tlrc

Next, I apply a little smoothing, cut the frequency range to 0.01-0.25 Hz, and log the axes. A note here: how do I know that I have to cut the frequency-domain with 3dTcat to the range [10..255]? I used an -nfft value of 512. The frequency steps in the output of 3dPeriodogram are defined by 1/(nfft * TR). In my case, this means 1/(512 * 2) = 0.0009765625 Hz.

The first value in the frequency-domain would consequently be 0.0009765625 Hz. We have 256 steps in the frequency-domain (half the nfft value), and 0.0009765625 * 256 = 0.25. Therefore, the last value in the frequency-domain (“on the very right”) is 0.25 Hz, the Nyquist frequency for a TR of 2 s. Now, since I am interested in the frequency range of 0.01-0.25 Hz, I cut out the first 10 sub-bricks (indices 0-9) in order to keep sub-bricks 10 through 255 (since AFNI starts counting at 0, the last index for 3dTcat is 255).

0.0009765625 * 10 = 0.009765625 Hz. My frequency range consequently starts at ~0.0098 Hz (the next step, 11, would be 0.0107421875 Hz, so I chose the lower one).
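The same bin arithmetic as a small Python sketch (only the bookkeeping, no AFNI calls; note that whether the first output bin of 3dPeriodogram sits at 0*df or 1*df is worth verifying against the actual output):

import numpy as np

nfft, TR = 512, 2.0
df = 1.0 / (nfft * TR)             # frequency resolution: 0.0009765625 Hz
nbins = nfft // 2                  # 256 output bins, up to Nyquist = 0.25 Hz

# Frequencies under the convention used above (bin k -> k*df).
freqs = np.arange(nbins) * df

keep = (freqs >= 0.0097) & (freqs <= 0.25)    # approximate 0.01-0.25 Hz band
print(np.flatnonzero(keep)[[0, -1]])          # -> [ 10 255], i.e. '[10..255]'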


3dTcat \
-prefix BP_PD_${subject} \
PD_${subject}+tlrc'[10..255]'

3dTsmooth \
-hamming 7 \
-prefix Smooth_BP_PD_${subject} \
BP_PD_${subject}+tlrc

3dcalc \
-prefix Log_Smooth_BP_PD_${subject} \
-a Smooth_BP_PD_${subject}+tlrc \
-expr 'log(a)'

Finally, before applying my ROIs, I use 3dfim+ for the linear regression that yields the voxel-based PLE values. The “Log_BP_Ideal_frequency_Rest.1D” file is my manually created one-column ideal frequency file. It starts at 0.0009765625 and goes up to 0.25 Hz. Just like with 3dTcat, 1dcat cut out the first 10 values (leaving the 246 values kept by 3dTcat), and 1deval then applied the logarithm. The frequency values in this .1D file consequently match exactly the frequency steps in the 3dPeriodogram output after it was cut with 3dTcat.
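As a cross-check, the same ideal file could also be generated directly in Python (a sketch mirroring what 1dcat and 1deval produced; the values must match the cut 3dPeriodogram grid exactly):

import numpy as np

df = 0.0009765625                      # 1/(nfft*TR), as derived above
freqs = np.arange(1, 257) * df         # 256 values: df .. 0.25 Hz
freqs = freqs[10:]                     # drop the first 10, as with 3dTcat;
                                       # 246 values remain (see the bin-
                                       # convention caveat in the sketch above)
np.savetxt("Log_BP_Ideal_frequency_Rest.1D", np.log(freqs), fmt="%.10f")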


3dfim+ \
-input Log_Smooth_BP_PD_${subject}+tlrc \
-ideal_file $directory_ideal/Log_BP_Ideal_frequency_Rest.1D \
-out 'Fit Coef' \
-bucket PLE_${subject}

Now I extract all voxel-based PLE values inside a ROI with 3dcalc and compute the mean PLE in this ROI with 3dmaskave.


# 3dcalc
3dcalc \
-a $directory_PD/RestingState/$subject/PLE_${subject}+tlrc \
-b $directory_ROIs/Core.nii \
-expr 'a*b' \
-prefix $directory_results/RestingState/$subject/PLE_SCP_Core_${subject}

# 3dmaskave
3dmaskave \
-mask $directory_ROIs/Core.nii \
-quiet \
PLE_SCP_Core_$subject+tlrc > PLE_SCP_Core_$subject.1D

The resulting average PLE value for this ROI is -0.010082, which is way too low.

Let us now look at the frequency-domain (and its log-log transformation).

  1. I extracted the frequency-domain results from AFNI (the results after 3dTcat and 3dTsmooth) and plotted them with Python, shown below. All computations stem from AFNI; Python is only used for plotting.
  2. I extracted the log-log transformation results from AFNI for the same subject and ROI. These were again plotted with Python.

We can see that a PLE result of -0.01 really does not match the visual impression of the log-log transformation; the slope of the curve is far steeper (as can already be seen in the frequency-domain without the log-log transformation as well). I wonder what is happening here. Does the problem stem from 3dfim+?
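For reference, a minimal sketch of the kind of QC plot used here, assuming the ROI-averaged spectrum and the matching frequency grid were exported to one-column text files (the file names are placeholders):

import numpy as np
import matplotlib.pyplot as plt

power = np.loadtxt("ROI_spectrum.1D")          # ROI-averaged power spectrum
freqs = np.loadtxt("Ideal_frequency_Rest.1D")  # matching frequency grid

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.plot(freqs, power)                         # raw spectrum
ax1.set(xlabel="f [Hz]", ylabel="P(f)", title="Power spectrum")
ax2.plot(np.log(freqs), np.log(power))         # log-log view used for the fit
ax2.set(xlabel="log f", ylabel="log P", title="Log-log")
plt.tight_layout()
plt.show()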

Thanks,
Philipp

Doesn’t 3dfim+ have a default polort value of 1? And there is default thresholding on, too, I think. From the program help:


[-polort pnum]     pnum = degree of polynomial corresponding to the    
                     baseline model  (pnum = 0, 1, etc.)               
                     (default: pnum = 1). Use -1 for no baseline model.
[-fim_thr p]       p = fim internal mask threshold value (0 <= p <= 1) 
                     to get rid of low intensity voxels.               
                     (default: p = 0.0999), set p = 0.0 for no masking.

3dfim+ wasn’t really designed for this application, so I would check the default behavior…

–pt

Hi Paul,

thanks. Changing those values in 3dfim+ does not really change the results, so I will stop using 3dfim+ for this purpose. Instead, since you said that 3dfim+ was not really designed for my aim anyway, I am going to compute the slope in Python from the power spectra extracted from AFNI (whatever the reason is that 3dfim+ “fails” on this particular preprocessing + dataset).
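For completeness, here is a sketch of that replacement, assuming the ROI-averaged log(power) values were exported from AFNI to a one-column text file (that file name is a placeholder; the regressor file is the one from above):

import numpy as np

logf = np.loadtxt("Log_BP_Ideal_frequency_Rest.1D")  # log(frequency) regressor
logp = np.loadtxt("Log_ROI_spectrum.1D")             # ROI-averaged log(power)

# Ordinary least-squares fit; the PLE is the negative of the slope.
slope, intercept = np.polyfit(logf, logp, 1)
print("PLE:", -slope)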

Thanks for your help so far.
Philipp