3dPeriodogram and the Fast Fourier Transform: nfft question

Dear all,

I have a question concerning 3dPeriodogram and its use for computing the Fast Fourier Transform (FFT), from which I subsequently calculate the Power-Law Exponent (PLE).

Question 1: What value (or range of values) is reasonable for the -nfft option? The number of time steps in the runs is 360 and 1180, respectively. Bandpassing is 0.02 to 0.2 Hz for both runs.
Question 2: How should I adjust my FFT ideal file (the file which contains one column of values) for my FFT script?

The problem is that my FFT and final PLE calculations lead to values that are far too low, both in single subjects and when averaging over more than 20 subjects. For example, the average PLE for one ROI at rest is 0.0045, while it should be around +1 or higher. This is the case for both single-voxel and ROI-based average PLE values.
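For clarity, the PLE I am referring to is the exponent p in the standard power-law model of the power spectrum (this is the textbook formulation, nothing specific to my scripts):

P(f) = a\,f^{-p} \quad\Longrightarrow\quad \log P(f) = \log a - p\,\log f

That is, the PLE is (minus) the slope of a straight line fitted in log-log coordinates, which is why values like 0.0045 cannot be right when the expected exponent is around +1.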

However, when I inspect the results of the FFT via the AFNI GUI, the graph looks fine (it really does look like a proper transform of the time series into its frequency domain).

I am not sure whether this is due to incorrectly set values for the nfft and/or the range (here 4..95) that was chosen from the ideal file for the FFT. Please let me know what you think. Any input is welcome, since I have been stuck on this problem for weeks now.

Philipp

My FFT script follows below:

Fast Fourier Transform (FFT) - Transformation of the time-series into its frequency domain

directory=/volumes/sandisk/fmri/dataset/subjects
directory_PLE=/volumes/sandisk/fmri/dataset/info
for subject in Subject1 Subject2 Subject3 Subject4 Subject5 Subject7 Subject8 Subject9 Subject10 Subject11 Subject12 Subject13 Subject14 Subject15 Subject16 Subject17 Subject18 Subject19 Subject20 Subject21 Subject22 Subject23 Subject24 Subject25
do
mkdir -p $directory/$subject/FFT_PLE_RestingState
for fMRIruns in errts.${subject}_Rest.anaticor+tlrc
do
cd $directory/$subject/Preprocessing_RestingState
echo "Processing $subject ..."

# Periodogram of the residual time series
3dPeriodogram \
    -nfft 192 \
    -prefix $directory/$subject/FFT_PLE_RestingState/FFT.$fMRIruns \
    $fMRIruns

# Build the log(frequency) regressor from the ideal file (rows 4..95)
1deval \
    -a $directory_PLE/FFT_ideal.1D'{4..95}' \
    -expr 'log(a)' \
    > $directory/$subject/FFT_PLE_RestingState/FFT.1D

done
done

Logarithm of Amplitude/Power, log(P) (y-axis)

directory=/volumes/sandisk/fmri/dataset/subjects
for subject in Subject1 Subject2 Subject3 Subject4 Subject5 Subject7 Subject8 Subject9 Subject10 Subject11 Subject12 Subject13 Subject14 Subject15 Subject16 Subject17 Subject18 Subject19 Subject20 Subject21 Subject22 Subject23 Subject24 Subject25
do
for fMRIruns in FFT.errts.${subject}_Rest.anaticor+tlrc
do
cd $directory/$subject/FFT_PLE_RestingState
echo "Processing $subject ..."

# Keep only the sub-bricks covering the passband (indices 4..95)
3dTcat \
    -prefix BP.$fMRIruns \
    $fMRIruns'[4..95]'

# Smooth the periodogram with a 7-point Hamming window
3dTsmooth \
    -hamming 7 \
    -prefix Smooth.BP.$fMRIruns \
    BP.$fMRIruns

# Take the negative logarithm of the smoothed power
3dcalc \
    -prefix Log_Y.Smooth.BP.$fMRIruns \
    -a Smooth.BP.$fMRIruns \
    -expr '-log(a)'

done
done
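One note on the sign convention, since it matters when reading the later fit output: with the -expr '-log(a)' step above, the power-law model becomes

-\log P(f) = p\,\log f - \log a

so regressing the transformed spectrum against log(f) should give a fit coefficient near +p (i.e., around +1 for typical resting-state data), while a negative coefficient would mean the smoothed power is rising with frequency over the selected index range.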

Linear Regression Line between log(F) and log(P)

directory=/volumes/sandisk/fmri/dataset/subjects
for subject in Subject1 Subject2 Subject3 Subject4 Subject5 Subject7 Subject8 Subject9 Subject10 Subject11 Subject12 Subject13 Subject14 Subject15 Subject16 Subject17 Subject18 Subject19 Subject20 Subject21 Subject22 Subject23 Subject24 Subject25
do
for fMRIruns in Log_Y.Smooth.BP.FFT.errts.${subject}_Rest.anaticor+tlrc
do
cd $directory/$subject/FFT_PLE_RestingState
echo "Processing $subject ..."

# Regress -log(P) on log(f); the fit coefficient is the PLE estimate
3dfim+ \
    -input $fMRIruns \
    -ideal_file FFT.1D \
    -out 'Fit Coef' \
    -bucket PLE.$fMRIruns
done
done

Here is an update that breaks the problem down into something more comprehensible.

I used the .errts file (the preprocessed output of a resting-state run) as input to calculate both the FFT and the PLE. In my first post in this thread I described the problem that the PLE values always turned out to be way too low (both for single subjects and for the average over all subjects).

Then, I read AFNI’s 3dPeriodogram information here once again: https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dPeriodogram.html
AFNI states that “* Tapering is done with the Hamming window (if taper > 0):
Define npts = number of time points analyzed (<= nfft)
(i.e., the length of the input dataset)”

As far as I understand, the -nfft value should match the number of time points in my input file. In the case of the .errts file just mentioned, that would mean 356 time points and consequently an -nfft value of 356. Here are the results for different -nfft values:

nfft 356 (exact match with the number of time points in the input file):

  • Average PLE inside a mask: -12.3254 [24200 voxels]

Here the PLE is far too large in magnitude, so I decreased the nfft value from 356 to 250 and got the following:

nfft 250:

  • Average PLE inside a mask: -3.50108 [24200 voxels]

The second value seems more reasonable. However, a major problem remains: I need a rationale for choosing a specific -nfft value, as it would amount to some sort of cherry-picking if I just kept adjusting the nfft value until I got a reasonable PLE. Or would you disagree?

My question now is: what nfft value should be chosen in general? Should the nfft value simply match the number of time points of the input file for both the FFT and the subsequent PLE calculation? And if that does not work, i.e., the PLE results still seem way off, does that mean the problem lies somewhere else?
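As a sanity check on what a given -nfft implies: the frequency resolution of the periodogram is df = 1/(nfft*TR), and the sub-brick indices covering the 0.02-0.2 Hz passband scale with nfft*TR. A minimal sketch (plain awk; the TR of 2.0 s is a hypothetical placeholder, the real value comes from 3dinfo -tr):

# assumed TR in seconds (hypothetical -- check with: 3dinfo -tr <dataset>)
TR=2.0
nfft=356

# frequency resolution: df = 1/(nfft*TR)
awk -v n=$nfft -v tr=$TR 'BEGIN{printf "df = %.5f Hz\n", 1/(n*tr)}'

# approximate first/last sub-brick indices for the 0.02-0.2 Hz band
# (possibly off by one depending on whether the DC component is kept)
awk -v n=$nfft -v tr=$TR 'BEGIN{printf "band = [%d..%d]\n", 0.02*n*tr, 0.20*n*tr}'

Whatever index range corresponds to the passband therefore has to be recomputed for every nfft; a selector like '{4..95}' chosen for nfft 192 no longer marks the same frequencies at nfft 356 or 250.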

Thank you,

Philipp

These are hard questions to answer, for a couple of reasons.
First, it has been a LONG time since I used periodograms seriously (say 1985).
Second, since I don’t have access to your data, I’m flying blind here.

Here’s my first stab at suggestions, in no particular order:
[ul]
[li]Look at the outputs of 3dPeriodogram to see what could be causing the weirdness in the fit.[/li]
[li]Consider using 3dTfitter with the -L1 option to do a least-absolute-sum fit rather than a least-squares fit – this will reduce the impact of "outliers" in the periodogram "data" (see the sketch after this list).[/li]
[li]More intricate: use 1dNLfit to fit the power-law decay formula "a*f^(-p)" directly, rather than using the log transformation to make a linear fitting problem. Fitting the desired curve directly is usually preferable to transforming the problem to make the regression linear. However, 1dNLfit only operates on 1D text files, as it is pretty slow to run on a whole 3D dataset – so you'll either have to ROI-average the periodograms first, or extract the periodograms from an ROI, run 1dNLfit many times, and then average.[/li]
[/ul]
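To make the 3dTfitter suggestion concrete, here is an untested sketch; the input names are simply taken from your scripts above, the output prefix is a placeholder, and the exact option names should be confirmed against 3dTfitter -help:

# Untested sketch: L1 fit of the -log(P) dataset against the log(f) regressor.
# Log_Y.Smooth.BP.FFT.errts.Subject1_Rest.anaticor+tlrc and FFT.1D are the
# outputs of the earlier scripts; PLE_L1.Subject1 is a placeholder prefix.
3dTfitter \
    -RHS Log_Y.Smooth.BP.FFT.errts.Subject1_Rest.anaticor+tlrc \
    -LHS FFT.1D \
    -polort 0 \
    -l1fit \
    -prefix PLE_L1.Subject1

Here -polort 0 supplies the constant baseline (the log a term), -l1fit requests the least-absolute-sum solution, and the sub-brick holding the FFT.1D coefficient plays the same role as the 'Fit Coef' output of 3dfim+.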
I don’t think it is “nfft” itself which is your problem, unless indirectly you happen to be cutting off some wacky data when you shrink nfft.

Something else you might consider is doing global signal regression during the pre-processing and eliminating the bandpassing.
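If you go that route, a rough sketch of the idea (dataset names are placeholders, and in practice you would fold the regressor into your existing nuisance regression, e.g. via afni_proc.py, rather than run a second projection):

# Rough sketch: build a global-signal regressor and project it out
# without bandpassing (brain_mask+tlrc and rest_input+tlrc are placeholders)
3dmaskave -mask brain_mask+tlrc -quiet rest_input+tlrc > global_signal.1D

3dTproject \
    -input rest_input+tlrc \
    -ort global_signal.1D \
    -polort 2 \
    -prefix GSR.rest_input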