Length of rest and task?

This might be a somewhat amateurish question. I have resting-state and task datasets for some subjects, and I want to compare the two in a rest-vs-task analysis of their power-law exponents. TR = 2 s for both. But the resting-state data has 148 time points and the task data has 330 (110 time points for each of the 3 runs). Can I compare those two (148 vs 330 time points), or should I select 148 of the 330 task time points (probably from somewhere in the middle, with a command like 3dbucket -prefix cooltask longtask+tlrc'[115-262]')?

Thanks,
Yasir

Hi, Yasir-

To be clear, are you intending to calculate the power spectrum of each, and then calculate the power-law exponent of that? Otherwise, I am not sure what set of quantities you are fitting with a power-law fit.

–pt

Yes. To be precise:

  • Take the power spectrum of the fMRI signal (FFT ==> 3dPeriodogram)
  • Smooth it with a Hamming window of width 15/11/7 (not sure which one to use yet)
  • Plot it against the corresponding frequencies
  • Take logarithms of both the power spectrum and the frequencies to make the relationship more linear-ish
  • Fit a linear regression line to the logarithms
  • The slope of that line is the power-law exponent (see the sketch below)

Like this paper: https://www.hindawi.com/journals/bn/2017/2824615/
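
In numpy terms, here is a rough sketch of that pipeline on a single extracted time series (the window width, the fitted frequency range, and the sign convention are all choices I haven't settled yet; this is just an illustration, not what 3dPeriodogram itself does):

    import numpy as np

    # toy "voxel" time series: 148 points at TR = 2 s
    rng = np.random.default_rng(0)
    ts = rng.standard_normal(148)
    TR = 2.0

    # 1) power spectrum of the demeaned signal (a plain periodogram;
    #    3dPeriodogram has its own options and normalizations)
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(len(ts), d=TR)
    power = np.abs(np.fft.rfft(ts))**2 / len(ts)

    # 2) smooth with a Hamming window (width 7 here; 15/11/7 are the candidates)
    w = np.hamming(7)
    power_smooth = np.convolve(power, w / w.sum(), mode='same')

    # 3-5) take logs of both axes (skipping the f=0 term) and fit a line;
    #      the slope is the power-law exponent (negative for 1/f-like spectra)
    keep = freqs > 0
    slope, intercept = np.polyfit(np.log10(freqs[keep]),
                                  np.log10(power_smooth[keep]), 1)
    print("fitted log-log slope (power-law exponent):", slope)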

Oh, I guess I read your question wrong. What I'm doing is:

  • Calculate PLE of rest
  • Calculate PLE of task
  • Compare those

Can we generalize an answer to this issue, not just for PLE but for any measure compared between resting and task data?

Hi, Yasir-

Thanks for both those explanations-- that states everything quite clearly.

I think you could reasonably compare the power spectral slopes of runs of different lengths. The main consequence of the different run lengths (at the same TR) of your rest and task dsets will be having different numbers of points along the frequency axis, at slightly different locations.

The Nyquist frequency (maximal frequency) will be: 1/(2*TR).
The number of frequency points in [0, Nyquist] will be: ~N/2, with the approximation depending on whether N is even or odd.
The spacing along the frequency axis will be: ~[1/(2*TR)] / [N/2] = 1/(N*TR).

Effectively, you will have more points to fit in the longer run-- but that fact should show up in the plus/minus (uncertainty) of the fitted linear slope.
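
As a quick numeric check with the numbers from this thread:

    # TR = 2 s for both; rest has N = 148, the concatenated task has N = 330
    TR = 2.0
    for label, N in [("rest", 148), ("task (concatenated)", 330)]:
        nyquist = 1.0 / (2 * TR)   # = 0.25 Hz, same for both
        df = 1.0 / (N * TR)        # frequency spacing
        print(f"{label}: Nyquist = {nyquist} Hz, spacing = {df:.5f} Hz, "
              f"~{N // 2} frequency points")

    # rest: spacing ~ 0.00338 Hz; task: spacing ~ 0.00152 Hz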

Note that having 3 task runs, you have a choice: you could concatenate them and take the FT of the resulting longer run, or you could take the FT of each run separately and average their spectra. The upper frequency (Nyquist) is the same in each case-- concatenating gives you more frequencies between [0, Nyquist], each with larger uncertainty, while averaging gives you fewer frequencies with smaller uncertainty. Since you are going to use a windowing function, the result should really be pretty similar, I would think. (A sketch of both options follows.)
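
A minimal numpy sketch of those two options on extracted time series (toy arrays standing in for your three 110-point runs; this uses a plain periodogram, not 3dPeriodogram itself):

    import numpy as np

    def periodogram(ts, tr):
        """Plain periodogram of a demeaned time series."""
        ts = np.asarray(ts, dtype=float)
        ts = ts - ts.mean()
        freqs = np.fft.rfftfreq(len(ts), d=tr)
        power = np.abs(np.fft.rfft(ts))**2 / len(ts)
        return freqs, power

    rng = np.random.default_rng(0)
    run1, run2, run3 = (rng.standard_normal(110) for _ in range(3))

    # Option 1: concatenate, then transform -> finer frequency grid,
    # noisier individual points
    f_cat, p_cat = periodogram(np.concatenate([run1, run2, run3]), tr=2.0)

    # Option 2: transform each run, then average the spectra -> coarser
    # grid, smaller variance per point (a Welch-like average)
    f_run, _ = periodogram(run1, tr=2.0)
    p_avg = np.mean([periodogram(r, tr=2.0)[1] for r in (run1, run2, run3)],
                    axis=0)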

Separate question: have any of these time series been censored? If so, you effectively have non-uniform sampling, and the classical FT assumptions no longer hold. But you can use the Lomb-Scargle (yes, real name) transformation as a generalization, as long as the censoring is effectively random (that is, not every other time point, or something like that). Then you could use 3dLombScargle to generate the power spectrum.
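
For what it's worth, on a single extracted 1D time series you could sketch the same idea with scipy; note that scipy.signal.lombscargle expects angular frequencies (this is an illustration, not what 3dLombScargle does internally):

    import numpy as np
    from scipy.signal import lombscargle

    # toy censored series: 148 samples at TR = 2 s, ~10% randomly dropped
    rng = np.random.default_rng(0)
    t_all = np.arange(148) * 2.0
    keep = rng.random(148) > 0.1
    t = t_all[keep]
    y = rng.standard_normal(keep.sum())
    y = y - y.mean()

    # evaluate on the frequency grid the uncensored run would have had
    freqs = np.arange(1, 148 // 2 + 1) / (148 * 2.0)   # Hz
    pgram = lombscargle(t, y, 2 * np.pi * freqs)       # wants rad/s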

–pt

Hi Paul, thanks for the detailed and fast response.

I didn't use censoring, only despiking. I don't know how to deal with censored time points (PLE isn't the only measure; I will also be doing Lempel-Ziv complexity, fractional standard deviation, etc.), so I tried to be picky and choose subjects with as little motion as possible.

As an extension of that, can we generalize your answer? Can I get away with different lengths for LZC and fSD? Is there a rule of thumb for that?

Yasir

Hi, Yasir-

I am not sure how the Lempel-Ziv complexity could be calculated here. From looking at (a somewhat poorly worded entry in) wikipedia, it looks like it relates to a binary sequence. I am not sure that applies to FMRI (time series or power).

For fSD, that is also something I am not familiar with. How do you see the errors and multiple components/entries there? That is, how are you parcellating your problem, and assigning the sub-errors of each?

Also, I am curious how these quantities relate to brain imaging-- what biological or data-interpretational meaning do they have? Sorry, I just have no experience with them.

–pt

Hi Paul,

LZC: It looks complicated on paper, but it's actually simple and fun once you get the hang of it. Suppose you have this time series:

0.5, 0.1, 0.1, 0.4, 0.5, 0.9, 5, 0.23, 999

First, you take the median of it (for thresholding)

0.1, 0.1, 0.23, 0.4, 0.5, 0.5, 0.9, 5, 999
The median is 0.5.

Second, we round the values to make them a binary sequence (binary in this case is arbitrary; you could actually make it trinary (is that a word?) or even septary (definitely not a word)). The values that are equal to or bigger than the median are rounded to 1s, and those lower than the median are rounded to 0s:

0.5, 0.1, 0.1, 0.4, 0.5, 0.9, 5, 0.23, 999 ==> 1 0 0 0 1 1 1 0 1

After that, we use a separator.

1 | 0 0 0 1 1 1 0 1 ==> LZC= 1

We look at the values on the right and check whether they already appear on the left. It might sound tricky, but look at the example and it will become clearer:

1 | (0) 0 0 1 1 1 0 1 ==> LZC= 1

1 0 | 0 0 1 1 1 0 1 ==> LZC = 2 (0 was new so we upped the LZC)

1 0 | (0) 0 1 1 1 0 1 ==> LZC = 2 (0 was included on the left so LZC remains the same)

1 0 | (0 0) 1 1 1 0 1 ==> 1 0 0 0 | 1 1 1 0 1 ==> LZC = 3 (0 0 was new so we upped the LZC)

1 0 0 0 | (1) 1 1 0 1 ==> LZC = 3 (1 was included. LZC remains the same)

1 0 0 0 | (1 1) 1 0 1 ==> (1 1 is new) ==> 1 0 0 0 1 1 | 1 0 1 ==> LZC = 4

1 0 0 0 1 1 | (1) 0 1 ==> (1 is included; LZC remains the same)

1 0 0 0 1 1 | (1 0) 1 ==> (1 0 is included (the first two numbers), so LZC remains the same)

1 0 0 0 1 1 | (1 0 1) ==> 1 0 1 is new, increase LZC ==> 1 0 0 0 1 1 1 0 1 ==> LZC = 5

Basically, this is how LZC is calculated. If you want to eliminate the effect of the length of the signal, you can normalize LZC by dividing it by its asymptotic upper bound, n/log_a(n); a is the number of distinct symbols in your transformed time series (2 in our case) and n is the number of time points. That way you can compare LZC values across different lengths. But this normalization method doesn't seem very convincing to me for some reason. It tickles the skeptic inside me.
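
Here is a short Python sketch of the counting (using the LZ-style parsing where the search window may overlap into the current phrase; conventions differ a bit between papers, so treat it as an illustration):

    import numpy as np

    def lzc(ts):
        """Median-binarize a time series, then count LZ phrases."""
        ts = np.asarray(ts, dtype=float)
        med = np.median(ts)
        s = ''.join('1' if v >= med else '0' for v in ts)
        n = len(s)
        c, i = 1, 1        # the first symbol is always a new phrase
        while i < n:
            k = 1
            # extend the current phrase while it can still be copied
            # from earlier text (window overlaps into the phrase itself)
            while i + k <= n and s[i:i + k] in s[:i + k - 1]:
                k += 1
            c += 1         # extension failed (or data ran out): new phrase
            i += k
        return c

    ts = [0.5, 0.1, 0.1, 0.4, 0.5, 0.9, 5, 0.23, 999]
    n = len(ts)
    print(lzc(ts))                      # 5, matching the walkthrough above
    print(lzc(ts) / (n / np.log2(n)))   # normalized by the n/log_a(n) bound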

For a really fun use of LZC: A guy compared pop lyrics from different time periods: https://pudding.cool/2017/05/song-repetition/

For the case of using LZC in biomedical signaling: https://www.researchgate.net/publication/242745630_Application_of_the_Lempel-Ziv_complexity_measure_to_the_analysis_of_biosignals_and_medical_images and https://www.researchgate.net/publication/6723694_Interpretation_of_the_Lempel-Ziv_Complexity_Measure_in_the_Context_of_Biomedical_Signal_Analysis

For implementation of this method to fMRI signal: https://www.biorxiv.org/content/10.1101/2020.06.11.106476v1

For an implementation (and many other cool measures) in MATLAB: https://github.com/SorenWT/dynameas (It would be really cool to have this in AFNILAND. Currently, I am extracting the signal using 3dmaskave and doing the further stuff in MATLAB. Maybe a program called 3dLZC? 8-))

The interpretation of LZC: This is a tricky one. The authors from the link above think it can be used to calculate the complexity (duh) of the fMRI signal, that it differs between so-called core and periphery regions, and that it changes from rest to task. And the median frequency (the frequency that separates the power spectrum into two equal halves) modulates the complexity. I have some hypotheses too, but they are hypotheses at best (right now).

fSD: This is a simpler one. SD is the good old standard deviation. How can an SD be fractional? Take the SD of the signal in the slow-3 frequency band and divide it by the SD over the whole frequency band. That is the fSD of slow-3 (and you can do the same for whatever frequency band you want).
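
In Python it might look like this (the slow-3 band edges of ~0.073-0.198 Hz are the values I have seen quoted, but double-check against whichever paper you follow):

    import numpy as np

    def fsd(ts, tr, band=(0.073, 0.198)):
        """SD of the band-limited signal divided by SD of the full signal."""
        ts = np.asarray(ts, dtype=float)
        ts = ts - ts.mean()
        n = len(ts)
        freqs = np.fft.rfftfreq(n, d=tr)
        spec = np.fft.rfft(ts)
        inband = (freqs >= band[0]) & (freqs <= band[1])
        band_ts = np.fft.irfft(np.where(inband, spec, 0), n=n)
        return band_ts.std() / ts.std()

    rng = np.random.default_rng(0)
    print(fsd(rng.standard_normal(148), tr=2.0))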

Interpretation: It gives you a proxy of the brain signal's adaptability (and, if it is too high, instability). For a healthy case: https://www.jneurosci.org/content/31/12/4496

For a pathological case: https://www.researchgate.net/publication/328563566_Opposing_patterns_of_neuronal_variability_in_the_sensorimotor_network_mediate_cyclothymic_and_depressive_temperaments

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4855585/#si1

So that’s all I guess. Sorry for the really long post. I got carried away and wrote a lot. And thank you.

Yasir

Hi, Yasir-

Sorry for the delay in replying to this. Since I really know nothing about these quantities, I was trying to read a bit more about them. Your example is nice, too, thanks.

One thing to note is that there are apparently 2 different ways of defining LZC: LZ-76 and LZ-78 (I think the numbers denote papers from 1976 and 1978, respectively). They produce different dictionaries of terms and different complexity values-- this was something that was not explained so clearly everywhere online (what? the web isn't always clear and precise??!), but this article:
https://www.mitpressjournals.org/doi/pdf/10.1162/089976604322860677
in particular helped clarify the distinction a bit. For example, I think that what you describe in your example is more along the lines of LZ-76.

This is probably something that could be added to AFNI, but I have a bit of a backlog of things to do at this moment…

The fSD that you describe sounds to me basically like fALFF-- see for example the Appendix here:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3621593/
(though it was derived in other papers). This is basically a quantity that describes the relative contribution of a frequency band to the overall sum of amplitudes-- something akin to (but not the same as) the relative power in a band. AFNI's 3dRSFC will also calculate fRSFA, because RSFA is analogous to ALFF (just the L2 norm of the frequency amplitudes instead of the L1 norm). A small sketch of the L1-vs-L2 distinction follows.
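
Schematically, for a single time series (this is just to show the L1-vs-L2 distinction, not the actual 3dRSFC code; the band is a typical low-frequency example range):

    import numpy as np

    def falff_frsfa(ts, tr, band=(0.01, 0.1)):
        """Fractional L1 (ALFF-like) and L2 (RSFA-like) band ratios."""
        ts = np.asarray(ts, dtype=float)
        ts = ts - ts.mean()
        freqs = np.fft.rfftfreq(len(ts), d=tr)
        amps = np.abs(np.fft.rfft(ts))
        inband = (freqs >= band[0]) & (freqs <= band[1])
        falff = amps[inband].sum() / amps.sum()                     # L1 ratio
        frsfa = np.sqrt((amps[inband]**2).sum() / (amps**2).sum())  # L2 ratio
        return falff, frsfa

    rng = np.random.default_rng(0)
    print(falff_frsfa(rng.standard_normal(148), tr=2.0))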

–pt

Hi Paul,

Thanks for the reply. The paper you sent about LZC is very interesting. And ALFF is very interesting too. I have discarded the idea of fSD for now, because it would take waaaaay too much statistical stuff with all the other measures and states I have, but I definitely want to operationalize it in the future. Until then, best of luck; I will go back to lurking in the shadows.

Yasir