3dRSFC

Hello,

My question has to do with 3dRSFC.

I have resting state fMRI data from a single subject.

I then used the following command:
3dRSFC -prefix PREFIX 0.01 0.1 INPUT_NAME

What this does is act as a bandpass filter, filtering out frequencies that are too high or too low. It then outputs the ALFF, fALFF, mALFF, RSFA, and some other parameters.

The output of this, however, was a set of values, one assigned to each voxel. I was expecting to get a single value (an average over all the voxels), but instead the program assigns a value to each voxel. My question now is: how would I get statistics in order to make a comparison between different outputs? A paper I read said the following, which still has me confused: "In our previous work, the ALFF of each voxel was divided by the individual global mean of ALFF within a brain-mask, which was obtained by removing the tissues outside the brain using software MRIcro "

Where does one get a global mean? Is that just a brain with the skull extracted, which can be obtained with BET from FSL?
Even then, how do I do this, given that I have a bunch of output files that are images of the brain and not a single value?

Lastly, the next step in this paper was:
“The individual data was transformed to Z score (i.e., minus the global mean value and then divided by the standard deviation) other than simply being divided by the global mean.”
Again, how would I do this? It sounds really straightforward, but it’s so confusing to me.

ps. I’m trying to follow the flowchart presented by FATCAT here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/FATCAT/FATCAT_All.html.
But I’m still confused as to how they obtained the statistics. How would I obtain values which I can compare in a paper? And would I have to perform a t-test on them?

Any assistance would be greatly appreciated! I’ve been struggling with this for quite some time.

Thanks in advance.

Hi, Sondos-

My question has to do with 3dRSFC.
I have resting state fMRI data from a single subject.
I then used the following command:
3dRSFC -prefix PREFIX 0.01 0.1 INPUT_NAME
What this does is act as a bandpass filter, filtering out frequencies that are too high or too low. It then outputs the ALFF, fALFF, mALFF, RSFA, and some other parameters.

Agreed. For each time series, a frequency-based parameter is calculated. Note that at present, this will assume that the time series have not been censored, as the Fourier-based bandpassing assumes equal durations between time points (i.e., in FMRI, a single TR between points, whereas censoring would effectively put multiple TRs between non-censored points). On a technical note, 3dRSFC is basically a wrapper around 3dBandpass functionality.

The output of this, however, was a set of values, one assigned to each voxel. I was expecting to get a single value (an average over all the voxels), but instead the program assigns a value to each voxel.
It is a voxelwise calculation-- each time series is Fourier transformed into the frequency domain, and then the various quantities are calculated.

My question now is: how would I get statistics in order to make a comparison between different outputs?
In a group setting, one can use voxelwise approaches like t-tests, for example. You said you have just a single subject? What is the aim of the study/analysis?

A paper I read said the following, which still has me confused: "In our previous work, the ALFF of each voxel was divided by the individual global mean of ALFF within a brain-mask, which was obtained by removing the tissues outside the brain using software MRIcro "
That would be the mALFF quantity, which is already automatically calculated within the masked region of the dataset.

Where does one get a global mean? Is that just a brain with the skull extracted?
Yep. Is your data brain-extracted already? You could input a “-mask …”, which would probably be preferable to using “-automask” on resting state data (as the values are often around zero anyway, so the automasking might be pretty rough).
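For example, the mask could be supplied like this (the mask dataset name here is just a hypothetical stand-in):

3dRSFC                    \
    -prefix PREFIX        \
    -mask   mask+orig     \
    0.01 0.1              \
    INPUT_NAME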

Which can be obtained with BET from FSL?
Well, that wouldn’t be my first-choice function… but the main thing would be to make a brain mask, yes. If you’re using afni_proc.py to process your FMRI data (highly recommended!), then there would likely be a brain mask created during the processing.
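If you do need to make one by hand, a minimal sketch with AFNI tools might look like the following-- the anatomical dataset name is hypothetical, and the resulting mask would likely still need resampling to the EPI grid (e.g., with 3dresample):

# skullstrip the anatomical, then binarize it to make a mask
3dSkullStrip -input anat+orig -prefix anat_ss
3dcalc -a anat_ss+orig -expr 'step(a)' -prefix brain_mask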

Even then, how do I do this, given that I have a bunch of output files that are images of the brain and not a single value?
You still get voxelwise values; but again, mALFF is automatically calculated already.

Lastly, the next step in this paper was:
“The individual data was transformed to Z score (i.e., minus the global mean value and then divided by the standard deviation) other than simply being divided by the global mean.”
Again, how would I do this? It sounds really straightforward, but it’s so confusing to me.

So, I looked up the paper from the quotes, and I believe it is Zou et al. (2008, J Neurosci Meth). Initially, I thought the intended Z-transform was the Fisher Z transform for variables, but that isn’t the case (which is good, because the Fisher Z would have problems for all of the quantities described here). In the Zou et al. paper, they refer to the fact that they transformed each quantity x_i to its normalized Z_i value as:
Z_i = (x_i - {mean of all xs}) / {stdev of all xs}.

Here is an executable tcsh script to do that, for example with a dataset of resting state ALFF values and a whole brain mask:


#!/bin/tcsh

# user input sets
set my_dset = "REST_filt_ALFF+orig"
set my_mask = "mask+orig"

# calculate summary stats within brain mask
set dset_mean  = `3dBrickStat        \
                    -mean            \
                    -mask $my_mask   \
                    $my_dset`

set dset_stdev = `3dBrickStat        \
                    -stdev           \
                    -mask $my_mask   \
                    $my_dset`

echo "\n\n"
echo "The mean of values within the mask is:   $dset_mean"
echo "The stdev of values within the mask is:  $dset_stdev"
echo "\n"

# get new dset of values
3dcalc \
    -echo_edu \
    -a $my_dset \
    -b $my_mask \
    -expr "step(b)*(a - $dset_mean) / ($dset_stdev)" \
    -prefix NEW_DSET_Z

You could copy this text into a file called, say, “do_zcalc.tcsh”, set whatever file names are appropriate for your data sets, and then execute it as: tcsh do_zcalc.tcsh.

ps. I’m trying to follow the flowchart presented by FATCAT here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/FATCAT/FATCAT_All.html.
… a dangerous pastime.

But I’m still confused as to how they obtained the statistics. How would I obtain values which I can compare in a paper? And would I have to perform a t-test on them?
To be honest, this part would confuse me as well if there is only one subject involved in the study. More info would be required. But this is separate from the technical points of calculating quantities-- it’s much easier for me to offer advice on the calculating, rather than on the interpreting!

Any assistance would be greatly appreciated! I’ve been struggling with this for quite some time.
Just remember- you’re never alone with the AFNI Message Board.

–pt

Thank you for your helpful response. Initially I was trying to run it for one subject but realized I should probably try to run it for multiple subjects.

Hi, Sondos-

OK, just to be clear: 3dRSFC should be run on each individual subject’s time series (and not, say, on some concatenation). But for doing statistics and comparisons, doing voxelwise comparisons across/between groups makes sense, I think.
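As a rough sketch of that per-subject processing (the subject directories and file names here are hypothetical stand-ins):

#!/bin/tcsh

# run 3dRSFC separately on each subject's processed time series
foreach sub ( sub01 sub02 sub03 )
    3dRSFC                           \
        -prefix ${sub}/RSFC          \
        -mask   ${sub}/mask+tlrc     \
        0.01 0.1                     \
        ${sub}/errts+tlrc
end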

–pt

Ah, I see. So even though I have multiple subjects, I would have to run 3dRSFC individually on each and then do between-group comparisons?

Yes, it would be run on each, providing a parameter value per voxel (e.g., ALFF), which could be used for t-test comparisons, for example.
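For instance, a voxelwise between-group comparison of the ALFF maps (once all subjects are in standard space) might be sketched as follows-- the dataset and mask names are hypothetical, and “3dttest++ -help” describes many more options:

3dttest++                                   \
    -setA   groupA_sub*_ALFF+tlrc.HEAD      \
    -setB   groupB_sub*_ALFF+tlrc.HEAD      \
    -mask   group_mask+tlrc                 \
    -prefix ttest_ALFF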

–pt

Hey ptaylor,
I have tried to use the script you posted above to do a Z-standardization of ALFF values in AFNI. I don’t quite get if you use all images or just one here… I have adapted the script in the following way:


#!/bin/bash

PREPROC_DATA=/mypath/to/preprocessed/data
PROGS_DIR=/mypath/to/scripts
MASK_DIR=/mypath/to/masks

# generate a txt file with all data paths of ALFF BRIK files calculated
# using 3dRSFC (NB: '>' truncates the txt file on each pass through the
# loop, so only the last subject's path is kept)
for sub in ${PREPROC_DATA}/*/01-session/rs-fMRI/SLOMOCO5/ALFF_fns_0.01_0.08_striatum/RSFC_fns_ALFF+tlrc.BRIK; do
    echo $sub > ${PROGS_DIR}/ALFF_analysis/alff_fns_for_z.txt
done

# user input sets
cd ${PROGS_DIR}/ALFF_analysis
my_dset=`cat alff_fns_for_z.txt`
my_mask=${MASK_DIR}/striatum-structural-2mm-bin.nii
echo $my_mask

# calculate summary stats within brain mask
dset_mean=`3dBrickStat -mean -mask $my_mask $my_dset`
dset_stdev=`3dBrickStat -stdev -mask $my_mask $my_dset`

echo -e "\n\n"
echo "The mean of values within the mask is:   $dset_mean"
echo "The stdev of values within the mask is:  $dset_stdev"
echo -e "\n"

# get new dset of values
3dcalc \
    -echo_edu \
    -a $my_dset \
    -b $my_mask \
    -expr "step(b)*(a - $dset_mean) / ($dset_stdev)" \
    -prefix NEW_DSET_Z


The summary stats (mean, std) should of course be calculated on the basis of the whole dataset. But when I apply 3dcalc, the output NEW_DSET_Z is a 3D file (although I would assume it to be 4D, as every subject’s ALFF image should be z-standardized, right?). How could I adapt the script so that every subject’s ALFF image is z-standardized based on the cohort’s mean and std?
Best, Melissa

Hi, Melissa-

Are you wanting to take N subjects’ 3D ALFF volumes, stack them together into an N-by-3D dataset, and then calculate the voxelwise normalized “Z” value of each, based on the voxelwise mean and standard deviation?

There are different ways to slice the data, and that matters for order of operations and program(s) used.
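If that is the goal, one possible sketch would be the following (the dataset names are hypothetical, and all volumes are assumed to already be on the same grid):

#!/bin/tcsh

# stack the per-subject 3D ALFF volumes along the 4th dimension
3dTcat -prefix all_ALFF sub*_ALFF+tlrc.HEAD

# voxelwise mean and stdev across subjects
3dTstat -mean  -prefix grp_mean all_ALFF+tlrc
3dTstat -stdev -prefix grp_std  all_ALFF+tlrc

# voxelwise Z values for one subject; the mask keeps the division
# away from the ~zero stdevs outside the brain
3dcalc                                      \
    -a sub01_ALFF+tlrc                      \
    -b grp_mean+tlrc                        \
    -c grp_std+tlrc                         \
    -d mask+tlrc                            \
    -expr 'step(d)*(a-b)/c'                 \
    -prefix sub01_ALFF_Z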

-pt