ACF files from afni_proc.py

Dear AFNI experts,

I am new to AFNI and am trying to work out how to do clustering at the group level correctly.

From the handouts related to this topic, I learned that afni_proc.py automatically estimates the smoothness of my data. In the $subj.results folder, there is a subfolder “files_ACF” containing an out.3dFWHMx.ACF.errts.r0*.1D file for each of my three runs. All three files have 4 columns, but the number of rows varies between subjects (and sometimes between runs). So my first question is: what do the rows and columns represent?

Secondly, is there a mask used by default to obtain these smoothness values? If so, which mask would that be?

The handout further states that the average of the 3 ACF parameters should be computed across subjects. Again, I am not entirely sure which of the four columns represent the 3 ACF parameters. Further, given that I currently have separate values for my three runs, would I first average within a run (over all rows), then within a subject (over all three runs), and then across all subjects? Or should I take the values for “blur estimates (ACF)” from out.ss_review.$subj and simply average them? These values seem to be the same as the ones saved in the blur_est.$subj.1D file; however, they don’t match the output saved in the blur.errts.1D file.

If I were interested in performing ROI analyses, would I have to redo the smoothness estimation with 3dFWHMx, using the errts.$subj.fanaticor file as input and my ROI (e.g., bilateral hippocampus) as the mask, so that smoothness is estimated only within that ROI, and then use the same mask in the 3dClustSim specifications? Would I also have to repeat this procedure for each of my a priori ROIs? In a similar manner, if I use a GM mask in my group-level analysis, would I have to separately estimate the smoothness within my GM mask?

I am looking forward to your clarifications.

Best regards,
Stef

Hi Stef,

The files_ACF files show the autocorrelation values at different radii. Column 0 is the radius, and columns 1-3 are the computed ACF, the modeled ACF (so these two should be close), and the Gaussian ACF (the old style). There is one file per run and per type (errts or epits, say), along with the corresponding PNG files that show the same functions as images.

For example, you could run either of these to plot them for the run 1 errts:

1dplot -one "files_ACF/out.3dFWHMx.ACF.errts.r01.1D[1..3]"
aiv files_ACF/out.3dFWHMx.ACF.errts.r01.1D.png

Yes, there should be a mask. Look for the -mask option in the 3dFWHMx command in the actual proc script. It is probably full_mask or mask_epi_anat.
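
For example, you could grep around the 3dFWHMx command to see which mask was passed (this assumes the default script name of proc.$subj; adjust to whatever afni_proc.py actually wrote out):

grep -A3 3dFWHMx proc.$subj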

The per-subject ACF parameters are first stored in the blur_est*.1D file, and are then captured in the @ss_review_basic output, which is stored in the file out.ss_review*.txt. For example, run:

grep ACF out.ss_review*.txt
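
The matching line should look something like this (the numbers here are just made up):

    blur estimates (ACF)        : 0.72 2.63 12.35

where the three numbers are the a, b, c ACF parameters that eventually go to 3dClustSim via its -acf option.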

If you have many subject directories (sub*, such as sub-001), with the afni_proc.py *.results directories under them, you can average all of the ACF parameters as is done in our complete processing example, AFNI_demos/AFNI_pamenc/AFNI_02_pamenc/global_process_outline.txt:

grep -h ACF sub*/*.results/out.ss*.txt | awk -F: '{print $2}'   \
        | 3dTstat -mean -prefix - 1D:stdin\'
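
The trailing \' transposes the table, so that each ACF parameter becomes one row with one value per subject, which 3dTstat then averages. If you would rather avoid the 1D transpose trick, a plain-awk equivalent would be something like:

grep -h ACF sub*/*.results/out.ss*.txt | awk -F: '
    { split($2, v, " "); for (i=1; i<=3; i++) s[i] += v[i]; n++ }
    END { printf "%g %g %g\n", s[1]/n, s[2]/n, s[3]/n }'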

Or, just to look at the individual values first, start with:

grep -h ACF sub*/*.results/out.ss*.txt
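
Once you have the averaged a, b, c values, they would go to 3dClustSim along with your group mask, something like this (the mask name and numeric values here are just placeholders):

3dClustSim -mask group_mask+tlrc -acf 0.72 2.63 12.35 \
           -athr 0.05 -pthr 0.001 -prefix ClustSim.ACF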

For a list of pre-defined ROIs, it might be okay to use the same smoothness values. They are approximated as being global, and I am not sure how reasonable it would be to compute them over multiple small regions. If you are interested only in the ROIs, a whole-brain analysis may not be the most reasonable choice; you could consider one of Gang’s new Bayesian approaches instead.

  • rick