counting censored TRs in enorm.1D file

Dear AFNI experts,

Because I want to use wavelet despiking on my resting-state data set, I have so far run only some preprocessing blocks on my data to get an idea of how bad the motion is. I have the xxx.enorm.1D and outcount.xxx.1D files as output.
What I am trying to do now is to calculate, for each subject: 1) how many TRs would be censored based on the enorm values if I apply a certain threshold (e.g., 0.3), and 2) how many TRs would be censored based on the outlier fractions if I apply a certain threshold (e.g., 0.05).

I looked through the documentation for 1d_tool.py, but unfortunately I still don't understand what the right command would be.
For the enorm.1D file I tried: 1d_tool.py -infile xxx.enorm.1D -set_nruns 1 -quick_censor_count 0.3

but the number that I get is lower than the number of values above 0.3 in the enorm.1D file. I would also like the previous TRs to be censored as well. What is the right command to use?

Thank you very much in advance!
Carolin

Hi-

I looked at the script that afni_proc.py creates for some insight into this. In particular, I looked at the AFNI Bootcamp script for the big Tuesday session on using afni_proc.py, which is AFNI_data6/FT_analysis/s05.ap.uber. Running that script creates a file called proc.FT, and in that commented script, the first part of the "auto block: outcount" section has useful suggestions for creating the outlier censor files, while the "regress" block has some helpful lines for dealing with the enorm values and for combining the censor files. I am not sure whether you have multiple runs, but this afni_proc.py command does (it has 3), so part of the work of creating the censoring files is concatenating information from the separate runs.

++ For the outcount file:
In the first foreach loop, 3dToutcount is run once per run to create the outcount.r*.1D files. Those already exist in your results directory, so you don't need to recreate them. I will leave out the part that checks for pre-steady-state TRs, and I will include the creation of the "runs" variable from earlier in the script (basically, specifying that there are 3 runs of data). That leaves:


# set list of runs: AFNI's count program generates "01 02 03"
set runs = (`count -digits 2 1 3`)

# make a per-run outlier censor file: "1-step(a-0.05)" evaluates to 1
# (keep) when the outlier fraction is <= 0.05, and to 0 (censor) otherwise
foreach run ( $runs )
    1deval -a outcount.r$run.1D -expr "1-step(a-0.05)" > rm.out.cen.r$run.1D
end

# catenate outlier censor files into a single time series
cat rm.out.cen.r*.1D > outcount_${subj}_censor.1D

In the above, the 0.05 value is the outlier-fraction threshold, which you could set differently. The outcount_${subj}_censor.1D file is the output censor file; it gets combined with the enorm-based censor file later (see below).
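
As a quick check for your question 2), you could count how many TRs that file flags for censoring (a minimal sketch using standard shell tools; in these censor files, 0 means "censor this TR" and 1 means "keep it"):


# number of TRs censored by the outlier criterion
awk '$1 == 0' outcount_${subj}_censor.1D | wc -l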

++ For the motion/enorm censoring:
This is the command that creates motion_${subj}_enorm.1D, motion_${subj}_CENSORTR.txt, and motion_${subj}_censor.1D:


# create censor file motion_${subj}_censor.1D, for censoring motion 
1d_tool.py -infile dfile_rall.1D -set_nruns 3                            \
    -show_censor_count -censor_prev_TR                                   \
    -censor_motion 0.3 motion_${subj}

Note that the input file is dfile_rall.1D, with the additional information of the number of runs present. The "-censor_motion 0.3" option is what sets the enorm motion censor limit, and "-censor_prev_TR" additionally censors the TR before each one that goes over the limit, which is what you asked about. I think you could do something similar for your processing, rather than just using the enorm file itself.
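
If you have only a single run, a minimal variant of that command (assuming your motion parameter file is also called dfile_rall.1D) would be:


1d_tool.py -infile dfile_rall.1D -set_nruns 1                            \
    -show_censor_count -censor_prev_TR                                   \
    -censor_motion 0.3 motion_${subj}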

++ For combining the censoring:
The next command after that uses 1deval to combine the two censor files:


# combine multiple censor files
1deval -a motion_${subj}_censor.1D -b outcount_${subj}_censor.1D         \
       -expr "a*b" > censor_${subj}_combined_2.1D

How is that? It becomes a lot simpler if you only have one run, but I left in the more general case here.
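
From that combined file, counting the censored TRs is then straightforward (a minimal sketch; again, 0 marks a censored TR):


# censored and total TR counts from the combined censor file
set ncen = `awk '$1 == 0' censor_${subj}_combined_2.1D | wc -l`
set ntot = `wc -l < censor_${subj}_combined_2.1D`
echo "censored $ncen of $ntot TRs"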

--pt

Hi Carolin,

In addition to Paul’s useful comments, note that for motion, you probably have a concatenated dfile from all runs of 3dvolreg output, dfile_rall.1D.

To quickly count the number of time points that would be censored, there is the more direct -quick_censor_count option to 1d_tool.py, as in Example 18 of its help output:

1d_tool.py -infile dfile_rall.1D -set_nruns 3 -quick_censor_count 0.3

or if the run lengths vary:

1d_tool.py -infile dfile_rall.1D -set_run_lengths 100 80 120 -quick_censor_count 0.3

Those commands could be dropped into a "foreach" loop, as in:

foreach maxm ( 0.2 0.3 0.5 0.7 0.8 1.0 )
  set ncen = `1d_tool.py -infile dfile_rall.1D -set_nruns 3 -quick_censor_count $maxm`
  echo "at limit $maxm, ncensored = $ncen"
end

Combining this with the outlier censoring would be more work, perhaps computing the full censor files each time, as Paul described. But enorm censoring should put you in a sufficient ballpark. This method could be used to make an easy spreadsheet across subjects, as in the sketch below.
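
For instance, a minimal sketch of such a per-subject table (the subject IDs and the $subj.results directory layout are only illustrative here):


foreach subj ( sub01 sub02 sub03 )
  set ncen = `1d_tool.py -infile $subj.results/dfile_rall.1D -set_nruns 3 -quick_censor_count 0.3`
  echo "$subj $ncen"
end


Redirecting that output to a text file gives one line per subject that could be pasted into a spreadsheet.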

- rick

That makes sense and it worked! I have what I needed!
Thank you very much Paul and Rick!

Best
Carolin