resting state afni_proc.py error

Dear AFNI experts,

Hope you all are staying safe. I downloaded unprocessed resting-state data from the Human Connectome Project, which provides 4 scans per participant (Session 1 L-to-R, Session 1 R-to-L, Session 2 L-to-R, Session 2 R-to-L). These 4 EPI datasets were concatenated using 3dTcat and submitted to afni_proc.py as below.

afni_proc.py \
-blocks despike align tlrc volreg blur mask regress \
-copy_anat /Volumes/LAVA/"$subj"/anat/"$subj"_anat+orig \
-anat_follower_ROI aaseg anat /Volumes/LAVA/"$subj"/anat/aparc.a2009s+aseg.nii \
-anat_follower_ROI aeseg epi /Volumes/LAVA/"$subj"/anat/aparc.a2009s+aseg.nii \
-anat_follower_ROI FSvent epi /Volumes/LAVA/"$subj"/anat/"$subj"vent.nii \
-anat_follower_ROI FSWm epi /Volumes/LAVA/"$subj"/anat/"$subj"wm.nii \
-anat_follower_erode FSvent FSWm \
-dsets /Volumes/LAVA/"$subj"/trunc_rest_"$subj"+orig \
-out_dir Proc_results \
-script py_result_"$subj"_script \
-scr_overwrite \
-subj_id "$subj" \
-align_opts_aea -cost lpc+ZZ -giant_move \
-tlrc_base MNI152_T1_2009c+tlrc \
-tlrc_NL_warp \
-tlrc_NL_awpy_rm no \
-tlrc_no_ss \
-volreg_align_to MIN_OUTLIER \
-volreg_align_e2a \
-volreg_tlrc_warp \
-volreg_warp_dxyz 2.0 \
-regress_apply_mask \
-regress_ROI_PC FSvent 3 \
-regress_make_corr_vols aeseg FSvent \
-regress_anaticor_fast \
-regress_anaticor_label FSWm \
-regress_censor_motion 0.2 \
-regress_censor_outliers 0.1 \
-regress_apply_mot_types demean deriv \
-regress_est_blur_epits \
-regress_est_blur_errts

tcsh -xef py_result_"$subj"_script |& tee output.py_result_"$subj"_script

@ linenum++
end

In the middle of the afni_proc.py run, I got an error that I have never seen before (below).

** FATAL ERROR: -polort value can't be over 20 :(

Could you please let me know what caused this issue and how to solve this?

Best,
JW

Hi, JW-

Out of curiosity, how many time points is that in total?

I don't think you should pre-concatenate the time series; that might throw off modeling, because the breaks between the separate runs will be unknown (e.g., for potential baseline jumps).
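Note that the -polort error is most likely a side effect of the concatenation: afni_proc.py picks the polynomial baseline order automatically from the run duration, roughly 1 + floor(duration/150 s), so a single pre-concatenated "run" of 3000 s or longer pushes -polort past the limit of 20. As a quick check with AFNI's ccalc, using a hypothetical duration:

ccalc -int '1 + int(3200/150)'    # a 3200 s run -> polort = 22, over the limit of 20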

I would also recommend adding “-html_review_style pythonic” in order to have the more informative and more aesthetically pleasing Python-based output QC HTML.

Also, are the EPI distortions strong in different directions? I would have thought it would be difficult to combine the data like that.

Note that there are reverse-phase-encoding (=reverse blip-driven) distortion correction features integrated into afni_proc.py. See here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/programs/afni_proc.py_sphx.html#example-13-complicated-me-surface-based-resting-state-example
and here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/programs/afni_proc.py_sphx.html#blip-note

–pt

Hi pt,

Thank you so much for the suggestion. Below is the 3dinfo output for the concatenated EPI data.

Dataset File: trunc_rest_100307+orig
Identifier Code: AFN_snRM4ak1fqyJqtzOpLs2Ag Creation Date: Mon Nov 16 21:55:36 2020
Template Space: ORIG
Dataset Type: Echo Planar (-epan)
Byte Order: LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode: BRIK
Storage Space: 12,896,133,120 (13 billion) bytes
Geometry String: "MATRIX(1.998754,0.032776,0.062524,-96.2307,0.005691,-1.840412,0.782849,39.6296,-0.070364,0.782183,1.839358,-110.4278):90,104,72"
Data Axes Tilt: Oblique (23.121 deg. from plumb)
Data Axes Approximate Orientation:
first (x) = Right-to-Left
second (y) = Posterior-to-Anterior
third (z) = Inferior-to-Superior [-orient RPI]
R-to-L extent: -96.231 [R] -to- 81.769 [L] -step- 2.000 mm [ 90 voxels]
A-to-P extent: -166.370 [A] -to- 39.630 [P] -step- 2.000 mm [104 voxels]
I-to-S extent: -110.428 [I] -to- 31.572 [S] -step- 2.000 mm [ 72 voxels]
Number of time steps = 4784 Time step = 0.72000s Origin = 0.00000s
-- At sub-brick #0 '100307_3T_rfMRI_[4]' datum type is float: 0 to 3595
-- At sub-brick #1 '100307_3T_rfMRI_[5]' datum type is float: 0 to 3643
-- At sub-brick #2 '100307_3T_rfMRI_[6]' datum type is float: 0 to 3617
** For info on all 4784 sub-bricks, use '3dinfo -verb' **

----- HISTORY -----
[lab@Administrators-Mac-Pro.local: Mon Nov 16 21:55:36 2020] {AFNI_19.1.18:macos_10.12_local} 3dTcat -prefix trunc_rest_100307 '100307_3T_rfMRI_REST1_LR.nii.gz[4..$]' '100307_3T_rfMRI_REST1_RL.nii.gz[4..$]' '100307_3T_rfMRI_REST2_LR_rsm+orig[4..$]' '100307_3T_rfMRI_REST2_RL_rsm+orig[4..$]'

I chose to combine the EPI data based on a previous paper, which said: 'Each subject was scanned for two resting sessions (REST 1 and REST 2) over a period of two days. During each session, data were collected using both the left-right (LR) and right-left (RL) phase-encoding runs. For each subject, we concatenated the data (REST 1 LR, REST 1 RL, REST 2 LR, REST 2 RL) into a 3,456 seconds time-course (containing 4800 time points, with TR = 720ms).' (https://www.nature.com/articles/s41598-019-40345-8)

Based on your suggestion, I can think of two options to run this dataset.

  1. Run afni_proc.py separately on each scan (REST1 LR, REST1 RL, REST2 LR, REST2 RL) and include each in the group analysis
  2. Use -blip_forward_dset in the afni_proc.py runs for REST1 and REST2, and include REST1 and REST2 in the group analysis

Could you please let me know if either will work?

Best,
JW

Hi JW,

It would be conceivable to analyze all four of these runs. But indeed, it seems most reasonable to separate them.

  1. Combining them into one long run would not work well. Not only are there unmodeled run breaks, but nothing would be done to register the different distortions. The volreg step would not produce a good result, and correlations would be registration-based, rather than BOLD-based.

  2. Combining across sessions is conceivable, but there will also be distortion differences there, which could still paint a misleading picture with the correlations.

  3. Combining within session (L-R and R-L) would be similarly precarious, because they might not align well enough (assuming there are distortion differences).

At any rate, it seems conceivable to at least pre-process each run separately. You could use R-L for distortion correction of L-R data, say, but you would want to be careful, since correction volumes might be pretty far separated, temporally. Maybe you could choose 10 of each that are close in time for the 2 blip datasets. That would provide some distortion correction for each set, even as each set goes through independent preprocessing.
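As a rough sketch (hypothetical dataset names; the [0..9] selectors take 10 volumes from each encoding direction to form the blip pair):

afni_proc.py ... \
    -dsets rest1_LR+orig \
    -blip_forward_dset 'rest1_LR+orig[0..9]' \
    -blip_reverse_dset 'rest1_RL+orig[0..9]' \
    ...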

Also, the voxels are all 2 mm isotropic. Has some registration step already been run, or are those really the original dimensions?

  • rick

Hi Rick,

Thank you so much for the thoughtful comments. These really help. I'll pre-process each run separately. I only ran 3dTcat, so those should be the original dimensions.

Could you please clarify one more thing? After running the preprocessing separately, would it make sense to include each run in the group analysis? For example, in 3dttest++, would the below be possible?

-setA ConditionA
Subj1_REST1LR /subject1_REST1LR
Subj1_REST1RL /subject1_REST1RL
Subj1_REST2LR /subject1_REST2LR
Subj1_REST2RL /subject1_REST2RL
.
.
.

-setB ConditionB
Subj1_REST1LR /subject1_REST1LR
Subj1_REST1RL /subject1_REST1RL
Subj1_REST2LR /subject1_REST2LR
Subj1_REST2RL /subject1_REST2RL

Best,
JW

Hi JW,

To be sure: would these be 4 separate (maybe Fisher-transformed) seed-based correlation volumes? They might almost be repeated measures.

In any case, maybe we can try to get Gang to chime in on this aspect.

  • rick

Hi Rick,

Thank you for the clarification. Right, I meant Fisher-transformed seed-based correlation volumes (computed from the errts files). I was wondering if these 4 separate scans for each subject can be submitted to 3dttest++ or 3dLME and treated as one run for each subject, or should I normalize them into one.
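For reference, each volume was computed roughly along these lines (a sketch with hypothetical names):

3dmaskave -quiet -mask seed_roi+tlrc errts."$subj"+tlrc > seed_ts.1D
3dTcorr1D -prefix corr_"$subj" errts."$subj"+tlrc seed_ts.1D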

Best,
JW

I was wondering if these 4 separate scans for each subject can be submitted to 3dttest++ or 3dLME and can be treated as one run for each subject or should I normalize them into one.

What is your inferential hypothesis about the 4 runs? Do you want to compare the 4 runs?

Hi Gang,

No, I don't want to compare the 4 runs. Each subject was scanned in two resting-state sessions (REST 1 and REST 2) over a period of two days. During each session, data were collected using both the left-right (LR) and right-left (RL) phase-encoding runs. I want to treat the 4 runs (REST1 LR, REST1 RL, REST2 LR, REST2 RL) as 1 run. I initially tried to concatenate the 4 EPI files into 1 file prior to afni_proc.py, but pt and Rick suggested running afni_proc.py on each separately. So after preprocessing I'll have 4 Fisher-transformed seed-based correlation volumes per subject (REST1 LR, REST1 RL, REST2 LR, REST2 RL), and I was not sure how to submit them to the group analysis. I was wondering if it is possible that the group analysis programs (3dttest++ or 3dLME) take multiple runs per subject, or do they only take one run per subject. Could you please clarify this?

Best,
JW

I was wondering if it is possible that group analysis programs (3dttest++ or 3dLME) take multiple runs per subject, or do they only take one run per subject. Could you please clarify this?

Yes, 3dMVM is the most straightforward approach with something like

3dMVM ... \
  -bsVars '1' \
  -dataTable \
  Subj InputFile \
  S1 s1_data1 \
  S1 s1_data2 \
  S1 s1_data3 \
  S1 s1_data4

Hmm… this would only give you an F-stat for the intercept. So, do this:

  1. get the average across the four volumes per subject using 3dMean or 3dcalc
  2. run a one-sample t-test with 3dttest++ (sketched below)
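For example (a sketch with hypothetical file names, assuming the Fisher-transformed maps already exist):

# step 1: average the four Fisher z maps per subject
3dMean -prefix zmean_s1 z_s1_REST1LR+tlrc z_s1_REST1RL+tlrc \
       z_s1_REST2LR+tlrc z_s1_REST2RL+tlrc

# step 2: one-sample t-test across subjects on the averaged maps
3dttest++ -prefix group_rest -setA allSubj \
    s1 zmean_s1+tlrc \
    s2 zmean_s2+tlrc ...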

Hi Gang,

Thanks again for the suggestion. So are you suggesting normalizing the four errts files using 3dMean or 3dcalc before submitting them to either 3dttest++ or 3dLME?

Best,
JW

you are suggesting normalizing the four errts files using 3dMean or 3dcalc before submitting them to either 3dttest++ or 3dLME?

No. I assume that you've already performed the seed-based correlation analysis. If that's the case, apply the Fisher transform to the output, and then follow the two steps I mentioned earlier. Does that make sense?
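For example, with 3dcalc (hypothetical names; atanh() is the Fisher z-transform):

3dcalc -a corr_s1_REST1LR+tlrc -expr 'atanh(a)' -prefix z_s1_REST1LR

Then repeat for each of the four runs, average per subject with 3dMean, and run the one-sample t-test with 3dttest++ as above.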