Longitudinal resting state analysis

Dear AFNI experts,

We collected resting-state data from our participants before and after they performed a task (with a between-subjects experimental manipulation). Each resting-state run lasts 10 minutes. We are interested in changes in resting-state functional connectivity between the MTL and the midbrain, whether there are group differences in these changes, and whether they relate to performance in the task. The task is to rate movie clips. The task and resting-state data were acquired with the same scanning parameters; we have already processed the task data and would like to use the same processing for the resting-state data. We implemented the code shown below, but with this procedure I only get one final errts* file. How can I then use that file to look at changes in functional connectivity between pre- and post-task rest? Or would I have to specify the afni_proc.py command differently to achieve this? If not, we are also interested in using both the pre- and post-task resting-state data for other analyses, so it would be beneficial to have a separate errts* file for each run.

Any suggestions are highly appreciated.
Many thanks,
Stef

#!/bin/tcsh
source ~/.cshrc

module load afni19.2.07

# --------------------------------------------------------------------
# Script: s.2016_ChenEtal_02_ap.tcsh
#
# From:
# Chen GC, Taylor PA, Shin Y-W, Reynolds RC, Cox RW (2016). Untangling
# the Relatedness among Correlations, Part II: Inter-Subject
# Correlation Group Analysis through Linear Mixed-Effects
# Modeling. Neuroimage (in press).
#
# Originally run using: AFNI_16.1.16
# --------------------------------------------------------------------
#
# FMRI processing script, ISC movie data.
#
# Assumes previously run FS and SUMA commands, respectively:
#   $ recon-all -all -subject $subj -i $anat
#   $ @SUMA_Make_Spec_FS -sid $subj -NIFTI

# set top level directory structure

set topdir = /storage/shared/research/cinn/2018/MAGMOT #study folder
echo $topdir
set task = restingstate
set fsroot = $topdir/derivatives/FreeSurfer
set outroot = $topdir/derivatives/$task/afniproc

# define subject list

set BIDSdir = $topdir/MAGMOT_BIDS

cd $BIDSdir
set subjects = ( `ls -d sub*` ) # this creates an array containing all subjects in the BIDS directory
echo $subjects
echo $#subjects

#set subjects = ( sub-control001 sub-control002 sub-control003 sub-experimental004 sub-experimental005 sub-experimental006)
#set subjects = ( sub-control001 sub-control002 sub-control003)
set subjects = ( sub-control001 )
echo $#subjects

# for each subject in the subjects array

foreach subj ($subjects)

#set subj	= "sub-experimental005"
echo $subj

# define PWD in which the script should get saved
cd $topdir/derivatives/$task/afniproc

# Input directory: unprocessed FMRI data
set indir   = $BIDSdir/$subj/func

set fsindir = $fsroot/$subj/SUMA


# Output directory: name for output
set outdir  = $outroot/$subj

# Input data: list of partitioned EPIs (resting state)
set epi_dpattern = $indir"/"${subj}"_task-rest_run-*_bold.nii.gz"
echo $epi_dpattern

# Input data: FreeSurfer results (anatomy, ventricle and WM maps)
# all these files are in the ../derivatives/FreeSurfer/$SUBJ_ID/SUMA directory
set fsanat = ${subj}"_SurfVol.nii"
set fsvent = FSmask_vent.nii
set fswm   = FSmask_WM.nii

# specify actual afni_proc.py
afni_proc.py -subj_id $subj.$task					\
    -blocks despike tshift align tlrc volreg blur mask regress          \
    -copy_anat $fsindir/$fsanat                                         \
    -anat_follower_ROI aaseg  anat $fsindir/aparc.a2009s+aseg_rank.nii  \
    -anat_follower_ROI aeseg  epi  $fsindir/aparc.a2009s+aseg_rank.nii  \
    -anat_follower_ROI FSvent epi  $fsindir/$fsvent                     \
    -anat_follower_ROI FSWMe  epi  $fsindir/$fswm                       \
    -anat_follower_erode FSvent FSWMe                                   \
    -dsets $epi_dpattern                                                \
    -tcat_remove_first_trs 0                                            \
    -tlrc_base /usr/share/afni/atlases/MNI152_T1_2009c+tlrc             \
    -tlrc_NL_warp                                                       \
    -volreg_align_to MIN_OUTLIER                                        \
    -volreg_align_e2a                                                   \
    -volreg_tlrc_warp                                                   \
    -regress_ROI_PC FSvent 3                                            \
    -regress_make_corr_vols aeseg FSvent                                \
    -regress_anaticor_fast                                              \
    -regress_anaticor_label FSWMe                                       \
    -regress_censor_motion 0.3                                          \
    -regress_censor_outliers 0.1                                        \
    -regress_apply_mot_types demean deriv                               \
    -regress_est_blur_epits                                             \
    -regress_est_blur_errts                                             \
    -regress_run_clustsim no                                            \
    -regress_polort 5   # default: 1 + floor(run_length / 150.0); run_length = 600 s


# execute script
tcsh -xef proc.$subj.$task |& tee output.proc.$subj.$task

end

Dear AFNI experts,

To follow up on this topic: given that all my resting-state scans (pre- and post-task) have a duration of 600 s (TR = 2 s, hence 300 volumes) and that the final errts* file has 600 volumes, would it be possible to simply split this file into separate files for pre- and post-learning rest by selecting the first and second 300 sub-bricks, respectively, and saving them? Can these then be used to calculate changes in resting-state functional connectivity between two ROIs?

I am looking forward to hearing your thoughts on this.
Many thanks,
Stef

Stef,

The script you showed seems to automatically concatenate the multiple datasets. You can deal with each run separately by modifying the code slightly. For example, suppose the two resting-state runs are called *_task-rest_run-1_bold.nii.gz and *_task-rest_run-2_bold.nii.gz. Try something like

foreach subj ($subjects)

    foreach run (1 2)

        # Input data: one resting-state EPI run at a time
        set epi_dpattern = $indir"/"${subj}"_task-rest_run-${run}_bold.nii.gz"

        # specify actual afni_proc.py, with a per-run subject ID
        # (all other options as in the original script)
        afni_proc.py -subj_id $subj.${task}.${run} ...

        # execute script
        tcsh -xef proc.$subj.${task}.${run} |& tee output.proc.$subj.${task}.${run}

    end
end

given that all my resting-state scans (pre- and post-task) have a duration of 600 s (TR = 2 s, hence 300 volumes) and that the final errts* file has 600 volumes, would it be possible to simply split this file into separate files for pre- and post-learning rest by selecting the first and second 300 sub-bricks, respectively, and saving them? Can these then be used to calculate changes in resting-state functional connectivity between two ROIs?

This is fine too, even though the result might differ slightly from the approach of processing each run separately.

Hi Stef,

Sorry for being so slow to reply to this.

Yes, you can separate the final errts dataset into two pieces using 3dTcat. It should not be necessary to run afni_proc.py once for each run (depending on your answer to the following question).

Did the subjects leave the scanner between the two runs? If not, this seems okay. If they did, it would be better to allow for more flexibility in aligning the 2 runs.

To be sure, do you plan to compute correlations on each run separately? Or do you plan to correlate run 1 with run 2, say? The latter might be problematic due to (different) censoring between the runs. Correlating one run at a time should be reasonable.

  • rick
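For reference, here is a minimal sketch of the 3dTcat split mentioned above. The errts prefix shown (errts.${subj}.${task}.fanaticor, which is what afni_proc.py typically writes with -regress_anaticor_fast) and the output names are assumptions; check the actual file name in your results directory and adjust the prefix and view accordingly.

# assumed names: adjust to match your afni_proc.py results directory
set subj  = sub-control001
set task  = restingstate
set errts = errts.${subj}.${task}.fanaticor+tlrc

# run 1 = first 300 volumes, run 2 = last 300 volumes (TR = 2 s, 600 s per run)
3dTcat -prefix errts.${subj}.${task}.run1 ${errts}'[0..299]'
3dTcat -prefix errts.${subj}.${task}.run2 ${errts}'[300..599]'

Note that censored time points are usually kept as zero-filled volumes in the errts dataset, so they would still need to be excluded (e.g. using the censor_*_combined_2.1D file in the results directory) before computing correlations.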

Hi Rick, hi Gang,

Thanks for your replies.

The participants did not leave the scanner between the two resting-state runs, but there was a 45-minute task (including breaks) in between, so their head position may have differed between the runs.

I am planning to compute the connectivity (i.e., correlation) between two ROIs separately within run 1 and run 2, and then look at the changes in resting-state functional connectivity.

What would you recommend: processing both runs separately, or splitting the errts* file?

Again, I appreciate your guidance.
Best wishes,
Stef

What would you recommend: processing both runs separately, or splitting the errts* file?

Either one is fine. I would not fret about it. The difference is whether you minimize the total sum of squares separately for each run or together when solving the regression model by least squares, and it would not make much of a difference most of the time. If you're really concerned, try both and see the differences yourself.
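As a closing illustration of the per-run ROI-to-ROI correlation discussed above, here is a minimal sketch. The mask names (MTL_mask, midbrain_mask) are hypothetical placeholders, assumed to be on the same grid and in the same space as the errts data, and the run-wise errts prefixes follow the 3dTcat example earlier in the thread.

# hypothetical ROI masks, resampled to the errts grid beforehand
set roi_mtl = MTL_mask+tlrc
set roi_mid = midbrain_mask+tlrc

foreach run (1 2)
    # average time series within each ROI
    3dmaskave -quiet -mask $roi_mtl errts.${subj}.${task}.run${run}+tlrc > mtl_run${run}.1D
    3dmaskave -quiet -mask $roi_mid errts.${subj}.${task}.run${run}+tlrc > mid_run${run}.1D

    # Pearson correlation between the two ROI time series
    1dCorrelate -pearson mtl_run${run}.1D mid_run${run}.1D
end

The per-run correlations could then be Fisher z-transformed and the run 2 minus run 1 difference taken to the group level.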