Most robust motion correction in AFNI

Hey everyone:

I’m going to be running an experiment consisting of 3 runs of fear conditioning acquisition data (8 minutes each), which was originally designed as a single 24-minute run. We made this change for subject comfort and to hopefully decrease the motion that would have been associated with being in the scanner that long in the 24-minute case. I have two questions I wanted to ask.

Firstly, since we are technically treating each run as part of one single acquisition scan, we are going to be concatenating them and will obviously have issues with inter-scan alignment. Secondly, because we use shocks, we have inherent problems with motion within scans anyway, which is even more worrying because we often look at subcortical structures (e.g., the amygdala), for which bad inter- or intra-scan alignment will be problematic.

In past analyses we have used standard AFNI preprocessing pipelines, including 3dvolreg for motion correction and align_epi_anat.py for alignment between T1 and EPI datasets, and we include motion regressors in 3dDeconvolve. Since I’ve essentially been told the 24-minute run is anathema and we’ll most likely end up doing the 3 8-minute runs, my boss asked me whether there are any newer 1) motion correction and 2) inter-scan alignment techniques beyond the tools that we (admittedly) last used a couple of years ago.

So I suppose my question is: given that we’re going to need really tight alignment between scans and want to scrub out as much motion as possible within scans, are there updated protocols or algorithms in AFNI for doing that, or are 3dvolreg and align_epi_anat.py still the standard, even in cases such as this?

Thanks for your insight!
Lauren

I think the quick answer is: use afni_proc.py, and your life will be pretty easy. You can load in a set of FMRI runs and process them together, with concatenation and motion correction done properly. The primary afni_proc.py example in the AFNI Bootcamp demo data set demonstrates processing a set of 3 FMRI runs together; see the “Boot up” step here:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/bootcamp_stuff.html


AFNI_data6/FT_analysis/s05.ap.uber

Rick has even put in automatic QC to help you look at the data at its various processing stages, evaluate alignment, see how much motion/censoring there was, etc.
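For concreteness, here is a minimal sketch of passing the three runs to afni_proc.py together (the subject ID and dataset names below are placeholders, not taken from the Bootcamp script):

```shell
# Hedged sketch: all file names here are placeholders.
# afni_proc.py builds a single processing script that handles
# the 3 runs together (concatenation, per-run bookkeeping,
# motion correction across all runs in one pipeline).
afni_proc.py                                           \
    -subj_id   subj01                                  \
    -copy_anat anat+orig                               \
    -dsets     epi_run1+orig epi_run2+orig             \
               epi_run3+orig                           \
    -blocks    tshift align volreg blur mask           \
               scale regress
```

Without -execute, afni_proc.py just generates the proc script, which you can review before running; that is a good habit when trying new options.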

–pt

Hi Lauren,

To go along with Paul’s useful comments, it would probably be good to use -regress_motion_per_run, along with ‘-volreg_align_to MIN_OUTLIER’. Those are both suggested as defaults at this point.
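A hedged sketch of how those two options would slot into an afni_proc.py command (all other options elided):

```shell
# Fragment to add to an afni_proc.py command (sketch):
#   -volreg_align_to MIN_OUTLIER : register all volumes to the
#       volume with the fewest outlier voxels, a robust base
#       (rather than, e.g., the first or last volume)
#   -regress_motion_per_run : build separate motion regressors
#       for each run instead of one set spanning all runs
    -volreg_align_to        MIN_OUTLIER                \
    -regress_motion_per_run                            \
```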

  • rick

Great, thank you both for your replies. I’ve seen strange techniques in other software packages, and it’s nice to be able to use a pipeline I’m familiar with rather than employ corrections when I’m not sure how they work.

Appreciate your time. Thank you.

We enthusiastically share those sentiments… :slight_smile:

While you are probably already doing so, to be sure, consider using both -regress_censor_motion and -regress_censor_outliers.
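For example (the threshold values below are common illustrative choices, not prescriptions; pick them for your own data):

```shell
# Censoring fragment for an afni_proc.py command (sketch):
#   -regress_censor_motion 0.3 : censor any TR pair whose
#       Euclidean norm of the motion-parameter derivative
#       exceeds 0.3 (units of mm/deg)
#   -regress_censor_outliers 0.05 : censor any TR in which
#       more than 5% of brain voxels are outliers
    -regress_censor_motion    0.3                      \
    -regress_censor_outliers  0.05                     \
```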

  • rick