Tactical/methods question on volume registration with inter-run head shift

Hello AFNI minds,

So since the beginning, I’ve always concatenated the runs of a multi-run task (usually two) very early in pre-processing, and then used cumulative times in the 3dDeconvolve model, along with the -concat option and its run-break text file, so that (IIRC) 3dDeconvolve will model out the run effects.
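For concreteness, the model call looks roughly like this for three 150-TR runs (dataset names, timing file, basis, and run lengths are all just placeholders):

    3dDeconvolve -input all_runs+orig                  \
        -concat '1D: 0 150 300'                        \
        -polort 2                                      \
        -num_stimts 1                                  \
        -stim_times 1 stim_cumulative.1D 'BLOCK(20,1)' \
        -stim_label 1 task                             \
        -ortvec motion_all.1D motion                   \
        -fout -tout -bucket stats.subj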

What this does is enable the output of 3dvolreg (run on the entire concatenated time series) to more truly show how the subject’s head moved in space across the entire experience of performing the task. I would include the motion parameter outputs in the 3dDeconvolve model, to try to control for residual or uncorrected motion. After 3dvolreg and smoothing, I would use align_epi_anat.py to align the MP-RAGE structural to this concatenated functional time series.
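In command terms, that stream is roughly this (names and blur size illustrative):

    # concatenate the runs early
    3dTcat -prefix all_runs run1+orig run2+orig run3+orig

    # register over the whole concatenated series
    3dvolreg -base all_runs+orig'[0]' -1Dfile motion_all.1D \
             -prefix all_runs_vr all_runs+orig

    # smooth, then align the MP-RAGE to the registered EPI
    3dmerge -1blur_fwhm 4 -doall -prefix all_runs_vr_blur all_runs_vr+orig
    align_epi_anat.py -anat2epi -anat mprage+orig            \
                      -epi all_runs_vr_blur+orig -epi_base 0 \
                      -volreg off -tshift off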

Recently, however, I had a case where the subject shifted his head markedly after the first run, while the last two (of three) runs were generally quite stationary. Moreover, even across the first run, the motion was minimal. This was all evident from running 1dplot on the 3dvolreg output for the concatenated time series. My traditional approach still yielded a large head shift, which was also evident visually when I cycled through the entire time series in the afni underlay viewer.

On a lark, I re-did the pre-processing. This time, I first ran align_epi_anat.py on each task run individually, aligning each run’s EPI to the one MP-RAGE with the -epi2anat operation, and let 3dvolreg run as part of that script rather than as a stand-alone command. I then concatenated the aligned EPI files, along with the three text outputs of the volume registration. The result was a time series that appeared smoother and stiller when cycling across the whole series, and the 1dplot of the concatenated motion now showed sub-mm motion throughout. Wow! Or so I thought.
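Schematically, the revised stream was (tcsh; names are again illustrative, and the aligned-EPI and motion-file names follow whatever align_epi_anat.py actually wrote out):

    # align each run's EPI to the one MP-RAGE, with volreg inside the script
    foreach rr ( 1 2 3 )
        align_epi_anat.py -epi2anat -anat mprage+orig    \
                          -epi run${rr}+orig -epi_base 0 \
                          -volreg on -tshift off
    end

    # concatenate the aligned runs and the per-run motion files
    3dTcat -prefix all_runs_al run1_al+orig run2_al+orig run3_al+orig
    cat run1_vr_motion.1D run2_vr_motion.1D run3_vr_motion.1D > motion_all.1D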

So I ran the task regression model on both time series. The beta weights for the modeled motion parameters were on average higher for my original approach than for my new reverse-aligned data, which would make sense if the revised approach controlled for motion better. However, not all motion beta weights were lower with the revised pre-processing. Moreover, instead of a perhaps cleaner and more statistically significant first-level map of task activation, the two statistical maps simply look rather different, and my original time series actually shows more canonical activations than my revised one. I honestly don’t know which is “truth” or whether to pitch the whole scan session.

So my question is: is there something inherent to the way 3dDeconvolve works that would make my recent run-wise -epi2anat strategy more (or less) effective in cases like this? I’m just so struck by how different the magnitudes and patterns of task activation are. Is there a better way to handle cases where there is a large inter-run shift but minimal head motion within run? It seems to me that what I did is akin to a two-session within-subject experiment, like a dose vs. placebo day, where you have to align across sessions and maybe you only had time to get the MP-RAGE on one of them…

Jim

Hi Jim,

The simplest first thought is that if you need to process this subject differently, beyond just a different cost function or the like, then it might be best to drop them. But we can set that aside just to understand what is happening.

Also, you might consider using afni_proc.py to do the standard processing. It might make your life much easier.
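For example, a command in this direction might look like the following (the stimulus, blur, and alignment details are just placeholders, not a prescription):

    afni_proc.py -subj_id subj123                           \
        -dsets run1+orig run2+orig run3+orig                \
        -copy_anat mprage+orig                              \
        -blocks tshift align volreg blur mask scale regress \
        -volreg_align_e2a                                   \
        -blur_size 4                                        \
        -regress_stim_times stim_times.1D                   \
        -regress_stim_labels task                           \
        -regress_basis 'BLOCK(20,1)'                        \
        -regress_motion_per_run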

Regarding the motion parameters, it really does not matter that the second method produced smaller numbers. The same thing would be achieved by de-meaning the parameters per run (which is a side effect of your second method), and doing that would not affect the motion betas at all (the per-run means would be absorbed by the constant polort terms). So it is not safe to judge success based on those magnitudes. Note that with afni_proc.py we suggest regressing motion per run, in which case your two methods should be very similar (with respect to motion regression).
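To convince yourself, you could de-mean the original parameters per run and re-fit, e.g. (assuming 3 equal-length runs in motion_all.1D; -set_run_lengths handles unequal runs):

    # remove each run's mean from the 6 motion regressors
    1d_tool.py -infile motion_all.1D -set_nruns 3 \
               -demean -write motion_demean.1D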

“My traditional approach still yielded a large head shift, which was also evident visually when I cycled through the entire time series in the afni underlay viewer.”

This suggests that the between-run displacement did not just cause the image of the brain to move. There are (at least) two other potential problems that come with a large displacement:

  1. Distortion: if there was a change in the distortion, rigid-body registration will not be able to correct for it; it will just do its best to get close. If this happens, there will almost certainly still be a residual distortion change, which would require a non-linear solution.

  2. Differential shading in the images: there is often strong shading in the images from a multi-channel coil, due to physical proximity to the coil elements, where the images are brighter wherever the subject’s head happens to lie closer to some coil. A large shift will change this non-uniformity pattern, and can therefore affect registration. In particular, since 3dvolreg uses least squares as its cost function, a change in shading would have an impact on registration.

Either or both of these issues might be affecting the cross-run registration, and could leave a residual jump between those runs.

Using epi2anat across runs is indeed like dealing with multi-session data, and it suffers from the same problem: a potential distortion difference across runs. Such distortions are typically non-linear, and EPI->anat registration can only do so much to correct for them. Beyond that, EPI->anat registration tends not to be as robust as EPI->EPI registration; it is a harder problem. That adds a sort of cross-run “noise” to the voxel positions in the brain, which can both weaken and distort activation patterns.

If you analyzed one run at a time, the patterns should be similar between the methods. But in a multi-run model, the cross-run brain shifts seem to be bigger with your second method (which is not a surprise).

Note that we will be adding an option to afni_proc.py that runs 3dvolreg per run, but then concatenates each run’s motion transformations with a cross-run affine transformation between the volreg base volumes. That might help a bit in a case like this. The lpa cost function should mitigate the effect of a (change in) shading artifact, and the affine transformation might account for part of the distortion problem, though probably not much. Note that this basically amounts to trusting EPI->EPI registration more than EPI->anat.
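Done by hand, the idea would look something like this for run 2 against run 1 (a rough sketch of the concept, not the eventual afni_proc.py code; names are illustrative):

    # volreg run 2 to its own base, saving per-TR matrices
    3dvolreg -base run2+orig'[0]' -1Dmatrix_save mat.r2.vr.aff12.1D \
             -prefix run2_vr run2+orig

    # affine-align run 2's base volume to run 1's base, using lpa
    3dAllineate -base run1+orig'[0]' -source run2+orig'[0]' \
                -cost lpa -warp affine_general              \
                -1Dmatrix_save mat.r2_to_r1.aff12.1D -prefix r2base_al

    # concatenate the cross-run transform with the per-TR volreg matrices,
    # then apply the result in a single interpolation step
    cat_matvec -ONELINE mat.r2_to_r1.aff12.1D mat.r2.vr.aff12.1D \
               > mat.r2.warp.aff12.1D
    3dAllineate -base run1+orig'[0]' -source run2+orig \
                -1Dmatrix_apply mat.r2.warp.aff12.1D -prefix run2_vr_x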

Anyway, this is a good example of the “noise” that is somewhat inherent in any multi-session analysis.

  • rick

Thanks Rick,

I’m all set up to run my analyses with cumulative elapsed times modeled over concatenated task runs, and would prefer not to change all that. When will afni_proc.py be able to accommodate this, such as with your new option? Can it already concatenate at the appropriate point in pre-processing?

Jim

Hi Jim,

That option should be available next week. Well, it should have already been available, but apparently I am easily distracted.

  • rick