afni proc blip up/down and align questions

Good morning AFNI folks,

Two fun things:
I’m looking through the script that afni_proc.py generates, and I noticed something I think might be a small issue. I’m performing blip up/down distortion correction using single-band reference images (created by the multiband sequence, with very high contrast-to-noise). I also use the single-band reference image from my functional run as the external volreg base (it is the same as the functional data in all ways, except for better contrast). This all works fantastically, and leads to what appears to be good anatomical alignment - however, I think it could be improved.

Specifically, the align_epi_anat.py portion of the afni_proc.py output uses the original external volreg dataset, rather than the version that has been corrected for distortion by the blip up/down process. It seems that it would be better to use the undistorted, ‘corrected’ data to calculate the alignment with the anatomy, while continuing to use the original version as the volreg alignment target. Perhaps I have done something wrong, and that is why this is happening?

My second question also concerns blip up/down correction - is it possible to use different blip datasets per run? In one case, I am doing a pre vs. post comparison, and therefore want to include both runs of data in a single afni_proc.py command. Unfortunately, I can only use a single blip up/down dataset, even though there are likely differences in head position, and therefore in distortion, between runs. Is there any interest in adding the possibility of multiple blip datasets in a single afni_proc.py run?
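For reference, the setup described above can be sketched as an afni_proc.py fragment along these lines (a non-authoritative sketch; the dataset names are placeholders, and the block list is just an example):

```shell
# Sketch of the relevant afni_proc.py options (dataset names are hypothetical).
# -blip_forward_dset / -blip_reverse_dset : opposite-phase-encode volumes
#                                           used to estimate the distortion warp
# -volreg_base_dset : external registration base (the single-band reference)
afni_proc.py                                                   \
    -subj_id           sub01                                   \
    -dsets             epi_run1+orig.HEAD epi_run2+orig.HEAD   \
    -blip_forward_dset sbref_forward+orig.HEAD                 \
    -blip_reverse_dset sbref_reverse+orig.HEAD                 \
    -volreg_base_dset  sbref_forward+orig.HEAD                 \
    -copy_anat         anat+orig.HEAD                          \
    -blocks            tshift align volreg blur mask scale regress
```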

Thanks again for afni_proc - it has made my life so easy that I am finding new ways to make it difficult.

Always pushing the envelope… :slight_smile:

Application of the external -volreg_base_dset might require some new options.
It isn’t clear what afni_proc.py should do by default there. Maybe
there should not be any default, and the user should specify it.
This needs a little pondering, at least as far as how
the program should think about it. But for now you might be best off doing the
naughty step of modifying the proc script to warp the EPI base volume.
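As a hedged sketch of that “naughty step” (assuming the blip warp computed in the blip block is available as a dataset; both names below are hypothetical), one could apply the warp to the volreg base before the anatomical alignment step:

```shell
# Hypothetical names: blip_warp_For+orig is the forward blip warp computed
# during the blip correction step, and vr_base+orig is the external volreg
# base volume. 3dNwarpApply applies the nonlinear warp to produce an
# undistorted base, which the edited proc script could then feed to
# align_epi_anat.py, while volreg continues to use the original base.
3dNwarpApply -nwarp  blip_warp_For+orig   \
             -source vr_base+orig         \
             -prefix vr_base_undistorted
```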

The second one is more complicated, or possibly simpler, depending
on one’s perspective. Do you mean that the subject has been out
of the scanner between runs? I would be inclined to run such an
analysis with multiple afni_proc.py commands, which would have
the effect of aligning across scans via the EPI/anat registration,
rather than EPI to EPI.

Sorry, but time is a bit tight now. I will try to ponder these
things more.

  • rick

No worries, ponder away. Measure twice, cut once and all that. I appreciate the thoughts thus far. I may try to do some testing to see whether blip correction of the external volreg base prior to anatomical registration offers a significant improvement (Dice scores or some cost function, perhaps…). The data already looks good, but I am always curious how it could be better.

And regarding multiple blip conditions, yes - in my case the subject is removed from the scanner and placed back in. In other examples, the participant’s head is shifted (on purpose) and the run is continued. I promise there is a good reason for this.

For my study, the goal is to directly compare a pre vs post condition, and doing it all in one afni_proc command is elegant, and allows me to perform the within-subject, pre_vs_post contrast, get those Betas and corresponding t-statistics for 3dMEMA - as I believe was recommended here on the message board. It seemed best to me to build it all in the same model.

Thanks again, I’ll see if I can’t do some editing and get an idea of the benefits…

Doing it as one model is fine. That would just mean running
afni_proc.py up through the regression block for each session, and
then a separate command for the actual regression, which puts it all
together. You would pass it the concatenated motion parameters,
and possibly the tedort files, if you have them (applied via
-ortvec). That would not be as streamlined, but it should work.
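A hedged sketch of that combined regression (all file names, run lengths, and stimulus timing are hypothetical; -concat marks the run break, and -ortvec supplies the concatenated motion parameters as nuisance regressors):

```shell
# Combine both sessions' preprocessed EPI into one regression.
# all_motion.1D would hold the motion parameters from both sessions,
# concatenated row-wise; the run offsets in -concat are hypothetical.
3dDeconvolve                                                       \
    -input   pb.sess1.scale+orig.HEAD pb.sess2.scale+orig.HEAD     \
    -concat  '1D: 0 300'                                           \
    -polort  A                                                     \
    -ortvec  all_motion.1D motion                                  \
    -num_stimts 2                                                  \
    -stim_times 1 stim_pre.txt  'BLOCK(20,1)' -stim_label 1 pre    \
    -stim_times 2 stim_post.txt 'BLOCK(20,1)' -stim_label 2 post   \
    -gltsym 'SYM: pre -post' -glt_label 1 pre_vs_post              \
    -tout -bucket stats.combined
```

The -gltsym contrast here is what would supply the pre vs. post beta and t-statistic pair for 3dMEMA.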

  • rick

Good point- it is a little less streamlined but does solve the problem nicely, and gives me a bit more independent information about each run as well. Back to the code mines!