Comparing methods for volume registration with enorm?

Hi there,

I have been wondering how to compare different volreg methods to see which produces the best alignment. So far I have been looking in the GUI, between and within blocks, overlaying them, etc., but I wanted a more quantifiable method.

I was wondering if the following approach, based on running volreg twice and comparing the enorms, makes sense?

This is what I did:

  1. Run volreg1 using the chosen method (this generates the enorm and dfile 1D files).
  2. Rerun registration (volreg2) on the pb01.volreg output, to generate enorm and dfiles for this ‘fixed’ pb01 block and see how much residual movement is left after volreg1.
  3. Plot the enorms from volreg1 and volreg2 on the same plot (to see how much movement remains after volreg1), and compare the enorm means between the two passes.

(example code below)

Could someone let me know if this is a legitimate way to do this?

Or, is there a better/recommended way to quantify volreg success?

Thank you for your help :smiley:

Harriet


afni_proc.py -subj_id $subj                 \
    -dsets $data_dir/P007_FR_RUN*           \
    -tcat_remove_first_trs 0                \
    -blocks volreg                          \
    -volreg_align_to MIN_OUTLIER            \
    -volreg_interp -Fourier                 \
    -volreg_warp_dxyz 0.8                   \
    -volreg_motsim
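
(For reference, afni_proc.py writes out a processing script, typically proc.$subj, rather than running anything itself; it is then executed in the standard way, e.g.:

tcsh -xef proc.$subj |& tee output.proc.$subj

)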


set cap = 1

# rerun volreg on the volreg'd files to get motion parameters after correction

foreach run ( 01 02 03 04 05 06 )
    3dvolreg -verbose -zpad 1                                 \
             -base vr_base_min_outlier+orig                   \
             -1Dfile dfile.r{$run}_2.1D                       \
             -prefix pb01.P007_FR_{$cap}.r{$run}.volreg_2     \
             -Fourier                                         \
             pb01.P007_FR_{$cap}.r{$run}.volreg+orig
end

# combine the new dfiles

cat dfile.r01_2.1D dfile.r02_2.1D dfile.r03_2.1D dfile.r04_2.1D dfile.r05_2.1D dfile.r06_2.1D > dfile_rall_2.1D


# calculate the enorm

1d_tool.py -infile dfile_rall_2.1D -set_nruns 6 -derivative \
           -collapse_cols euclidean_norm                    \
           -write motion_P007_FR_{$cap}_enorm_2.1D

# plot both enorm time series on one graph; "1D: 2000@0.3" draws a constant
# reference line at 0.3 (2000 time points)
1dplot -one motion_P007_FR_{$cap}_enorm.1D motion_P007_FR_{$cap}_enorm_2.1D \
       "1D: 2000@0.3" &
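
To compare the enorm means between the two passes (step 3), 1d_tool.py can report summary statistics directly; a minimal sketch, assuming the filenames above:

# show min, mean, max, stdev for each enorm time series
1d_tool.py -infile motion_P007_FR_{$cap}_enorm.1D   -show_mmms
1d_tool.py -infile motion_P007_FR_{$cap}_enorm_2.1D -show_mmms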

For real data, you can compare the results using enorm values and similar kinds of derivative-based motion measures. See 1d_tool.py for possibilities. For affine methods that use 12 parameters (like 3dAllineate) instead of just the 6 rigid-body ones that 3dvolreg uses, it’s not clear what the weights should be for the last 6 parameters. No matter which of these metrics you use, though, you can’t really compare the quality of the alignment for real data with any of them; instead you will have to rely on visual verification by looking at the data. You can, however, synthesize a dataset where you impose some kind of motion and see how well you can detect and fix it. In that case, I would look at the individual alignment/motion parameters.
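
As one concrete example of those 1d_tool.py possibilities, a weighted norm lets you experiment with how the parameters are scaled relative to each other; a minimal sketch, with hypothetical weights (0.87 ≈ pi*50/180, roughly converting degrees to mm of arc at a 50 mm head radius):

# 3dvolreg dfiles store roll/pitch/yaw (degrees) in the first 3 columns,
# then the 3 translations (mm); the weights below are illustrative only
1d_tool.py -infile dfile_rall_2.1D -set_nruns 6 -derivative \
           -collapse_cols weighted_enorm                    \
           -weight_vec 0.87 0.87 0.87 1 1 1                 \
           -write motion_weighted_enorm_2.1D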

Building on what Daniel has already noted:

Indeed, judging the quality of volreg or alignment with a single quantity is tough: if we had some quantity Q that could make this comparison well, we would use Q as the cost function itself!

Visualization is still really important in neuroimaging; don’t feel bad about relying on it just because it isn’t a quantity.

Getting a sense of which processing stream deals with motion “better” is a hard question. One criterion might be looking at correlation patterns across a group: which are noisier, or smoothed out, or show some other feature likely due to poor(er) alignment? That kind of thing was done to evaluate 3dQwarp, as in this poster by recognizable names:
https://afni.nimh.nih.gov/pub/dist/HBM2013/Cox_Poster_HBM2013.pdf
The top part used FMRI data to show the benefits of nonlinear alignment over the older-style linear affine alignment; the bottom part shows the high quality of 3dQwarp’s alignment abilities (a separate issue from what you are asking about for EPI motion alignment). But the top part might provide a useful metric.

Looking at motion alignment parameters alone can be tough: you want a program to give “the right” motion estimates and alignment, but some programs could misbehave by under-aligning and others by over-aligning, and it would be hard/impossible to distinguish those situations just from the motion plots, without seeing the images themselves.

–pt

Hi Daniel,

Thank you for dealing with all of my questions across streams :slight_smile:

Just wanted to confirm I have understood what you are saying in your reply:

For real data, to probe volreg success, enorm and similar values (like FD) are OK to use (in conjunction with visualisation)?

However, when using affine methods, such as when doing alignment with 3dAllineate, using these enorm/FD values does not make sense?

Thanks again,

Harriet

Thank you for the poster and info, ptaylor. I will have a look into the correlational methods you suggested (and will make sure to visually investigate my registration :slight_smile: )

Paul and I are saying that no matter what metric you use, you will still have to visualize the data to see whether the methods worked. You can only numerically check for success with synthetic data, where you know what success looks like; in that case you can use almost any of the metrics.
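
For illustration, a minimal sketch of such a synthetic test with standard AFNI tools (the angles, shifts, and output names here are hypothetical; also note that 3drotate and 3dvolreg parameter conventions may not match sign-for-sign, so compare magnitudes carefully):

# impose a known rigid-body motion on the base volume
3drotate -prefix synth_moved                        \
         -rotate 2I 1R 1A -ashift 1.5S 0.5L 0.2P    \
         vr_base_min_outlier+orig

# estimate the motion back out with 3dvolreg
3dvolreg -base vr_base_min_outlier+orig             \
         -1Dfile recovered_params.1D                \
         -prefix synth_realigned                    \
         synth_moved+orig

# compare the recovered parameters against the imposed ones
cat recovered_params.1D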