I’d appreciate your help in understanding the dimensions of the aligned output I get from align_epi_anat.py:

I aligned a T1w image to an EPI using align_epi_anat.py. I then used the transformation matrix from its output to also align an ROI mask that was defined on the original T1w, so that it would match the EPI (I used 3dAllineate -cubic to do that).
As for the T1:

The dimensions of the original T1w: 160×256×256 (1 mm³ voxels)

The dimensions of the aligned T1w: 192×225×216

How can this be? (or: can this be?)

Just to make sure: does having a transformation matrix mean that the transformation is linear? And should the fact that it is linear matter for the difference in dimensions?

As for the ROI mask:

The dimensions of the original ROI mask: 160×256×256

The dimensions of the aligned ROI mask: 160×256×256

Now I got completely confused. Shouldn’t the aligned ROI have the same dimensions as the aligned T1? They were both aligned with the same matrix, were they not?

Can you please post your 3dAllineate command and align_epi_anat.py commands here?

Note that “3dAllineate -1Dmatrix_apply …” will apply the specified aff12.1D file, but the final grid of the output depends on the “-master …” dset.
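To make the grid/matrix distinction concrete, here is a toy 1-D sketch in plain Python (not AFNI code, and the function name `resample` is just an illustration): the affine says where each output point looks in the source, but the master grid alone decides how many output values there are.

```python
# Toy 1-D illustration (not AFNI code): resampling onto a "master" grid.
# The affine map says WHERE each master point looks in the source;
# the master grid alone decides HOW MANY output values there are.

def resample(source, affine, master_n):
    """For each master index i, map it through the 1-D affine
    x = a*i + b, then pull the nearest source sample (clamped)."""
    a, b = affine
    out = []
    for i in range(master_n):
        x = a * i + b                                     # the "matrix multiplication" (1-D case)
        j = min(max(int(round(x)), 0), len(source) - 1)   # clamp to the source grid
        out.append(source[j])
    return out

source = [10, 20, 30, 40, 50]                 # 5 source "voxels"
shifted = resample(source, (1.0, 1.0), master_n=8)

# The output has 8 values -- the master's size -- even though the
# source had 5 and the affine was a simple unit shift.
print(len(shifted))   # 8
```

The same logic is why your aligned T1w can come out on a different grid than the input: the matrix never dictates output dimensions, the master dataset does.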

In order to specify grid resolutions for align_epi_anat.py, there are these options (and you should check out the helpfile for further useful information provided just below these lines):

-master_epi nnn : master grid resolution for aligned epi output
-master_tlrc nnn : master grid resolution for epi+tlrc output
-master_anat nnn : master grid resolution for aligned anatomical data output
-master_dset1 nnn : equivalent to master_anat above
-master_dset2 nnn : equivalent to master_epi above
(where nnn can be: SOURCE / BASE / MIN_DXYZ / dsetname / n.nn)
.....

So, I see now that I need to add a -master option (as indeed the helpfile says; sorry for not noticing that) and rerun the command with it.

Also, just to better understand: shouldn’t a linear transformation via matrix multiplication produce the same output dimensions regardless of the input? That’s why I was surprised by these results, but I suppose my naive understanding is misleading. Does 3dAllineate transform and then resample to match the original grid? Or is it not the simple matrix multiplication I thought it was?
Thanks again for all the help!

And no worries, those are fine intuitions and good questions. The linear (affine) transformation describes the mapping rules between sets of points. The grid itself is separate-- it establishes the field of points themselves (which pull data from the source grid).

There is matrix multiplication going on. If the source and master dsets were both continuous (i.e., infinitely filled in, with no gaps among data points), then it would be essentially just that multiplication. But we don’t have data everywhere-- just one value per voxel, and AFNI considers that information stored at the center of each voxel.

We are mapping between discrete grids, so there are also interpolation rules. Each master point pulls information from a discrete grid: I know where I should pull data from if it were continuous, but it ain’t, so I need a kernel/rule for interpolating from the discrete points of the source dset around that “idealized” location that the matrix multiplication told me to look at in the source.
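The “kernel/rule” part can be sketched in a few lines of plain Python (again a toy 1-D illustration, not AFNI’s actual code): the affine hands us an idealized source location, which usually falls between voxel centers, so we weight the discrete neighbors.

```python
# Toy 1-D sketch of interpolation (not AFNI's actual code): the affine
# gives an idealized continuous location in the source, which usually
# falls BETWEEN voxel centers, so we blend the discrete neighbors.

def linear_interp(source, x):
    """Pull a value at continuous location x from discrete samples
    by weighting the two surrounding voxel centers linearly."""
    j = int(x)                     # index of the left neighbor
    if j >= len(source) - 1:       # clamp at the right edge
        return float(source[-1])
    frac = x - j                   # fractional distance past the left neighbor
    return (1 - frac) * source[j] + frac * source[j + 1]

source = [0.0, 10.0, 20.0, 30.0]   # values stored at voxel centers 0,1,2,3

# Suppose the affine maps some master voxel to source location 1.25:
print(linear_interp(source, 1.25))   # 12.5
```

Higher-order kernels (cubic, quintic) work the same way but blend more neighbors with a smoother weighting; for a binary ROI mask, nearest-neighbor is usually preferred so the output stays binary.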

–pt
