Hi, Cameron-
Hmm, I see. I will have to dig into the *.MI.1D file a bit more, but my guess is that “MI” stands for “MapIcosahedron” rather than “mutual information”. It looks like the file records where each standard-mesh node came from (a listing of nodes) rather than a cost-like value. The full entry for that option in the program help explains a bit more:
-write_nodemap: (default) Write a file showing the mapping of each
                node in the icosahedron to the closest
                three nodes in the original mesh.
                The file is named by the prefix of the output
                spec file and suffixed by MI.1D
NOTE I: This option is useful for understanding what contributed
        to a node's position in the standard meshes (STD_M).
        Say a triangle on the STD_M version of the white matter
        surface (STD_WM) looks fishy, such as being large and
        obtuse compared to other triangles in STD_M. Right
        click on that triangle and get one of its nodes (Ns);
        search for Ns in column 0 of the MI.1D file. The three
        integers (N0, N1, N2) on the same row as Ns will point
        to the three nodes on the original meshes (sphere.reg)
        to which Ns (from the icosahedron) was mapped. Go to N1
        (or N0 or N2) on the original sphere.reg and examine the
        mesh there, which is best seen in mesh view mode ('p' button).
        It will most likely be the case that the sphere.reg mesh
        there is highly distorted (quite compressed).
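As a concrete version of that lookup, here is a minimal Python sketch, assuming the layout described above (node index in column 0, the three closest original-mesh nodes in columns 1..3); the file name and node index here are hypothetical placeholders:

    # Look up which original-mesh nodes a standard-mesh node maps to,
    # per the NOTE above. Assumes the MI.1D column layout quoted there.
    import numpy as np

    ns = 14235                                # hypothetical fishy node on STD_M
    mapping = np.loadtxt("std.141.rh_MI.1D")  # hypothetical name; '#' lines skipped

    row = mapping[mapping[:, 0] == ns][0]     # row whose column 0 equals Ns
    n0, n1, n2 = row[1:4].astype(int)         # the 3 closest sphere.reg nodes
    print(f"std node {ns} maps to original nodes {n0}, {n1}, {n2}")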
Looking at the top of a *MI.1D file in the AFNI Bootcamp data, the columns are described in a comment as follows:
# Col. 0: Std-mesh icosahedron's node index.
# Col. 1..3: 1st..3rd closest nodes from original mesh (rh.sphere.reg.gii)
# Col. 4..6: 4th..6th interpolation weight for each of the 3 closest nodes.
So, again, I think this is more of a “mapping” than a “cost-like quantity”. (The “interpolation weight” likely encodes each closest node’s relative proximity.)
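If that reading is right, then a per-node value defined on the original mesh (say, thickness) could be carried onto the standard mesh as a weighted sum over the three listed nodes. Here is a hedged sketch under that assumption, with both file names hypothetical:

    # Weighted interpolation onto the standard mesh, assuming columns 1..3
    # hold the closest original-mesh node indices and columns 4..6 their
    # weights. File names are hypothetical placeholders.
    import numpy as np

    mapping = np.loadtxt("std.141.rh_MI.1D")
    orig_val = np.loadtxt("rh.thickness.1D.dset")  # one value per original node

    nodes = mapping[:, 1:4].astype(int)   # (n_std, 3) original-mesh node indices
    weights = mapping[:, 4:7]             # (n_std, 3) interpolation weights

    # each standard-mesh node gets the weighted sum over its 3 closest nodes
    std_val = (orig_val[nodes] * weights).sum(axis=1)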
In terms of wanting to quantify the accuracy of surface alignment, that sounds like a good thing to have. But note that a cost function is a quantity that evaluates the overall quality of matching between two volumes: it is compared across alternative alignments during processing, and in the end the arrangement with minimal cost wins and becomes the final image. So the final cost value itself is the quantitative assessment of the matching.

If there were a better quantity for assessing the alignment, then by definition that quantity should have been used as the cost function in the alignment process; and conversely, any other quantity used to assess the matching must be a poorer evaluator than the cost function itself. Basically, it is inherently hard to quantitatively assess the quality of cost function-based alignment results, because you end up trying to replace the cost function: if you do find a better assessment quantity, then that is what you should be aligning with in the first place.
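To make that concrete, here is a toy illustration (not AFNI code): a 1D “registration” that slides one signal over another, scores each candidate shift with a least-squares cost, and keeps the minimum. The winning cost is exactly the number one would then report as the quantitative assessment:

    # Toy example: the alignment search *is* the evaluation. Each candidate
    # shift gets a cost; the minimum wins, and that minimal cost is itself
    # the quantitative measure of the match.
    import numpy as np

    rng = np.random.default_rng(0)
    fixed = rng.standard_normal(200)
    moving = np.roll(fixed, 7) + 0.1 * rng.standard_normal(200)  # true shift: 7

    def cost(shift):
        return np.mean((fixed - np.roll(moving, -shift)) ** 2)  # least squares

    costs = {s: cost(s) for s in range(-20, 21)}
    best = min(costs, key=costs.get)
    print(f"best shift = {best}, final cost = {costs[best]:.4f}")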
In the end, visual verification still seems to be the gold standard for judging alignment. There are programs/functionalities in AFNI and SUMA to help visualize surface results systematically-- those might be of use?
--pt