Hi-
An anatomical brain mesh is a 2D surface embedded in 3D space, made up of nodes (points in space) connected by edges. A common way of making one is to find the gray-white boundary in a T1w anatomical scan of a subject’s brain; we often also estimate the “outer” gray matter surface, called the pial surface. In volumetric datasets, the “unit” for storing data is the voxel; in surface meshes, it is the node.
There are 2 properties of a mesh that are important here: geometry and topology.
- Geometry is the (x,y,z) location of each node in space; this is what makes a mesh “fit” the bumps and wiggles of a particular brain.
- Topology is the set of connections among the nodes that make up the mesh: which nodes are connected to which others?
Imagine drawing a simple mesh on a balloon. If I inflate the balloon, I change the mesh’s geometry (where each node is in 3D space), but not its topology (the node connections have not changed). In a volume, the “topology” is described by which voxels are neighbors of which voxels; in a mesh, it is described by which nodes are connected to which nodes. Because we use standard meshes, the multiple surfaces estimated for a subject (i.e., the gray-white boundary and the pial) have a node-to-node correspondence: for each node on the gray-white boundary, there is a corresponding node with the same index on the pial surface. This is useful for projecting data that lie between them, within the GM itself.
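To make the geometry/topology split concrete, here is a minimal sketch using SUMA’s plain-text “1D” surface format, which stores the two pieces in separate files (the file names here are just for illustration; see suma -help for the -i_1D input option):

```
# Geometry: one "x y z" coordinate per node (node index = row number).
cat > tetra.1D.coord << EOF
0 0 0
1 0 0
0 1 0
0 0 1
EOF

# Topology: one triangle per row, as three 0-based node indices.
cat > tetra.1D.topo << EOF
0 1 2
0 1 3
0 2 3
1 2 3
EOF

# Load the mesh in SUMA. Editing *.1D.coord moves nodes around
# (geometry, like inflating the balloon); the connectivity in
# *.1D.topo (topology) is untouched.
suma -i_1D tetra.1D.coord tetra.1D.topo
```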
SUMA was designed to use standard meshes (https://pubmed.ncbi.nlm.nih.gov/16035046/). When you run, say, FreeSurfer’s recon-all, you get a mesh associated with a brain, but each subject will have a different number of nodes. AFNI’s @SUMA_Make_Spec_FS program will estimate meshes that have a standard number of nodes, with each node of a particular index corresponding to approximately the same physical location on the brain across all subjects. (There are actually 2 standard meshes: std.60 and std.141. The former is less dense, with node spacing about the same as the voxel spacing of a typical EPI volume, and the latter has node spacing about the same as that of a typical T1w volume.) If you think about a “standard space” in volumetric data, the analogous properties hold: 2 dsets in the same standard space have the same number of voxels, and a given voxel with indices (i,j,k) should correspond to the same anatomical location; likewise, 2 standard meshes have the same number of nodes, and a given node with index i should correspond to the same location. If you have an overlay created from subject A, you can display it directly on subject B without resampling, as long as subjects A and B use the same standard mesh.
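For example, after recon-all finishes, something like the following builds the standard meshes (the subject ID and paths are hypothetical; check @SUMA_Make_Spec_FS -help for the options in your AFNI version):

```
# Convert the FreeSurfer output and create std.60 and std.141 meshes,
# with NIFTI/GIFTI output formats.
@SUMA_Make_Spec_FS                       \
    -sid     sub-001                     \
    -fspath  $SUBJECTS_DIR/sub-001       \
    -NIFTI

# The new SUMA/ directory should contain, e.g.:
#   std.141.lh.smoothwm.gii, std.141.lh.pial.gii  (and rh.*, std.60.*)
#   std.141.sub-001_lh.spec, std.141.sub-001_both.spec
#   sub-001_SurfVol.nii  (the anatomical volume aligned to the surfaces)
```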
So:
To map (or project) data between a surface and a volume of the same subject, you would use 3dVol2Surf and 3dSurf2Vol; this is actually what the AFNI GUI uses when you have both the afni and suma GUIs open and talking to each other, and you can see the surface mesh lines in the AFNI GUI and the volumetric overlays from AFNI on the SUMA surfaces. This is what you appear to have done in your attached image (your “subject” is just the TT_N27 dataset, which is perfectly fine).
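Concretely, that two-GUI setup looks something like this (file names carried over from the hypothetical @SUMA_Make_Spec_FS output above):

```
cd SUMA/

# Start afni listening for NIML connections.
afni -niml &

# Open the standard meshes along with their aligned surface volume.
suma -spec std.141.sub-001_both.spec   \
     -sv   sub-001_SurfVol.nii &

# Then press 't' in the SUMA viewer to start talking: AFNI's overlay
# appears on the surfaces, and the mesh intersection lines appear in
# AFNI's slice viewers.
```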
You can use 3dVol2Surf to project your volumetric data onto the std.141 TT_N27 mesh. Then you will have a standard mesh carrying that information, and at that point you can display that data on any other subject’s standard mesh. If you then wanted a volumetric dataset of that information in the space of, say, subject A, you could use 3dSurf2Vol to project the mesh data into that specific subject’s volumetric grid.
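A sketch of that first projection with 3dVol2Surf (the dataset names are placeholders; see 3dVol2Surf -help, and the next paragraph for why two surfaces are given):

```
# Project a volumetric stats dset onto the std.141 mesh (left hemi),
# averaging along 10 steps between the two surfaces at each node.
3dVol2Surf                                      \
    -spec         std.141.sub-001_lh.spec       \
    -surf_A       smoothwm                      \
    -surf_B       pial                          \
    -sv           sub-001_SurfVol.nii           \
    -grid_parent  stats.nii.gz                  \
    -map_func     ave                           \
    -f_steps      10                            \
    -f_index      nodes                         \
    -out_niml     stats.std141.lh.niml.dset
```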
You can project information in different ways: using just one surface (taking data from, or passing data to, the voxels each node intersects), or using a pair of surfaces and “grabbing” the voxels that lie between them. Here, I would use the pair: the gray-white boundary surface (e.g., “smoothwm”) and the pial surface, so that you are essentially grabbing the voxels in between those surfaces, which should mainly be GM.
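And the reverse direction with 3dSurf2Vol, using the same surface pair so the node-wise data get spread through the GM ribbon of whichever subject’s grid you provide (again a hedged sketch with placeholder names; check 3dSurf2Vol -help for the exact -sdata input options in your version):

```
# Fill the voxels between smoothwm and pial with the node-wise data,
# on the grid of this subject's volumetric dataset.
3dSurf2Vol                                      \
    -spec         std.141.sub-001_lh.spec       \
    -surf_A       smoothwm                      \
    -surf_B       pial                          \
    -sv           sub-001_SurfVol.nii           \
    -grid_parent  stats.nii.gz                  \
    -sdata        stats.std141.lh.niml.dset     \
    -map_func     ave                           \
    -f_steps      10                            \
    -prefix       stats_from_surf.lh
```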
Does that make sense conceptually, and is that what you would like to do?
–pt