I followed the surface-based analysis in afni_proc.py example 8, which mapped each participant's volume data onto the standard mesh surface.
Then I conducted the group analysis and got the group statistical results in surface world.
Q1: To render the group statistical results in suma, I assume it does not matter which participant's standard spec file I use, since they are all on the standard mesh, right? But which surfvol file should I specify? Every participant has their own SurfVol_Alnd_Exp+orig.
Q2: To report the activation in a standard coordinate system, I guess I have to run surf2vol to transform the surface group stats to volume world, but:
1) Same question: which spec and sv files should I use for the transformation?
2) After that is done, what is the recommended pipeline for warping the volume-world group stats to a common space (TT_N27 or MNI)?
3) Is it OK to show the surface rendering on the standard mesh (suma) before the surf2vol transformation, but report the activation coordinates after the surf2vol transformation and coregistration to common space?
4) If I warp the group stat results to TT_N27, how do I get the surface rendering with afni and suma talking to each other? I was thinking of running recon-all on the TT_N27 anatomical file, then @SUMA_AlignToExperiment to align the surfvol to TT_N27, and using that SurfVol_Alnd_Exp and the corresponding spec file in suma. Is that correct?
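In commands, the idea in 4) might look roughly like this (a sketch only; the file names are placeholders, and I am not sure these are the right steps):

```shell
# Sketch of the TT_N27 idea above -- file names are placeholders.
# 1) Run FreeSurfer on the TT_N27 template anatomical:
recon-all -s TT_N27 -i TT_N27_anat.nii -all

# 2) Convert the FreeSurfer output to SUMA format (spec files + SurfVol):
@SUMA_Make_Spec_FS -sid TT_N27 -NIFTI

# 3) Align the new SurfVol to the TT_N27 volume the group stats live in:
@SUMA_AlignToExperiment -exp_anat group_anat_TT_N27+tlrc \
                        -surf_anat SUMA/TT_N27_SurfVol.nii
```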
Thanks in advance!!
Thanks a lot Peter!
Well, just to be clear: if I transform surface data to volume data with the suma_MNI152* spec and surfVol files, the resulting volume data will be in standard space (in this case, MNI space), right? There is no need for any warp or @auto_tlrc?
Following the above question, I did the transformation with the MNI152 spec and surfVol file, and the volume data I got is huge: 3.3 GB per hemisphere. There are 54 sub-bricks in the group analysis result, and since I am using the T1 from MNI152 as the grid parent, the volume data would be bigger, but I wouldn't expect it to be this much bigger. Would you suggest using the T1 as the grid parent?
See my surf2vol code below.
foreach hemi (lh rh)
    3dSurf2Vol \
        -spec "$spath"/suma_MNI152_2009/std.141.MNI152_2009_"$hemi".spec \
        -surf_A smoothwm \
        -surf_B pial \
        -sv "$spath"/suma_MNI152_2009/MNI152_2009_SurfVol.nii \
        -grid_parent "$spath"/suma_MNI152_2009/T1.nii \
        -sdata "$hemi"_group_MVM.niml.dset \
        -datum float \
        -map_func max_abs \
        -f_steps 10 \
        -f_index voxels \
        -prefix "$hemi"_group_MVM_vol
end
There is no reason to do @auto_tlrc or any other warp. The surface is aligned to the MNI template, and your stats were run on the standard surface.
Is the 3dSurf2Vol step for visualization? I'll ask Paul or Rick to chime in on the grid parent question; I haven't played with it extensively. I suspect the choice between the SurfVol and the T1 makes only a minor difference. I usually pick the SurfVol.
I also tried the SurfVol as the grid parent, and still got an unreasonably large stats volume (3.3 GB per hemisphere). I can almost imagine how hard afni has to work to read it. I kind of feel sorry for my afni.
You could pick or make a volume dataset with voxel sizes similar to your functional data. Also consider @surf_to_vol_spackle for going from surface to volume while filling in any holes.
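For example, you could resample the MNI SurfVol to your functional voxel size and use the result as the grid parent (a sketch; the 2.5 mm voxel size and the paths are assumptions, so substitute your own):

```shell
# Build a grid parent at roughly functional resolution (assumed 2.5 mm)
# from the MNI SurfVol; pass the result to -grid_parent in 3dSurf2Vol.
3dresample -dxyz 2.5 2.5 2.5 \
           -prefix MNI152_grid_func.nii \
           -input suma_MNI152_2009/MNI152_2009_SurfVol.nii
```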
Thanks for your reply!
If I set the voxel size to match my functional data, I guess I need a functional dataset as the grid_parent.
But my functional data are all in standard-mesh surface world, and what I want to do is leave normalization as the last step, to maximize the accuracy of correspondence across brains.
What I want to accomplish is to normalize my functional stats (surface world) to a standard space (volume world) by doing the surface-to-volume transformation with a standard mesh spec (e.g., MNI152).
So a functional dataset used as the grid parent would have to be in the same standard space (MNI152), which I do not have.
Do you suggest that I simply change the voxel size of the 3dSurf2Vol output with 3dresample?
The grid parent defines the output volume grid for 3dSurf2Vol, and the number of columns in the input data defines the number of output volumes, which will be stored as floats. The resulting dataset size follows directly from that.
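As a sanity check on the 3.3 GB figure, here is the arithmetic (a sketch; the 256^3 grid is an assumption about the 1 mm grid parent's dimensions):

```python
# Approximate size of a float32 dataset on an assumed 256^3 grid
# with 54 sub-bricks (columns), as in the group MVM result.
nx = ny = nz = 256          # assumed grid dimensions of the 1 mm grid parent
n_subbricks = 54            # number of columns in the input surface dataset
bytes_per_value = 4         # float32

size_gb = nx * ny * nz * n_subbricks * bytes_per_value / 1024**3
print(f"{size_gb:.1f} GB")  # prints 3.4 GB, in line with the ~3.3 GB observed
```

So the size is expected; a grid parent at functional resolution would shrink it considerably.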
But still, why go back to volume space? If you just want standard-space coordinates, do as Peter suggested and display your results on the standard mesh surface for the space of your choice. Those surface coordinates are already in the given space.
Note that here the transformation between all of the spaces (single subject and template spaces) is done via FreeSurfer’s alignment of each to its own template (on the surface).
Anyway, just show your suma results on the MNI152 standard mesh surface.
The pipeline I used before moving my analysis to surface world was a scripted combination of 3dclust, 3dClustSim, and whereami to output the coordinates of peak activation and a label table.
But I don't see how 3dclust could work on surface data.
What is the recommended way to output the activation peak coordinates and labels? If I could do this on surface data, that would be great!
Since surface data is node-based rather than voxel-based, clustering is done with a different program, SurfClust.
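A minimal SurfClust call on the group result might look like this (a sketch; the threshold column and value are placeholders, so check SurfClust -help for the exact options):

```shell
# Cluster the surface stats per hemisphere (tcsh; paths/thresholds assumed).
foreach hemi (lh rh)
    SurfClust -spec "$spath"/suma_MNI152_2009/std.141.MNI152_2009_"$hemi".spec \
              -surf_A smoothwm \
              -input "$hemi"_group_MVM.niml.dset 0 \
              -rmm -1 \
              -thresh_col 1 \
              -thresh 3.3 \
              -out_clusterdset \
              -prefix "$hemi"_group_MVM_clust
end
```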