t-value higher in surface computation than volume computation

Hi afni group:

I have two ways of doing the group analysis in mind.

The first is based on the volume data: simply run @auto_tlrc on the first-level GLT file for each participant and then do the ANOVA (in my case, 3dMVM).

The second is based on the surface data: 1) run 3dVol2Surf on the first-level GLT file of each participant, 2) run the same ANOVA on the surface data (1D files), and 3) run 3dSurf2Vol to transform the surface results back to volume data.

I assumed the two approaches might give slightly different values but the same pattern. When I checked the results from the two methods, they did show a very similar contrast pattern, but the ranges of the statistical values differ a lot: the activation from the surface data is much more robust.

Here is the code that I used, in case you want to check the details.

method 1:


@auto_tlrc -apar Anatomical_reg_norm+tlrc -input GLM_glts+orig -dxyz 3 -rmode NN -suffix _at

3dMVM


3dMVM -prefix $spath/group/group_MVM -jobs 4 \
    -bsVars 'Modality' \
    -wsVars 'Scale' \
    -SS_type 2 \
    -num_glt 25 \
    -mask "$spath"/group/mask_group+tlrc \
    -gltLabel 1 Listen_NS -gltCode 1 'Modality : 1*Listen Scale : 1*NS' \
    ...
    -gltLabel 24 US_Read-Listen -gltCode 24 'Scale : 1*US Modality : 1*Read -1*Listen' \
    -gltLabel 25 SW_Read-Listen -gltCode 25 'Scale : 1*SW Modality : 1*Read -1*Listen' \
    -dataTable \
    Subj  Modality  Scale  InputFile \
    s2    Listen    NS     "$spath"/sub02/orig_files/GLM_glts_at+tlrc'[6]' \
    s2    Listen    CS     "$spath"/sub02/orig_files/GLM_glts_at+tlrc'[0]' \
    ...
    s9    Listen    CS     "$spath"/sub09/orig_files/GLM_glts_at+tlrc'[0]' \
    s9    Listen    US     "$spath"/sub09/orig_files/GLM_glts_at+tlrc'[2]' \
    s9    Listen    SW     "$spath"/sub09/orig_files/GLM_glts_at+tlrc'[4]' \

method 2:
3dVol2Surf


cd "$spath"/"$sub"/orig_files/
	set infile = GLM_glts

	foreach hemi(lh rh)
		echo "Working on: $infile -->  $hemi ... "
		rm ../surface_files/"$hemi"_"$infile".1D
	
		3dVol2Surf \
			-spec ../freesurfer/SUMA/std."$sub"_"$hemi".spec \
			-surf_A smoothwm \
			-surf_B pial \
			-sv Anatomical_reg_AlndExp+orig \
			-grid_parent "$infile"+orig \
			-map_func max_abs \
			-f_steps 15 \
			-f_index voxels \
			-oob_value 0 \
			-no_headers \
			-out_1D ../surface_files/"$hemi"_"$infile".1D
	end


foreach hemi (lh rh)

    3dMVM -prefix "$hemi"_group_MVM -jobs 12 \
        -bsVars 'Modality' \
        -wsVars 'Scale' \
        -SS_type 2 \
        -num_glt 25 \
        -gltLabel 1 Listen_NS -gltCode 1 'Modality : 1*Listen Scale : 1*NS' \
        -gltLabel 2 Listen_CS -gltCode 2 'Modality : 1*Listen Scale : 1*CS' \
        ...
        -gltLabel 24 US_Read-Listen -gltCode 24 'Scale : 1*US Modality : 1*Read -1*Listen' \
        -gltLabel 25 SW_Read-Listen -gltCode 25 'Scale : 1*SW Modality : 1*Read -1*Listen' \
        -dataTable \
        Subj  Modality  Scale  InputFile \
        s2    Listen    NS     "$spath"/sub02/surface_files/"$hemi"_GLM_glts.1D'[12]' \
        s2    Listen    CS     "$spath"/sub02/surface_files/"$hemi"_GLM_glts.1D'[6]' \
        ...
        s7    Listen    CS     "$spath"/sub07/surface_files/"$hemi"_GLM_glts.1D'[6]' \
        ...
end

After 3dMVM, transform the surface results back to volume data:


# prepend a node-index column (column 0 of a per-subject surface 1D file) for 3dSurf2Vol
foreach hemi (lh rh)
    1dcat "$spath"/sub02/surface_files/"$hemi"_GLM_glts.1D'[0]' \
          "$hemi"_group_MVM.1D \
          > "$hemi"_group_MVM.dset
end
# put the group results back into the volume world
foreach hemi (lh rh)
    3dSurf2Vol \
        -spec "$spath"/group/freesurfer/SUMA/subAvg_"$hemi"+tlrc.spec \
        -surf_A smoothwm \
        -surf_B pial \
        -sv ../subAvg_SurfVol_at+tlrc.nii.gz \
        -grid_parent "$spath"/sub02/orig_files/GLM_glts_at+tlrc. \
        -sdata_1D "$hemi"_group_MVM.dset \
        -datum float \
        -map_func max_abs \
        -f_steps 15 \
        -f_index voxels \
        -f_p1_fr -0.2 -f_pn_fr 0.4 \
        -prefix ./"$hemi"_group_MVM
end
rm group_MVM.nii.gz
3dcalc \
	-float \
	-a lh_group_MVM+tlrc. \
	-b rh_group_MVM+tlrc. \
	-expr '(a+b)' \
	-prefix group_MVM.nii.gz
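
A small note on that last 3dcalc step: wherever the lh and rh mappings both assign a nonzero value to the same voxel, a plain 'a+b' adds the two values together. If that ever matters for your data, a hedged variant (just a sketch, with a made-up output name) that keeps the lh value where it is nonzero and falls back to the rh value elsewhere would be:

# keep lh value where nonzero, otherwise take the rh value (sketch only)
3dcalc \
    -float \
    -a lh_group_MVM+tlrc. \
    -b rh_group_MVM+tlrc. \
    -expr 'a + b*iszero(a)' \
    -prefix group_MVM_merged.nii.gz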

The links below show the two renderings.
from surface data
https://drive.google.com/open?id=10bTizxI1ge_COQIQ6XC8D430erRaY9NX

from volume data
https://drive.google.com/open?id=1G81PMwHHD8d8k8HuhV7NYstqUzP66jva

They were rendered with the same threshold and cluster size.

Thank you so much!!!

Meng

I assumed the two approaches might give slightly different values but the same pattern. When I checked the results from the two methods, they did show a very similar contrast pattern, but the ranges of the statistical values differ a lot: the activation from the surface data is much more robust.

It is hard to assess the similarities between the two results since you didn't show the color scheme. It would be even better to compare the two without any thresholding. Nevertheless, it might not be too surprising that the surface-based approach is more sensitive, given its higher accuracy in terms of spatial alignment across subjects.

Adding on to Gang’s comments, I think you have three basic ways to analyze this kind of data that can be mapped to the surface, and each of these has a multitude of variations in important details. I think your question has less to do specifically with 3dMVM than with the underlying methods.

  1. Volumetric analysis. That is your @auto_tlrc method, but you can improve it with better volumetric registration to a standard space: auto_warp.py and @SSwarper can do this using nonlinear alignment. Your results will also align better across subjects than with the simpler affine-only alignment of @auto_tlrc (see the sketch after this list).

  2. Volumetric analysis for the linear modeling, but statistical analysis on the surface. That is your second method. The correspondence across subjects is dramatically increased here because the domain is restricted to the surface, with correspondence defined by the surface nodes, so results follow the gyri and sulci more reliably.

  3. Surface analysis for the linear modeling. Here you will likely find the best correspondence of the three situations, because smoothing is done on the surface and so respects the topological boundaries better. Statistical analysis is still done on the surface. In this case, the EPI data is mapped to the surface after registration to an anatomical dataset, usually with an averaging through the cortex; smoothing is then limited to the surface, and the linear modeling is done on the surface as well. This is generally our recommendation for surface-based analysis. You can find surface-based examples in the afni_proc.py help and in the AFNI class data under FT_analysis (a sketch follows this list).
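
As a rough illustration of options 1 and 3 (the file names, subject ID, stimulus timing files, and labels below are placeholders, not taken from this thread), the nonlinear warp could be computed with @SSwarper, and a surface-based per-subject script could be generated with afni_proc.py, roughly along the lines of the FT_analysis surface example in the class data:

# option 1: nonlinear alignment to a standard template (placeholder file names)
@SSwarper -input anat_sub02.nii.gz \
          -base  MNI152_2009_template_SSW.nii.gz \
          -subid sub02

# option 3: surface-based per-subject processing (placeholder dsets, timing files, labels)
afni_proc.py -subj_id sub02.surf \
    -blocks tshift align volreg surf blur scale regress \
    -copy_anat anat_sub02.nii.gz \
    -dsets epi_run*.nii.gz \
    -surf_anat freesurfer/SUMA/sub02_SurfVol+orig \
    -surf_spec freesurfer/SUMA/std.141.sub02_?h.spec \
    -volreg_align_e2a \
    -blur_size 6 \
    -regress_stim_times stim_Listen.1D stim_Read.1D \
    -regress_stim_labels Listen Read \
    -regress_basis 'BLOCK(2,1)' \
    -regress_opts_3dD -gltsym 'SYM: Read -Listen' -glt_label 1 Read-Listen

With option 3, the per-subject GLTs already live on the standard-mesh surfaces, so they could go straight into your hemisphere-wise 3dMVM as in your method 2, without the separate 3dVol2Surf step.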

Note that surface analysis is appropriate when you're interested, well, in the surface, namely the cortex, since the surface correspondence done by packages like FreeSurfer is based on the cortex. When you're interested in the rest of the brain (amygdala, striatum, ...), volumetric analysis is more appropriate. Of course, there's nothing stopping you from doing both. Regarding surface-based analysis, the data is originally volumetric, so there are choices in how to map the data to the surface.

Thank you both, Gang and Daniel. I think I'll stick with the surface-based approach.

One tiny question: does it make sense to map group analysis results that come entirely from the volume world onto a surface rendering, just to show the cortical contrasts?

That is, select the statistical results from method 1 as the overlay, open suma with the group-average spec and sv, and display the results there. Does that make sense?

I find suma can effectively show most kinds of data. As long as you make it clear that the analysis was done volumetrically and describe how you mapped the data to the surface, it is fine to display results with suma on some standard surface. Note that when suma talks with afni, the overlay in the afni GUI is sent to the suma GUI and colors the surfaces; by default, that coloring is done just at the intersections of the surface nodes with the volume. The Vol2surf plugin in afni gives controls similar to those of the 3dVol2Surf program and offers a variety of ways to show data from within the volume on the surface more effectively, like reaching into the volume to find a maximum, minimum, mean, or median.

Regarding the other direction, mapping data from the surface to the volume (on the command line only), the program 3dSurf2Vol and the script @surf_to_vol_spackle give a multitude of choices.
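
In case a command-line version of that display mapping is useful, here is a minimal sketch that reuses the group-average spec and surface-volume names from the scripts above (the exact paths are assumptions) to project the method-1 group result onto the surfaces for viewing in suma:

# map the volumetric group stats onto the group-average surfaces, for display only
foreach hemi (lh rh)
    3dVol2Surf \
        -spec "$spath"/group/freesurfer/SUMA/subAvg_"$hemi"+tlrc.spec \
        -surf_A smoothwm \
        -surf_B pial \
        -sv "$spath"/group/subAvg_SurfVol_at+tlrc.nii.gz \
        -grid_parent "$spath"/group/group_MVM+tlrc \
        -map_func ave \
        -f_steps 15 \
        -f_index nodes \
        -out_niml "$hemi"_group_MVM_disp.niml.dset
end

Each resulting niml.dset can then be loaded in suma as an overlay on the corresponding hemisphere.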