Hi AFNI gurus (I guess Paul in this particular case?):
Since “3dSpaceTimeCorr” loops through ijk and calculates spatial correlations between the connectivity patterns of the same ijk in two data sets, I am wondering whether there is also a convenient way of fixing one connectivity pattern and simply looping through ijk in one data set.
For example, by seeding in a fixed location (say, coordinates x=1, y=2, z=3), I have already obtained a connectivity pattern (CP) in data set A. This CP is simply a 3D spatial distribution of correlation coefficients or Fisher’s z values; let’s call it “CP_A+orig”. Now, I want to “apply” this “CP_A” to data set B, looping through the ijk’s just in B and assigning to each of these ijk’s the spatial correlation coefficient between that voxel’s CP and “CP_A”. In this situation, I expect that the same location (x=1, y=2, z=3) will “light up” in data set B.
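In NumPy terms, here is a toy sketch of the per-voxel computation I have in mind (all array names are hypothetical, and random data stand in for the actual AFNI datasets; real data would be 3D grids flattened to a voxel axis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_vox = 50, 30          # time points, voxels (toy sizes)

# Toy stand-ins: cp_a is the fixed pattern from dataset A ("CP_A");
# ts_b is dataset B's time series, flattened to (time, voxel).
cp_a = rng.standard_normal(n_vox)
ts_b = rng.standard_normal((n_t, n_vox))

# Connectivity pattern of every voxel in B: the voxel-by-voxel
# correlation matrix; row v is the CP of voxel v.
cp_b = np.corrcoef(ts_b.T)            # shape (n_vox, n_vox)

def pearson(x, y):
    """Pearson correlation of two 1D patterns."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

# Spatial correlation of each voxel's CP in B with the fixed CP_A:
# one value per voxel, forming the output map.
out_map = np.array([pearson(cp_b[v], cp_a) for v in range(n_vox)])
```

If CP_A itself were derived from a voxel present in B, the corresponding out_map entry should be large, which is the “light up” behavior described above.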
To realize this, I guess that I could clumsily run through a bunch of “3ddot” calls; but before doing that or launching Matlab, I want to check with you and see if there is a convenient way already available in AFNI that I am not aware of.
Thank you very much for your help!
That sounds pretty doable within 3dSpaceTimeCorr; I can add a flag that would allow the user to input an “x y z” location (and/or an “i j k” location) in dataset A, and then the WB/FOV correlation map from that voxel would be correlated with the maps generated from each location in the B dataset.
Does that sound fine?
Thank you Paul for this super quick response! Besides the input of user-specified coordinates, could you please also add an option allowing users to directly provide a 3D data set as the template to correlate with? Also, besides Pearson correlation, could you please add an output of the Euclidean distance between the template and the CP of each voxel?
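For concreteness, here is a toy NumPy sketch of the two per-voxel outputs I mean (hypothetical names, random stand-in data; in practice the template would be the user-supplied 3D data set and the rows of cp_maps would be each voxel's connectivity pattern):

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 20

template = rng.standard_normal(n_vox)          # user-supplied 3D pattern, flattened
cp_maps = rng.standard_normal((n_vox, n_vox))  # row v = CP of voxel v

# Output 1: Pearson correlation of each voxel's CP with the template.
cent_t = template - template.mean()
cent_m = cp_maps - cp_maps.mean(axis=1, keepdims=True)
r = (cent_m @ cent_t) / (np.linalg.norm(cent_m, axis=1) * np.linalg.norm(cent_t))

# Output 2: Euclidean distance of each voxel's CP from the template.
dist = np.linalg.norm(cp_maps - template, axis=1)
```

Note that correlation is invariant to scaling and shifting of the patterns, while Euclidean distance is not, so the two outputs are complementary rather than redundant.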
Thanks a lot!
The originally asked-for option has been added, and we’ll do a build of the binaries this evening, so you should have them available soon.
As to the other options, those will take a little more time+pondering… Should be doable.
Thank you so much Paul!
Could you please reply on this thread once you have added the other options? The most useful option for me would be allowing users to provide a 3D pattern directly, because this pattern can be derived from a different group of subjects. Also, beyond specifying a single seed voxel, allowing specification of a seed mask would be very useful.