# Fisher transform

Hello,

I’ve come across something in my resting-state analysis that seems strange. I ran 3dfim+ with the Correlation setting, using the average time series in a seed region as the reference, and then used 3dcalc to apply the Fisher transform to get z-scores. However, the z-scores I’m getting seem extremely low: in the seed region itself, it’s only 1.6. I was expecting something higher. The equation I’m using for the Fisher transform is: log((1+a)/(1-a))/2

However, after some Google searching, it appears FSL uses the following for the Fisher transform: sqrt(N-3)*log((1+a)/(1-a))/2, where N is the number of images. But that doesn’t make a whole lot of sense to me either, because in my case N = 1500, and that factor hugely inflates the numbers.

Does anybody have any thoughts? Thanks.

Hi-

The Fisher Z transform can be calculated in terms of a Pearson r as either:
Z = 0.5*log((1+r)/(1-r))
or
Z = atanh(r),
where atanh is the inverse of the hyperbolic tangent function. The two forms are mathematically identical, so they should agree to a large number of decimal places even for a high correlation.
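A quick numerical check of this equivalence (using NumPy, where atanh is spelled `arctanh`):

```python
import numpy as np

# The two forms of the Fisher Z transform are mathematically identical:
# 0.5*log((1+r)/(1-r)) is exactly atanh(r).
r = 0.95                                  # example Pearson correlation
z_log = 0.5 * np.log((1 + r) / (1 - r))
z_atanh = np.arctanh(r)
print(z_log - z_atanh)                    # difference is at machine precision
print(np.tanh(z_atanh))                   # tanh inverts the transform: ~0.95
```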

Agreed, I don’t see where the sqrt() factor would come into play.

In terms of your specific question, your Z = 1.6 corresponds to a Pearson correlation of:
r = np.tanh(1.6) ≈ 0.9217,
which is pretty high. If you are taking an average time series from a region of interest, the expected correlation with any time series within even that region would depend a lot on the size of the region and its homogeneity/noise. It would be good to double-check the calculation, but it might not be beyond the realm of possibility to get the value you are seeing.

–pt

That does make sense. I’m still kind of confused, though. The t-scores I’m used to seeing from 3dDeconvolve are usually around 3, 4, or 5. The t distribution isn’t that different from the z distribution, so why are the resulting scores (and I guess the p-values) so different?

The formula for converting between t and Pearson r is given on one of Gang’s pages:
https://afni.nimh.nih.gov/sscc/gangc/tr.html

You can see that t<->r conversions depend on the DOF involved, so that can have a big effect (which the r<->Z conversion doesn’t have), especially if the DOF is large. Perhaps that is the root of the difference?
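The DOF dependence can be sketched numerically (a sketch assuming the standard relation t = r*sqrt(df)/sqrt(1-r^2), with df = N - 2 for a simple correlation; `r_to_t` is just an illustrative name):

```python
import numpy as np

# t <-> r relation for a simple correlation: t = r*sqrt(df)/sqrt(1-r^2),
# with df = N - 2. Note how t grows with DOF while the Fisher Z of the
# same r does not depend on N at all.
def r_to_t(r, df):
    return r * np.sqrt(df) / np.sqrt(1.0 - r**2)

r = 0.92
for df in (50, 500, 1498):        # e.g. N = 52, 502, 1500 time points
    print(df, r_to_t(r, df))      # t grows steadily with DOF
print(np.arctanh(r))              # the Fisher Z stays ~1.589 regardless of N
```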

–pt

Thanks I’ll give that a shot and see.

And just to be sure: 3dfim+ with the -correlation option outputs r, not r^2, correct?

It seems to output r, but if somebody can confirm or correct me, I would appreciate it. Thanks.

You’re confusing the Fisher-transformed Z-value with the Z-statistic. The reason the Pearson correlation coefficient is converted to a Fisher Z-value in contexts like yours is the convenience of the Gaussianity assumption at the group level. In other words, the Fisher Z-value is meant to be used as an effect estimate for further parametric analysis, not as a measure of significance or a p-value, so there is no point in comparing a Fisher Z-value to a t-statistic or p-value. A Z-statistic, on the other hand, like a t-statistic, provides evidence about the significance of a hypothesis.
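As an illustration of the distinction (a sketch assuming independent samples; fMRI time series are autocorrelated, so the effective N is smaller than the number of volumes in practice): a Z-statistic rescales the Fisher Z-value by sqrt(N-3), its standard-error factor, which is presumably where the sqrt(N-3) in the FSL formula mentioned earlier comes from.

```python
import numpy as np

# Fisher Z-value vs. Z-statistic (assumes independent samples).
r = 0.92
fisher_z = np.arctanh(r)                 # effect estimate: no dependence on N
for n in (100, 1500):
    z_stat = np.sqrt(n - 3) * fisher_z   # significance measure: grows with N
    print(n, z_stat)
```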

Okay, I think I understand. So if I want to talk about significance in my case, what would be the best way of doing so?

I’m thinking of extracting the mean time course from the seed region and then using it as the reference in a GLM?

Thanks.

If I want to talk about significance in my case, what would be the best way of doing so?

I don’t remember whether 3dfim+ outputs a t-stat for each regressor. If it does not, you can use the link Paul provided to get the t-stat (https://afni.nimh.nih.gov/sscc/gangc/tr.html). However, it would be preferable to just use 3dDeconvolve instead of 3dfim+ for that purpose.

I’ll try 3dDeconvolve then. That’s probably the best thing to do, as you said.

I’m not sure how I would do that, however. I assume I want to use the errts file (from modeling out movement, etc.), from which I’m extracting my time course, as my input. And of course I have to feed in my time course somehow. I’m trying something like this:

```
3dDeconvolve -input …/errts.T01.tproject+tlrc                   \
    -polort -1                                                  \
    -num_stimts 1                                               \
    -stim_file 1 TimeCourse01.txt -stim_base 1 -stim_label 1 TC \
    -fout                                                       \
    -tout                                                       \
    -fitts B1_F1_fitts                                          \
    -errts B1_F1_errts                                          \
    -bucket B1_F1_stats
```

It gets me an empty output though… any advice?

Thanks.

First of all, don’t use -stim_base, because that would treat the regressor you feed in as part of the baseline model; that’s why you didn’t get any output. Secondly, it would be better to include the motion regressors directly in the same 3dDeconvolve model. In other words, a one-step approach with all effects incorporated in a single model is preferable to first removing some confounding effects and then running a second step, because the full model accounts properly for the potential correlations among those effects.
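A hedged sketch of what a one-step command might look like (the input dataset, polort order, and motion file name below are placeholders, not taken from your setup):

```
# Sketch only: filenames and polort order are placeholders.
3dDeconvolve -input pb04.T01.scale+tlrc            \
    -polort 3                                      \
    -num_stimts 1                                  \
    -stim_file 1 TimeCourse01.txt -stim_label 1 TC \
    -ortvec motion_demean.1D motion                \
    -fout -tout                                    \
    -bucket B1_F1_stats
```

Here -stim_base is dropped so that TC is a regressor of interest (and gets a t-stat), while -ortvec folds the motion regressors into the baseline of the same model.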

P. S. For resting-state data analysis, it’s recommended that you follow Examples 9/10 in the help of afni_proc.py.

Hi,

Concerning Fisher transforms. AFNI proposed that we do it before running seed-based analysis. Why is it so important to have Gaussianity for seed-based analysis?

I have come upon some situations where I think it creates false results. Under the seed, the correlation coefficients should be very high, because the seed should be well connected with itself; let’s say ~1. However, after the Fisher transform, in my case, it becomes 1.6 in one group and 1.1 in another, and that results in a significant difference in the seed region between the two groups. Obviously, I can’t state that the connection of the seed with itself differs between the groups! Then how do I know that Fisher transforming the data is a good idea?

Thank you very much!

Concerning Fisher transforms. AFNI proposed that we do it before running seed-based analysis.

No, that’s not true. The Fisher transformation is typically performed after the seed-based correlation analysis at the individual level, but before group analysis. This practice is standard across the neuroimaging community.

Why is it so important to have Gaussianity for seed-based analysis?

Group analyses such as Student’s t-test assume Gaussianity of the input data.

I have come upon some situations where I think it creates false results. Under the seed, the correlation coefficients should be really high, because the seed should be well connected with itself. Let’s say ~1. However, after the Fisher correction, in my case, it becomes 1.6 in one group, and 1.1 in another, and it results in significant difference in the seed region between the two groups. Obviously, I can’t state that connection of the seed with itself is different in both groups!

Without knowing your analysis steps and without access to your data, it’s hard to tell why and how you got what you’re seeing.

Then how do I know that Fisher transforming the data is a good idea?

Fisher transformation is not something opaque: https://en.wikipedia.org/wiki/Fisher_transformation
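For intuition on what the transform does (a quick numerical sketch): it stretches correlations near ±1, which stabilizes variance across r, but it also means that modest differences in r near 1 become sizable differences in Z.

```python
import numpy as np

# atanh stretches correlations near +/- 1:
for r in (0.80, 0.92, 0.99, 0.999):
    print(r, np.arctanh(r))   # ~1.10, ~1.59, ~2.65, ~3.80
# Note that r = 0.92 -> Z ~1.59 and r = 0.80 -> Z ~1.10, close to the two
# group values reported above: Z values above 1 near the seed are expected.
```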

Hi,

Concerning Fisher transforms. AFNI proposed that we do it before running seed-based analysis.
No, that’s not true. Fisher transformation is typically performed after the seed-based correlation analysis at the individual level, but before group analysis. Such practice is typically adopted in the whole neuroimaging community.

Yes, sorry, I wrote too fast, that is indeed what we are doing.

I have come upon some situations where I think it creates false results. Under the seed, the correlation coefficients should be really high, because the seed should be well connected with itself. Let’s say ~1. However, after the Fisher correction, in my case, it becomes 1.6 in one group, and 1.1 in another, and it results in significant difference in the seed region between the two groups. Obviously, I can’t state that connection of the seed with itself is different in both groups!

Without knowing your analysis steps and without access to your data, it’s hard to tell why and how you got what you’re seeing.

We are doing a typical uber_subject.py preprocessing on two groups (patients vs. healthy controls), on 5 minutes of resting-state data.
The only thing that could be tricky is that the healthy subjects come from a different data bank (different scanner, and not exactly the same TR). We are only exploring the feasibility of such a study, and so far the results follow the literature, so I would think it won’t be a problem.
I did the seed-based analysis with 3dfim+ using 5 mm seeds (voxels are 3.5×3.5×3.5 mm), then the Fisher transform, and then 3dttest++ on the two groups.

Then how do I know that Fisher transforming the data is a good idea?
Fisher transformation is not something opaque: https://en.wikipedia.org/wiki/Fisher_transformation

Yes, thank you, I will read it carefully. But I am not worried about the mathematical side of the Fisher formula, but rather about its effect on seed-based analysis of fMRI data.

Should I just ignore results that are close to the seed position? Is it something that happens a lot?

Should I just ignore results that are close to the seed position? Is it something that happens a lot?

It’s hard for me to gauge the potential issues without access to the data. It might be fine if the results in other regions look as you expected.