Hi AFNI experts,
I am writing because of some odd behavior with afni_proc and the conversion from original space to MNI space. I am collaborating with someone who wrote a piece of AFNI code to preprocess MRI data, and I am trying to replicate their experiment on a new dataset that I acquired myself.
Surprisingly, when we run the same piece of code on our respective machines, using the same input DICOMs for the EPIs and the T1, we do not get the same output volumes… Inspecting the output from afni_proc, I have observed that the NIfTI data are identical in both runs at every step of the pipeline until they get resampled into MNI space. In other words, all the +orig volumes are identical, but the +tlrc ones are not. My guess would then be that we are not using the same MNI templates… However, as I said, we are using exactly the same code, and in particular this line:
which specifies the template as one of the templates included by default in AFNI. I am using AFNI_19.1.07 ‘Caligula’, and they may be using an older version, but I don’t see why the templates would be different… Any ideas?
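In case it helps: to rule out a template mismatch, I checksummed the template file on both machines. Here is a minimal sketch of what I ran; the path in the comment is hypothetical (substitute the template actually named in your afni_proc command):

```python
# Sketch: verify that both machines use the same template file by
# comparing MD5 digests.  Run on each machine and compare the output.
import hashlib

def file_md5(path):
    """Return the MD5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical path -- point this at your actual template dataset:
# print(file_md5("/path/to/abin/template+tlrc.BRIK.gz"))
```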
Thanks in advance
What does “afni -ver” show?
There was a problem with using auto_warp.py to do a non-linear transformation to standard space, which existed in versions AFNI_19.1.16 and AFNI_19.1.17.
Does that apply to you?
EDIT: Oops, it does not apply, since you nicely included the AFNI version in your post.
But this point is worth keeping here…
Reading this a little more closely, you are not actually talking about a problem with the result, just that the results from different machines are not identical. Is that correct?
Unless the machines are almost identical, it is not surprising to get different results, particularly when going through the registration steps. In fact, sometimes random noise is added so that peculiarities with skull stripping (for example) have less of an effect on the result.
On a related note, version AFNI_19.1.11 has a minor update that restricts the optimization criteria for convergence on some registration steps. This was done to make results on different machines more similar (note: identical is not an option).
But overall, expect results to be a little different across machines.
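If you want to put a number on "a little different", something like the sketch below can help. It assumes you have already loaded the two volumes as plain numpy arrays (e.g. with nibabel or by dumping values with an AFNI tool); the function itself just summarizes the discrepancy:

```python
# Sketch: quantify how different two resampled volumes are, instead of
# checking for bitwise identity.
import numpy as np

def compare_volumes(a, b):
    """Return (max absolute difference, Pearson correlation) of two arrays."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    max_abs_diff = float(np.max(np.abs(a - b)))
    r = float(np.corrcoef(a, b)[0, 1])
    return max_abs_diff, r
```

A very small maximum difference with correlation near 1.0 is the expected cross-machine behavior after registration.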
Indeed, I was talking about the results from different machines not being identical. Thanks a lot for the clarification; I suspected something like purposefully added noise, but could not find that in the documentation.
I understand now that there will be some variation from one machine to the other, but what I find odd is that if I run the processing twice in a row on the same machine, I get the same images. Is it possible that the added random noise is somehow tied to the machine?
If I may add to my previous message, the reason I am so concerned with these differences is that we are computing connectivity matrices on the EPIs, and the matrices we get for each subject are very different between machines. Although the correlation between the matrices is high (0.95), the values of individual elements can vary by a lot… Is this a known phenomenon?
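To illustrate what I mean, here is a toy example (all values made up, not from our data) showing that two matrices can correlate highly overall while individual entries still differ substantially:

```python
# Toy illustration: overall correlation can stay high even when a
# subset of matrix entries changes by a large amount.
import numpy as np

rng = np.random.default_rng(0)
m1 = rng.uniform(-1, 1, size=(50, 50))   # fake "connectivity" matrix
m2 = m1.copy()
m2[:5, :5] += 0.8                        # perturb a small block of entries

r = np.corrcoef(m1.ravel(), m2.ravel())[0, 1]
max_diff = np.max(np.abs(m1 - m2))
# r remains close to 1 even though some entries moved by 0.8
```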