AFNI does not give a logical result

I use AFNI and uber_subject.py, and when I look at my afni_proc.py script it looks OK; it also performs all the processing without any error. In our project we are trying to compare non-stationary NOTMOCO and stationary MOCO data (MOCO has motion in a 0.1 mm-0.3 mm band, NOTMOCO has motion in a 1 mm-2 mm band, and MOCO is the motion-corrected NOTMOCO data created by the MR scanner's own software). I use a motion censor limit of 3 mm, and the resulting script says there are no censored TRs. But MOCO gives more meaningful clusters and more activated voxels than NOTMOCO, and this result is exactly the opposite in GIFT, CONN, and REST. How can this be possible? What should I do?

It would be helpful to post the entire afni_proc.py command used in your analyses. I’m assuming that you’re running two processing pipelines, one for MOCO and one for NOTMOCO.

-Peter

afni_proc.py -subj_id $subj \
    -script proc.$subj -scr_overwrite \
    -blocks despike tshift align tlrc volreg blur mask regress \
    -copy_anat $anat_dir/anat+orig \
    -dsets $epi_dir/r01+orig.HEAD \
    -tcat_remove_first_trs 5 \
    -volreg_align_to third \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -blur_size 6.0 \
    -regress_censor_motion 3.0 \
    -regress_bandpass 0.01 0.1 \
    -regress_apply_mot_types demean deriv \
    -regress_est_blur_errts

I use Bash on Ubuntu, and I ran 2 parallel terminals, each with different paths. This one is for MOCO. It has only 2 differences from NOTMOCO, which are $epi_dir and $anat_dir.

I don’t understand what you mean by this phrasing:

"non-stationary NOTMOCO and stationary MOCO data (MOCO has motion in a 0.1 mm-0.3 mm band, NOTMOCO has motion in a 1 mm-2 mm band, and MOCO is the motion-corrected NOTMOCO data created by the MR scanner's own software)".

What does "motion in a xx-yy band" mean? What does "created by the MR scanner's own software" mean?

In any case, setting the motion censoring limit to 3 mm is absurdly large. For resting state, we recommend 0.2 mm as the motion limit, and usually don’t recommend bandpassing, unless your TR is less than 2 s.
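For example, the relevant lines of the afni_proc.py command would become something like this (a sketch; whether to keep the bandpass depends on your TR):

-regress_censor_motion 0.2
# and, unless TR < 2 s, consider dropping: -regress_bandpass 0.01 0.1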

In our project, we are trying to understand head motion's effect on brain activations. MOCO data are motion-corrected NOTMOCO data: our NOTMOCO data's head motion is in the 1-2 mm range, and the motion correction that produces MOCO is done by the MR scanner itself. So NOTMOCO is, in other words, the raw data, and MOCO is the motion-corrected data. Because of that, we expect NOTMOCO to have more activations than MOCO, but in AFNI, MOCO has more activations, while in GIFT, CONN, and REST, NOTMOCO has more activations than MOCO. I can't understand why this is happening…

Ok, I’m guessing that you’re using a Siemens scanner? My first guess is that the slice timing information isn’t included in the DICOM headers, so you need to specify it to afni_proc.py using something like this:


-tshift_opts_ts -tpattern alt+z

The default on Siemens is alt+z for an odd number of slices and alt+z2 for an even number of slices.

All the other packages, by contrast, require you to specify the slice pattern yourself.
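Once the data are in AFNI format, you can check what slice timing actually made it into the header (a quick sanity check; substitute your own EPI dataset name for epi+orig):

# print the per-slice timing offsets stored in the dataset header
3dinfo -slice_timing epi+orig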

What’s your TR? What are other properties of the scan? Voxel size? I’m guessing that if you’re using MOCO, then you’re running a product sequence with no multiband or other fancy things like that.

First of all, thanks for your attention.
TR = 2800 ms, TE = 25 ms, flip angle = 90°, field of view = 192 mm, 36 slices covering the whole brain, slice thickness = 3 mm, in-plane resolution = 2×2 mm. Resting-state data were collected for 9 min 44 s, resulting in 205 volumes of BOLD fMRI data per subject. The resting-state fMRI scans were performed on a 1.5 Tesla Siemens MR device.
And to convert to AFNI format I use the commands below.

to3d -anat -prefix anat *.IMA                        # for the anatomical set
to3d -prefix r01 -time:zt 36 205 2800 alt+z *.IMA    # for the EPI set

It's worth verifying with your scan center, but I believe your timing should be "alt+z2" for an even number of slices, depending on which scanner, which software version, and which sequence, of course. Incorrect timing information in the slice-time correction could play real havoc with your analyses, particularly with such a large TR.

Also, as Bob mentioned, a censor threshold of 3.0 is massive, and it doesn't correspond to 3 mm/3 degrees of movement; we do the calculations on Euclidean distances, hence the recommendation of 0.2 or so. That would at least put it in the range of other software packages like CONN for censoring motion. You may also want a -regress_censor_outliers 0.1.
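For reference, the quantity that -regress_censor_motion thresholds is the Euclidean norm of the TR-to-TR differences of the six motion parameters. Recent afni_proc.py versions write it out as motion_${subj}_enorm.1D in the results directory, but you can recompute it yourself from the motion file (dfile_rall.1D is the usual name, though it may differ by version):

# Euclidean norm of the first differences of the 6 motion parameters
1d_tool.py -infile dfile_rall.1D -set_nruns 1 \
           -derivative -collapse_cols euclidean_norm \
           -write motion_enorm.1D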

Change those two things (the slice-timing pattern and the censor limit), re-run, and get back to me!

-Pete

I made the changes, but it is still the same as before.

afni_proc.py -subj_id $subj \
    -script proc.$subj -scr_overwrite \
    -blocks despike tshift align tlrc volreg blur mask regress \
    -copy_anat $anat_dir/anatZ+orig \
    -dsets $epi_dir/rZ+orig.HEAD \
    -tcat_remove_first_trs 5 \
    -volreg_align_to third \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -blur_size 6.0 \
    -regress_censor_motion 3.0 \
    -regress_censor_outliers 0.1 \
    -regress_bandpass 0.01 0.1 \
    -regress_apply_mot_types demean deriv \
    -regress_est_blur_errts
and I changed my to3d command from "to3d -prefix rZ -time:zt 36 205 2800 alt+z *.IMA" to "to3d -prefix rZ -time:zt 36 205 2800 alt+z2 *.IMA"


-regress_censor_motion 3.0

Should be


-regress_censor_motion 0.2

But we use exactly the same value in the other packages. Our project is about motion, so I don't want to use motion correction.

The value in AFNI isn't in millimeters or degrees (or radians) of rotation, so using the "same" value as other software packages is fairly meaningless. You can approximate, but if you wanted roughly 3 mm or 3 degrees, then you want a value of ~0.3 for AFNI.

If you don't want to do ANY type of motion correction, then you can remove the "volreg" block from the afni_proc.py -blocks list. That's the step that actually goes through and realigns all of the individual TR volumes to whatever base you specify (you specified "third" via -volreg_align_to).
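Something like this, very roughly (an untested sketch: with volreg gone there are no motion parameters, so the motion-based censoring and regressors have to go too, and afni_proc.py may balk at the EPI-to-template warp):

afni_proc.py -subj_id $subj \
    -script proc.$subj -scr_overwrite \
    -blocks despike tshift align tlrc blur mask regress \
    -copy_anat $anat_dir/anat+orig \
    -dsets $epi_dir/r01+orig.HEAD \
    -tcat_remove_first_trs 5 \
    -blur_size 6.0 \
    -regress_est_blur_errts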

Sorry, I couldn't explain myself. Also, I can't understand why AFNI gives lower activations for NOTMOCO than for MOCO. If I use 3 in AFNI, no TRs get censored in either MOCO or NOTMOCO, so why does NOTMOCO give lower activations? We use 6 mm FWHM, a 3 mm motion censor, and a 0.01 Hz-0.1 Hz bandpass filter. Did I enter those parameters correctly? Do I suppress any activations in NOTMOCO with the script generated by this afni_proc.py command? Does this result seem OK? Sorry again, and thanks for spending time on my questions. =))

Since you do not have any stimulus timings, what do you mean here by "activations"? Inter-voxel or inter-region correlations? How did you compute "activations" from the results of the afni_proc.py run?

I do ReHo, ROI-based analyses (correlation), and ALFF. Then I do a t-test with uber_ttest.py. On the t-test dataset I use clustering in the AFNI GUI, and when I look at the clustering report, MOCO has more significant voxels (the total number of voxels in the report) than NOTMOCO.

3dReHo -prefix ReHo11_{$subj} -inset errts.{$subj}.tproject+tlrc -mask mask_group+tlrc
3dmaskdump -noijk -mask mask_group+tlrc ReHo11_{$subj}+tlrc | 1d_tool.py -show_mmms -infile - > tt.txt
grep mean tt.txt | cut -f2,4 -d ',' | cut -f2 -d ',' | cut -f2 -d '=' > std.txt
set std = `cat std.txt`
grep mean tt.txt | cut -f2,4 -d ',' | cut -f1 -d ',' | cut -f2 -d '=' > mean.txt
set mean = `cat mean.txt`
echo {$mean}
echo {$std}
3dcalc -a ReHo11_{$subj}+tlrc -b mask_group+tlrc -expr '((a-'{$mean}')/('{$std}'*b))' -prefix ReHo11_Normalized2
For automating ReHo, I take the std and mean values with this code. I tried it manually, too, but the result didn't change. I use ReHo11_Normalized2 for the t-test.
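(As an aside, the same mean/std extraction can be done in one step with 3dmaskave -sigma, which avoids the grep/cut parsing; a sketch, with a hypothetical output prefix:)

# -sigma makes 3dmaskave print "mean stdev" on one line
set ms = `3dmaskave -quiet -sigma -mask mask_group+tlrc ReHo11_{$subj}+tlrc`
# $ms[1] is the mean, $ms[2] the stdev (tcsh list indexing)
3dcalc -a ReHo11_{$subj}+tlrc -b mask_group+tlrc \
       -expr "((a-$ms[1])/($ms[2]*b))" -prefix ReHo11_Normalized2_alt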

3dUndump -prefix {$roi} -master errts.{$subj}.tproject+tlrc. -srad 6 -xyz {$roi}.txt
3dmaskave -quiet -mask {$roi}+tlrc. errts.{$subj}.tproject+tlrc. > timeCourse.txt
3dfim+ -input errts.{$subj}.tproject+tlrc. -polort 2 -ideal_file timeCourse.txt -out Correlation -bucket {$roi}_Corr
3dcalc -a {$roi}_Corr+tlrc. -expr 'log((1+a)/(1-a))/2' -prefix Corr_{$roi}_m_{$subj}_Z
and this is for the ROI-based analyses. I use the Corr…_Z dataset for the t-test in the last step.
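(Similarly, 3dTcorr1D can produce the same seed-correlation map in one step, and 3dcalc's atanh() is the same Fisher z-transform; a sketch, with hypothetical _alt prefixes:)

# Pearson correlation of every voxel's time series with the seed
3dTcorr1D -prefix {$roi}_Corr_alt errts.{$subj}.tproject+tlrc timeCourse.txt
# Fisher z-transform, identical to log((1+a)/(1-a))/2
3dcalc -a {$roi}_Corr_alt+tlrc -expr 'atanh(a)' -prefix Corr_{$roi}_alt_Z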

We can't answer your question, as we don't really know much about those other software tools you mention (GIFT, CONN, and REST). In particular, we don't know how they compensate for movement, baseline drifts, or other artifacts.

Thanks for your attention first :). I just wonder whether there is any mistake in my code? I posted my code above, but if you want I can post it in one piece. One more question: can I use pre-processed data created by SPM? How can I convert it into AFNI format? When I tried that, it created a separate dataset for every volume. Sorry for my questions, but I am new to neuroscience and AFNI, and I have nobody to ask about AFNI. My researcher friends and my supervising researcher use other packages. Please help me, I am stuck here.

Best regards
Abdullah

Since MOCO means motion-corrected NOTMOCO, you are asking afni_proc.py to perform motion correction a second time. It is not clear why this would be useful, unless you think the other software did not do it correctly.

Why would you expect the activation to be different at all? Are you sure the test is not supposed to SKIP motion correction? That would be a real MOCO vs. NOTMOCO test. This is more like MOCOx2 vs. MOCOx1.

So what is the point of this study, if the only difference is one pass of motion correction vs. two? The main effects will be to add additional blur, and possibly additional noise, due to the secondary motion correction.

And since the original volumes were resampled the last time motion correction was run, the subsequent "corrections" are non-zero only because of the resampling (and method differences). The only reason one might think this would not ADD motion noise is that the blur effect will probably be much stronger than the inappropriate motion "correction".

Regarding your to3d command, are you sure the alphabetical ordering of the IMA files is correct? I suggest you use Dimon to create the AFNI datasets:

Dimon -infile_pattern '*.IMA' -dicom_org -gert_create_dataset -sp alt+z2

See if the volume order is different from your original one. Dimon will report whether -dicom_org was useful.

In a later comment, you say you do not want to use motion correction. Maybe you do not want a 'volreg' block at all. But in that case, it might be difficult to get afni_proc.py to deal with the other transformations.

Yes, the 0.2 value for censoring is basically in millimeters/degrees. They are considered to be in the same range, since a 1 degree rotation produces about 1 mm of displacement roughly 2/3 of the way out to the cortex.

But your 3 mm limit is probably a cumulative limit for dropping subjects in the other software. The limit you are giving to afni_proc.py is a per-time-point one, which is very different. Maybe the other packages do not censor at all?
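One way to see what a per-time-point limit would actually do is to ask 1d_tool.py how many TRs a given threshold would censor (a sketch, again assuming the dfile_rall.1D motion file from the afni_proc.py results):

# count the TRs that a 0.2 limit would censor
1d_tool.py -infile dfile_rall.1D -set_nruns 1 \
           -show_censor_count -censor_motion 0.2 censor_test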

Before you try to understand the activations, try to understand the processing. Without that, there is no context to understand a difference in activations.

It is possible that NOTMOCO gives lower activations because it ends up with less blur, now that you are blurring MOCO more, due to the extra motion "correction" resampling.

  • rick

Thanks for that summary. I understand almost every step of the pre-processing, but I have 2 more questions.

1- How can I compare MOCO and NOTMOCO correctly, with the same parameters? (When I look in the files_ACF folder, it says FWHM = 10.06 for NOTMOCO and 10.35 for MOCO, although I chose 6.)

2- How can I use data pre-processed by SPM?

Thanks for all your help. I will keep this favor in mind.

Regards
Abdullah

Hi Abdullah,

If the MOCO data have been interpolated an extra time, they should have a higher blur estimate, which you would probably need to use.
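Those estimates matter once you get to cluster-size thresholding: the ACF parameters from the files_ACF output (averaged across subjects) are what you would feed to 3dClustSim (a sketch; the three numbers are placeholders, not your values):

# cluster-size thresholds under the ACF model; a, b, c come from
# the averaged 3dFWHMx -acf output
3dClustSim -mask mask_group+tlrc -acf 0.7 3.5 9.0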

But still, what is the real point of comparing to an extra motion correction?

How to use pre-processed data from SPM is less clear, since you would need to describe exactly what was done by SPM. Do you intend to just run a linear regression and group analysis with AFNI? To do even a linear regression, you would want accurate motion parameters (and for MOCO, those might come from the conversion from NOTMOCO to MOCO).
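On the format question itself: if SPM wrote one 3D NIfTI per volume, which would explain the separate datasets you saw, 3dTcat can glue them back into a single time series (a sketch; the swra*.nii pattern is a guess at SPM's output naming, and the files must sort into the correct temporal order):

# concatenate per-volume NIfTIs into one dataset, setting TR = 2.8 s
3dTcat -prefix r01_spm -tr 2.8 swra*.nii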

Do you really want to compare one motion correction with two?

  • rick