AFNI_proc.py - A faulty warped brain

Dear all,

I set up my own afni_proc.py command to preprocess the fMRI data of one subject. The problem is that the preprocessed output contains a strangely misaligned brain, as you can see in the screenshot below. This does not happen for just one subject; it occurs for every subject of the dataset that I have tried to process.

This is my afni_proc.py setup (the whole regress part is not shown here, since the problem already occurs very early in the processing pipeline; there is also only one run, to reduce processing time while fixing this problem, so don't be surprised). Where does the problem occur? -despike and -tshift seem to work fine: when I open the corresponding files in AFNI to inspect them visually, the brain is perfectly normal in shape and position. In the subsequent file, however, i.e. "pb03.Subject_1.r01.volreg+tlrc", what you see is the brain in the attached screenshot: it is misaligned/rotated on all planes.

My AFNI_proc.py setup:

Set directories

directory_subjects=/users/philipp/desktop/fmri/dataset/subjects/subj2_exp1
directory_fMRIruns=$directory_subjects/raw
directory_anatomical=$directory_subjects/raw
directory_stimulionsets=/users/philipp/desktop/fmri/dataset/info
cd $directory_subjects

Correct Dataset Centers (@Align_Centers) for Preprocessing

cd $directory_subjects/raw
@Align_Centers -base MNI152_2009_template.nii.gz \
    -dset $directory_anatomical/3dto3d+orig -prefix centered_3dto3d
@Align_Centers -base MNI152_2009_template.nii.gz \
    -dset $directory_fMRIruns/run1to3d+orig -prefix centered_run1to3d
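(A quick sanity check, assuming the standard AFNI 3dinfo utility and its -same_center flag: after @Align_Centers, the recentered datasets should report the same center as the template.)

```shell
# 3dinfo prints 1 if the two datasets share the same center, 0 otherwise
3dinfo -same_center centered_3dto3d+orig MNI152_2009_template.nii.gz
3dinfo -same_center centered_run1to3d+orig MNI152_2009_template.nii.gz
```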

AFNI_proc.py

afni_proc.py \
    -subj_id Subject_1 \
    -out_dir $directory_subjects/Preprocessing \
    -dsets \
        $directory_fMRIruns/centered_run1to3d+orig \
    -blocks despike tshift align tlrc volreg mask blur scale regress \
    -copy_anat $directory_anatomical/centered_3dto3d+orig \
    -tcat_remove_first_trs 4 \
    -align_opts_aea -cost lpc+ZZ -giant_move \
    -tlrc_base MNI152_2009_template.nii.gz \
    -tlrc_NL_warp \
    -volreg_align_e2a \
    -volreg_align_to MIN_OUTLIER \
    -volreg_tlrc_warp \
    -volreg_post_vr_allin yes \
    -volreg_pvra_base_index MIN_OUTLIER \
    -blur_size 4.0 \

Do you have an idea where the problem could be? Is there something wrong in my afni_proc.py command itself? I ran @Align_Centers just before afni_proc.py, but that did not solve the problem either.

Thanks,

Philipp

Hi-

A couple things might be happening here.

First, while your EPIs might now be newly recentered, the anatomical dataset might not be, and that can cause issues. You can check this in the GUI by underlaying/overlaying the datasets.

Second, your EPI dset has pretty low tissue contrast (e.g., it is hard to see GM-WM boundaries and distinctions), as well as large bright patches around the edge. It is often the GM-WM-CSF patterns that drive alignment between volumes, and here those patterns are hard to pick up on. This might present a problem for any alignment. To deal with this, it might be worth trying a different cost function (e.g., nmi). But would you be able to upload the datasets (the input EPI and anatomical volumes) if I share a Box drive with you? Then I can take a look.

–pt

Dear Paul,

thanks for your input. If you find anything out about the data files that I sent you, please let me know.

Meanwhile, I changed the following line in my AFNI_proc code:

-align_opts_aea -cost lpc+ZZ -giant_move \
to
-align_opts_aea -cost lpa -giant_move \

This solved the problem. Now the brain in the file "pb03.Subject_1.r01.volreg+tlrc" is normally aligned and in a normal position.
Nonetheless, I am still not sure whether the three blocks "align tlrc volreg" do a good job, and/or whether my raw data is already in such bad shape that it would require additional preprocessing even before the "align tlrc volreg" parts of the pipeline.

As you suggested, I compared the anatomical image as underlay with the functional images as overlay in the AFNI GUI (on the raw data). The match looked okay or even fine to me; for example, the functional images of run 1 are not spatially off from the anatomical image.

I will proceed with the regression part of the general linear model now. I may come back to this thread later in case new problems occur that I cannot solve and that might be related to the preprocessing of my data. And as I said, if you have already found problems with the raw data files that I sent you (e.g., bad quality that could cause problems), please let me know.

Thanks,

Philipp

Hi Philipp,

When testing such registration for use by afni_proc.py, it can be helpful to run align_epi_anat.py directly.

  1. copy the align_epi_anat.py command out of the proc script
  2. copy the inputs to it to a new directory, along with the short AEA command script
  3. add -multi_cost with any of the cost functions you would like to test (lpa, lpa+zz, nmi, lpc+zz, whatever you choose)
  4. run that separately to see the results with various cost functions

Then if you find a more appropriate cost function, give that to afni_proc.py.

This should save some time, compared with running afni_proc.py repeatedly.
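As a rough sketch, the standalone test script from those steps might look like the following (the dataset names and -epi_base index here are placeholders; the real values come from the align_epi_anat.py command inside your proc script):

```shell
#!/bin/tcsh
# Compare several cost functions in one run of align_epi_anat.py.
# NOTE: dataset names and the EPI base index are placeholders; copy the
# actual values from the align_epi_anat.py command in your proc script.
align_epi_anat.py                       \
    -anat centered_3dto3d+orig          \
    -epi  centered_run1to3d+orig        \
    -epi_base 0                         \
    -giant_move                         \
    -cost lpc+ZZ                        \
    -multi_cost lpa nmi lpa+ZZ
```

Each cost function produces its own aligned output, so the candidates can be compared side by side in the GUI.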

  • rick

Hi, Philipp-

There are a couple issues here. But the main one is that while the first few EPI volumes have good tissue contrast, they are “pre-steady state” and get removed (rightfully) during processing—the problem is that the remaining volumes have very poor tissue contrast, and so they provide little detail for the cost function to align well with the anatomical. You can see in the attached image—the top row volumes have recognizable EPI-like contrast (that is the [0]th volume, which is pre-steady state), while the bottom row has post-steady state, and has poor tissue contrast—the ventricles almost look dark, even, which might prove extra challenging for the lpc-based cost function.

The good news is that we can try another cost function, and also that the EPI and anatomical line up well to start (a good thing for helping alignment). I tried changing to the "nmi" cost function and not including "-giant_move" (to address those respective points in the previous sentence), and results looked pretty good. So please try changing this line:


-align_opts_aea -cost lpc+ZZ -giant_move \

to this:


-align_opts_aea -cost nmi \

in your afni_proc.py command.

–pt

I should also note that instead of calculating nonlinear alignment within afni_proc.py via:


-tlrc_base MNI152_2009_template.nii.gz \

… nowadays we typically recommend running @SSwarper to calculate both the skullstripping (SS) and the nonlinear warp to standard space.

The command would be run ahead of time, e.g.:


@SSwarper                                    \
        -input  DSET_ANAT                  \
        -base   MNI152_2009_template_SSW.nii.gz  \
        -subid  ${subj}                          \
        -odir   OUT_DIR

… and then, as noted here in this sliiightly modified clip of the @SSwarper help file, you provide both the skullstripped anatomical (and turning off further skullstripping) and the warp dsets:


|  afni_proc.py                                                  \
|    [...other stuff here: processing blocks, options...]        \
|    -copy_anat OUT_DIR/anatSS.${subj}.nii                       \
|    -anat_has_skull no                                          \
|    ....
|    -volreg_tlrc_warp -tlrc_base MNI152_2009_template_SSW.nii.gz \
|    -tlrc_NL_warp                                               \
|    -tlrc_NL_warped_dsets                                       \
|       OUT_DIR/anatQQ.${subj}.nii                               \
|       OUT_DIR/anatQQ.${subj}.aff12.1D                          \
|       OUT_DIR/anatQQ.${subj}_WARP.nii

–pt

Dear Paul and Rick,

a big thanks to both of you for your help and the information, which I can use in the future. Rick, I also watched many videos of your talks about AFNI on YouTube. Those were very helpful for getting into AFNI; I wonder why they do not have more views, as they were nicely done.

Philipp

Thanks a lot, Philipp, I am very glad they were helpful!

  • rick

I return to my topic with a new problem. This problem appeared later in the preprocessing pipeline, more precisely in the regression part.

Let me provide you with short background information about what I intend to do:

  • One subject
  • Two runs
  • Two kinds of stimuli (Self vs. No_Self). Both stimuli are presented as trials in each of the 2 runs (unpredictable for the subject which type comes next; self or non-self)
  • 20 trials per run (10 Self; 10 No_Self)

(I only use one subject and two runs in order to reduce processing time. I have an older laptop with a small HDD; a new laptop should arrive soon, and then I can hopefully preprocess a whole dataset with many subjects. Meanwhile, I am trying to at least get through a whole preprocessing session with 1 or 2 subjects and 1 or 2 corresponding runs. Nonetheless, the problem described below likewise appears with more than one subject and more than 2 runs. The .1D stimulus-onset text files were adjusted accordingly from 6 rows for 6 runs to 2 rows for 2 runs, so that AFNI does not complain or throw related errors.)

Here is my AFNI_proc.py

Set directories

directory_subjects=/users/philipp/desktop/fmri/dataset/subjects/subj1_exp1
directory_fMRIruns=$directory_subjects/raw
directory_anatomical=$directory_subjects/raw
directory_stimulionsets=/users/philipp/desktop/fmri/dataset/info
cd $directory_subjects

AFNI_proc.py

(In this try I used three runs; the .1D files were adjusted accordingly, so that is not the problem.)

afni_proc.py \
    -subj_id Subject_1 \
    -out_dir $directory_subjects/Preprocessing \
    -dsets \
        $directory_fMRIruns/run1to3d+orig \
        $directory_fMRIruns/run2to3d+orig \
        $directory_fMRIruns/run3to3d+orig \
    -blocks despike tshift align tlrc volreg mask blur scale regress \
    -copy_anat $directory_anatomical/3dto3d+orig \
    -tcat_remove_first_trs 4 \
    -align_opts_aea -cost nmi -ginormous_move \
    -tlrc_base MNI152_2009_template.nii.gz \
    -volreg_align_e2a \
    -volreg_align_to MIN_OUTLIER \
    -volreg_tlrc_warp \
    -volreg_post_vr_allin yes \
    -volreg_pvra_base_index MIN_OUTLIER \
    -blur_size 6.0 \
    -regress_stim_times $directory_stimulionsets/allrun_self.1D \
                        $directory_stimulionsets/allrun_nonself.1D \
    -regress_stim_labels Self No_Self \
    -regress_basis 'BLOCK(2,1)' \
    -regress_opts_3dD \
        -gltsym 'SYM: Self -No_Self' \
        -glt_label 1 S-N \
        -jobs 4 \
    -regress_motion_per_run \
    -regress_apply_mot_types demean deriv \
    -regress_est_blur_epits \
    -regress_est_blur_errts \
    -regress_bandpass 0.01 0.2 \
    -html_review_style pythonic \
    -execute

Now, the problem is that AFNI gets stuck at the processing step shown in screenshot number 1 in the attachments. From there on, nothing happens anymore, even after two to three hours of waiting. It seems that AFNI is stuck without recognizing the problem, since it never produces an error message. I am able to open all preprocessing files from the blocks (despike tshift align tlrc volreg mask blur scale) except, of course, the regress files. Opening the output of the scale block in the AFNI GUI, for example, works just fine, and the processed files look OK to me too. The problem seems to be related to the regression part.

Then, in a next step, I added ptaylor's suggestion to use @SSwarper first, as you can see in my updated script below. Furthermore, -tlrc_NL_warp was added back to the script; I had removed it from my older script above to reduce processing time (it takes around 2 to 2.5 hours on my old laptop for just one run!). And you don't want to know how long @SSwarper took for this updated script. :smiley:

Here is the updated AFNI_proc.py script:

Set directories

directory_subjects=/users/philipp/desktop/fmri/dataset/subjects/subj1_exp1
directory_fMRIruns=$directory_subjects/raw
directory_anatomical=$directory_subjects/raw
directory_stimulionsets=/users/philipp/desktop/fmri/dataset/info
cd $directory_subjects

Run SSwarper

@SSwarper \
    -input $directory_anatomical/3dto3d+orig \
    -base  MNI152_2009_template_SSW.nii.gz \
    -subid Subject_1 \
    -odir  $directory_subjects/raw

AFNI_proc.py

(Only two runs now, to save some time; the .1D text files with stimulus-onset times were adjusted accordingly. -tlrc_NL_warp was added to provide the proper full program.)

afni_proc.py \
    -subj_id Subject_1 \
    -out_dir $directory_subjects/Preprocessing \
    -dsets \
        $directory_fMRIruns/run1to3d+orig \
        $directory_fMRIruns/run2to3d+orig \
    -blocks despike tshift align tlrc volreg mask blur scale regress \
    -copy_anat $directory_anatomical/anatSS.Subject_1.nii \
    -anat_has_skull no \
    -tcat_remove_first_trs 4 \
    -align_opts_aea -cost nmi \
    -volreg_align_e2a \
    -volreg_align_to MIN_OUTLIER \
    -volreg_tlrc_warp -tlrc_base MNI152_2009_template_SSW.nii.gz \
    -tlrc_NL_warp \
    -tlrc_NL_warped_dsets \
        $directory_subjects/raw/anatQQ.Subject_1.nii \
        $directory_subjects/raw/anatQQ.Subject_1.aff12.1D \
        $directory_subjects/raw/anatQQ.Subject_1_WARP.nii \
    -volreg_post_vr_allin yes \
    -volreg_pvra_base_index MIN_OUTLIER \
    -blur_size 6.0 \
    -regress_stim_times $directory_stimulionsets/allrun_self.1D \
                        $directory_stimulionsets/allrun_nonself.1D \
    -regress_stim_labels Self No_Self \
    -regress_basis 'BLOCK(2,1)' \
    -regress_opts_3dD \
        -gltsym 'SYM: Self -No_Self' \
        -glt_label 1 S-N \
        -jobs 4 \
    -regress_motion_per_run \
    -regress_apply_mot_types demean deriv \
    -regress_est_blur_epits \
    -regress_est_blur_errts \
    -regress_bandpass 0.01 0.2 \
    -html_review_style pythonic \
    -execute

First of all, @SSwarper runs fine. Afterwards, afni_proc.py also proceeds fine, at least in the beginning. Later, however, it gets stuck just like my older script, and again without any error message about what may have gone wrong. The preprocessing step where it "stops" is exactly the same as with the old script; all the information on the attached screenshot is identical for both scripts, i.e., both scripts fail at the same processing step.

I am stuck here and would kindly ask you for further help. Of course, we can focus on fixing the problem with the second/updated script, since it includes the usage of SSwarper as suggested here and the problem is the same.

Please let me know what you think. Thanks,

Philipp

Hi Philipp,

There are a few things to note here.

  1. You seem to be on a mac, which rarely kills a program that is allocating “too much” memory. Rather, it adds virtual swap space and just takes. for. ever. (give or take 3 weeks).

  2. There are 2360 time points across those 3 runs? That is quite a few time points. What polort is 3dDeconvolve using?

  3. Worse, there are 1482 baseline regressors. That is scary.

Oooooooohhhh! You are using large-scale bandpassing in a normal linear regression model. That is why there are so many baseline regressors, and part of why it is taking so long.

Since this is a task analysis, try it without “-regress_bandpass 0.01 0.2”. With so many time points, I am not positive whether it will work on that laptop, but removing almost 1500 regressors will help reduce the memory requirement (and we tend to prefer not bandpassing anyway).

Also, add -regress_compute_fitts to the command.
Instead of having 3dDeconvolve compute that, it will be done with 3dcalc (after the regression is done). That will save quite a bit of memory for 3dDeconvolve.
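Putting both suggestions together, the regression-related portion of the afni_proc.py command would look something like this (a sketch based on the options already in the script above, with -regress_bandpass removed and -regress_compute_fitts added):

```shell
    -regress_stim_times $directory_stimulionsets/allrun_self.1D    \
                        $directory_stimulionsets/allrun_nonself.1D \
    -regress_stim_labels Self No_Self                              \
    -regress_basis 'BLOCK(2,1)'                                    \
    -regress_opts_3dD                                              \
        -gltsym 'SYM: Self -No_Self'                               \
        -glt_label 1 S-N                                           \
        -jobs 4                                                    \
    -regress_motion_per_run                                        \
    -regress_apply_mot_types demean deriv                          \
    -regress_compute_fitts                                         \
    -regress_est_blur_epits                                        \
    -regress_est_blur_errts
```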

Also, since you are running out of memory, consider closing any MS Office applications, or anything else that might use a lot of RAM.

Hopefully some of that will help.

  • rick

Hi, Philipp-

Just a note about your @SSwarper comment (and “-tlrc_NL_warp”—these both refer to performing nonlinear (NL) warping, via two slightly different mechanisms): the nonlinear warping can be a computationally expensive endeavor, yes, but the 3dQwarp program that underlies all AFNI NL warping was programmed by Bob to be inherently parallelizable. That is, it can use multiple CPUs/threads on a given computer to work faster. You don’t need to do anything to get this performance enhancement (it uses OpenMP under the hood), except have a computer with multiple cores/CPUs.

If you run “afni_system_check.py -check_all”, you can see how many CPUs are available on your computer near the top; or, to pick out the line that says how many that is specifically, this:


afni_system_check.py -check_all | grep "number of CPUs:"

You can also see how many AFNI is set to be using with this command:


afni_check_omp

You can control how many CPUs AFNI will use (up to the number available on the machine!) by setting an environment variable OMP_NUM_THREADS in your terminal startup files or in your scripts. This (and more!) is described in this AFNI Academy video—
https://www.youtube.com/watch?v=bpeNBQmUlxk&list=PL_CD549H9kgq6ZPwLkllfpQZ1pjIqLaDF&index=6
… which really was just meant to be a very short one. I swear.
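For example, in bash/zsh the variable could be set like this (the value 4 is purely illustrative; it should not exceed the CPU count that afni_system_check.py reports):

```shell
# Let OpenMP-enabled AFNI programs (e.g., 3dQwarp) use 4 threads.
# In tcsh (~/.cshrc), the equivalent is:  setenv OMP_NUM_THREADS 4
export OMP_NUM_THREADS=4
echo $OMP_NUM_THREADS
```

After setting this, afni_check_omp should report the same number.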

–pt

Rick and Paul,

thank you both for the tips. Rick, I removed the bandpass filtering and added "-regress_compute_fitts" as you suggested. It worked: AFNI no longer gets stuck at the previously shown point, and the preprocessing now finishes successfully. I got all the results and the colorful brains. Finally.

Thanks again, I am really happy now!

Philipp

That is great news, thanks!

  • rick