Best way/tutorial to make activation brain images using AFNI commands

Hello folks,

I am working on a set of analyses of a new sequence and would like to compare detection sensitivity across several measures.
It would therefore be very useful to be able to concatenate images with additional data written on them.

So far, I’ve managed to plot the data, but I would like to add the thresholds and colorbar values beneath the images.

I would love to understand how I can script something to make images like that.

I have found some recent improvements by ptaylor, but I cannot find a reliable way to produce these images.

So far, the code below can produce images like the ones displayed below:
I would love to add the colormaps and relevant information (is there a gold standard for what information should be displayed?)

Can anybody help/guide me?

Thanks for any help.

Simon

AFNI version info (afni -ver):

I run Ubuntu 22.04.4 LTS,
AFNI version AFNI_25.2.16 ‘Gordian I’,
and Python 3.10.12.
I use tcsh scripts and am trying to create a .sh function to plot said images.

 @chauffeur_afni                                                 \
            -ulay ${background}                                     \
            -ulay_range "2%" "98%"                                  \
            -olay ${tempFolder}/meanInputDataSet+tlrc               \
            -func_range_perc_nz 1                                   \
            -set_subbricks 0 0 0                                    \
            -box_focus_slices ${tempFolder}/meanInputDataSet+tlrc   \
            -cbar "Reds_and_Blues_Inv"                              \
            -pbar_saveim       ${tempFolder}/color_bar              \
            -opacity 4                                              \
            -prefix  $tempFolder/tempPlots                          \
            -thr_olay 0                                             \
            -save_ftype JPEG                                        \
            -montx 9 -monty 1                                       \
            -montgap 3                                              \
            -set_xhairs OFF                                         \
            -pbar_posonly                                           \
            -label_mode 1 -label_size 3     


colorbar_tool.py -in_cbar ${tempFolder}/color_bar.jpg \
            -in_json ${tempFolder}/color_bar.json \
            -prefix ${tempFolder}/test1.jpg

2dcat                                  \
        -gap 5                             \
        -gap_col 66 184 254                \
        -nx 1                              \
        -ny 4                              \
        -prefix ${output_prefix}           \
        $tempFolder/tempPlots*jpg ${tempFolder}/test1.jpg

Hi-

The command used to make the image you pointed to (the full F-stat shown over the MNI template, with transparent thresholding on) in one of the AFNI Bootcamp data processing examples is this:

@chauffeur_afni                                                              \
    -ulay              MNI152_2009_template.nii.gz                           \
    -box_focus_slices  MNI152_2009_template.nii.gz                           \
    -olay              stats.FT+tlrc.HEAD                                    \
    -cbar              Plasma                                                \
    -pbar_posonly                                                            \
    -ulay_range        0% 120%                                               \
    -func_range        270.089661                                            \
    -thr_olay          25.314201                                             \
    -olay_alpha        Yes                                                   \
    -olay_boxed        Yes                                                   \
    -set_subbricks     0 0 0                                                 \
    -opacity           9                                                     \
    -pbar_saveim       "QC_FT/media/qc_07_vstat_Full_Fstat.pbar.jpg"         \
    -pbar_comm_range   "99%ile in mask"                                      \
    -pbar_comm_thr     "90%ile in mask, alpha+boxed on"                      \
    -pbar_thr_alpha    Yes                                                   \
    -prefix            "QC_FT/media/qc_07_vstat_Full_Fstat"                  \
    -save_ftype        JPEG                                                  \
    -blowup            2                                                     \
    -montx             7                                                     \
    -monty             1                                                     \
    -montgap           1                                                     \
    -montcolor         black                                                 \
    -set_xhairs        OFF                                                   \
    -label_mode        1                                                     \
    -label_size        4                                                     \
    -no_cor                                                                  \
    -cmd2script        run_qc_07_vstat_Full_Fstat.tcsh                       \
    -c2s_text          'APQC, vstat: Full_Fstat'                             \
    -c2s_mont_1x1                                                            \
    -do_clean

You can adjust the paths from there, as well as likely the func_range and thr_olay values, which come from percentiles within the dset. There are some additional bells and whistles there (like the -c2s_* and -cmd2script options), but I left those in for no deep reason.
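For instance, if you wanted to derive those two values per dataset rather than hard-code them, a minimal sketch might look like the following (the 3dBrickStat calls are commented out so the snippet runs without AFNI; the dataset name and percentile choices are placeholders, and the hard-coded numbers just mirror the command above):

```shell
#!/bin/bash
# Sketch: derive -thr_olay and -func_range from dataset percentiles.
# 3dBrickStat -percentile prints "percentile value" pairs, hence the
# awk to grab the value; -non-zero restricts to nonzero voxels.
dset='stats.FT+tlrc[0]'

# thr=$(3dBrickStat -slow -non-zero -percentile 90 1 90 "$dset" | awk '{print $2}')
# rng=$(3dBrickStat -slow -non-zero -percentile 99 1 99 "$dset" | awk '{print $2}')
thr=25.314201    # placeholder: the 90th percentile value, as in the command above
rng=270.089661   # placeholder: the 99th percentile value

echo "using: -thr_olay $thr -func_range $rng"
```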

As a sidenote, I got that by running:

apqc_make_tcsh.py -subj_dir . -uvar_json out.ss_review_uvars.json -do_log

... where the "-do_log" option is the important one to log all commands run.

The image shown in the above post actually comes from a fairly different command, a wrapper for @chauffeur_afni called @djunct_edgy_align_check.

--pt

ps: this question+answer are cross-posted with this thread.


Thank you very much; I was hoping that you might stumble on my post. Thank you for your time.

Would this script also enable me to concatenate the colorbar into the image?
I have trouble using the colorbar_tool.py function, as my @chauffeur_afni does not output any .json file.
Is that normal behavior?

What is the best way to add the color bar with colorbar ranges and thresholds?

thanks again

P.S.: indeed, I posted on two different websites to be sure to reach the relevant people.
Shall I delete the one on Neurostars?

Let me start by noting my AFNI version:

$ afni -vnum 
AFNI_25.2.17

Some colorbar updates have been made pretty recently, as of this version. What is your AFNI version number?

I ran this command, where I changed the output names, replacing subdirectories that might not exist in your case with "AAA_":

@chauffeur_afni                                                              \
    -ulay              MNI152_2009_template.nii.gz                           \
    -box_focus_slices  MNI152_2009_template.nii.gz                           \
    -olay              stats.FT+tlrc.HEAD                                    \
    -cbar              Plasma                                                \
    -pbar_posonly                                                            \
    -ulay_range        0% 120%                                               \
    -func_range        270.089661                                            \
    -thr_olay          25.314201                                             \
    -olay_alpha        Yes                                                   \
    -olay_boxed        Yes                                                   \
    -set_subbricks     0 0 0                                                 \
    -opacity           9                                                     \
    -pbar_saveim       "AAA_qc_07_vstat_Full_Fstat.pbar.jpg"                 \
    -pbar_comm_range   "99%ile in mask"                                      \
    -pbar_comm_thr     "90%ile in mask, alpha+boxed on"                      \
    -pbar_thr_alpha    Yes                                                   \
    -prefix            "AAA_qc_07_vstat_Full_Fstat"                          \
    -save_ftype        JPEG                                                  \
    -blowup            2                                                     \
    -montx             7                                                     \
    -monty             1                                                     \
    -montgap           1                                                     \
    -montcolor         black                                                 \
    -set_xhairs        OFF                                                   \
    -label_mode        1                                                     \
    -label_size        4                                                     \
    -no_cor                                                                  \
    -cmd2script        run_qc_07_vstat_Full_Fstat.tcsh                       \
    -c2s_text          'APQC, vstat: Full_Fstat'                             \
    -c2s_mont_1x1                                                            \
    -do_clean

I got the sagittal and axial slice views, as well as the colorbar image with transparent thresholding noted, as follows:
AAA_qc_07_vstat_Full_Fstat.pbar


How best to set the colorbar ranges and thresholds is hard to generalize. What is your application of interest?

In many areas of the field, people use voxelwise thresholds of p=0.001. Is that relevant for your data?

Actually, what is your data? Do you have just stats images to overlay and threshold (like here, because this is a Full F-stat; there is no GLM coefficient or beta weight to show), or do you have an effect estimate coefficient plus a stats dataset to present?

--pt

ps: re. other post, it might be worth noting just that the conversation is continuing here.

Hello,

Thank you for your reply.
My data relates to this publication: Multiband multi-echo simultaneous ASL/BOLD for task-induced functional MRI.

I want to plot the M scaling factor, which roughly represents the maximum %BOLD signal change in each voxel. It is calculated from the mean and Beta_coef of the BOLD and ASL signals.
As it is a constructed measure, the threshold I input will be arbitrary (until I find a good indicator from which to calculate a threshold).

What I usually do is pick a subject, view the data in the AFNI GUI, and then, in my script, apply the thresholds and ranges read from the GUI to all other subjects.
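To illustrate, that per-subject step in my scripts looks roughly like this sketch (the subject list, dataset names, and GUI-derived values below are all placeholders, and the @chauffeur_afni call is echoed rather than run):

```shell
#!/bin/bash
# Sketch: apply one threshold/range, read once from the AFNI GUI on a
# pilot subject, to every subject in the group.
thr=3.0    # threshold read from the GUI
rng=12.5   # overlay range read from the GUI
for subj in sub-01 sub-02 sub-03; do
    # drop the 'echo' to actually run the command in an AFNI environment
    echo "@chauffeur_afni -ulay ${subj}_anat+tlrc -olay ${subj}_Mmap+tlrc \
          -thr_olay $thr -func_range $rng -prefix ${subj}_Mmap_plot"
done
```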

As I have several measures (3 per subject) and it is a group study, I want to create the images dynamically.
To do so, I would like to be able to dynamically add all of this information (subject number, thresholds, and range) to the plot, around a miniature of the colorbar.
So I was wondering: is there a way to plot the brain maps and this information through an AFNI script?
Is it only possible through HTML?

Thanks again for your help

Howdy-

That all sounds good. Indeed, that is the strategy I would employ to make systematic images that I could flip through: first starting in the AFNI GUI to see what I would want to do, and then translating that to an @chauffeur_afni command to be able to run it across all datasets.

To this question: "So I was wondering if there is a way to plot the brain maps and this information through an AFNI script? Is it only possible through HTML?"

Well, if you are referring to the way that descriptive text and colorbar values are placed around the image, then yes, I think the HTML layer would be one way to go. There isn't a very native way to do that at present. With a view toward automating more parts of figure generation, I'd pondered wrapping the text-around-colorbar aspect, but I haven't gotten to that yet: there are so many little things to consider, like scaling sizes, etc.

So, at present, I visualize the chauffeur images (or the 2dcat-concatenated ones) as a set and flip through them in a computer application like "eog" on Linux or "open" on macOS. That works well for seeing a lot of them, and for being able to regenerate them. And if the colorbar, ranges, and thresholds are all constant, then I only need to see those once anyway. To generate figures, I put the images into a LibreOffice ODP along with the generated colorbar, and then add the text there for a figure.

--pt


This is great, thank you very much for your help.
Your method is similar to what I've been doing so far. It would be nice to have it automated, but I guess it is a lot of work.

I think these explanations could help other people understand how things work, too.

Thanks again


FWIW, after creating brain and stats-on-brain images with @chauffeur_afni, I sometimes use ImageMagick to add text directly to an image itself. Such a command might look something like this for a recent ImageMagick install:

magick input.jpg -pointsize 30 -fill red -annotate +20+30 "Your Text Here" output.jpg

which in this example adds the string to the upper-left corner of the image.
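Building on that, one could stamp per-subject info in a loop; a rough sketch (file names, subject list, and the threshold value are all hypothetical, and the magick call is echoed rather than run):

```shell
#!/bin/bash
# Sketch: annotate each subject's montage with its ID and threshold.
thr=3.2
for subj in sub-01 sub-02; do
    label="${subj}  thr=${thr}"
    # drop the 'echo' to actually run this once ImageMagick is installed
    echo "magick ${subj}_plot.jpg -pointsize 30 -fill red -annotate +20+30 '${label}' ${subj}_labeled.jpg"
done
```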

-Sam


Hello,

Thank you so much, I think this is exactly what I was looking for!
This is perfect; it's really much better to launch a script and check the images without fiddling with them.

Should we consider using this dynamically in @chauffeur_afni?
Or is ImageMagick too versatile?

Thanks for your help folks

So I'm nearly achieving exactly what I want, but I have trouble defining the sizes in 2dcat.
I can't keep the original size of colorbar_modified.jpg; 2dcat resizes all images to the same ratio.

Is there a way to keep the original image sizes in 2dcat?
Is there a Linux tool that might just do that?

Here is what I can achieve so far :)

Here is the function so far:
https://github.com/sboylan/BOLD-ASL_toolbox/blob/main/plot_brain_maps.sh