Replicating the AFNI GUI's brightness reduction for each of three echoes using plugout_drive

Hey,

I have EPI data with three echoes. When I load these files into AFNI and open a new controller for each of the three echoes, I can clearly see a reduction in the brightness of the data across the echoes. This is clearly shown in this image: https://i.imgur.com/nSN0ffb.png. In that image, echo 1 is the first row, echo 2 the second, and echo 3 the final row.

I'm making images of each slice (for each orientation: axial, coronal, and sagittal) for each echo. To do this, I use AFNI to load the EPI file and open the three image windows. Then, in a loop over the number of TRs, I use plugout_drive to set the underlay to the next TR and save three JPEGs, one per orientation. After that, I use ImageMagick to annotate each image with the echo from which it was created.
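
In case the details matter, the core of the loop looks roughly like this (dataset and file names are placeholders here, and this shows only echo 1):

    # placeholder dataset name; the real script loops over all three echoes
    nt=`3dinfo -nt epi_echo1+orig`

    afni -yesplugouts epi_echo1+orig &
    sleep 5

    plugout_drive -com 'OPEN_WINDOW A.axialimage'    \
                  -com 'OPEN_WINDOW A.coronalimage'  \
                  -com 'OPEN_WINDOW A.sagittalimage' \
                  -quit

    tr=0
    while [ $tr -lt $nt ] ; do
        # set the underlay sub-brick to this TR, then save one JPEG per orientation
        plugout_drive -com "SET_SUBBRICKS A $tr -1 -1"                     \
                      -com "SAVE_JPEG A.axialimage    ax_echo1_${tr}.jpg"  \
                      -com "SAVE_JPEG A.coronalimage  cor_echo1_${tr}.jpg" \
                      -com "SAVE_JPEG A.sagittalimage sag_echo1_${tr}.jpg" \
                      -quit
        tr=$((tr + 1))
    done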

When all the images for the three orientations and three echoes have been created and annotated, I use ffmpeg to make a movie in which each frame consists of the three images, one per echo, stacked horizontally. The point of the movie is to make it easy to see gross movement by the subject.
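
The annotation and assembly steps are roughly like this (file names below are placeholders for what the script actually writes):

    # label an echo's image, join the three echoes side by side, then build the movie
    convert ax_echo1_${tr}.jpg -pointsize 20 -fill white \
            -annotate +10+30 "echo 1" ax_echo1_${tr}_lab.jpg
    convert ax_echo1_${tr}_lab.jpg ax_echo2_${tr}_lab.jpg ax_echo3_${tr}_lab.jpg \
            +append frame_ax_${tr}.jpg
    ffmpeg -framerate 10 -i frame_ax_%d.jpg -c:v libx264 -pix_fmt yuv420p movie_ax.mp4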

While I can get all of this working (making the images with AFNI, annotating them, and creating the movie), what I cannot replicate is the reduction in brightness over the three echoes shown in the image above. All of my images look very similar in brightness from echo to echo. I think this is because I cannot replicate how AFNI maps data values in each slice to the underlay colorbar, but I am not sure.

The full script is at movie making BASH script

Can anyone offer advice on how I might replicate this reduction in brightness programmatically using AFNI and plugout_drive?

Thanks in advance,

Colm.

Hi, Colm-

I suspect that using @chauffeur_afni:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/tutorials/auto_image/auto_%40chauffeur_afni.html
would simplify your life here. It automatically makes sets of axial, coronal and sagittal images. It can make montages, and/or you can specify slice locations. You can specify ulay and olay ranges at the command line. You can combine it with 2dcat:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/tutorials/auto_image/auto_2dcat.html
too.
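
For example, something like this (untested here, with made-up dataset names and an arbitrary underlay range; please check the @chauffeur_afni and 2dcat helps for the exact output image names before the 2dcat step) would make fixed-grayscale images of one echo and then glue the three echoes' axial images into a row:

    # one set of axi/cor/sag images for echo 1, with a *fixed* grayscale range
    @chauffeur_afni                       \
        -ulay       epi_echo1+orig        \
        -ulay_range 0 2000                \
        -prefix     img_echo1             \
        -montx 1 -monty 1

    # glue the three echoes' axial images into a single row
    2dcat                                 \
        -nx 3 -ny 1                       \
        -prefix row_axial.jpg             \
        img_echo1*axi* img_echo2*axi* img_echo3*axi*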

This is a wrapper for making movies of a single slice (it will make three movies: one each for the axial, sagittal, and coronal views), which you might find useful; in fact, it might do what you want directly:


@djunct_4d_imager              \
        -inset    4D_DSET      \
        -prefix   PREFIX       \
        -do_movie AGIF

… which will produce these outputs:


    + an image of the same central slice across volumes along the time
      axis, with the brightness range constant across volume
      ("*onescl*" images); that is, the same grayscale in each panel
      corresponds to the same numerical value.
    + an image of the same central slice across volumes along the time
      axis, with the brightness range possibly *varying* for each
      panel across volume ("*sepscl*" images); that is, the grayscale
      value in each panel can (and likely will) correspond to *a
      different* numerical value.  Useful, for example, for checking
      details in DWIs, where the expected scale of values can change
      dramatically across volumes.
    + (with option flag) a movie version of the "onescl" images,
      showing one slice at a time.
    + (with option flag) a movie version of the "sepscl" images,
      showing one slice at a time.
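
For your case, you could run that once per echo (the dataset names below are placeholders) and then view or concatenate the per-echo results side by side:

    # one @djunct_4d_imager run per echo (placeholder dataset names)
    for ee in 1 2 3 ; do
        @djunct_4d_imager                  \
            -inset    epi_echo${ee}+orig   \
            -prefix   movie_echo${ee}      \
            -do_movie AGIF
    done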

--pt