QC not produced by afni_proc

I ran an afni_proc.py-based analysis and it produced no QC folder. I may have done something wrong in setting up the analysis, but I can’t figure out what. I would appreciate suggestions…

Here is the relevant part of the output:


apqc_make_tcsh.py -review_style pythonic -subj_dir . -uvar_json out.ss_review_uvars.json
++ Found 40 files for QCing.
Traceback (most recent call last):
  File "/home/pawel/abin/apqc_make_tcsh.py", line 563, in <module>
    tspace    = lat.get_space_from_dset(ap_ssdict['template'])
  File "/home/pawel/abin/afnipy/lib_apqc_tcsh.py", line 291, in get_space_from_dset
    dset_fullpath = com.so[0]
IndexError: list index out of range

The afni_proc.py call from my script:


afni_proc.py                                                              \
        -subj_id                 ${subj}                                      \
        -script                  ${odir_ap}/proc.$subj.${state} -scr_overwrite \
        -out_dir                 ${odir_ap}/${subj}.${state}.results          \
        -blocks                  ${appy_blocks}                                \
        ${appy_blur_param}  \
        -dsets                   ${subj_epi}                                  \
        -copy_anat               "${dset_subj_anat}"                          \
        -anat_has_skull          no                                           \
        -anat_uniform_method     unifize                                         \
        -anat_unif_GM            yes                                           \
        -anat_follower_ROI       WMmask epi ${odir_aw}'/'WM_anat_mask         \
        -anat_follower_erode     WMmask                                       \
        -radial_correlate_blocks tcat volreg                                  \
        -radial_correlate_opts   -sphere_rad 14                               \
        -tcat_remove_first_trs   2                                            \
        -volreg_base_dset        "${EPIname}[0]"                              \
        -volreg_align_e2a                                                     \
        -volreg_tlrc_warp                                                     \
        -align_epi_strip_method  3dSkullStrip                                 \
        -align_opts_aea          -cost ${cost_a2e} -feature_size 4            \
                                 -partial_coverage -deoblique on              \
                                 -giant_move -cmass cmass                     \
                                 -check_flip                                  \
        -align_epi_ext_dset      "${EPIname}[0]"                              \
        -tlrc_base               ${refvol}                                    \
        -tlrc_NL_warp                                                         \
        -tlrc_NL_warped_dsets                                                 \
            ${odir_aw}/${subj}*_warp2std_nsu.nii.gz                           \
            ${odir_aw}/${subj}*_composite_linear_to_template.1D               \
            ${odir_aw}/${subj}*_shft_WARP.nii.gz                              \
        -regress_anaticor_fast                                                \
        -regress_anaticor_radius  20                                          \
        -regress_anaticor_label   WMmask                                      \
        -regress_motion_per_run                                               \
        -regress_apply_mot_types  demean deriv                                \
        -regress_censor_motion    ${motion_threshold}                         \
        -regress_censor_outliers  ${outlier_threshold}                        \
        -regress_est_blur_errts                                               \
        -regress_est_blur_epits                                               \
        -regress_run_clustsim     no                                          \
        -html_review_style        pythonic                                    \
        -execute

Hi, Pawel-

From our recent emails, I think you have a pretty modern version of AFNI, so that shouldn’t be the issue… What is the output of:


cat out.ss_review_uvars.json

(That is, what are the contents of that file in your *.results directory, created by the afni_proc.py script?)

–pt

Thank you, looking there was pretty revealing.

Under “template”: I had the correct file, but the path was the HPC one, not the one for my local computer, on which I ran the analysis.

Normally my scripts are safe against running part of the analysis in one environment and another part in the other, but earlier today I was experimenting with something and apparently somehow got these folders crossed.

I mean, I still have no idea how EXACTLY it could have happened, i.e., where afni_proc.py got this HPC path from. I see no plausible source, and, interestingly, if I search for part of that HPC path in afni_proc.py’s output or in the proc script, it is not there. But some other things were messed up from trying small bits at a time, and this must have been a side effect.

I’ll re-run it properly, and I hope everything will be fine.

Hi, Pawel-

The template is the trickiest part of the HTML generation, in the sense of moving your analysis around to different computers in stages (which I think is what has caused your issue here). Every other piece of information needed to make the HTML is in the *.results directory together—so as long as you move your full *.results directory, you could re-generate the QC on any machine, except for the template, which will often have a different path.

The way the QC scripting tries to get around this badness is as well as it can—if you look near the top of the @ss_review_html script, which creates the QC images, you will see the “Top level: find main dset” section. This first tries to see if the template dset exists at the place specified by the full path of whatever is in the template variable; if that fails (e.g., if you have moved systems), it will try to use “@FindAfniDsetPath” on just the template name to find it on whatever this new system is (which includes the current directory, your AFNI_SUPP_ATLAS_DIR, AFNI_GLOBAL_SESSION, AFNI_ATLAS_PATH and AFNI_PLUGINPATH); if that fails to find the dset in any of those locations, then it has no idea where to look and rightfully gives up. (Or at least that is the way it is supposed to work—if your template of interest happens to be in one of those secondarily-checked locations on your system, and yet this is failing, please let me know!)

So, to be able to move around on different systems, if you have your template of choice in one of those special locations (AFNI_GLOBAL_SESSION is where I have mine, typically), then you should be OK.
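That fallback can be sketched in plain shell (a simplified illustration only, not the actual @ss_review_html code; the stored path and the search directories here are hypothetical stand-ins):

```shell
# Simplified sketch of the template-finding fallback (illustration
# only; the real logic lives in @ss_review_html + @FindAfniDsetPath).
template="/hpc/data/templates/NMT_v2.0_sym_05mm.nii.gz"  # hypothetical stored path
name=$(basename "$template")

found=""
if [ -f "$template" ]; then
    # Case 1: the full path stored in the uvars JSON still exists.
    found="$template"
else
    # Case 2: search a few known locations for the bare filename,
    # akin to AFNI_GLOBAL_SESSION, AFNI_ATLAS_PATH, etc.
    for d in . "$HOME/REF_TEMPLATES"; do
        if [ -f "$d/$name" ]; then
            found="$d/$name"
            break
        fi
    done
fi

if [ -n "$found" ]; then
    echo "template found: $found"
else
    # Case 3: give up, as the QC generation does.
    echo "template not found: $name" >&2
fi
```

Here, moving between machines only works out in Case 2, i.e., when the bare template name can be found in one of the searched directories on the new system.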

If you were wondering how the APQC got the name+location of your template anyways, indeed it is not quite a straightforward process. If you look toward the end of your afni_proc.py-generated proc* script, you will typically see these two sections:


auto block: generate review scripts
auto block: finalize

The first runs gen_ss_review_scripts.py to go through and “figure out” (and yes, some of the internal logic is nearly consciousness-worthy) various pieces of information, which get stored in out.ss_review_uvars.json: a dictionary of possible known items, which is used to generate the QC. The second section then generates the QC script and puts everything together into the HTML.

And to answer your specific question of, “where does the template name come from?” the declassified answer from government sources is that gen_ss_review_scripts.py looks in the history output of one of your “-tlrc_NL_warped_dsets …” from the initial AP command, and figures out the information from there. Magic!
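As a toy illustration of that lookup (simulated here with a plain text file standing in for the dset header; on real data the history record would be read from the dset itself, e.g. with `3dinfo -history`, and the history line below is entirely made up):

```shell
# Toy illustration: recover a template name from history text. The
# history line here is fabricated for demonstration purposes only.
cat > fake_history.txt <<'EOF'
3dNwarpApply -source anat.nii -master NMT_v2.0_sym_05mm.nii.gz -prefix anat_warped.nii
EOF

# Pull out the first NMT*-looking dset name from the history text.
template=$(grep -o 'NMT[A-Za-z0-9_.]*\.nii\.gz' fake_history.txt | head -n 1)
echo "template name: $template"
rm -f fake_history.txt
```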

If you want to re-run the APQC HTML generation (getting a new @ss_review_html and QC_/ directory, with the old QC_/ directory moved aside to old_QC_*_TIMESTAMP/, for recordkeeping), you can download this script that I made/use:


wget https://raw.githubusercontent.com/afni/afni/master/src/ptaylor/supplement/redo_apqc.tcsh

You can run it in a current *.results directory with no arguments, and it will regenerate the QC directory there. Or, you can run it with any number of *.results directories listed as arguments, and it will go to each and regenerate QC, such as:


tcsh redo_apqc.tcsh  RESULTS_DIR0  RESULTS_DIR1 RESULTS_DIR2 ...

This might be useful if you plop your template on your local machine into a special directory where @FindAfniDsetPath can find it.

I am not sure what this means, from your last post:
“But some other things were messed up from trying small bits at a time and this must have been a side effect.”
… but hopefully the above sorts out the issue.

–pt

Hi Paul,

Thank you, this makes me rethink my workflow. I am also concerned about whether this issue might affect the analysis itself; I’ll get to that at the end.

Let me describe the workflow quickly - you may or may not remember it from the previous conversations, but I also want to make it clear in case anyone else reads it and learns from it.

So the pipeline is based on the MACAQUE_REST_DEMO, and, like the demo, it consists of three stages: 1. aligning the individual anatomy to the NMT template via animal_warper (AW); 2. preprocessing and regression via afni_proc (AP); 3. the actual correlation analysis. However, I restructured the scripts so that they can run in a far more parallelized way on HPC, beyond what OpenMP allows within some AFNI programs. So in 1/AW all monkeys are processed in parallel as separate jobs, and in 2/AP it goes much further, as each monkey, session, and blurred/not-blurred analysis is processed in parallel. This is possible on HPC, while locally things run serially, similar to the original demo. All of that is controlled automatically by my script.

Also, I can run each stage separately by setting script variables (sort of similar to $1 of the demo script), and occasionally I would run only a subset (e.g., one stage, one monkey, one session) for testing purposes.

I have mostly been working on stage 2/AP recently, as I am struggling to get anat-EPI alignment to work in some cases (Daniel knows about it, as he has been helping me a lot).

For this reason, what I would typically do is keep the results of 1/AW unchanged, as originally produced on HPC, and modify 2/AP and test-run it on a subset of data locally. This worked well, I am pretty sure including the QC, as that is where I would normally look first for a quick check of the alignment. I did not realize there were any risks, as both environments have a copy of my data and a copy of the template, and the script is provided with the appropriate, environment-specific paths to both.

What I did yesterday was: I made changes to 1/AW and ran it locally to check a new idea. The results were bad, so I reverted the folder with the AW results to its original state. Then QC in 2/AP stopped working, as I described at the beginning of this thread. I realized, though, that the “reverting” may have been incomplete (“some other things were messed up”), which is why I did not go into the details; so I copied older 1/AW results from HPC to revert it for sure, and re-ran 2/AP locally on a subset overnight. And I got no QC again.

So I will look into the environment variables; until now I was not aware that some information is passed outside the directories where AW and AP place their results, and/or through those directories but in a hidden way, like file histories.

But now I have this question whether the issue of the template location may affect the analysis itself. Because I have been puzzled for quite a while by inconsistencies between HPC and local results.

I test-ran 2/AP with a new set of parameters on a subset of data locally, but using 1/AW results previously calculated on HPC. The EPI-anat alignment would look OK (if not perfect), so I would run 2/AP on the entire dataset on HPC. The alignment for the same session would then be completely off, which looked as if the procedure were sensitive to the computer or OS it runs on. Could it be that this was another manifestation of the template-location issue?

Hi, Pawel-

Indeed, I typically process my data in a “level-by-level” manner, checking each step before proceeding. So, I would run @animal_warper on each of the N subjects, check and be happy with the results, and then run afni_proc.py and check all results, etc. This is easier for me conceptually and QC-wise—differences among subjects are more apparent this way, for example. Even more specifically, I try to set up the processing for 1 subject in a cohort (or 1-3, just some small number), get everything running pretty well for that subject, and then expand to the full cohort—that seems an easier way to troubleshoot.

And yes, processing on biowulf/HPC makes sense. It can be tough to check things there sometimes, but hopefully the automatic QC images made by these programs help that process: you can scp them to another computer and go through them quickly. And I am sure Daniel’s input is useful, too!

In terms of hopping between systems: most things in AW are all local, and most things in AP are all local; the template is the hardest part. I would put the template+atlases into a global sessions directory on each computer, set the global sessions environment variable in ~/.afnirc on each computer, and then refer to the template just by its name as much as possible. Both AW and AP make use of “@FindAfniDsetPath” under the hood, so specifying just the dset name should work. For AW, this even applies to the brainmask follower, atlas_followers, template_followers and seg_followers. That should be more flexible across more systems.

I am not sure what this means:
<>

If you had different NMTv2 versions on your system, this could cause confusion potentially. But you can avoid that by… not having that be the case.

Daniel mentioned the alignment behaving differently thing—this is something I would like to look more at perhaps separately and individually. If talking about anat-to-template alignment, perhaps there are slightly different templates on each system? If talking about EPI-to-anat alignment, it is hard to see how the template could affect it. One would want to rule out having slightly different dsets in different cases (e.g., deobliquing a dset on one system, but not on the other), but barring that, this is something I would like to hunt down if it comes from the alignment (alignment is stochastic, because of random seeding, so that is one source of potential variability across systems).

–pt

Thank you. This is basically how I have been working, with the exception that we are still scanning subjects for the project. I have been happy with the AW results in all monkeys (they all have their anat scans done), and with the AP results from one monkey. Once I started expanding to another monkey (we have full functional datasets from 2, and are just starting another 2), things started going sideways: EPI-anat alignment would fail for specific sessions or runs of that second monkey (and on HPC only).

One of the ideas that Daniel suggested to solve that was to use MDEFTs acquired during the functional sessions as the anat. Before that, I was using MPRAGEs, which were acquired separately (and in another scanner). So my first step was: OK, but will these MDEFTs align to the template? This is why I went back to AW, and they did not. (Both the EPIs and the MDEFTs are oblique and partial, which apparently causes a lot of problems; plus these are monkeys with huge muscles, scanned with surface coils, which makes skull-stripping within EPI-anat alignment a challenge too.)

I am going to sort out the environment variables first (after I figure out which of them means what), and then I will make a step back and ensure I am working from the same versions of the template and the source files on both systems. I was pretty sure they were identical, but maybe a difference crept in.

As for random seeding explaining different alignment results, I thought about that too - but then repeated runs converged to a (visually) identical solution on each system, different between the systems.

Re. figuring out env vars:

This is what I typically have on computers where I work (at the bottom of my ~/.afnirc), where I actually do literally have REF_TEMPLATES in all caps (but that is some personal choice you can obviously make!):


AFNI_GLOBAL_SESSION = /home/ptaylor/REF_TEMPLATES  
AFNI_ATLAS_PATH = /home/ptaylor/REF_TEMPLATES  

If on HPC, you might use something like /data/GROUP_DIR/USERNAME/REF_TEMPLATES, say.

If you are working exclusively with macaques, then you might want to have something like this, too, so that your “whereami” functionality would utilize a macaque atlas:


AFNI_ATLAS_LIST = /home/ptaylor/REF_TEMPLATES/CHARM_in_NMT_v2.0_sym_05mm.nii.gz

And here is the full reference list, for details on these and more:
https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/educational/readme_env_vars.html

Re. alignment: that sounds different than the issue I had first assumed (exact same dsets with exact same code version/command on different systems giving different results). Indeed, mixing data across sessions can be tricky, because of coordinate and obliquity oddities/differences, esp. having those filtered through sphinx-to-normal re-positioning. But this is something we can discuss more in a separate thread.

–pt

NMT2 has several versions, I am using the 05mm version.

So for the environment variables, I should use
~/REF_TEMPLATES/NMT_v2.0_sym_05mm

not
~/REF_TEMPLATES

correct?

The env variables I listed are directories, not specific dsets. Which env variables are you referring to, specifically?

I would unpack the NMT2 dsets into the directory I want to use as a global session (e.g., /home/username/REF_TEMPLATES), and then refer to that full REF_TEMPLATES path.

The NMT contains different sets of data (the fullhead, the 0.5mm resolution, etc.). Each of those sets of dsets has different names, I believe:


NMT_v2.0_sym_05mm/CHARM_in_NMT_v2.0_sym_05mm.nii.gz
NMT_v2.0_sym/CHARM_in_NMT_v2.0_sym.nii.gz
NMT_v2.0_sym_fh/CHARM_in_NMT_v2.0_sym_fh.nii.gz

… so those CHARM* could all be unpacked into a single directory. There should be no ambiguity amongst those filenames.

–pt

Oh I see. I had the template versions unpacked to separate subdirectories.

I don’t think those env vars will hunt into subdirs.

Some templates+atlases come as a pack of many files (e.g., the MNI dsets, the NMTv2, etc.). For easier organization, or for adding/subtracting those to/from my global sessions, say, I typically keep the “full”, unpacked version in a subdirectory of REF_TEMPLATES/, and just copy out the ones I want into REF_TEMPLATES itself.
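A minimal sketch of that layout (the directory name and the empty placeholder files here are made up for illustration; the real dsets would come from the unpacked NMT download):

```shell
# Sketch: keep the full unpacked template set in a subdirectory, and
# copy just the desired dsets up into the global session directory
# itself. All names below are placeholders, created empty for demo.
mkdir -p REF_TEMPLATES_demo/NMT_v2.0_sym_05mm
touch REF_TEMPLATES_demo/NMT_v2.0_sym_05mm/NMT_v2.0_sym_05mm.nii.gz \
      REF_TEMPLATES_demo/NMT_v2.0_sym_05mm/CHARM_in_NMT_v2.0_sym_05mm.nii.gz

# Copy the wanted dsets into the top-level dir, which is where
# AFNI_GLOBAL_SESSION (and friends) would point.
cp REF_TEMPLATES_demo/NMT_v2.0_sym_05mm/*.nii.gz REF_TEMPLATES_demo/

ls REF_TEMPLATES_demo/*.nii.gz
```

Since the env vars point at a single directory (not its subdirectories), this keeps the searched location flat while preserving the original unpacked organization underneath.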

–pt