Following up on my previous questions about bugs in the update, we realized we had not kept up with the newer recommendations for a typical task pre-proc pipeline. Rick kindly referred me to:
afni_proc.py -show_example 'example 6b'
I also read the @SSwarper documentation. All the changes make sense to me, but they leave me with a couple of questions.
Just a quick note: using @SSwarper to pre-warp and skull-strip basically doubles the pre-proc time compared to our old afni_proc.py scripts. This analysis took 73 min in total, of which 52 min was from @SSwarper, on a server with 40 CPU threads. I hope it’s worth it :D.
As recommended, I used the align option “-check_flip” to get a warning if a left-right flipped version of the EPI gives a lower alignment cost than the original, unflipped EPI. The output says that the data does not need flipping, but I’m not sure whether this is due to an actual lower cost or due to the error below, where it states that it can’t read the cost file and reports the same cost (1000) for both the flipped and unflipped data:
++ Writing -allcostX1D results to '__tt_lr_flipcosts.1D'
+ - histogram: source clip 288.416 .. 1065.28; base clip 196.319 .. 848.294
+ - versus source range 1.69773e-05 .. 1148.31; base range 0 .. 934.575
+ 53147 total points stored in 561 'RHDD(11.1151)' bloks
++ -allcostX1D finished
ERROR: error reading cost list file
No flip cost is 1000.000000 for lpc+zz cost function
ERROR: error reading cost list file
Flip cost is 1000.000000 for lpc+zz cost function
Data does not need flipping
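For reference, I requested the flip check the way example 6b does, i.e., with a fragment like this in the afni_proc.py command (other options elided):

afni_proc.py ... \
    -align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
    ...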
Could you please take a look at our resulting pre-proc (task) script and check whether you spot any “fatal errors” in the settings?
I noticed that the despike block was removed in the 6b example. Did I miss the memo regarding despiking being bad?
As far as I understood the 6b example, you only get the “anaticor-ed” stats output, since it uses -regress_3dD_stop. We want both the with- and without-anaticor stats, so I removed -regress_3dD_stop. In a previous thread one of you recommended -regress_anaticor_fast, since it was supposed to be both faster and better, but I don’t see it in 6b. Is there a reason for this?
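That is, I am now running something like this fragment (other options elided):

# no -regress_3dD_stop here, so both the 3dDeconvolve stats and the
# anaticor-ed 3dREMLfit stats should be written out
afni_proc.py ... \
    -regress_reml_exec \
    -regress_anaticor_fast \
    ...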
We have previously been using -regress_apply_mot_types demean deriv. In 6b this is not specified, and I think the default is to use only the demeaned versions of the motion parameters. Is deriv not recommended?
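That is, something like:

afni_proc.py ... \
    -regress_apply_mot_types demean deriv \
    ...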
Even though I run @SSwarper with -deoblique, and I pass “-deoblique on” in the align options of the subsequent afni_proc.py run, the afni_proc.py output still warns:
++ 3dAutomask: AFNI version=AFNI_20.0.12 (Feb 19 2020) [64-bit]
++ Authored by: Emperor Zhark
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./epi.r01+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
Is this just one of those AFNI warnings that always appear, or have I done something wrong? =)
The first few lines in the code are there to get the slice timings from the .json generated by dcm2niix and to perform the @SSwarper run.
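For anyone curious, a minimal sketch of that step, assuming AFNI’s abids_json_info.py; the file names are just examples:

# pull the slice timings from the dcm2niix JSON sidecar ...
abids_json_info.py -json epi.json -field SliceTiming > slice_times.1D
# ... and hand them to 3dTshift through afni_proc.py, via:
#     -tshift_opts_ts -tpattern @slice_times.1D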
I will comment on a subset of points, and leave it to finer minds to reply about the rest.
Re. the SSW part:
in short, yes, the @SSwarper results are typically worth it. Note that I have been processing groups of subjects here on our Biowulf machine, using 16 CPU threads, and most brains took about 70-75 mins, though a small percentage of stragglers took much longer (~2.5 hours). After a while, adding so many CPUs might not result in much benefit; I guess I could/should test this here to see what a good number would be for balancing speed/usage. But I typically use 16 CPUs (or thereabouts).
Re. Q5, 3dAutomask+deobliquing:
Looking at the proc* script from a Bootcamp example, I think that align_epi_anat.py by default will automask the EPI before trying to align it; therefore, before any deobliquing specifications have reached it, an automasking is performed (and using @SSwarper -deoblique only affects the anatomical vol, not the EPI). I don’t believe that this is a “scary” warning in this case (but I also don’t believe it is unexpected, even with those specifications).
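If you want to verify what the obliquity of each dset actually is, you can always check with 3dinfo (dset name here taken from your warning message, just as an example):

# report the obliquity angle in degrees (0 = not oblique)
3dinfo -obliquity epi.r01+orig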
Re. Q0, check-flipping:
I am highly curious about that error. I will run the check flip on the dset you uploaded and see if that provides the same error-- I have not seen that previously.
Note also that the APQC HTML output provides images for you to quickly check and visually verify the flipping results-- the only real way to tell. Sadly, human eyes have not outlived their usefulness in data processing, despite what you read in some corners!
I looked, and the unflipped version was indeed better. But it would be really nice if you could check whether there is a bug or an implementation fault on my side!
I also noticed that in the suggested proc script, blurring was used with a 4 mm setting. We are having a bit of a debate here regarding blurring/smoothing.
In a sense, the data is already blurred: biologically, by the MR scanning, and then by normalization. Another consideration is that in AFNI we correct for false positives via 3dFWHMx+3dClustSim, where spatial smoothness/autocorrelation is measured and converted into tables of p-values vs. cluster sizes. If we blur, we usually get stronger stats and perhaps bigger clusters, but we are then penalized with more conservative cluster tables, since we have introduced smoothness. Is blurring a net gain compared to the penalty we get in the per-voxel p-value vs. cluster-size tables (the 3dClustSim output)?
I ask since blurring is in the current recommended afni_proc.py example. (I know this is a difficult question that may be case-by-case, depending on the task and MR resolution at hand, but generally…)
The blurring “default” is 4 mm for no deep reason-- that is, it is not necessarily a “recommended” value, it is just a placeholder. Note that for typical EPI voxel sizes with edges of ~3-4 mm per side, this would result in very minimal smoothing. If you want noticeable smoothing, a value ~2x your voxel edge length would be a reasonable starting point.
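For example, to check the EPI voxel dimensions and then pick a blur accordingly (values purely illustrative):

# voxel dimensions of the EPI
3dinfo -ad3 epi.r01+orig
# e.g., for 3 mm voxels, request a ~6 mm blur in afni_proc.py
afni_proc.py ... -blur_size 6 ...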
As to the pros/cons of smoothing in terms of strengthening some signals and potentially washing out some small regions… yep, life is full of tradeoffs! But note that Bob developed the ETAC methodology specifically to address+reduce the arbitrariness of smoothing values on final cluster results; see the preprint and the published paper: https://www.biorxiv.org/content/10.1101/295931v2 , https://www.liebertpub.com/doi/10.1089/brain.2019.0666
This methodology would involve not smoothing your initial data, and then using different blur sizes later on, and combining all results “equitably” (while also combining results across different semi-arbitrary voxelwise threshold values, too!).
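In terms of commands, ETAC is applied at the group level in 3dttest++, along the lines of this sketch (dset names, sub-brick label, and blur sizes purely illustrative):

# t-test across subjects' REML coefs, letting ETAC combine several blurs
3dttest++ -setA taskBeta sub*.results/stats.sub*_REML+tlrc'[Inc#0_Coef]' \
          -ETAC -ETAC_blur 4 6 8 \
          -prefix ttest_etac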
That’s all I wanted to hear :). We have dabbled with ETAC but found it to still be quite conservative. But that was in the early days; perhaps it’s time to give it another go! =)
Something I noticed as a bonus:
I’m not sure whether this was from @SSwarper or afni_proc.py, since my script runs them in sequence, but one of those processes created temp files in the directory where I run my main bash script (which makes me believe it’s @SSwarper, since afni_proc.py cd’s into the .results folder):
Running subjects in parallel on the same machine, or from another server on the same network mount, could (with very unlucky timing, since the files are short-lived) cause a collision of file names, since they don’t seem to have a unique identifier, and perhaps result in a fatal error or in reading the wrong files -- unless @SSwarper is super smart and checks for existing files before generating them. I just wanted to flag this, since with the new recommended approach I guess more people will write scripts like mine (@SSwarper in sequence with afni_proc.py), and I would guess we are not the only ones with more than one analysis machine having access to the same network home directory.
(Quick fix for the issue, since my colleague actually runs this from two servers with the same target directory: just create and cd into the output dir of @SSwarper.)
startdir=$(pwd)
# assuming $sub_id holds the subject ID (the original mixed $sub and $sub_id)
datapath=/home/robka/full_task_preproc/data/C_sub${sub_id}_EB
T1_data=$datapath/T1.nii
# create and cd into the @SSwarper output dir, so its temp files land there
mkdir -p pre_warp_${sub_id}
cd pre_warp_${sub_id}
@SSwarper -input $T1_data -base /usr/local/abin/MNI152_2009_template_SSW.nii.gz \
          -subid ${sub_id} -deoblique -giant_move -odir $startdir/pre_warp_${sub_id}
cd $startdir
The program doing that is @adjunct_edge_align_check, which makes edge-y images to check alignment.
I guess it should make images in the odir directory of @SSwarper, and then it would be safe. Indeed, that should be adjusted on our (=my) end, and I might add a random string to the temp name, as well.
While I’m waiting to hear about the potential flip-cost-file-read error, and for someone else to answer the other 2-3 questions, I looked at the output of the follower dataset. I followed the example; the relevant code bits are here:
Everything looks amazing, especially the anat_final. I’m not sure what to expect of the follower dataset, though. I interpret the code
-anat_follower anat_w_skull anat $T1_data
as meaning that AFNI will also warp the original dataset to MNI space and create an anat_w_skull dataset that is likewise warped (the only difference between this and anat_final should be the skull?). Even if that is not exactly what AFNI does, I feel that something is wrong, since the dataset follow_anat_anat_w_skull+tlrc looks faulty. It’s not centered in the viewer, and the brain is warped in a way that does not look correct:
Why is this? (Same script as in the first post, if the code in this post is not enough.) The top row is anat_final; the bottom row is the follower dataset, follow_anat_anat_w_skull.
Why does the follower look weird here? I think it has to do with having an oblique input to SSW and using that “deoblique” opt.
What is obliquity? You see your data and it looks ~nicely aligned with axial slices in places we would expect to see them, so we can recognize the parts of the brain easily, BUT there is an additional matrix stored in the header to apply to the data to put it back into “scanner coords” (i.e., where the brain was in ‘scanner space’ when acquired). There are 2 ways to “deoblique” the dataset:
the 3drefit way: purge the obliquity transformation from the header, and pretend like it never existed. This would be like pretending your subject’s head was in a different (and probably preferable, from a viewing standpoint) location/angle when the data was acquired-- namely, the location that it appears to be at present. The image you see in the AFNI GUI doesn’t change in this deobliquing case: we now treat the oblique coords as if they were the scanner coords.
the 3dWarp way: apply the obliquity matrix to the dataset to put it into scanner coords. In this case the dataset gets resampled/regridded, and the appearance of the brain in the GUI will likely change-- it might be rotated/shifted. This dataset is now back in the scanner coords.
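In command form, the two routes would look like this (dset names just examples; note that 3drefit edits the header in place, so work on a copy):

# the 3drefit way: purge the obliquity info from the header
3drefit -deoblique anat_copy+orig
# the 3dWarp way: apply the obliquity matrix, resampling onto a new grid
3dWarp -deoblique -prefix anat_deob anat+orig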
There are two opts in @SSwarper for dealing with obliquity, if you wish (though it is often not necessary to use either), one for each of the above, and they are here:
-deoblique :(opt) apply obliquity information to deoblique the input
volume ('3dWarp -deoblique -wsinc5 ...'), as an initial step.
This might introduce the need to overcome a large rotation
during the alignment, though!
-deoblique_refitly :(opt) purge obliquity information to deoblique
the input volume (copy, and then '3drefit -deoblique ...'),
as an initial step. This might help when data sets are
very... oblique.
You chose “-deoblique”, which is the 3dWarp-way, which resamples the dset before performing alignment, so your input T1w volume and the volume actually aligned to the template are in different spots-- that is why having your T1w as follower failed. You could use either “-deoblique_refitly” or no deobliquing to be able to have the T1w vol follow along.
Wow, what a helpful answer. Assuming the follower T1 looks good after I implement -deoblique_refitly as an @SSwarper option, this really helped me understand what deobliquing means as well.
Sorry for rudely bumping my own post here, but there were a few things that have not been cleared up:
The errors in the output related to the check-flip option:
++ Writing -allcostX1D results to '__tt_lr_flipcosts.1D'
+ - histogram: source clip 288.416 .. 1065.28; base clip 196.319 .. 848.294
+ - versus source range 1.69773e-05 .. 1148.31; base range 0 .. 934.575
+ 53147 total points stored in 561 'RHDD(11.1151)' bloks
++ -allcostX1D finished
ERROR: error reading cost list file
No flip cost is 1000.000000 for lpc+zz cost function
ERROR: error reading cost list file
Flip cost is 1000.000000 for lpc+zz cost function
Data does not need flipping
I noticed that the despike block was removed in the 6b example. Did I miss the memo regarding despiking being bad?
As far as I understood the 6b example, you only get the “anaticor-ed” stats output, since it uses -regress_3dD_stop. We want both the with- and without-anaticor stats, so I removed -regress_3dD_stop. In a previous thread one of you recommended -regress_anaticor_fast, since it was supposed to be both faster and better, but I don’t see it in 6b. Is there a reason for this?
We have previously been using -regress_apply_mot_types demean deriv. In 6b this is not specified, and I think the default is to use only the demeaned versions of the motion parameters. Is deriv not recommended?
Thanks. We do not actually consider bumping rude. It is hard to keep track of all the threads, so it is helpful when we are reminded of something lingering.
Is that part of an align_epi_anat.py command? Can you post it (or you can mail me the full output.proc.* file).
Was it really removed from 6b? I don’t recall it being in there. And that example is only about 2 weeks old, as a more complete version of example 6.
We usually apply despike in either resting state, or when we expect the subjects to be high-motion ones. For a straightforward task case, we usually do not apply it (though it is fine to do so).
Example 6b has -regress_3dD_stop because it also has -regress_reml_exec, and the expectation is to use the 3dREMLfit stats, rather than those from 3dDeconvolve. So the only point of -regress_3dD_stop is to save disk space.
Similar to despike, anaticor is more commonly applied in the case of resting state analysis, though it can be used for task as well. But the (voxelwise) anaticor regression cannot be done with 3dDeconvolve; for that, 3dREMLfit is required.
Similarly, we tend to apply the ‘deriv’ parameters in the case of resting state analysis, not typically for task. Though again, it is okay to do so.
In a resting state analysis, we are more careful with motion and censoring in general. You will notice that all of these options (despike, anaticor, motion deriv regression) are applied in example 11.
I think it’s the align part, since it says “+ * Enter alignment setup routine” a few rows above the errors. It seems like the automatic review part (reading the cost file) fails, since it prints the same cost for both flips, which I guess is the default or starting cost of 1000. I’ll email you the full output, but it should also be in the output we already uploaded. I also think Paul Taylor was investigating this.
The other questions probably popped up due to me assuming that EXACTLY these settings were recommended for task processing, e.g., that you had seen through optimization that despiking was not optimal for task. But I now see that this is not the case =). Thanks! I’ll just comment on the anaticor part anyway:
In an older post of mine, I vaguely remember someone saying that anaticor_fast was just as good, faster, and now the recommended option. So I was just curious as to why the 6b script (which only produces anaticor-ed stats) did not use it. But perhaps this is nitpicking. With a newer scanner (Siemens 3T Prisma), would we expect ANATICOR to help that much? ANATICOR corrects for spatial drift in the coil sensitivity, right? So far we see no “improvement” in the group stats, although I acknowledge that weaker clusters could be more “true” than stronger ones ;).
I get the main point: despiking won’t hurt. Neither will “deriv” motion regressors (apart from costing DOF). It would be awesome, though, if you could check the flip errors we get!
Re: differences between afni_proc.py example 6 and example 11:
The help text for afni_proc.py now says Example 11 is no longer recommended for a complete analysis.
Is Example 11 still recommended for resting state or not? It’s not clear what is meant by “for a complete analysis” or what is wrong with example 11.
That one should probably be labelled as okay, so I will change it. Some of the examples have gone through minor growing pains.
I will also add at least some detail on why examples get a thumb down.
Thanks,
rick