epi_b0_correct.py help improvements + feature request

In the help for epi_b0_correct.py there are a couple of areas that I think are confusing. First, in the section listing options we find the following:

-in_epi_json  FJSON  : (opt) Several parameters about the EPI
                         dset must be known for processing; these MIGHT
                         be encoded in a JSON file accompanying the
                         frequency dset. 

I believe this should say "JSON file accompanying the EPI dset" rather than the frequency dset. The JSON associated with the frequency dataset may be of use, but will not contain the parameters needed for correcting the EPI image.

Second, later in the Siemens section, there is a (very useful) discussion of scaling output:

Therefore, we also want to divide
    by the EPI's echo time difference (which might be saved as
    'EchoTimeDifference' in an EPI's JSON sidecar).  For example, the
    standard value of this at 3T is about 2.46 ms (= 0.00246 s), but
    check what it is in your own data!

Here, I believe we have the opposite problem. The echo time difference should refer to the echo time difference between the two magnitude images that were collected as part of the fieldmap. That will be in the FREQ JSON sidecar (if you are lucky), or, more likely, can be calculated by A) looking at the sequence or B) comparing the EchoTime values of the first and second magnitude images. In any case, the provided difference (2.46 ms) is right for a typical fieldmap sequence.
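For anyone computing this by hand, here is a minimal sketch, assuming BIDS-style JSON sidecars in which each magnitude image carries an "EchoTime" field in seconds (the example values are just the standard ones quoted above; check your own data):

```python
import json

# Hypothetical sidecar contents for the two fieldmap magnitude images;
# in practice these would be read from files such as
# sub-01_magnitude1.json and sub-01_magnitude2.json.
mag1 = json.loads('{"EchoTime": 0.00492}')   # first magnitude image, 4.92 ms
mag2 = json.loads('{"EchoTime": 0.00738}')   # second magnitude image, 7.38 ms

delta_te = mag2["EchoTime"] - mag1["EchoTime"]   # in seconds
print(f"dTE = {delta_te * 1000:.2f} ms")         # dTE = 2.46 ms
```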

Finally - a request: would it be possible to augment how masking and smoothing are performed? At the moment, the mask is created, applied to the frequency dataset, and then the result is smoothed. This leads to the outside-of-brain zeros attenuating the magnitude of distortion at the edge of the brain, where it is often most pronounced. One possible fix would be to extend the voxel values at the edge of the mask out to the volume edges in the phase encoding dimension. That is, if there were some anterior voxel at the edge of the brain mask, its phasediff values would be copied, in plane, along the anterior direction, with the same done along the posterior edge of the brain mask. Smoothing could then be applied as before.
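To make the suggestion concrete, here is a rough numpy sketch of that extension step (the function name and axis convention are mine, not epi_b0_correct.py's): for each line along the phase-encode axis, the first and last within-mask values are copied outward to the volume edges before smoothing.

```python
import numpy as np

def extend_edges_along_pe(freq, mask, pe_axis=1):
    """Copy the first/last within-mask value of each line along the
    phase-encode axis outward to the volume edge, so smoothing does
    not mix in outside-of-brain zeros.  A sketch of the proposal,
    not epi_b0_correct.py's actual behavior."""
    out = (freq * mask).astype(float)           # masked frequency map
    moved = np.moveaxis(out, pe_axis, 0)        # PE axis first (a view)
    mmask = np.moveaxis(np.asarray(mask, bool), pe_axis, 0)
    # iterate over every in-plane line perpendicular to the PE axis
    for idx in np.ndindex(moved.shape[1:]):
        hits = np.flatnonzero(mmask[(slice(None),) + idx])
        if hits.size == 0:
            continue                            # line is all outside mask
        line = moved[(slice(None),) + idx]      # view into out
        line[:hits[0]] = line[hits[0]]          # extend anterior edge value
        line[hits[-1] + 1:] = line[hits[-1]]    # extend posterior edge value
    return out
```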

This may be useful for locations with large amounts of distortion, in which brain data has been pushed many voxels beyond the mask edge; at present, the smoothed distortion correction map has limited effect on those voxels. Not required, just think it may be useful.

Hi, Logan-

Thanks for the help file corrections-- both of those have been pushed in. And in fact, I see that EchoTimeDifference has even been changed to separate “EchoTime1” and “EchoTime2” values now, based on what other software desired.

Re. the mask: spookily enough, Vinai and I were just looking at some data, and this was brought up. The balance is between getting the (hopefully) well-estimated measures within the brain (-> hedge with a larger mask) and NOT bringing in noisy ones from outside the brain (-> hedge with a smaller mask). But I agree, the results are very sensitive to the edge of the brain, right where distortions are largest. At the moment, the -automask_peels and -automask_erode options provide the only real control over this:

  -automask_peels   AP : if automasking a magnitude image to create a
                         brain mask, AP is the 'peels' value of 3dAutomask
                         (def: AP = 2)

  -automask_erode   AE : if automasking a magnitude image to create a
                         brain mask, AE is the 'erode' value of 3dAutomask
                         (def: AE = 1)

but we’ll see about this still…


Spooky indeed.

It is a challenge, certainly - but I am thinking the edge issue could be dodged somewhat by just copying the values that are present at the edge (still within brain) out into the ether in the phase encode direction (forwards and backwards). In this way, as long as your mask is within the brain, it would be pushing and pulling things in the right direction, because it would be extending those (good) within-brain values. This could be favorable if masking was too extreme.
This may be the method that FSL uses, not sure.

Another idea comes to mind, and that is aligning a T1 image with the fieldmap, and then using it to calculate a brainmask, resampled to the fieldmap resolution of course. Though, this could also be performed by the user, as you allow direct mask input…

I’m lucky in that 3dSkullStrip works well on the magnitude images I have, so it doesn’t cause too much trouble, and despite some wisps that extend out beyond the brain, I’m getting most of the epi data aligned quite well. Appreciate all the work on this!

Hi, Logan-

Sidenote re. dealing with wisps in masks:

Those can be eradicated with something like:

3dmask_tool \
   -dilate_inputs -2 2 \
   -input MASK \
   -prefix NEW_MASK

This will “inwardly erode”, so the wisps dry up and disappear, and then dilate outward again by the same amount, producing a smoother outer surface; the value ‘2’ used twice there could be adjusted, bien sûr.
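For the curious, that erode-then-dilate trick is a morphological "opening"; here is a small scipy sketch of the concept (an analogue for illustration, not 3dmask_tool's exact algorithm - 3dmask_tool uses its own neighborhood definitions):

```python
import numpy as np
from scipy import ndimage

def open_mask(mask, n=2):
    """Erode n times, then dilate n times (a morphological opening):
    thin wisps narrower than ~2n voxels vanish under erosion and are
    never recovered, while the bulk of the mask, eroded and re-dilated
    by the same amount, keeps roughly its original extent with a
    smoother outer surface.  Conceptual analogue of
    `3dmask_tool -dilate_inputs -2 2`."""
    eroded = ndimage.binary_erosion(mask, iterations=n)
    return ndimage.binary_dilation(eroded, iterations=n)

# Toy 2D example: a solid block with a one-voxel-wide wisp attached.
m = np.zeros((11, 15), dtype=bool)
m[2:9, 2:9] = True          # 7x7 "brain" block
m[5, 9:14] = True           # thin wisp sticking out
opened = open_mask(m, n=2)  # wisp gone, block (roughly) preserved
```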