I have several subjects and I ran uber_subject.py for each of them. Now I want to get the union of the full_mask datasets created by uber_subject.py.
I gathered all the masks in a single folder and then ran this command:
3dmask_tool -inputs full_mask.*+orig.HEAD -union -prefix result_mask
But I get this error:
++ processing 110 input datasets...
++ padding all datasets by 0 (for dilations)
*+ WARNING: grid from dset full_mask.S108836 does not match that of dset zyxt
*+ WARNING: grid from dset full_mask.S111991 does not match that of dset zyxt
*+ WARNING: grid from dset full_mask.S114047 does not match that of dset zyxt
*+ WARNING: grid from dset full_mask.S116155 does not match that of dset zyxt
*+ WARNING: grid from dset full_mask.S119292 does not match that of dset zyxt
** FATAL ERROR: nvoxel mis-match
** Program compile date = Dec 31 2016
How can I exclude the subjects that cause these problems?
And if I exclude them, will the resulting mask still be accurate?
Thank you very much for your time.
To at least find out which ones are inconsistent, try:
3dinfo -prefix -d3 -nijk -same_grid full_mask.*+orig.HEAD
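If it helps, the offending datasets can be pulled out by filtering that output. This is a hedged sketch, assuming the last column printed by -same_grid is 1/0 with 0 marking a grid mismatch; it is shown on two canned example lines rather than a live 3dinfo run:

```shell
# Hypothetical filter: print only the prefixes whose -same_grid
# column (the last field) is 0, i.e. the mismatching datasets.
# The two input lines below stand in for real 3dinfo output.
printf '%s\n' \
  'full_mask.S175640 -3.312500 -3.312500 3.312500 196608 0' \
  'full_mask.S175741 -3.312500 -3.312500 3.310000 196608 0' \
  | awk '$NF == 0 { print $1 }'
```

In practice one would pipe the real 3dinfo command from above into the same awk filter instead of printf.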
If the grids of the bad subjects are only a little off, you might want to reprocess them with the added option -volreg_warp_dxyz, to specify exactly how big the voxels should be. In any case, you should figure out why the original dimensions vary, presumably from the scanner.
Thank you very much for your answer. Using your suggestion, I found that I had accidentally added some corrupted masks to my folder. But now I get lots of warnings like this:
*+ WARNING: grid from dset full_mask.S211480 does not match that of dset zyxt
Almost all of the rows are identical; only a few differ, and the difference is very small:
full_mask.S175640 -3.312500 -3.312500 3.312500 196608 0
full_mask.S175741 -3.312500 -3.312500 3.310000 196608 0
Should I be worried about this WARNING?
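For scale, the mismatch above works out to 0.0025 mm per slice. A quick back-of-the-envelope check (the 40-slice count is just an example, not taken from the data above):

```shell
# Rough drift estimate: voxel-size difference per slice, accumulated
# over an assumed 40-slice volume (example number, not from the data).
awk 'BEGIN { per = 3.3125 - 3.3100; drift = per * 40;
             printf "%.4f mm per slice -> %.3f mm over 40 slices\n", per, drift }'
# prints: 0.0025 mm per slice -> 0.100 mm over 40 slices
```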
Maybe we should back up first. Without seeing the non-cubic voxel sizes, I did not realize everything was still in +orig space. Given that, how are you doing the comparison across subjects? Were these datasets registered? Exactly how, given that they are still in +orig space?
Ignoring that, you could almost fake the grid, given that the accumulated effect of the difference would only amount to 0.1 mm across, say, 40 slices. But still, it would be more acceptable to resample them to the same grid first, meaning they should be reanalyzed. Even resampling is questionable, but since the grids seem so close, there should be very little