I am having trouble saving accumulated 3D rendered images (I'm trying to show the brain being sliced and rotated as a little video). I've tried clicking the "Save anim MPG" button in the Display options menu and then clicking the "Sav:mpeg" button, with frames 0 through my number of images selected. It takes a while, but it eventually outputs an .mpg file; the file can't be opened and is "0 bytes".
Hmmm, I just did a test run on my computer and it worked.
What is the output of each of the following on your computer:
afni -ver
and
which mpeg_encode
?
Also, are there any error messages in the terminal when you try to save it?
Finally, how many images are being saved? It might be a memory glitch or something if it is a huge video. Trying first with just a few images would be a good test.
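As a small sanity check, the size of the output file can also be confirmed directly in the terminal with something like the following (the filename here is just an example of whatever the GUI writes out):
ls -lh test.mpg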
I am trying to save a giant file (629 images), but I just tried it with 10 and it was super fast but still 0 bytes. This is the output to the screen from running that:
++ Running '/home/applications/afni/abin/mpeg_encode -realquiet test.QWA.PARAM' to produce test.mpg
. DONE
This is the output of which mpeg_encode:
/home/applications/afni/abin/mpeg_encode
This is the output of afni -ver:
Precompiled binary linux_openmp_64: Dec 22 2016 (Version AFNI_16.3.18)
I am working on my institution's high-performance computing cluster and don't have permission or the ability to update AFNI. I should probably try running this on my local machine, where AFNI is up to date; that might be the problem.
Thanks so much for your help!!!
Well, that is a veeeery old version of AFNI. I think it would really make sense to contact The Powers That Be on your cluster and ask them nicely to update the binaries; surely everyone will benefit? They should pretty much just need to run "@update.afni.binaries". If worse comes to worst for them, they could also download the modern binary tarball (https://afni.nimh.nih.gov/pub/dist/tgz/linux_openmp_64.tgz), unpack it, and copy it into the same location.
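For example (purely a sketch: the install path is a guess based on the "which mpeg_encode" output above, and I am assuming the tarball unpacks into a linux_openmp_64/ directory), the manual route might look something like:
cd /home/applications/afni
curl -O https://afni.nimh.nih.gov/pub/dist/tgz/linux_openmp_64.tgz
tar xzf linux_openmp_64.tgz
cp -r linux_openmp_64/* abin/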
I am guessing that a more modern version should work fine, but there might be some other odd compatibility issue on the cluster (perhaps??).
Thanks for the info! I agree, I will definitely look into requesting an AFNI update from the PTB (Powers That Be).
All the rendering goes much faster using the most up-to-date AFNI, but I found that the 0-byte problem is fixed by leaving the "blowup" option at 1 rather than changing it to 8, as I was doing before. Thanks so much for the help and quick responses!
Hmm, interesting. The "blowup" feature is for saving the image at a higher resolution; it might not actually make much difference here, anyway.
I note that I also got a similar "0 byte" file when trying to use a blowup factor of 8 while saving the rendered dataset, but in my case it was accompanied by an error message about core dumping in the terminal:
++ Running '/home/ptaylor/afni_src/linux_ubuntu_12_64/mpeg_encode -realquiet test_save.VXJ.PARAM' to produce test_save.mpg
*** buffer overflow detected ***: /home/ptaylor/afni_src/linux_ubuntu_12_64/mpeg_encode terminated
Aborted (core dumped)
. **DONE**
This occurred even in my very modern version of AFNI, so I suspect you might actually have gotten the same message?
Note that when I tried with a blowup factor of 2, 3, or 4, I could still save an MPG successfully, but any higher value led to core dumping.
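If a higher-resolution movie is really needed, one possible workaround (just a suggestion, not something built into the renderer that I am describing here) would be to save the accumulated frames as individual image files and stitch them together with an external encoder such as ffmpeg, along the lines of:
ffmpeg -framerate 10 -i frame.%05d.png -c:v libx264 -pix_fmt yuv420p brain_movie.mp4
where the filename pattern, frame rate, and output name are all placeholders for however the images actually get saved.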
–pt