Hello, I am trying to run procpy, but it crashes pretty early on (already when it copies anatSS, it creates an empty file). It produces the following output (this is just a sample, but it continues along these lines):
++ 3dTcat: AFNI version=AFNI_23.2.04 (Aug 9 2023) [64-bit]
*** Datablock write error: Write error in brick file: Is disk full, or write_protected?
++ elapsed time = 59.4 s
++ 3dAutomask: AFNI version=AFNI_23.2.04 (Aug 9 2023) [64-bit]
++ Authored by: Emperor Zhark
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./epi.r01+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./epi.r01+orig.BRIK is 5.100235 degrees from plumb.
*** failure while reading from brick file ./epi.r01+orig.BRIK
*** desired 816316416 bytes but only got 380739584
*** Unix error message: Undefined error: 0
THD_load_datablock
3dAutomask main
** FATAL ERROR: Can't load dataset './epi.r01+orig.BRIK': is it complete?
That error happens early on, and implies that the dataset is corrupted or imperfect (e.g., perhaps an interrupted copy procedure?).
What are the outputs of:
# A) show the size of the datasets
ls -l epi.r01+orig.*
# B) try reading the header
3dinfo epi.r01+orig.HEAD
# C) try accessing the data with a simple calc
3dBrickStat -min -slow epi.r01+orig.HEAD
It does get interrupted very early on. I am using multi-echo data, and already when creating the pb files (even before creating the epi file), the first channel (001) seems fine (it has the right size and it opens in AFNI), but channels 002 and 003 are already too small and cannot be read in AFNI.
I am attaching a few outputs to give some more information.
The sizes and permissions of the epi files:
Mac radcor.pb00.tcat % ls -l epi.r01+orig.*
-rwx------ 1 ramotlab staff 65167360 Aug 22 15:06 epi.r01+orig.BRIK
-rwx------ 1 ramotlab staff 12768 Aug 22 15:05 epi.r01+orig.HEAD
and this is the info from the header:
Mac radcor.pb00.tcat % 3dinfo epi.r01+orig.HEAD
++ 3dinfo: AFNI version=AFNI_23.2.04 (Aug 9 2023) [64-bit]
Dataset File: epi.r01+orig
Identifier Code: AFN_PNHgZuEXjyP0eqMcb2V-pg Creation Date: Tue Aug 22 15:05:48 2023
Template Space: ORIG
Dataset Type: Echo Planar (-epan)
Byte Order: LSB_FIRST [this CPU native = LSB_FIRST]
Storage Mode: BRIK
Storage Space: 816,316,416 (816 million) bytes
Geometry String: "MATRIX(1.595221,-0.14237,-0.001417,-93.55726,0.14237,1.595222,0,-125.4891,0.001409,-0.000163,1.599999,-54.9976):128,128,72"
Data Axes Tilt: Oblique (5.100 deg. from plumb)
Data Axes Approximate Orientation:
first (x) = Right-to-Left
second (y) = Anterior-to-Posterior
third (z) = Inferior-to-Superior [-orient RAI]
R-to-L extent: -93.557 [R] -to- 109.841 [L] -step- 1.602 mm [128 voxels]
A-to-P extent: -125.489 [A] -to- 77.909 [P] -step- 1.602 mm [128 voxels]
I-to-S extent: -54.998 [I] -to- 58.602 [S] -step- 1.600 mm [ 72 voxels]
Number of time steps = 173 Time step = 2.01000s Origin = 0.00000s Number time-offset slices = 72 Thickness = 1.600
-- At sub-brick #0 '#4' datum type is float: 0 to 65035
-- At sub-brick #1 '#5' datum type is float: 0 to 65535
-- At sub-brick #2 '#6' datum type is float: 0 to 64980
** For info on all 173 sub-bricks, use '3dinfo -verb' **
----- HISTORY -----
[Mac: Tue Aug 22 15:05:48 2023] {AFNI_23.2.04:macos_10.12_local} 3dTcat -prefix radcor.pb00.tcat/epi.r01 'pb00.BR119.socialcognition.s2.faceloc.r01.e01.tcat+orig.HEAD[0..$]'
This is what is written in the terminal when I try to open the corrupted pb file in afni:
reading pb00.BR119.socialcognition.s2.faceloc.r01.e03.tcat+orig(816 million bytes)..................
*** failure while reading from brick file /test_preprocessed/socialcognition/BR119/s2/BR119.socialcognition.s2.faceloc.results/pb00.BR119.socialcognition.s2.faceloc.r01.e03.tcat+orig.BRIK
*** desired 816316416 bytes but only got 344178688
*** Unix error message: Resource temporarily unavailable
THD_load_datablock
AFNI_dataset_slice
AFNI_slice_flip
FD_warp_to_mri
AFNI_brick_to_mri
ISQ_getimage
ISQ_make_image
ISQ_show_image
ISQ_redisplay
drive_MCW_imseq
AFNI_set_viewpoint
AFNI_initialize_view
AFNI_finalize_dataset_CB
MCW_choose_CB
AFNI:main
And the same happens when I try to read the anatSS file. It is almost the right size, but still too small and probably corrupted. The same anatSS file is fine and uncorrupted in the data folder; it only gets corrupted when procpy copies it to the 'test_preprocessed' folder.
Any idea what the problem could be? I tried running the same script on the same data on a different computer (with an earlier AFNI version, though), and it works!
Thanks for sending that. It looks like there is just a copy error for some of those initial/input files (epi*) that go into afni_proc.py. There isn't a way to fix that kind of data corruption retroactively; I think they just have to be recopied before you can run afni_proc.py on them.
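For example, here is a rough sketch of recopying and then spot-checking the inputs; the source and destination paths below are placeholders, so substitute your actual directories:

# recopy the raw input datasets from the original source (placeholder paths)
rsync -av /path/to/original/data/ /path/to/afni_proc/input/dir/

# then verify that each HEAD/BRIK pair has a sensible size, and that the headers are readable
ls -l /path/to/afni_proc/input/dir/*.HEAD /path/to/afni_proc/input/dir/*.BRIK
3dinfo /path/to/afni_proc/input/dir/*.HEAD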
I ran procpy from a different computer with these same inputs and it worked fine, so I know the inputs themselves should be OK.
Is there a way to precopy the input files to the .results folder manually and still have procpy run? (Right now, when this folder already exists, it just aborts.)
When you say you ran them with the same inputs, do you mean you copied these exact inputs to another computer, and it worked? The issue here does not seem to be this type of dataset or any of its actual data properties, but rather that these specific copies are corrupted. I would recopy the same data from another source to the present Mac where the issues are, so you start with the same data here but a fresh copy. After that fresh copy, you can even compare the file sizes to see if there is a difference (which I would expect within the dataset files).
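For instance (just a sketch with placeholder paths), comparing the suspect copy against the fresh one would show whether the files on this Mac were truncated:

# compare sizes of the suspect copy vs. the fresh copy (placeholder paths)
ls -l suspect_copy/epi.r01+orig.BRIK fresh_copy/epi.r01+orig.BRIK

# byte-for-byte comparison; no output means the two files are identical
cmp suspect_copy/epi.r01+orig.BRIK fresh_copy/epi.r01+orig.BRIK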
I am following up on our previous thread. I have re-installed a new version of AFNI and R, and according to afni_check my AFNI installation should now work perfectly fine... but the same issue remains!
The inputs and the outputs are saved on a lab server (not locally on the computer), which can also be accessed from other Macs in the lab. I reran the analyses in a new folder on the server. When I run procpy from another Mac on these very same inputs (reading and writing to the same folders on the server), everything works fine.
However, when I run it on this Mac, there is a problem with the files: the copied files seem to be empty or too small. This is an excerpt of the output I get:
preforming proc.py to scan faceloc
** warning: removing first 4 TRs from beginning of each run
--> the stimulus timing files must reflect the removal of these TRs
-- template = 'MNI152_2009_template_SSW.nii.gz', exists = 1
-- will use min outlier volume as motion base
-- including default: -find_var_line_blocks tcat
-- tcat: reps is now 173
-- multi-echo data: have 3 echoes across 1 run(s)
++ updating polort to 3, from run len 347.7 s
-- importing NL-warp datasets
-- volreg: using base dset vr_base_min_outlier+orig
++ volreg: applying blip/volreg/epi2anat/tlrc xforms to isotropic 1.5 mm tlrc voxels
++ mask: using epi_anat mask in place of EPI one
-- masking: group anat = 'MNI152_2009_template_SSW.nii.gz', exists = 1
-- have 1 ROI dict entries ...
-- will use tedana from MEICA group
** consider option: "-mask_epi_anat yes"
-- no 3dClustSim (since no blur estimation)
** masking single subject EPI is not recommended
(see 'MASKING NOTE' from the -help for details)
-------------------------------------
** warning have only 1 run to analyze
-------------------------------------
--> script is file: /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/proc.BR119.socCog.s2.faceloc
to execute via tcsh:
tcsh -xef /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/proc.BR119.socCog.s2.faceloc |& tee /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/output.proc.BR119.socCog.s2.faceloc
to execute via bash:
tcsh -xef /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/proc.BR119.socCog.s2.faceloc 2>&1 | tee /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/output.proc.BR119.socCog.s2.faceloc
echo auto-generated by afni_proc.py, Wed Sep 20 09:53:11 2023
auto-generated by afni_proc.py, Wed Sep 20 09:53:11 2023
echo (version 7.60, August 21, 2023)
(version 7.60, August 21, 2023)
echo execution started: `date`
date
execution started: Wed Sep 20 09:53:12 IDT 2023
afni -ver
Precompiled binary macos_13_ARM_clang: Sep 11 2023 (Version AFNI_23.2.09 'Marcus Didius Severus Julianus')
afni_history -check_date 14 Nov 2022
-- is current: afni_history as new as: 14 Nov 2022
most recent entry is: 08 Sep 2023
if ( 0 ) then
which tedana
/opt/homebrew/bin/tedana
if ( 0 ) then
if ( 0 > 0 ) then
set subj = BR119.socCog.s2.faceloc
endif
set output_dir = /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results
if ( -d /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results ) then
set runs = ( `count -digits 2 1 1` )
count -digits 2 1 1
set echo_list = ( `count -digits 2 1 3` )
count -digits 2 1 3
set echo_times = ( 13.2 34.72 56.24 )
set fave_echo = 01
mkdir -p /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results
mkdir /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/stimuli
3dcopy anatSS.BR119.socCog.s2.nii /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/anatSS.BR119.socCog.s2
++ 3dcopy: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
3dcopy /Users/ramotlab/abin/MNI152_2009_template_SSW.nii.gz /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/MNI152_2009_template_SSW.nii.gz
++ 3dcopy: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
3dcopy anatQQ.BR119.socCog.s2.nii /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/anatQQ.BR119.socCog.s2
++ 3dcopy: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
3dcopy anatQQ.BR119.socCog.s2.aff12.1D /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/anatQQ.BR119.socCog.s2.aff12.1D
++ 3dcopy: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
3dcopy anatQQ.BR119.socCog.s2_WARP.nii /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/anatQQ.BR119.socCog.s2_WARP.nii
++ 3dcopy: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
3dTcat -prefix /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/blip_forward BR119.socCog.s2.AP+orig
++ 3dTcat: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
++ elapsed time = 0.3 s
3dTcat -prefix /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/blip_reverse BR119.socCog.s2.PA+orig
++ 3dTcat: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
++ elapsed time = 0.9 s
3dTcat -prefix /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/pb00.BR119.socCog.s2.faceloc.r01.e01.tcat BR119.socCog.s2.faceloc_chan_001+orig[4..$]
++ 3dTcat: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
++ elapsed time = 21.5 s
3dTcat -prefix /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/pb00.BR119.socCog.s2.faceloc.r01.e02.tcat BR119.socCog.s2.faceloc_chan_002+orig[4..$]
++ 3dTcat: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
*** Datablock write error: Write error in brick file: Is disk full, or write_protected?
++ elapsed time = 70.3 s
3dTcat -prefix /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results/pb00.BR119.socCog.s2.faceloc.r01.e03.tcat BR119.socCog.s2.faceloc_chan_003+orig[4..$]
++ 3dTcat: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
*** Datablock write error: Write error in brick file: Is disk full, or write_protected?
++ elapsed time = 60.1 s
set tr_counts = ( 173 )
cd /Volumes/Labs/ramot/Micaela/test_preprocessed/socCog/BR119/s2/BR119.socCog.s2.faceloc.results
@radial_correlate -nfirst 0 -polort 3 -do_clean yes -rdir radcor.pb00.tcat pb00.BR119.socCog.s2.faceloc.r01.e01.tcat+orig.HEAD
++ 3dTcat: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
*** Datablock write error: Write error in brick file: Is disk full, or write_protected?
++ elapsed time = 59.7 s
++ 3dAutomask: AFNI version=AFNI_23.2.09 (Sep 11 2023) [64-bit]
++ Authored by: Emperor Zhark
*+ WARNING: If you are performing spatial transformations on an oblique dset,
such as ./epi.r01+orig.BRIK,
or viewing/combining it with volumes of differing obliquity,
you should consider running:
3dWarp -deoblique
on this and other oblique datasets in the same session.
See 3dWarp -help for details.
++ Oblique dataset:./epi.r01+orig.BRIK is 5.100235 degrees from plumb.
*** failure while reading from brick file ./epi.r01+orig.BRIK
*** desired 816316416 bytes but only got 402276352
*** Unix error message: No such file or directory
THD_load_datablock
3dAutomask main
** FATAL ERROR: Can't load dataset './epi.r01+orig.BRIK': is it complete?
** Program compile date = Sep 11 2023
** FATAL ERROR: Can't open dataset 'radcor.20.r01.automask+orig'
** Program compile date = Sep 11 2023
-- detrend -polort 3, new eset = det.r01
Any insights into what might be the problem, or what additional checks I should try?
*** Datablock write error: Write error in brick file: Is disk full, or write_protected?
That should be something you can test by running some of the AFNI commands directly, without afni_proc.py (see the standalone test sketched after these checks).
You can check that you have free space with
df -mh
To check about the input files, are you able to open those (your EPI and anatomical dsets that are inputs to afni_proc.py) in the GUI itself, without a crash?
To check about the permissions on the programs, what is:
ls -l `which afni`
ls -l `which 3dTcat`
To check about the permissions on the data files, what is the output of this, with your input data filenames substituted in:
ls -l DSET_EPI DSET_ANAT
And on the (partially?) written outputs in the *.results directory, what are the permissions there:
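And as a rough standalone test (a sketch only; substitute one of your real input EPI datasets and an actual scratch directory on the /Volumes/Labs server), you could check whether a single write to the server reproduces the "Datablock write error" outside of the proc script:

# placeholder paths: a scratch directory on the server and one input EPI dataset
cd /Volumes/Labs/SOME_SCRATCH_DIR
3dTcat -prefix test_write /path/to/DSET_EPI
# if the write path is the problem, the output BRIK should come out truncated
ls -l test_write+orig.*
3dinfo test_write+orig.HEAD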
Hmm. Nothing is owned by root. The file permissions only allow user access, not group access, but I guess that should be OK.
I assume that the /Volumes/Labs disk you are writing to is an alias for the large one here:
//ramotlab@isi.storwis.weizmann.ac.il/Labs
... which appears to have lots of space still. If that is an external drive, is it possible there is a bad connection to it? Could you try running the same afni_proc.py command writing locally to that computer?
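For example (a sketch with made-up local paths), you could copy the inputs to a folder on the local disk and rerun from there, so that all reads and writes stay local:

# placeholder paths: copy the afni_proc.py inputs to a local directory
mkdir -p ~/local_proc_test
rsync -av /Volumes/Labs/path/to/inputs/ ~/local_proc_test/
cd ~/local_proc_test
# then rerun the same procpy / afni_proc.py command from here,
# with its output directory also set to a local path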
Bingo! It is running OK on the local computer, so the problem must be in the connection to the server. I will take it to our IT service.
Thank you for your help!!
So apparently it was not working all the way through even when I was writing to the local folder... for some reason, after running 'tedana', the analysis did not continue on to generating the QC folder.
The procpy command I am using is the following (note that it was originally written for a previous version of AFNI):
In that version of AFNI, I introduced an inconsistency in how 1d_tool.py handles an empty file (in this case, the 'stim' regressors in a resting-state analysis). The problem exists only in that exact version, AFNI_23.2.09, and was fixed a couple of weeks ago.
Indeed, there is no such distributed package. You are probably using build_afni.py.
Consider something like:
build_afni.py -build_root ~/afni_build
I think your version of build_afni.py will automatically back up and install. The previous version would just give you rsync command suggestions for the install.
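After the build and install finish, a quick sanity check (standard AFNI commands, nothing specific to this setup) would confirm which binaries are now in use:

afni -ver
afni_system_check.py -check_all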
rick