I’ve got a high resolution data set (six runs of 0.8 mm^3 voxels, 200x84x162), and 3dDeconvolve crashes even when I allocate 64 GB of memory. Is there a solution to this?
If not, one thought is to analyze each run individually and average them together. If I just wanted to take the betas to a group analysis, I would have no problem with that, but this is a functional localizer, so I need the t scores. I have a vague recollection of reading on this board 10+ years ago that it’s okay to average z-transformed t values, but I want to confirm that.
You could also use 3dZcutup to break them into smaller blocks and run 3dDeconvolve on each of the blocks and then use 3dZcat to glue everything back together. However, I’ve not tried this.
Each run is fine individually, so that isn’t necessary. My concern is with how to create ROIs by combining all six datasets for each subject. Is the average t (or z-transformed t) the same as the t that would have been obtained if all runs had been analyzed together?
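As an aside on the averaging question: a plain average of independent z values is no longer a unit-variance normal under the null, so the usual (Stouffer-style) combination sums the per-run z maps and divides by the square root of the number of runs. That combined z is a valid null statistic, though it is not identical to the t you would get from a single regression across all runs. A minimal 3dcalc sketch, assuming hypothetical per-run z sub-bricks already extracted to run01_z+orig ... run06_z+orig:

# Stouffer combination: z_comb = (z1 + ... + z6) / sqrt(6)
3dcalc -a run01_z+orig -b run02_z+orig -c run03_z+orig \
       -d run04_z+orig -e run05_z+orig -f run06_z+orig \
       -expr '(a+b+c+d+e+f)/sqrt(6)' -prefix stouffer_z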
I may not have been clear enough, so here's some sample bash code that does what I mean. I've put echo commands before the 3dZcutup, 3dTcat, and 3dDeconvolve commands since I don't actually have any data with which to test this. You may need to cut it up even more to get it to fit into memory, depending on how many TRs you have (which you didn't state).
#!/bin/bash
# 162 slices divides evenly into 9 blocks of 18 slices each
inc=18
for run in $( seq 1 6 ) ; do   # one pass per run (six runs)
starts=( $( seq 0 $inc $( expr 162 - $inc ) ) )
for (( gg=0; gg<${#starts[@]}; gg++ )) ; do
start=${starts[$gg]}
(( end=start + inc - 1 ))
# substitute each run's own dataset for my_brik+orig.HEAD below
echo 3dZcutup -prefix run$( printf "%02d" $run).zcut.$( printf "%02d" ${gg} ) -keep $start $end my_brik+orig.HEAD
## touch run$( printf "%02d" $run).zcut.$( printf "%02d" ${gg} )
done
done
## gg now holds the number of cuts (groups of slices) per run
## now that the runs are broken into groups of 18 slices you can run
## 3dTcat to get all the runs together for analysis with 3dDeconvolve
for (( gg=0; gg<${#starts[@]}; gg++ )) ; do
echo 3dTcat -prefix all.runs.zcut$( printf "%02d" ${gg} ) run??.zcut.$( printf "%02d" ${gg} )
## touch all.runs.zcut$( printf "%02d" ${gg} )
## add your usual regression options here (-concat for the run breaks, -polort, -stim_times, -tout, etc.)
echo 3dDeconvolve -input all.runs.zcut$( printf "%02d" ${gg} ) -bucket stats_zcut$( printf "%02d" ${gg} )
done
## now concat all the zcuts together
## 3dZcat -prefix stats stats_zcut??+orig.HEAD
I see now. I misunderstood what 3dZcutup was doing before. I now see that it’s cutting up volumes rather than dividing runs into subsets of volumes. This makes sense and we’ll give it a try. Thanks!
Does your 3dDeconvolve command output both the errts and the fitts time series datasets? If so, there is no reason to output both, as all_runs = fitts + errts. Computing the fitts after the fact, if you want it (as can be done by afni_proc.py -regress_compute_fitts), will save 30-40% of the RAM usage, depending on the other options.
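If you do want the fitts later, it can be recovered by subtraction, since all_runs = fitts + errts. A minimal 3dcalc sketch, assuming hypothetical dataset names all_runs+orig and errts+orig:

# fitts = all_runs - errts, computed voxel-wise over the time series
3dcalc -a all_runs+orig -b errts+orig -expr 'a-b' -prefix fitts_computed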
Good to know! Yes, it’s doing both. I’ll try it without fitts.