As a lab, we ran afni_proc.py to complete preprocessing for a few participants and would now like to run the first-level modeling with 3dDeconvolve for task-based fMRI, but we are unsure which output to use. Would it be all_runs.SUBJ+orig.BRIK, errts.SUBJ.tproject+orig.BRIK, or something different?
To state the conclusion first, I would just add the full regression to the current afni_proc.py command and reprocess. That will take more computer time, but might save quite a bit of your time. But to babble…
afni_proc.py would apply the last pb*.HEAD datasets, which match all_runs, except that passing them as multiple files tells 3dDeconvolve where the run breaks are.
Even if the regression is not fully specified at first with afni_proc.py, it is good to put in a regress block (akin to treating it as rest), just for the extra QC that comes with it. The only extra cost is disk space.
For this step, and since you will presumably add regression to the afni_proc.py commands for future subjects, it might be good to start by making a complete afni_proc.py command now, including the regressors of interest. Then add the option -write_3dD_script (and a script name) to that command. It will create a 3dDeconvolve command script for you, which assumes all of the other processing has already been done.
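As a rough sketch of that idea (the subject ID, timing file, and basis function here are placeholders, not your actual values; the "..." stands for whatever preprocessing options you already use):

```shell
# Hypothetical example: keep all existing preprocessing options,
# add the regressors of interest, and ask afni_proc.py to write
# a standalone 3dDeconvolve script instead of running everything.
afni_proc.py                                    \
    -subj_id SUBJ                               \
    ... existing preprocessing options ...      \
    -regress_stim_times stimes.SUBJ.task.1D     \
    -regress_stim_labels task                   \
    -regress_basis 'BLOCK(2,1)'                 \
    -write_3dD_script run.3dd.task              \
    -write_3dD_prefix test.task.
```

The resulting run.3dd.task script would then be the template to adapt for your already-preprocessed subjects.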
On the flip side, that single-command script will not create the censor and enorm time series (if censoring is being used), so you might still have to run 1d_tool.py. But at least it will suggest what to do.
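For example, the censor and enorm files could be generated along these lines (file names here assume afni_proc.py defaults; the 0.3 limit and run count are placeholders to adjust):

```shell
# Sketch: build censor and enorm time series from the motion parameters
# concatenated across 2 runs, censoring TR pairs where the Euclidean
# norm of the motion derivative exceeds 0.3.
1d_tool.py -infile dfile_rall.1D -set_nruns 2    \
           -show_censor_count                    \
           -censor_motion 0.3 SUBJ
# expected outputs: motion_SUBJ_censor.1D and motion_SUBJ_enorm.1D
```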
This also shows the benefit of adding a regress block early on, even without the tasks of interest. All of the preparation would be in place to use for a new 3dDeconvolve command script.
At this point, it might be good to create a full AP script with regression, and work from that as an example. Or preferably, you could just reprocess those subjects with an AP script that has everything it needs (full regression). That is what I would do. Your time is more important than computer time.
We are running AP on a web server where we have to run each AP script manually and cannot loop through subjects to add the stim timing files (it would also take a bit of infrastructure upgrade to add the timing files to this web platform), hence we would like to run 3dDeconvolve separately on our own computer, via bash for example. We do include a regress block to clean up the data in the web-based AP script; see the command below. If we want to run 3dDeconvolve on the output of AP (with basic regression and preprocessing), do we use the all_runs output file or the errts?
You could use the all_runs dataset, but it is better to use the pb06(?) *.scale datasets, so that 3dDeconvolve knows there are 2 runs and where the run breaks are. Look at the proc script that was run, and start with its 3dDeconvolve command.
Or generate a new proc script locally and do not even run it, just get the 3dDeconvolve command.
And if that works, you could go with the -write_3dD_script/_prefix options, too.
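A skeleton of such a command might look like the following. This is only a hedged sketch: the dataset, censor, and timing file names are placeholders, and the 3dDeconvolve command written into your proc script (or by -write_3dD_script) is the authoritative starting point.

```shell
# Sketch: pass the per-run scale datasets separately so 3dDeconvolve
# knows where the run breaks are; censor file from 1d_tool.py.
3dDeconvolve                                          \
    -input pb06.SUBJ.r01.scale+orig.HEAD              \
           pb06.SUBJ.r02.scale+orig.HEAD              \
    -censor motion_SUBJ_censor.1D                     \
    -polort A                                         \
    -num_stimts 1                                     \
    -stim_times 1 stimes.SUBJ.task.1D 'BLOCK(2,1)'    \
    -stim_label 1 task                                \
    -fout -tout                                       \
    -bucket stats.SUBJ
```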
rick