I am concatenating two resting state runs collected back to back in the same testing session into one longer run. Each run has 148 time points so the resultant concatenated run has 296 time points.
3dTcat seems to do a great job, except there are differences in distortion around the edges (in the time series it looks like there's a huge jump where the two runs are joined). Is there something I can do to fix this?
I am also trying to concatenate the corresponding 1D files, which contain motion parameters etc. to regress out of the final dataset, so that they match the concatenated run. The command I am using is
1dcat subj_run1_Motion.demean.1D subj_run2_Motion.demean.1D > subj_run12_Motion.demean.1D
The command works, but it creates 12 columns instead of 6. When I run 3dinfo -nt, the output is 12. However, when I try to run my script I get the error
subj_run12_Motion.demean.1D is 148 long, but input dataset is 296
What am I doing wrong here? Is there another way to check the number of time points in a 1D file besides 3dinfo? Why is this a concatenation if the number of time points is not doubling?
I think it would probably be best to use afni_proc.py to do this work for you. Indeed, just concatenating runs is likely to preserve a jump at the junction point, and that would be a strong, artificial feature in the dset.
You can load multiple EPI dsets into afni_proc.py, and then let it do the volume registration for each run and perform the concatenation (with appropriate orthogonal polynomials in the design matrix for each run).
I think for any FMRI processing afni_proc.py is a very good friend to have.
Let’s just say I’m using a different pre-processing pipeline than afni_proc.py… I love afni_proc.py and I see how valuable it is. Is there any way to clean up the junction point without afni_proc.py (I know this is not a very fair question).
You want temporal concatenation of those vectors, where the result is 6 longer vectors, not 12 of the same length. So use plain 'cat' instead of '1dcat', as in:
cat subj_run1_Motion.demean.1D subj_run2_Motion.demean.1D > subj_run12_Motion.demean.1D
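To see why the two commands behave differently, here is a toy illustration (plain Python, fake data standing in for the motion files): 1dcat pastes files side by side as extra columns, while cat stacks them end to end as extra rows, which is what a concatenated time series needs.

```python
# Each "run" is 148 time points x 6 motion columns (tiny fake data here).
run1 = [[float(t)] * 6 for t in range(148)]
run2 = [[float(t) + 0.5] * 6 for t in range(148)]

# 1dcat-style: paste side by side -> still 148 rows, now 12 columns
side_by_side = [r1 + r2 for r1, r2 in zip(run1, run2)]

# cat-style: stack vertically -> 296 rows, still 6 columns
stacked = run1 + run2

print(len(side_by_side), len(side_by_side[0]))  # 148 12
print(len(stacked), len(stacked[0]))            # 296 6
```

The 148-row, 12-column result is exactly why 3dinfo reported 12 but the regression complained that the file was only 148 long.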
… and for the 3D+t volume concatenation, I guess you would want to regress out low-order polynomials to get rid of the jumps; that can be done with "3dTproject -polort pp …", where pp is the order you want to project out. To get rid of just the mean, pp=0; to also remove a linear term, pp=1; etc.
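The idea behind per-run polynomial projection can be sketched in a few lines of numpy (this is a conceptual sketch, not AFNI code; the function name and run-length argument are made up for illustration). Each run gets its own polynomial baseline removed, so a step between runs disappears:

```python
import numpy as np

def detrend_per_run(ts, run_lens, pp=1):
    """Project polynomials up to order pp out of each run separately,
    mimicking per-run baseline (polort) terms in a concatenated series."""
    out = ts.astype(float).copy()
    start = 0
    for n in run_lens:
        seg = out[start:start + n]
        t = np.linspace(-1, 1, n)
        X = np.vstack([t ** k for k in range(pp + 1)]).T  # polynomial basis
        beta, *_ = np.linalg.lstsq(X, seg, rcond=None)
        out[start:start + n] = seg - X @ beta             # remove fitted baseline
        start += n
    return out

# Two runs with different baselines: naive concatenation has a big jump
ts = np.concatenate([np.full(148, 100.0), np.full(148, 150.0)])
clean = detrend_per_run(ts, [148, 148], pp=0)  # pp=0 removes each run's mean
print(clean.max() - clean.min())               # the 50-unit jump is gone
```

With pp=0 the 50-unit offset between the runs vanishes, because each run's own mean is projected out.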
I don’t know if/how you are scaling your time series in your other processing stream, but that will also likely affect the “smoothness” of the junction at the concatenation point. In AFNI, we recommend voxelwise scaling to “BOLD percent signal change,” for several reasons.
That being said, I don’t know that you want to remove polynomials in a separate step like this, outside of the main regression. We recommend doing one big GLM in which censoring, bandpassing, motion regression and polynomial regression are all done simultaneously, for consistency and mathematical correctness. These are typically done within a single call of 3dDeconvolve.
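The "one big GLM" idea can also be sketched with numpy (again a toy illustration, not a 3dDeconvolve script; all names and the random data are made up): build a single design matrix holding per-run polynomial columns plus the motion regressors, fit once, and take the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_run, n_runs = 148, 2
N = n_per_run * n_runs

motion = rng.standard_normal((N, 6))   # stand-in for demeaned motion params
signal = rng.standard_normal(N)        # stand-in for a voxel's "true" signal
# Fake data: signal + motion artifact + a different baseline per run
data = signal + motion @ rng.standard_normal(6) \
       + np.repeat([100.0, 150.0], n_per_run)

# One design matrix: per-run polynomial columns (order 1) plus motion columns
cols = []
for r in range(n_runs):
    t = np.linspace(-1, 1, n_per_run)
    for k in range(2):                 # constant + linear term per run
        c = np.zeros(N)
        c[r * n_per_run:(r + 1) * n_per_run] = t ** k
        cols.append(c)
X = np.column_stack(cols + [motion])

# Single fit: residuals are simultaneously cleaned of baselines and motion
beta, *_ = np.linalg.lstsq(X, data, rcond=None)
resid = data - X @ beta
```

Because everything is fit at once, the residuals are orthogonal to all nuisance columns at the same time, which is the consistency argument for doing it in one model rather than in sequential steps.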
Concatenating multiple runs during processing seems like a pretty fundamental thing; I would hazard a guess that there is some option or stream for doing so in whatever other processing pipeline you are using?