We have performed a “standard” afni_proc.py pre-processing procedure and now want to dig a little deeper by focusing only on the first 5 min of a 7 min scan, since we have some biometric data (blood samples) that only span the first 5 minutes. We simply feed the pre-processed (warped pb04.subjX.blur+tlrc) data into a new 3dDeconvolve command with our new special blood regressor.
My question is this: we did not get identical results (we could of course have made a mistake) when using two different “cutting” methods. One is to tell 3dDeconvolve to stop at e.g. TR 300 (total TR count is 500) via “-nlast 300”; the other is to fill the censor file with “0” from the 300th line onward.
Are these methods theoretically different? In the censor-file case, does the program perhaps still take the remaining time points into account in the regression without actually using them?
Also, I guess we could have made an off-by-one error, since -nlast counts from 0 while the censor file starts at line 1. Does -nlast 300 mean to include TRs 0-300? If so, we should start filling the censor file with zeroes at line 302.
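To keep our own bookkeeping straight, we generate the censor file with a small script. This sketch assumes -nlast is 0-based and inclusive (so -nlast 300 keeps TRs 0 through 300, i.e. 301 points); the filename is ours:

```python
# Sketch: build a censor file intended to be equivalent to "-nlast 300",
# assuming -nlast is 0-based and inclusive (TRs 0..300 are kept).
n_trs = 500      # total TRs in the run
last_tr = 300    # 0-based index of the last TR to keep

# 1 = keep the time point, 0 = censor it.
censor = [1 if tr <= last_tr else 0 for tr in range(n_trs)]

with open("censor_first300.1D", "w") as f:
    f.write("\n".join(str(v) for v in censor) + "\n")

# Line 1 of the file corresponds to TR 0, so under this assumption
# the zeroes begin at line 302, not 301.
```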
As an example: let’s say we have a 5 min fMRI run during which we have information about a physiological process (concentration of some metabolite) from minute 2 until minute 4. Our goal is to perform a GLM using 3dDeconvolve, but only for the period where we have this information (i.e. min 2-4).
The question arises when creating the stim file. It’s a file that, in arbitrary units, describes the levels of this metabolite (during min 2-4). Currently we create a single-column .1D file where each row matches one TR. We fill with zeroes until 2 min, then the values run until 4 minutes, then zeroes until the end of the scan (4-5 min). We implement it via:
[-stim_file k sname] sname = filename of kth time series input stimulus
*N.B.: This option directly inserts a column into the
regression matrix; unless you are using the 'old'
method of deconvolution (cf below), you would
normally only use '-stim_file' to insert baseline
model components such as motion parameters.
We do match the number of rows to the number of TRs here, but do you have to for -stim_file? Or does it stretch/fit the file data across the whole run (hence making the zero-filling important)? Then we use -nfirst and -nlast to specify which part we want included in the regression.
Is this approach fine?
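For reference, the zero-padding we describe can be sketched as follows; the TR, run length, and metabolite values here are made-up placeholders:

```python
# Sketch: zero-pad metabolite values into a one-column .1D stim file,
# one row per TR. TR = 2 s and the 5 min run length are hypothetical.
tr = 2.0                     # seconds per TR (assumed)
n_trs = int(5 * 60 / tr)     # 150 TRs in a 5 min run
start = int(2 * 60 / tr)     # first TR with metabolite data (min 2)
end = int(4 * 60 / tr)       # one past the last TR with data (min 4)

metabolite = [0.7] * (end - start)   # placeholder measurements

column = [0.0] * n_trs
column[start:end] = metabolite       # values only within min 2-4

# One row per TR of the full run; 3dDeconvolve does not stretch
# a short file to fit, so the length must match exactly.
assert len(column) == n_trs

with open("metab_regressor.1D", "w") as f:
    f.write("\n".join(f"{v:.4f}" for v in column) + "\n")
```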
Because I also saw the option:
[-input1D dname] dname = filename of single (fMRI) .1D time series,
where time runs down the column.
This also seems to make sense. What is the difference between -stim_file and -input1D? Would the stim files look the same, with the same need for zero-filling?
The documentation hinted that -stim_file should be used for baseline stuff like motion regressors. I currently use it to regress out physiological noise, and I am now trying to make a regressor of interest from our metabolite measures. Perhaps -input1D is better?
Thanks for the bump, it is definitely helpful sometimes.
The stim_file method, with leading and trailing zeros,
seems fine.
If you are going to input all 5 minutes to 3dDeconvolve,
then the stim file should reflect that, regardless of
whether the regression is restricted to the 2-4 minute
interval.
No, 3dDeconvolve will not stretch the 1D file to match
the regression size.
The -input1D option is a different beast. That is to
specify the data input (the y-curve) as a 1D file, rather
than a 3D+time dataset. So -stim_file is to specify a
regressor (part of the X-matrix), -input1D is to specify
the actual input data, like -input, the ‘y’ part of the
model.
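As an illustration of that X-vs-y distinction (a toy GLM in numpy, not AFNI itself; all data below are made up):

```python
# Toy GLM: a -stim_file regressor becomes a column of the design
# matrix X, while -input1D (like -input) supplies the measured
# time series y. All numbers here are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trs = 150

# Columns of X: baseline (intercept), linear drift, and the
# metabolite regressor (the kind of column -stim_file inserts).
baseline = np.ones(n_trs)
drift = np.linspace(-1, 1, n_trs)
metab = np.zeros(n_trs)
metab[60:120] = 1.0                 # nonzero only during "min 2-4"
X = np.column_stack([baseline, drift, metab])

# y: the voxel time series (what -input or -input1D supplies).
y = 10.0 * baseline + 0.5 * drift + 2.0 * metab \
    + 0.1 * rng.standard_normal(n_trs)

# Solve the least-squares fit y = X @ beta.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] estimates the metabolite effect (close to 2.0 here).
```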
Sorry for being slow,
rick