In the past I ran proc_py on my data, where I smoothed/blurred the data. Now I’d like to go back and create new stats files based on the unsmoothed time-series versions of the data for an eventual MVPA analysis. What is the best way to go about this without rerunning all the preprocessing steps?
I know that the pb04 files (in my study at least) are the smoothed/blurred data. I’m guessing the pb03 volreg files might contain the unsmoothed time series.
However, my preprocessing blocks in proc_py (-blocks tshift despike align volreg blur mask regress) include both mask and regress blocks that operate on the pb04 files before 3dDeconvolve. (I do not use the -regress_apply_mask option, though, so I don’t think the mask is applied in the regression.)
I want to make sure I am not losing any necessary preprocessing steps besides the blur if I were to use 3dDeconvolve and/or REML on the pb03 files. For reference, I will include my current proc_py script so you can see the steps/options I use.
(i.e., I would like the stats files to be analogous to those I already have, except that they are derived from the unsmoothed time series.)
Hi there! Just checking in again since I haven’t heard back yet, to make sure that using the pb03 volreg files (prior to the pb04 blur step) as input to 3dDeconvolve and/or 3dLSS is the proper move for MVPA analysis. For reference, the ordering of my blocks is
I’m thinking that the volreg files should be fully preprocessed data, just without the smoothing step.
Yes, I think you can restart from the end of the motion-mitigation step ('volreg') and skip the smoothing step ('blur') in the pipeline.
3dDeconvolve and/or 3dLSS
What is your inter-stimulus interval? Use 3dDeconvolve+3dREMLfit if possible. I don’t recommend 3dLSS in general.
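For reference, a minimal sketch of that route (the file names, timing file, label, and basis function below are placeholders, not taken from your script): have 3dDeconvolve build the design matrix from the unblurred volreg runs and stop, then fit that same matrix with 3dREMLfit.

# Build the design matrix only; placeholder names throughout.
3dDeconvolve                                            \
    -input pb03.$subj.r*.volreg+orig.HEAD               \
    -polort A                                           \
    -num_stimts 1                                       \
    -stim_times 1 stimuli/faces_times.txt 'BLOCK(3,1)'  \
    -stim_label 1 faces                                 \
    -x1D X.xmat.1D -xjpeg X.jpg                         \
    -x1D_stop

# Fit the same matrix with generalized least squares; list every run
# explicitly inside the quoted -input string.
3dREMLfit -matrix X.xmat.1D                                              \
    -input "pb03.$subj.r01.volreg+orig.HEAD pb03.$subj.r02.volreg+orig.HEAD" \
    -fout -tout                                                          \
    -Rbuck stats_REML.$subj -Rvar stats_REMLvar.$subj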
-GOFORIT 12\
Be careful with this. I would not blindly add this option. Instead, whenever a warning about high collinearity occurs, examine the situation and determine the nature of the warning before deciding how to proceed.
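For example, one way to see which regressor pairs are actually triggering the warnings is to query the X matrix that 3dDeconvolve writes out (the matrix name below assumes a -x1D X.xmat.1D option; adjust it to whatever your script uses):

# Report regressor pairs whose correlations are high enough to warrant a warning.
1d_tool.py -show_cormat_warnings -infile X.xmat.1D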
My stimulus trial duration is 3 seconds, followed by a 1.5-second fixation cross. The fixation cross is not modeled as a regressor, so it is assumed to be baseline. So the actual stimuli are shown every 4.5 seconds. How should my stimulus duration factor into my decision to use or not use 3dLSS?
And thank you - I added the GOFORIT flag back when I was new to AFNI. Now that I understand things better, I can remove it.
I don’t really see why you want to avoid re-processing the data without the blur block. Since this is all in orig space, the script probably runs pretty quickly in the first place. Why not just run it properly, including all of the accurate QC info? How long does it take to run?
Of course you can just give the volreg data to the regression commands (the rest of the regression, including anaticor, should be unchanged), but I would still just run it properly for complete and accurate QC results.
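If you do rerun it, the change is essentially just dropping blur from the block list and giving the new run its own subject ID so the original results directory is left alone. A rough sketch, where everything other than -blocks and -subj_id (anat/EPI names, timing files, labels, basis) is a placeholder to be replaced with the options from your existing proc_py script:

# Sketch: same proc_py setup as before, with 'blur' removed from -blocks.
afni_proc.py                                             \
    -subj_id ${subj}_noblur                              \
    -copy_anat anat+orig                                 \
    -dsets epi_r*+orig.HEAD                              \
    -blocks tshift despike align volreg mask regress     \
    -regress_stim_times stimuli/stim_times.*.txt         \
    -regress_stim_labels faceA faceB                     \
    -regress_basis 'BLOCK(3,1)'                          \
    -regress_reml_exec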
My stimulus trial duration is 3 seconds, followed by a 1.5-second fixation cross. The fixation cross is not modeled as a regressor, so it is assumed to be baseline. So the actual stimuli are shown every 4.5 seconds. How should my stimulus duration factor into my decision to use or not use 3dLSS?
No jittering in terms of the fixation period?
Try 3dDeconvolve/3dREMLfit first and see if you can get it to pass. 3dLSS is just a band-aid, a last resort when you have exact collinearity. When you don’t have a multicollinearity problem, 3dLSS, being an approximation, performs worse than the typical approach.
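If it does come to that, the usual 3dLSS route is to have 3dDeconvolve build a per-trial (individual-modulation) matrix with -stim_times_IM and stop before fitting, then hand that matrix to 3dLSS. A sketch with placeholder file names and basis function:

# Build a per-trial (IM) design matrix; 3dLSS requires exactly one
# -stim_times_IM regressor set (one beta per trial).
3dDeconvolve                                              \
    -input pb03.$subj.r*.volreg+orig.HEAD                 \
    -polort A                                             \
    -num_stimts 1                                         \
    -stim_times_IM 1 stimuli/all_trials.txt 'BLOCK(3,1)'  \
    -stim_label 1 trial                                   \
    -x1D X.IM.xmat.1D                                     \
    -x1D_stop

# Estimate one beta per trial.  The input must be the same time series the
# matrix was built for; all_runs_noblur stands for the unblurred volreg runs
# catenated into one dataset (e.g. with 3dTcat).
3dLSS -matrix X.IM.xmat.1D            \
      -input all_runs_noblur.$subj+orig \
      -prefix LSS.$subj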
Unfortunately, no jittering. At the time of the design I was told that the different stimulus types would serve as an innate jitter, but in hindsight jittering would have been much better from an analysis perspective.
Do you have advice on the best way of viewing the time series to see whether it is even feasible to disentangle single-trial responses?
I’m going to reprocess the data without any GOFORIT option and re-check for collinearity warnings. While I’m fortunate in that all my stimuli are 4.5 seconds apart, there is the disadvantage of no jitter, and the fact that all my stimuli are face images. Though they differ across certain conditions, perhaps the overlap in perceptual appearance would lead to collinearity. I may check back in, based on the results, to see if you recommend trying 3dLSS.
One other question: if there is a multicollinearity problem, is it recommended to re-evaluate whether you should be modeling certain conditions as separate, despite the fact that you know a priori that they are? For instance, I have a bunch of face stimuli, where there are several different types of faces. But if I see a lot of collinearity between these separate regressors, should I combine the conditions to the extent that I can? Or should I at that point turn to 3dLSS?
I’ll be in touch, and thanks for all the help thus far!
the different stimulus types would serve as an innate jitter, but in hindsight jittering would have been much better from an analysis perspective.
That might work if you build your model at the condition/task level. However, to estimate effects at the trial level, each trial is a separate regressor, and it does not matter which condition/task a trial belongs to. So we would definitely need some amount of jittering.
Do you have advice on the best way of viewing the time series to see whether it is even feasible to disentangle single-trial responses?
I doubt simple visualization would give you any hint one way or the other.
If there is a multicollinearity problem, is it recommended to re-evaluate whether you should be modeling certain conditions as separate, despite the fact that you know a priori that they are?
I’m not so sure what you mean here. If you want to model everything at the trial level, you would not differentiate those trials at the condition level in the model (only in labeling).
Yes, well, at least I can go ahead with the condition-level RSA, though I was hoping to do single-trial as well. Perhaps condition level is all my experimental design allows me to do. I’ll have to continue testing to see!