I am trying to run a PPI analysis using AFNI, following the general instructions at https://afni.nimh.nih.gov/CD-CorrAna. Because I ran my univariate analysis in FSL, all processing up to the extraction of the time series was done there. Using AFNI, I am upsampling the time series to 0.1 s and running 3dTfitter to deconvolve it, as the instructions suggest. When I check the results of that deconvolution in 1dplot, though, they look weird.
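For reference, the upsampling step I'm doing looks roughly like this (a numpy sketch, not my actual script; the TR of 2.0 s and the synthetic sine series are placeholders for my real data):

```python
import numpy as np

TR = 2.0  # placeholder repetition time in seconds
dt = 0.1  # target sampling interval for the upsampled series
ts = np.sin(np.linspace(0, 4 * np.pi, 100))  # stand-in for the extracted seed series

# Build the original and fine time grids, then linearly interpolate
t_orig = np.arange(len(ts)) * TR
t_fine = np.arange(0.0, t_orig[-1] + dt / 2, dt)
ts_up = np.interp(t_fine, t_orig, ts)

print(len(ts), len(ts_up))
```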
Some of the weirdness seems to depend on which preprocessing steps I do. I’ve tried three different ways of generating the time series:
A1: Running a nuisance analysis with all preprocessing done, regressing out the motion parameters, and using the residuals for further analysis. I added 10000 to all voxels in those residuals and extracted a seed-region time series from the result (following some old instructions I had for an FSL/SPM hybrid PPI method).
A2: Same as A1 (nuisance residuals), except that I didn’t add 10000.
B: No nuisance analysis; instead, I extracted the seed-region time series from filtered_func_data from the univariate analysis, which has all preprocessing applied but does not have the motion parameters regressed out. This series also has a mean of about 10000, which I think is what A1 was trying to match.
When I run 3dTfitter on A1 and B and plot the results in 1dplot, I get a lot of up-and-down “activity” at the beginning, much less “activity” for most of the run, and then a notable drop-off at the end. It seems like this has something to do with the mean being 10000 (maybe the deconvolution assumes the signal is zero before and after the run?), but I’m not sure why that would be. Is there any way to account for this, or do I just need to make sure my data are centered around zero rather than 10000?
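In case it matters, the demeaning I had in mind is just a straight mean subtraction before handing the series to 3dTfitter. A minimal sketch (the file name seed_up.1D is a placeholder, and here I fake a series with a ~10000 offset instead of loading one):

```python
import numpy as np

# Fake a seed series sitting on a ~10000 baseline; in practice this would
# come from something like np.loadtxt("seed_up.1D")
rng = np.random.default_rng(0)
ts = rng.standard_normal(4000) + 10000.0

# Center on zero so the deconvolution isn't fighting the offset at the edges
ts_demeaned = ts - ts.mean()
# np.savetxt("seed_up_demeaned.1D", ts_demeaned)

print(ts.mean(), ts_demeaned.mean())
```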
The deconvolution on A2 (where the residuals are centered around zero) does look more like what I would expect. However, there are some small spikes in the original data that get much bigger in the “deconvolved” output, big enough that I’m guessing they would overwhelm other effects when I run the actual analysis, and they probably aren’t real effects. I did notice that when I use -2 for the final 3dTfitter parameter and also add -l2lasso -6 (following recommendations on this list instead of what was on the Web site), the spikes get smaller, though they’re still larger than in the original dataset. Further fiddling with these settings didn’t seem to change much.
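To convince myself that a penalty should damp the spikes, I put together a toy version of the problem: deconvolution as a least-squares solve against a convolution matrix, comparing a near-unpenalized solution with an L2 (ridge) one. This is not 3dTfitter itself, and the exponential kernel is a made-up stand-in for the HRF, not AFNI's gamma shape, but the qualitative behavior (a single spike in the data blowing up under plain deconvolution, and a penalty shrinking it) is what I think I'm seeing:

```python
import numpy as np

# Build a lower-triangular convolution matrix K so that y = K @ s,
# using a crude exponential kernel as a stand-in for the HRF
n = 200
t = np.arange(12)
kern = np.exp(-t / 3.0)
kern /= kern.sum()
K = np.zeros((n, n))
for i, h in enumerate(kern):
    K += h * np.eye(n, k=-i)

# Low-amplitude noise plus one artifactual spike in the "data"
rng = np.random.default_rng(1)
y = 0.01 * rng.standard_normal(n)
y[100] += 0.5

# Near-unpenalized deconvolution vs. a ridge (L2) penalty, loosely
# analogous in spirit to 3dTfitter's penalty options
s_plain = np.linalg.solve(K.T @ K + 1e-8 * np.eye(n), K.T @ y)
s_ridge = np.linalg.solve(K.T @ K + 0.1 * np.eye(n), K.T @ y)

print(np.abs(y).max(), np.abs(s_plain).max(), np.abs(s_ridge).max())
```

In this toy setup, the spike comes out several times larger in the unpenalized solution than in the data, and the ridge penalty pulls it back down without removing it, which matches what the -l2lasso setting seemed to do for me.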
First, do you have any ideas as to what the spikes are? Could they be motion artifacts, which could be resolved by censoring TRs at the next stage of the analysis? And are there any further tweaks to the 3dTfitter parameters that you would recommend to smooth out these spikes as much as possible?