Hi,
I have been using AFNI to construct PPI regressors, following the instructions at https://afni.nimh.nih.gov/CD-CorrAna . In April, I had posted a question to this list about the fact that there were large spikes in the original time series data that seemed to be made larger by the deconvolution algorithm used by 3dTfitter. (See the attached image; the original time series is on the bottom, and the deconvolved time series is on top.) Gang replied to my question then, saying (as I understood it) that this didn't seem like a problem, since the spikes were in the original data.
While it is true that the spikes are in the original time series, it seems to me that they are probably due to motion, and they are also much larger than other variation in the signal. I am censoring motion outlier TRs, which seems to partially line up with these spikes, but not fully. Thus, I am concerned that these TRs with large motion spikes are over-weighted in the final regression analysis, and are thus washing out some of the actual signal in the PPI analysis.
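One way to make the "partially line up" observation concrete would be to flag spike TRs with a robust z-score and compare them against the censor vector. This is only a sketch with synthetic data; the MAD threshold and the data are invented for illustration (AFNI censor files are 1D vectors with 0 at censored TRs and 1 at kept TRs):

```python
import numpy as np

def spike_trs(ts, thresh=6.0):
    """Flag TRs whose robust z-score (median/MAD) exceeds thresh."""
    med = np.median(ts)
    mad = np.median(np.abs(ts - med)) or 1.0
    z = 0.6745 * (ts - med) / mad
    return np.where(np.abs(z) > thresh)[0]

# synthetic example: flat-ish signal with two large motion-like spikes
rng = np.random.default_rng(0)
ts = rng.normal(0.0, 1.0, 200)
ts[[50, 120]] += 25.0          # injected spikes
censor = np.ones(200)          # 1 = keep, 0 = censor (AFNI convention)
censor[50] = 0                 # only one spike is censored

spikes = spike_trs(ts)
uncensored = [t for t in spikes if censor[t] == 1]
print("spike TRs:", spikes.tolist())
print("spikes NOT covered by censoring:", uncensored)
```

Running this on the real seed time series and censor file would list exactly which spike TRs escape the censoring.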
Gang had also asked how my analysis looked. I can say now that in my most recent analysis, there was an effect that looked kind of like what I had expected to see, but that did not reach significance. However, I have no way of knowing if the effect was weakened by these spikes, if it’s just weak because PPI is underpowered, or if there really is no effect there.
From my initial exploration, it seemed like the parameters used in 3dTfitter can impact the relative size of these spikes.
I have been using the following command for the deconvolution:
3dTfitter -RHS [input time series] -FALTUNG [HRF file] [output prefix] 012 -2 -l2lasso -6
My question is whether there is some tweak I could make to the parameters of 3dTfitter that would at least make these spikes no larger (on a relative basis) than they were in the original data (or, even better, if their weighting could be reduced relative to other variance in the time series).
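For intuition about why the penalty settings change the relative size of the spikes, here is a minimal ridge (L2-penalized) deconvolution sketch. This is NOT 3dTfitter's algorithm (it has no LASSO term and uses a plain amplitude penalty rather than AFNI's 012 derivative-penalty codes); it only illustrates the general trade-off: a weak penalty nearly inverts the HRF and amplifies a spike, a strong penalty damps the whole estimate.

```python
import numpy as np

def hrf(n=15, p=4.0):
    """A toy gamma-shaped HRF, normalized to unit sum."""
    t = np.arange(n, dtype=float)
    h = t**p * np.exp(-t)
    return h / h.sum()

def deconvolve(y, h, lam):
    """Solve min ||H s - y||^2 + lam ||s||^2 for s (ridge deconvolution)."""
    n = len(y)
    H = np.zeros((n, n))
    for i, hv in enumerate(h):          # lower-triangular Toeplitz matrix
        H += hv * np.eye(n, k=-i)
    A = H.T @ H + lam * np.eye(n)
    return np.linalg.solve(A, H.T @ y)

n = 120
rng = np.random.default_rng(1)
s_true = (rng.random(n) < 0.1).astype(float)      # sparse "neural" events
y = np.convolve(s_true, hrf())[:n] + rng.normal(0, 0.02, n)
y[60] += 3.0                                      # motion-like spike

results = {}
for lam in (1e-4, 1e-1, 1.0):
    s_hat = deconvolve(y, hrf(), lam)
    results[lam] = np.linalg.norm(s_hat)
    print(f"lam={lam:g}: |estimate| near spike = {np.abs(s_hat[50:65]).max():.2f}")
```

The overall size of the estimate shrinks monotonically as the penalty grows, which is the same lever the pen/fac arguments pull in 3dTfitter, in a more sophisticated form.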
I wish that I could have responded earlier: I’ve been traveling during the past 10 days.
The challenging part with deconvolution through 3dTfitter is that, as you pointed out, we don't know the ground truth about the neuronal responses, and unfortunately it's not easy to make back-and-forth adjustments based on the result. At least, I'm not aware of such a regularization approach.
I have been here, but still did not get around to replying…
In addition to Gang’s comment, it seems that your concern
is for the “neural timing” data, just after 3dTfitter. But that is
not where you should necessarily be looking. The real
concern is whether motion spikes bleed into the regressors
of interest, and specifically outside censored time points.
So look at applied regressors around the times of those
spikes. If they are not there, presumably due to censoring,
then there does not even seem to be a concern.
The spikes could propagate due to temporal partitioning
of the spiky intervals. Hopefully that has not happened.
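The check suggested above, looking at the applied regressor rather than the deconvolved "neural" series, could be sketched as follows. Everything here is synthetic and the block-design psych regressor is invented for illustration: a spike in the neural estimate is reconvolved into the PPI regressor, and we ask which uncensored TRs it still drives.

```python
import numpy as np

def hrf(n=15, p=4.0):
    """A toy gamma-shaped HRF, normalized to unit sum."""
    t = np.arange(n, dtype=float)
    h = t**p * np.exp(-t)
    return h / h.sum()

n = 100
neural = np.zeros(n)
neural[40] = 10.0                       # spike left over from deconvolution
psych = np.zeros(n)
psych[30:60] = 1.0                      # task-on block (invented design)

# interaction at the "neural" level, then reconvolve to BOLD scale
ppi = np.convolve(neural * psych, hrf())[:n]

censor = np.ones(n)
censor[38:43] = 0                       # TRs censored around the motion spike

# where does the spike still reach the regression, despite censoring?
leak = [t for t in range(n) if censor[t] == 1 and abs(ppi[t]) > 0.5]
print("uncensored TRs where the spike still drives the PPI regressor:", leak)
```

Because convolution smears the spike forward over the HRF's duration, censoring only the TRs around the spike itself can leave several downstream TRs still contaminated; that is the bleed-through worth looking for.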
Thanks for the replies. I was trying to find a good example of what I’m concerned about, but after looking over a few runs of data in my dataset, I couldn’t find any particularly concerning examples. So maybe this issue is less of a problem than I had thought. (At least, it seems that after deconvolution and reconvolution, these spikes don’t look notably larger than they did in the raw input data, in most cases.)
There may still be some spikes that don't line up fully with motion censoring. I haven't spotted any in particular, but if I do see any in the future, I can keep an eye out for whether those look like places where the spikes shifted, vs. the volumes containing spikes never having been censored at all.
I am still curious to learn a bit more about how the parameters of 3dTfitter affect how it deals with these spike outliers, though. I’m not sure I’d understand the mathematical details (as I didn’t fully understand the details in the help file), but I did notice when I fiddled with settings previously that those parameters can be important. For instance, the output seemed to be less noisy when I used the settings for PPI shown in the 3dTfitter help file, compared to the settings on the AFNI PPI page. I thought there might be some way to optimize them further, but I don’t understand what the parameters are doing well enough to figure out on my own what to try optimizing. Should I assume that the parameters I’m currently using are optimal, or is there something else you’d suggest trying out on my data?
What parameters are optimal is hard to say. The 012 -2 -l2lasso -6
parameters are used to make the PPI filter more-or-less invertible.
That removes the decision making of what is BOLD from that step,
plus it removes concerns of smearing/distorting the signal of
interest and making bad interactions between it and the original
regression model.
But that is to say it makes no attempt at removing spikes.
rick