I noticed something while testing an AFNI-based real-time fMRI engine that I built.
To troubleshoot, I collected all EPI files after the run and stitched them together into a full 4D run.
I used 3dvolreg to align the full run to the EPI containing the mask from which we would like the feedback. For troubleshooting purposes I used visual cortex to get a strong signal; the paradigm is looking at images. I then scaled the data.
The scaled data was the input to 3dDeconvolve, using the motion_demean.txt file I got from the 3dvolreg command (dfile.txt → motion_demean.txt).
This gives the motion-corrected data (the errts output). I did not use any drift regressors or the mean (polort = -1).
This gives a very nice average time course.
But when actually running the neurofeedback we of course cannot use the full run. What I do is 3dTcat out 13 TRs of data, of which we use TRs 0-3 as baseline (red) and TRs 9-12, during the peak of the BOLD response to the image, as signal (green).
Using this “zoomed-in” snippet of data, the time course is identical to that of the full data set (the corresponding 13 TRs) when using the .scale data. But motion correction (3dDeconvolve) removes the BOLD increase that is induced by seeing the image.
Is this expected? We know visual cortex does react to visual stimuli. The full run with motion correction still shows an increase in signal between fixation (red) and image peak (green), but not when running motion correction on just those 13 TRs (red until green).
I would think this is a “regression artifact” of having too few TRs.
At 13 time points, you have half as many regressors as time points. The linear regression will in general look very noisy at the beginning (at least one of the motion regressors will probably fit the gentle rise that you are looking for), and will start to stabilize once there has been sufficient time to provide useful variance in the more “ideal” time series you are looking for. In this case, I would expect you to start getting reasonable results around time point 50, once the signal of interest has had time to rise and fall.
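To see this concretely, here is a quick numpy sketch (the signal shape, noise level, and fake “motion” series are all invented for illustration): with only 13 time points, six nuisance regressors will usually soak up much of a gentle rise.

import numpy as np

rng = np.random.default_rng(0)
n_trs = 13

# Toy block: flat baseline for 9 TRs, then a gentle rise (the "BOLD" signal).
signal = np.concatenate([np.zeros(9), [0.5, 1.0, 1.0, 1.0]])
data = signal + 0.2 * rng.standard_normal(n_trs)

# Six fake "motion" regressors as random walks; with only 13 samples,
# series like these have plenty of chance overlap with the rise.
motion = rng.standard_normal((n_trs, 6)).cumsum(axis=0)

# Ordinary least squares fit of the motion terms, then take the residuals.
beta, *_ = np.linalg.lstsq(motion, data, rcond=None)
residual = data - motion @ beta

# On most seeds, the peak-vs-baseline contrast shrinks noticeably.
print("contrast before:", data[9:].mean() - data[:4].mean())
print("contrast after: ", residual[9:].mean() - residual[:4].mean())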
We have a couple of sample datasets that you can review, the older of which (in AFNI_data6) shows a similar phenomenon to yours.
For a different real-time demo package, consider running:
@Install_APMULTI_Demo2_realtime
But that package does not have any real-time regression examples, not yet. It also does not have actual multi-echo examples, though the data is there and it would take only a tiny effort to code an example (more time to set up regression testing (not linear regression), which is part of the purpose of the demo).
Thanks Rick! Didn’t even think about degrees of freedom. Makes so much sense.
The short story, then, is that we cannot do motion correction if we want to use the same number of TRs for each block.
I could extend the blocks to each contain about 14 extra TRs of data (including the TRs to the left of the first red block, back to 0), making the snippet 27 TRs long. But I shouldn’t bother testing that if you say 50.
Is there another way to correct for motion in this scenario?
We can always run the regression on the full dataset after the run is complete, to check whether the difference between just looking at the affective image (the first time they see it) and trying to down-regulate (the second time they see it) is still there. That would reassure reviewers that subjects did not learn motion instead of actual down-regulation.
To be sure, is this regression and difference being done just within the ROI mask? Are you taking an ROI average, regressing out motion, and then using the resulting TR differences for feedback?
If so, consider regressing the motion parameters from the entire ROI average time series, not just from 13 TRs. Only after this might you extract values for the 13 time points of interest, using the relevant ones for feedback.
That is not how I do it, but I can if you think that will help!
What we do now (a rough code sketch follows these steps):
Time progresses; each scan/TR/EPI is identified and stored.
When it’s time for the subject to get feedback (at the last TR to include in the calculation), the program gets that TR plus 12 TRs’ worth of history.
E.g. TR 14…27.
This BLOCK, as I call it, is volreg’ed to the EPI containing the mask that we made in a previous localizer run.
This block of 13 TRs is scaled, giving it a mean value of 100, and I keep only the voxels that are in the mask, to save time and space.
Then I run 3dDeconvolve using the motion information I got from volreg above.
This is done voxel by voxel, like in a normal pre-proc.
THEN I take the average within the mask, and get the problem I described.
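In code, those steps might be driven roughly like this (a sketch only; the file names block+orig, mask_epi+orig, mask+orig, and motion.1D are hypothetical placeholders, and the scaling uses the usual 3dcalc idiom):

import subprocess

def run(args):
    # Thin wrapper around the AFNI command line.
    subprocess.run(args, check=True)

# 1. Register the 13-TR block to the EPI containing the ROI mask,
#    writing the 6 motion parameters for the regression step.
run(["3dvolreg", "-base", "mask_epi+orig", "-1Dfile", "motion.1D",
     "-prefix", "block_vr", "block+orig"])

# 2. Scale to a mean of 100, restricted to the mask.
run(["3dTstat", "-mean", "-prefix", "block_mean", "block_vr+orig"])
run(["3dcalc", "-a", "block_vr+orig", "-b", "block_mean+orig",
     "-c", "mask+orig", "-expr", "c * min(200, a/b*100)",
     "-prefix", "block_scale"])

# 3. Voxel-wise regression of the motion parameters (no polort, no mean).
run(["3dDeconvolve", "-input", "block_scale+orig",
     "-polort", "-1", "-num_stimts", "0",
     "-ortvec", "motion.1D", "motion",
     "-errts", "block_errts"])

# 4. Average the residuals within the mask to get the block time course.
out = subprocess.run(["3dmaskave", "-mask", "mask+orig", "-quiet",
                      "block_errts+orig"],
                     check=True, capture_output=True, text=True)
block_tc = [float(v) for v in out.stdout.split()]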
What you suggest is that I average WITHIN the mask for each of the TRs, before the regression, and feed 3dDeconvolve a single (averaged) time course instead of the 4D data (.scale).
Did not know you could feed 3dDeconvolve non-image data =)
Do you think that would make a difference?
And how would I do it?
3dmaskave -mask mask+orig -quiet block1.scale+orig > avg_scale_mask_tc.1D
Change it such that each TR is in a new row?
"**** You can input a 1D time series file here,
but the time axis should run along the
ROW direction, not the COLUMN direction as
in the -input1D option. You can automatically
transpose a 1D file on input using the \'
operator at the end of the filename, as in
-input fred.1D\' "
3dDeconvolve -input avg_scale_mask_tc.1D\' \
    -polort -1 -num_stimts 0 \
    -ortvec motion.1D motion \
    -errts avg_scale_mask_tc_motion_corr.1D
Then take rows 0-3 as my baseline
and rows 9-12 as my image peak
Signal = (peak - fixation) / fixation
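Reading that errts output back and forming the ratio might look like this in Python (file name from the command above; ravel() is there so either a row- or column-oriented 1D file works):

import numpy as np

# Residual ROI time course from 3dDeconvolve above, one value per TR.
tc = np.loadtxt("avg_scale_mask_tc_motion_corr.1D").ravel()

fixation = tc[0:4].mean()   # rows 0-3: baseline
peak = tc[9:13].mean()      # rows 9-12: image peak
signal = (peak - fixation) / fixation
print(signal)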
How can I use the entire time series to do the regression if this is a live neurofeedback run?
Firstly, the first time we compute feedback is about 27 TRs into the run. The following TRs have not been generated yet, so I don’t see how I can use the full run.
Secondly, suppose we did this “backwards”:
The first time we calculate something, use TR 0 up until the last TR to include (e.g., 0-27),
and so forth.
For the later feedbacks, we would get TRs 0-428.
This would take more time: fetching 400+ NIfTIs, concatenating, scaling, and running a regression would take more than the 3 s we allow ourselves as the delay between a scanned event and the feedback.
Long story short:
We can skip motion correction.
And if we find improvements (reduction of activity in the chosen region over time) we can do the motion-corrected version offline afterwards and see if it still holds up?
I was not expecting you to run 3dDeconvolve on a 4D dataset, ever, but to do something like:
extract an ROI average time series of choice
(the realtime plugin can send this, or multiple averages, to an IP address)
regress out the motion parameters (and polort) on the full time series
(run 3dDeconvolve on the 1D average)
perform your comparison computation on the windowed time series
Having 429 time points might be manageable.
Also, note that projecting out a few time series (motion, polort, sinusoids) from a 1D file might be faster to do directly in Python (scipy.linalg). While a compiled C program is generally faster, you have to deal with writing to and reading from the file system, as well as repeatedly starting up the external program (3dDeconvolve). This overhead might make it more reasonable to do via scipy.linalg, since it is just a simple projection of unwanted terms from one time series.
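A minimal sketch of that kind of projection with scipy (the nuisance set here, six motion columns plus a constant and linear trend, is just an example, and the data are stand-ins):

import numpy as np
from scipy import linalg

def project_out(y, nuisance):
    """Return the OLS residual of y after removing the column space of nuisance."""
    beta, *_ = linalg.lstsq(nuisance, y)
    return y - nuisance @ beta

# Example with stand-in data: a 429-point ROI average and 6 motion parameters.
rng = np.random.default_rng(1)
n = 429
y = rng.standard_normal(n)              # stand-in for the real ROI average
motion = rng.standard_normal((n, 6))    # stand-in for the real motion columns
trend = np.column_stack([np.ones(n), np.linspace(-1, 1, n)])  # ~ polort 1
clean = project_out(y, np.hstack([trend, motion]))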
I’m not sure we will get any further. I’m sorry if things are unclear.
But I can’t use the full time course.
Once we reach TR 27, live in the scanner, I want to instantly calculate the ROI average of TRs 23-26 (indexing starts at 0) and provide feedback to the subject. TRs 23-26 are when the subject is watching an image.
The best I can do is to use TRs 0…26. Then, as you suggest, average within the ROI and run 3dDeconvolve on this .1D file with 27 rows.
Then read avg(TRs 23-26) as my image signal
and avg(TRs 11-14) as my baseline signal (fixation cross).
Calculate the signal: (image - baseline) / baseline.
Feed this value back to the fMRI computer and show the participant their activity while watching the image. Feedback will come at about TR 28/29. It takes about 2 seconds to process.
I tested this and it is not enough TRs. The increase in visual cortex is not completely gone due to the motion regressors; it is better than before (when using only 13 TRs). But as you said, I would probably need more TRs (about 50)… And since the first feedback is at 28 seconds (28 TRs), I don’t have more data to include in the regression. And for later feedbacks I don’t want to base the regression on different amounts of data, which could bias it.
Hope that makes it clear.
As I see it, I only have two options. Include more time before the first feedback so that my motion regressors do not screw up my signal.
Or do motion correction offline afterwards and see if the real-time feedback effects (learning how to down-regulate) hold up.
By “full time course” I mean as much as you have, every time you run the computation, and it will grow over time. That could be done every single TR, as data comes in.
The plugin can send the new ROI average every time point.
At each TR (a sketch follows this list):
append the new ROI average to the previous “full” time series
project motion/baseline from that full (current) time series
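In code, that per-TR recipe might look something like the sketch below. The window indices follow the numbers from earlier in this thread (first feedback at TR 27, image at TRs 23-26, fixation at TRs 11-14), and projecting only demeaned motion mirrors the polort = -1 choice above, so the mean-100 scale survives for the ratio:

import numpy as np

roi_history = []      # ROI averages received so far (scaled data, mean ~100)
motion_history = []   # one row of 6 demeaned motion parameters per TR

def on_new_tr(roi_avg, motion_params):
    """Append the newest TR, re-project motion from the whole series so far,
    and return the feedback value once enough TRs have arrived."""
    roi_history.append(roi_avg)
    motion_history.append(motion_params)

    n = len(roi_history)
    if n < 27:
        return None   # not enough data for the first feedback yet

    y = np.asarray(roi_history)
    X = np.asarray(motion_history)                  # motion only (polort = -1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    baseline = resid[n-16:n-12].mean()   # e.g. TRs 11-14 of this window: fixation
    peak = resid[n-4:n].mean()           # e.g. TRs 23-26 of this window: image
    return (peak - baseline) / baseline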
Thanks, an improving moco over time, then.
Haven’t used AFNI’s plugin. Does it use a listener on a specific folder to which the scanner sends the data?
I built my own engine in Python with the watchdog library.
It triggers a chain of events every time a file is detected, and at specific TRs it runs an AFNI script that provides me a time course / results for whichever TRs I want.
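For reference, the watchdog trigger described here can be as small as the following sketch (the watched path and the processing function are hypothetical placeholders):

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def process_new_volume(path):
    print("new volume:", path)   # stand-in for the real processing chain

class NewVolumeHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Fires once per file the scanner writes into the export directory.
        if not event.is_directory:
            process_new_volume(event.src_path)

observer = Observer()
observer.schedule(NewVolumeHandler(), "/mnt/scanner_export", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()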
AFNI is on a server, accessed via ThinLinc from the fMRI computer. Stimulus software is also on this server. The analysis server has the scanner export directory mounted on it.
Any program can monitor a file system or similar and send data to AFNI. Dimon is such a program. It scans a directory, sorts images (and looks for errors), finds volumes, and then sends them to AFNI’s real-time plugin. The plugin will do motion correction and averaging and such, and can then send motion parameters and ROI averages to an external program, such as realtime_receiver.py.
Note that Dimon does not use a watchdog signal, though that has been on the todo list for a long time.
At any rate, the receiving/feedback program can get new ROI averages, regress motion, and decide on the feedback to the user. But working with ROI averages this way means you do not need to do any 4-D regression.
rick