Hi AFNI experts,
We have 32 slices acquired in descending order with a TR of 2 s in our fMRI experiment, so each voxel in the brain was acquired at one of these time points:
2.0000 1.9355 1.8710 1.8065 1.7419 ... 0.1935 0.1290 0.0645 0
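For reference, those offsets follow a simple linear pattern. Here is a small sketch that reconstructs them, assuming (as the listed numbers suggest) 32 slices spaced evenly by TR/(nslices - 1):

```python
# Reconstruct the per-slice acquisition offsets quoted above
# (assumption: 32 slices, TR = 2 s, descending order, evenly spaced
#  with step TR/(nslices - 1), which is what the listed values imply).
TR = 2.0
nslices = 32

# slice k in acquisition order fires at (nslices - 1 - k) * TR / (nslices - 1)
offsets = [(nslices - 1 - k) * TR / (nslices - 1) for k in range(nslices)]

print(" ".join(f"{t:.4f}" for t in offsets[:4]))   # first few listed offsets
print(" ".join(f"{t:.4f}" for t in offsets[-4:]))  # last few listed offsets
```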
An ROI was drawn on the anatomical image, which had been deobliqued with '3dWarp -oblique2card' and normalized to the standard space.
For further analysis, I need to know the acquisition time (i.e., at which time point between 0 and 2 s each voxel was acquired) for all the voxels in this ROI.
Do you know how to do this?
You could do this with your original EPI datasets:
3dcalc -a epi_r1+orig -expr t -prefix tt_epi_time
This replaces every voxel value with the time it was acquired, because 't' is a reserved variable for exactly that purpose in 3dcalc. Then you could apply the same warp to cardinal orientation with 3dWarp to that time dataset. The remaining question is interpolation: it is probably most appropriate to use Nearest Neighbor interpolation there rather than computing an interpolated time value.
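The interpolation concern can be seen with a toy 1-D example (pure illustration, not AFNI code): linear resampling produces blended times that no slice was actually acquired at, while nearest-neighbor resampling keeps every value inside the original set of slice times.

```python
# Toy 1-D illustration (not AFNI code): resample a "time map" at an
# off-grid position and compare the two interpolation schemes.
slice_times = [2.0000, 1.9355, 1.8710, 1.8065]  # time stored in each voxel

x = 1.5  # sample halfway between voxel 1 and voxel 2

# Linear interpolation blends the two neighboring times ...
i = int(x)
frac = x - i
linear = (1 - frac) * slice_times[i] + frac * slice_times[i + 1]

# ... while nearest neighbor just copies one of them.
nn = slice_times[round(x)]

print(linear)             # a blended time no slice was actually acquired at
print(nn in slice_times)  # nearest neighbor stays in the original set
```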
I had to do some preprocessing to the EPI data:
3dWarp -oblique2card (linear interpolation)
3dresample (resample to higher resolution)
3dvolreg (align to the base, align to anat, warp to tlrc space)
Getting these preprocessed EPI files took a long time.
I need the acquisition times for the voxels of the preprocessed EPI files, and you say I can get a time dataset from the original data with 3dcalc. So I think I have to apply exactly the same preprocessing to the time dataset; otherwise the voxel locations in the preprocessed EPI files and in the time dataset won't match 100%.
This raises three new problems:
Won't the preprocessing needed to match locations change the time values in this time dataset too much?
I'm afraid the time values in that dataset will become meaningless after so much processing.
How do I apply exactly the same preprocessing?
The transformation matrices from 3dvolreg were saved, so I can repeat that step exactly on the time dataset. But how do I repeat exactly the same 3dWarp, 3dresample, or 3dDespike steps?
You say NN interpolation should be used on the time dataset during 3dWarp, but linear interpolation was used on the processed EPI files.
Won't the different interpolation methods break the correspondence between voxels in the processed EPI files and in the time dataset?
How do I add timing information to the .BRIK files?
The slice-timing information seems to be missing from my original EPI dataset, because '3dcalc -expr t' output an all-zero dataset.
I have obtained the slice timings by reading the raw .IMA files. How can I add this slice-timing information to the .BRIK files?
I am not sure whether this will work properly:
3drefit -TR 2 -Tslices 2.0000 1.9355 1.8710 1.8065 1.7419 ... 0.1935 0.1290 0.0645 0
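3drefit does accept -TR and -Tslices (one offset per slice), so rather than typing all 32 values by hand, a small script can spell out the full command line. This is a sketch that assumes the linear descending pattern implied by the times listed above:

```python
# Build the full 3drefit command with all 32 slice offsets written out
# (assumption: the offsets follow the linear descending pattern quoted above).
TR = 2.0
nslices = 32
offsets = [(nslices - 1 - k) * TR / (nslices - 1) for k in range(nslices)]

cmd = "3drefit -TR 2 -Tslices " + " ".join(f"{t:.4f}" for t in offsets)
print(cmd)
```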
I would greatly appreciate your replies!
You are right - all the spatial transformations will change the acquisition time assigned to each voxel, and it is fairly tricky to follow all of those transformations together. Concatenating all the transformations and applying them in a single step, as we do with align_epi_anat.py and afni_proc.py, would be the most correct way. Interpolation is still an important point: if you resample to higher resolution, voxels that straddle two acquisition times are a problem, because averaging is probably not the best thing to do, but nearest neighbor isn't exactly right either. Voxels larger than the original definitely blend times too, so there is no really good solution for that. In the end, I think few if any users go to the trouble of tracking acquisition times through the whole processing pipeline. When slice timing is handled at all, slice timing correction is done at the beginning of most preprocessing pipelines.
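The point that larger voxels blend times is easy to see in one dimension (a toy illustration, not AFNI code): averaging neighboring voxels of a time map, as resampling to a coarser grid effectively does, yields times that fall between two genuine acquisition times and match none of them.

```python
# Toy illustration: downsample a 1-D time map by averaging pairs of voxels,
# mimicking what resampling to coarser resolution effectively does.
slice_times = [(31 - k) * 2.0 / 31 for k in range(32)]  # the 32 offsets above

coarse = [(slice_times[i] + slice_times[i + 1]) / 2
          for i in range(0, len(slice_times), 2)]

# Each coarse "time" lies between two real acquisition times, so it is not
# itself a time at which any slice was actually acquired.
print(coarse[0])
print(any(abs(t - c) < 1e-6 for c in coarse for t in slice_times))
```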