I’m using 3dTproject and I have some questions after looking at the output from this command:
3dTproject -input bold_orig.nii.gz -polort 1 -prefix bold_project.nii.gz
The time-series plots look identical between bold_orig and bold_project, but the two images look very different. bold_orig looks like a standard BOLD image, whereas bold_project looks more like a homogeneous image of gray voxels. I cannot really identify brain structures in bold_project anymore, but maybe that is how the output is supposed to look? I know 3dTproject appears to center the data around zero, so maybe that explains it. Are there any example output images from 3dTproject I can compare mine with?
What do you recommend for the mask? At the moment I’m using a whole-brain mask, but I’m wondering if it would be more sensible to include only gray matter.
I’m not sure if I should use the blur option, especially if a whole-brain mask is given, because white matter voxels would be blurred with gray matter voxels, right? What about the norm option? Is that recommended?
Thank you for your reply!
Yes, it’s a single-subject analysis, and I’m using the output of 3dTproject to compute RSFC (outside afni_proc at the moment). I don’t smooth my data during preprocessing, so I assume I shouldn’t use the blur flag in 3dTproject? Is normalization a good idea, though?
I don’t recall whether scaling affects RSFC, it might be good to post a specific question about that, which Paul Taylor will likely respond to.
Right, there is no reason to have 3dTproject blur in any case. If you wanted to blur, that would go in the preprocessing.
Anyway, to be clear, it is okay that the 3dTproject output does not visually look like a brain anymore.
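To see why the contrast disappears, here is a toy Python sketch (not AFNI code; the voxel values and function name are made up): removing each voxel’s mean (the constant term of -polort) leaves every voxel fluctuating around zero, so the baseline differences that give the image its anatomical contrast are gone.

```python
# Toy sketch (not AFNI code): why demeaned residuals lose image contrast.
def detrend_mean(ts):
    """Subtract the mean of a time series (the constant term of -polort)."""
    m = sum(ts) / len(ts)
    return [v - m for v in ts]

# Two made-up "voxels" with very different baselines:
gm_voxel = [1000.0, 1002.0, 998.0, 1001.0]   # bright baseline
wm_voxel = [ 600.0,  602.0,  598.0,  601.0]  # darker baseline

# After demeaning, both fluctuate around zero; the 400-unit baseline
# difference that provided the anatomical contrast is gone:
print(detrend_mean(gm_voxel))  # [-0.25, 1.75, -2.25, 0.75]
print(detrend_mean(wm_voxel))  # [-0.25, 1.75, -2.25, 0.75]
```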
Paul Taylor has seen this reply, and notes that RSFC parameters like ALFF do contain units, and hence would be affected by scaling/normalization. The fractional ones (e.g., fALFF) are dimensionless ratios, and so should not be affected by uniform scaling.
If you use the “scale” block during processing, then your outputs would have units of BOLD % signal change, and that would seem to be useful and to not require further normalization considerations.
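To make that scaling concrete, here is a toy Python sketch (hypothetical function name; the actual scale block works through 3dTstat/3dcalc and, I believe, also clips extreme values): divide each voxel’s time series by its own mean and multiply by 100.

```python
# Hypothetical sketch of per-voxel scaling to BOLD % signal change
# (afni_proc's "scale" block does this via 3dTstat/3dcalc; this only
# shows the core idea).
def scale_to_pct(ts):
    m = sum(ts) / len(ts)
    return [100.0 * v / m for v in ts]

print(scale_to_pct([490.0, 500.0, 510.0, 500.0]))
# -> [98.0, 100.0, 102.0, 100.0]: values are now % of the voxel's mean
```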
Is it somehow possible/reasonable to bring back the contrast? I’m using the default polort; should I simply add the mean of each voxel’s time series back after running 3dTproject?
You could do that, but would have to be very careful not to add it to censored time points.
But really, why do it at all?
If the point is to have a better idea of where you are in the brain, that is more appropriately answered with the final_vr_base volume, viewing its registration with the final_anat, and perhaps with the template (to be sure they match). If everything is well aligned, and hopefully it is, then locations in space are better driven using an anatomical underlay.
You can open many connected controllers with the afni GUI (using the black “New” button in the lower left of the main window), and with these locked together, coordinates can come from an Underlay template or final_anat volume, rather than from residual regression data that has no contrast.
Or is it something else? What is driving your interest in having contrast in the residual time series?
One reason might be running a statistical analysis with software that expects contrast in the images; when I tried it with the output from 3dTproject, it failed. But I understand there is no way to switch off the demeaning, right?
If you have to have a contrast-based non-zero mean in the time series, you could always compute the mean from all_runs and add it to the errts time series using 3dcalc, e.g.,
3dcalc -a errts+tlrc -b mean+tlrc -expr a+b -prefix errts_plus_mean
The -a dataset should be the time series, while it is okay for -b to be a single volume. The mean would be added to each time point.
Thank you Rick!
The mean+tlrc you mention is the average of the time series that I pass to 3dTproject, and errts+tlrc is the output from 3dTproject, is that right?
So if bold_orig_ts.nii.gz are the raw time series, then:
3dTproject -input bold_orig_ts.nii.gz -polort 1 -prefix bold_project.nii.gz
3dTstat -mean -prefix mean_bold_orig.nii.gz bold_orig_ts.nii.gz
3dcalc -a bold_project.nii.gz -b mean_bold_orig.nii.gz -expr a+b -prefix bold_project_mean.nii.gz
Is this correct?
Yes, something like that would be a good way to add in a temporally constant spatial contrast.
I expect your 3dTproject is more complicated than what is shown, but otherwise it looks right.
Do you mind if I ask a few more basic questions?
- How many regressors do you find reasonable to include? I notice that I’m running 3dTproject with a few dozen regressors, and I’m concerned this might affect the quality of the projected data. Will more regressors have a negative impact on the data’s quality, or is there no relationship at all?
- Is 3dTproject doing a regression analysis and keeping the residuals?
- Are the data standardized or simply demeaned? You mentioned that demeaning occurs at the voxel level; is it particularly advantageous to demean the data?
- How are the passband regressors generated? Are these cosine regressors? And can I generate them outside 3dTproject with an AFNI program (this is just for understanding; I’ll use those automatically created by 3dTproject)?
Thank you very much!
- Indeed, depending on what the further analysis entails, there should be a limit on the degrees of freedom lost/projected out. There is no hard number for that though, and it also depends on the quality of subjects. Sometimes a patient group is difficult to acquire, or might generally be expected to come with more motion.
Just think through the minimum fraction of degrees of freedom you would want remaining, and exclude subjects failing that.
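As a rough way to do that bookkeeping, here is a hypothetical Python helper (a sketch only; the exact accounting depends on your options, e.g. each removed bandpass frequency costs roughly two regressors, and polort terms apply per run):

```python
# Hypothetical helper for degrees-of-freedom bookkeeping in a projection
# like 3dTproject's (a sketch; the exact accounting depends on the
# options actually used, e.g. bandpassing and per-run polort terms).
def dof_remaining(n_timepoints, n_censored, n_regressors, n_runs=1, polort=1):
    used = n_censored + n_regressors + n_runs * (polort + 1)
    return (n_timepoints - used) / n_timepoints

# e.g. 300 TRs, 20 censored, 30 nuisance regressors, one run, polort 1:
print(dof_remaining(300, 20, 30))  # ~0.83 of the DoF remain
```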
Yes, 3dTproject is keeping the residuals of a least squares fit to the data. It would be the same as with 3dDeconvolve, but it is faster since the partial statistics are not bothered with.
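Conceptually, that “fit and keep the residuals” step looks like this toy Python sketch for a single nuisance regressor plus a mean term (hypothetical helper, made-up numbers; 3dTproject of course solves the whole regression matrix at once):

```python
# Toy sketch (made-up numbers) of a least squares fit where only the
# residuals are kept: one nuisance regressor plus a mean term.
def residualize(y, x):
    n = len(y)
    yc = [v - sum(y) / n for v in y]               # demean the data
    xc = [v - sum(x) / n for v in x]               # demean the regressor
    beta = sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)
    return [a - beta * b for a, b in zip(yc, xc)]  # keep the residuals

res = residualize([10.0, 12.0, 11.0, 13.0], [1.0, 2.0, 1.0, 2.0])
print(res)  # [-0.5, -0.5, 0.5, 0.5] -- zero mean, orthogonal to x
```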
Scaling the data would be up to you. Paul suggested that if you are computing ALFF for example, then scaling is important. Scaling would not affect correlations.
Yes, though it is the omitted frequency bands that are projected out. Note that if you do bandpassing with afni_proc.py, it will use 1dBport to output the sinusoids to regress out, and apply them in 3dDeconvolve or 3dTproject. If you do an example like that, the steps should be made clear.
For example, see the afni_proc.py command in AFNI_data6/FT_analysis/s06.ap.rest and the corresponding s16.proc.FT.rest processing script (https://afni.nimh.nih.gov/pub/dist/edu/data/CD.expanded/AFNI_data6/FT_analysis/s16.proc.FT.rest).
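For intuition only, here is a Python sketch of the kind of stopband regressors such a step builds: sine/cosine pairs at every Fourier frequency outside the passband (hypothetical function; 1dBport’s exact band-edge handling and output format may differ):

```python
import math

# Hypothetical sketch of stopband regressors, in the spirit of
# "1dBport -band fbot ftop -invert": sine/cosine pairs at each DFT
# frequency k/(N*TR) that falls OUTSIDE the passband; projecting
# these out removes those frequencies from the data.
def stopband_regressors(n, tr, fbot, ftop):
    regs = []
    for k in range(1, n // 2 + 1):
        f = k / (n * tr)
        if f < fbot or f > ftop:             # outside the band to keep
            regs.append([math.cos(2 * math.pi * k * i / n) for i in range(n)])
            if k < n // 2 or n % 2:          # the Nyquist term has no sine
                regs.append([math.sin(2 * math.pi * k * i / n) for i in range(n)])
    return regs

# 200 TRs of 2 s, keeping 0.01-0.1 Hz: project out everything else
regs = stopband_regressors(200, 2.0, 0.01, 0.1)
print(len(regs))  # 125 regressors to remove
```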