So I’ve been running my 4D resting-state functional data through uber_subject.py, and I’ve noticed that once it finishes, it outputs a file called final_epi_vr_base+tlrc. The functional data I originally input had 108 volumes, but this output from uber_subject.py has only one volume. I’m wondering why that is and how I can overcome it.
Once I’m done preprocessing my data with uber_subject.py, I want to input the preprocessed data into MELODIC (FSL’s ICA). This is causing a lot of errors for me, because MELODIC expects functional data with all 108 volumes and the TR information, neither of which is there after AFNI’s uber_subject.py (which also seems to erase my TR information for some odd reason). I’m not sure what to do.
I’m fairly new to this process (preprocessing my data using AFNI) and would appreciate any guidance. Thank you so much.
This file, “final_epi_vr_base+tlrc”, is not actually the final EPI dset. It is the base EPI volume used for “vr” (= volume registration). That is why it is a single volume.
To get information about the output file(s) of interest, one can check afni_proc.py’s help:
Main outputs (many datasets are created): ~1~
- for task-based analysis: stats dataset (and anat_final)
- for resting-state analysis: errts datasets ("cleaned up" EPI)
Correct me if I’m wrong, but I thought all the motion regression output goes into errts, and that it’s not the final preprocessed version of my functional data. Even when I open it up on my computer, it looks like a bunch of noise rather than a functional brain image.
I’m just not sure how I get the final version of my preprocessed data, if that makes sense.
I think the best thing to do would be to go through some of the AFNI Bootcamp materials about using afni_proc.py, to see what it is doing.
Lectures #11-15 here (from an AFNI Bootcamp at MIT a couple years ago) would be a great place to start:
Rick goes through the use of afni_proc.py in careful detail.
In the case of resting state analysis, an errts dataset will indeed be the final output. It is the residual time series from the regression where all of the known noise components were removed.
Thanks, Paul, I’ll have a look at these resources.
And thank you, Rick, for your feedback; that makes sense.
I realize now that the errts files are the final output for resting state from uber_subject.py.
The reason I was running my data through uber_subject.py first is that I wanted to preprocess my functional data with AFNI prior to applying ICA (with FSL) to my data.
I know this might sound like an FSL issue, but it has more to do with the AFNI output in question.
I wanted to incorporate it into my pipeline by taking AFNI’s output (the preprocessed data) and using it as input for FSL’s ICA analysis.
I tried doing this by converting my BRIK files (the errts files) into NIFTI files and using them as input for FSL’s ICA analysis, MELODIC. The problem is that MELODIC crashes when I feed it the AFNI-preprocessed data. I took a look at one of the issues, and it had to do with registration: one of MELODIC’s steps registers my functional data to standard space, and I get something that looks really funky. I’ve attached an image to show you what I mean.

Normally at the registration step I’d see my functional brain registered nicely to the standard space, but when I use errts I get this box-like image being warped into standard space. I don’t see a cleaned-up functional brain (it seems to have noise in the background).
Looking at past literature, it seems that this is what errts output looks like, but I can’t figure out how to integrate it into my pipeline after preprocessing.
Thank you so much for your help I appreciate it.
I don’t believe that FSL takes BRIK/HEAD-format files as input. You can convert those to NIFTI with something like:
3dcopy OLD_NAME.HEAD NEW_NAME.nii.gz
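As a quick sanity check after converting, you can confirm that the volume count and TR survived, e.g. with 3dinfo, or by reading them straight from the NIFTI-1 header (dim[4] and pixdim[4]). A rough stdlib-only Python sketch is below; the byte offsets follow the NIFTI-1 specification, little-endian files are assumed, and the function name is just a placeholder:

```python
import gzip
import struct

def nifti_tr_nvols(path):
    """Return (TR, n_volumes) from a NIFTI-1 header (.nii or .nii.gz).

    Assumes a little-endian NIFTI-1 file; a robust reader would also
    check sizeof_hdr (bytes 0-3, should unpack to 348) to detect byte order.
    """
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        hdr = f.read(348)  # the NIFTI-1 header is 348 bytes
    # dim: 8 int16s at byte offset 40; dim[0] = ndim, dim[4] = timepoints
    dim = struct.unpack_from("<8h", hdr, 40)
    # pixdim: 8 float32s at byte offset 76; pixdim[4] = TR
    pixdim = struct.unpack_from("<8f", hdr, 76)
    nvols = dim[4] if dim[0] >= 4 else 1
    return pixdim[4], nvols
```

If the TR comes out as 0 or wrong, it can be set on the AFNI side before converting, e.g. with 3drefit -TR.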
As part of running afni_proc.py, you can certainly have your subject data output in a standard space. This is done by including the “tlrc” block; you can read about it in the afni_proc.py help. What afni_proc.py command are you currently running?
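For reference, a minimal resting-state command including the tlrc block might look something like the sketch below. The subject ID, input file names, and template are placeholders, and the option choices are only illustrative; consult the afni_proc.py help for the options appropriate to your data:

```shell
afni_proc.py -subj_id sub01                                     \
    -blocks tshift align tlrc volreg blur mask scale regress    \
    -copy_anat anat+orig                                        \
    -dsets rest_epi+orig.HEAD                                   \
    -tlrc_base MNI152_2009_template.nii.gz                      \
    -volreg_tlrc_warp
```

With -volreg_tlrc_warp, the EPI is warped to the template space during the volume registration step, so the errts output would already be in standard space.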
The output errts dataset should not be used for registration, because it no longer looks like a brain. The AFNI processing should be able to register it to the template space of your choice, then registration would not be needed in MELODIC. FEAT and FLIRT are likely not needed here.
It might be good to find out (from the FSL folks) what sorts of preprocessing are acceptable before running MELODIC. I expect they would not want censoring done, for example (I am not sure whether you are doing that).