Hi AFNI gurus:
Was hoping for a little advice.
For a new study we scan on two consecutive days (each under a different condition). Each day includes the same task-based fMRI, resting-state scans, and a MEMPRAGE scan; one of the days also includes a diffusion-weighted scan. My goal is to compare these metrics between days/conditions within-subject, while keeping as many of the imaging modalities linked to each other as possible.
Right now I use FreeSurfer+SUMA to reconstruct the MEMPRAGE and use it as a base for the functional analyses (both task and resting), and for the diffusion scans I use FATCAT to link it all together. But given that the diffusion image is only acquired on Day 2, I’ve started questioning which T1/MEMPRAGE scan to use.
Is it better to use each day’s own scan as an anatomical base (theoretically closer in alignment to that day’s functionals), even though each day’s data would then be out of alignment with the other day’s? Or should I pick the best scan from either day and use it for “everything”? The clear benefit of the latter approach is that all data from both days would share an identical registration base and therefore be easily related to one another.
Anyone have similar pipelines for their data?
Thanks in advance for the insight!!!
I’ve found the differences to be pretty small overall. But the pipeline I currently work with is to take the two anatomical (MPRAGE/MEMPRAGE) scans and put them both into FreeSurfer. Depending on your theoretical interests, you could do this in the longitudinal pipeline or as two inputs to the same recon-all (the more traditional way of dealing with motion). I then run @SUMA_Make_Spec_FS on either that longitudinal template or the “two input” output, and grab the brain.nii for use with my afni_proc.py commands (using -anat_has_skull no).
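A rough sketch of that two-input recon-all / SUMA step (subject ID and filenames are placeholders; check the options against your FreeSurfer/AFNI versions):

```shell
# Feed both days' T1s into a single recon-all run;
# FreeSurfer motion-corrects and averages the -i inputs
recon-all -s sub01 \
    -i day1_MEMPRAGE.nii.gz \
    -i day2_MEMPRAGE.nii.gz \
    -all

# Convert the FreeSurfer output for AFNI/SUMA use
@SUMA_Make_Spec_FS -sid sub01 -NIFTI \
    -fspath $SUBJECTS_DIR/sub01

# The resulting skull-stripped brain.nii then goes to afni_proc.py
# together with:  -anat_has_skull no
```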
For DWI/DTI, I also collect a T2 image, which I then skull-strip and align to the brain.nii file. I then run TORTOISE on my DWI file with the T2 as my “structural” file. Some info about this is [url=http://blog.cogneurostats.com/?p=728]written out here[/url]. If you don’t have a T2, you can use the T1, though you have to turn off some of the DWI distortion-correction options in TORTOISE. I’d say that’s decent motivation to start collecting a T2, and also blip-up/blip-down DWI data.
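The AFNI side of that T2 prep might look something like this (filenames are placeholders, and the exact TORTOISE invocation varies by version, so it’s left as a comment rather than guessed):

```shell
# Skull-strip the T2
3dSkullStrip -input T2.nii.gz -prefix T2_ss.nii.gz

# Align the stripped T2 to the FreeSurfer brain.nii;
# both inputs are already skull-stripped, hence -dset*_strip None
align_epi_anat.py \
    -dset1 T2_ss.nii.gz -dset2 brain.nii \
    -dset1to2 \
    -dset1_strip None -dset2_strip None \
    -cost lpa -giant_move

# Then run TORTOISE (DIFFPREP) on the DWI with the aligned T2 as the
# "structural" image -- see the TORTOISE docs for the exact command
# for your version.
```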
Using this method, I have my structural MRI (sMRI), fMRI, and DWI/DTI all in similar space.
Thanks Peter! This is a great help. I’m also a great fan of your blog, so thank you for all you do there! It’s been a big help.
A few follow up questions, if ok:
I had thought about feeding both MEMPRAGE scans to recon-all. Do you do dura correction using the multiple echoes? Where I got held up was which echoes to use if I’m feeding two different MEMPRAGEs as initial input.
We don’t acquire a T2 currently; we’re up against time in our protocols as it is. Would it be a better use of our time to get one MEMPRAGE on Day 1 and one T2 on Day 2, and use that same T1 for all registration on both days? It’s not a real longitudinal study — we just collect condition 1 on Day 1 and condition 2 the next day, and we don’t expect changes in GM across that 24-hour delay.
Finally, in your current pipeline when you’re registering to the average FS output…are you using big/giant-move or just normal alignment?
Thanks again for all the help!
Nice to know someone reads the blog!
I tend to use the -T2pial option with FreeSurfer 6, which seems to do a pretty decent job handling dura for me. This has been true whether I’m using a standard T1 MPRAGE or a MEMPRAGE. You could collect just one T1 (MEMPRAGE), but I would stay vigilant about making sure that your T1 is good quality and not contaminated with motion. Not having a T2 usually isn’t as painful as not having a good T1. I tend to lean towards the “more data is better” mentality.
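For reference, the -T2pial usage in FreeSurfer 6 looks roughly like this (filenames are placeholders):

```shell
# Use the T2 to refine the pial surface, which helps exclude dura
recon-all -s sub01 \
    -i day1_MEMPRAGE.nii.gz \
    -T2 day2_T2.nii.gz -T2pial \
    -all
```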
My afni_proc.py scripts all use -giant_move, and align the EPI to the anatomical. I also use auto_warp.py (-tlrc_NL_warp) to help bring group level ROIs back to subject space with a bit more precision.
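A minimal sketch of the relevant afni_proc.py pieces (subject ID, datasets, and template are placeholders; only the alignment/warp options are the point here):

```shell
afni_proc.py \
    -subj_id sub01 \
    -copy_anat brain.nii \
    -anat_has_skull no \
    -dsets epi_run1+orig.HEAD \
    -blocks tshift align tlrc volreg blur mask scale regress \
    -align_opts_aea -giant_move \
    -tlrc_base MNI152_T1_2009c+tlrc \
    -tlrc_NL_warp
```

Here -align_opts_aea -giant_move passes -giant_move through to align_epi_anat.py for the EPI-to-anatomical step, and -tlrc_NL_warp requests the nonlinear warp (via auto_warp.py) that makes it easier to bring group-level ROIs back to subject space.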
Thanks Peter again for the helpful reply!
We’re still at the beginning of data collection for this study, so I’ll look at adding a T2 for our next subjects!
If you’ll indulge one more question on this line of thought: the axialization step for the structural.
I’ve noted in the new TORTOISE instructions and in your post the use of axialization to bring the structural into roughly AC-PC space. Where would this fit in the pipeline? Would it be better to AC-PC align the input to recon-all by hand, reconstruct from there, and use the outputs as the base for both TORTOISE and AFNI — or to feed the raw structural into recon-all and AC-PC/axialize the output for further processing?
Thanks again for all the insight. Linking these functional/structural pipelines has been a real goal for me, and this has really helped actualize that.
Yes, the rough AC-PC or “axialization” of data… I’m still testing how much impact this has, but the folks at TORTOISE tell me it’s a good idea. You can specify a “reorientation” image, which can be your brain.nii file, and it’ll warp the data back to that at the end.
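For the axialization step itself, AFNI’s FATCAT prep tools include a program for this; a hedged sketch (the reference set and mode flag are placeholders — check the program’s -help in your AFNI version):

```shell
# Rigidly reorient the anatomical to a nicely axialized reference
# (e.g., a template already in AC-PC-like orientation)
fat_proc_axialize_anat \
    -inset  brain.nii \
    -refset TT_N27+tlrc \
    -mode_t1w \
    -prefix brain_axi
```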