I’m currently attempting to use gen_ss_review_scripts.py with the necessary files located in multiple folders. Basically, I’ve run the preprocessing steps, and I keep a separate sub-folder for each GLM I’ve run. As a result, some of the files are in the main folder and some are in a sub-folder. Assuming that simply copying the files into the expected folder isn’t an option (I’ve tried it, but it isn’t sustainable), is there any way to point gen_ss_review_scripts.py at sub-folders via its arguments when specifying which datasets and files to use?
I have not gotten around to testing this out, but in theory it should work (for whatever that is worth). One should be able to specify any of the dataset names on the command line (either via -uvar or direct variable names), so the program will not need to search for them. I would expect some difficulties with this, so please let me know what you run into.
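For what it’s worth, the kind of invocation described above might look something like this. This is only a sketch: the sub-folder name (glm.task), the subject ID, and the file names are made-up assumptions, though stats_dset, errts_dset, and subj are among the documented user variables.

```shell
# Sketch: pass full relative paths via -uvar so gen_ss_review_scripts.py
# does not need to search for the datasets.
# (glm.task, SUBJ01, and the dataset file names are invented examples.)
gen_ss_review_scripts.py                                  \
    -uvar subj        SUBJ01                              \
    -uvar stats_dset  glm.task/stats.SUBJ01+tlrc.HEAD     \
    -uvar errts_dset  glm.task/errts.SUBJ01+tlrc.HEAD
```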
Thanks for your help. I think I’ve got it 90% working: I’m able to direct it to most of the files in the sub-folder (including the stats dataset, the errts dataset, etc.). However, for some reason I can’t get it to recognize the mask dataset. I’ve triple-checked the spelling, and each time I get the message “** no mask_dset dset, cannot drive view_stats, skipping…”. I’m passing the argument:
-uvar mask_dset [Foldername]/[filename].HEAD
Any idea what I could be missing? The full command is below just in case (it runs perfectly if the mask_dset line is omitted, but then that info is not included in the @ss_review_basic output).
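One thing worth ruling out (an assumption about the cause, not a confirmed diagnosis) is whether the .HEAD/.BRIK pair actually resolves relative to the directory the review script runs in. A minimal helper for that check — check_dset is a made-up name, not part of AFNI:

```shell
# Hypothetical helper: confirm an AFNI .HEAD/.BRIK(.gz) pair exists,
# relative to the current directory, before handing the path to -uvar.
check_dset() {
    dset=$1
    # require the .HEAD file and at least one matching .BRIK* file
    if [ -f "$dset" ] && ls "${dset%.HEAD}".BRIK* >/dev/null 2>&1; then
        echo "found: $dset"
        return 0
    fi
    echo "missing: $dset"
    return 1
}
```

Running `check_dset glm.task/mask+tlrc.HEAD` from the same directory as the review command would show whether the path is actually visible from there.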
As a follow-up question: is it possible to provide the blur_est file as an argument, so that the blur estimates are captured in the @ss_review_basic output? I don’t see that option in the list of user variables.