How to extract run-wise beta value for each condition

Hi,

To create training/testing datasets for MVPA, I’m running 3dDeconvolve to extract beta values for each condition in each run.
The ‘-input’ to 3dDeconvolve is a preprocessed file that concatenates 16 runs.
I have onset timing .txt files for each condition, each consisting of 16 rows (one per run) and N columns (one per trial).

I added ‘-concat’ to tell 3dDeconvolve about the concatenated runs, and I used ‘-stim_times’ with a row-index selector ’{…}’ on a timing .txt file (e.g., onset_C_test.txt’{1}') to indicate a specific run’s onset timing. For example, onset_C_test.txt looks like this:
*
*
*
*
46.18 74.24 102.33 130.36 158.38 186.43 214.45 242.50
*
*
46.12 74.19 102.19 130.19 158.25 186.26 214.32 242.37
46.16 74.17 102.17 130.23 158.23 186.24 214.28 242.29
*
*
46.14 74.14 102.21 130.23 158.24 186.29 214.36 242.36
*
*
*
*

But AFNI gives errors like this:


** ERROR: mri_read_ascii_ragged_fvect: couldn't open file /MAINTASK/beh_data/onset_C_test.txt{1}
** FATAL ERROR: '-stim_times 17' can't read file '/MAINTASK/beh_data/onset_C_test.txt{1}' [nopt=129]


I’d appreciate it if anybody could tell me what is causing the error, or offer any recommendations.
Below is the script I used.


3dDeconvolve \
    -input ${inputEPI} \
    -mask ${Bmask} \
    -polort A \
    -float \
    -jobs 2 \
    -local_times \
    -concat '1D: 0 124 248 372 496 620 744 868 992 1116 1240 1364 1488 1612 1736 1860' \
    -num_stimts 86 \
    -stim_times 1 ${INITvLaL} 'BLOCK5(36,1)' -stim_label 1 init_vLaL \
    -stim_times 2 ${INITvLaR} 'BLOCK5(36,1)' -stim_label 2 init_vLaR \
    -stim_times 3 ${INITvLaS} 'BLOCK5(36,1)' -stim_label 3 init_vLaS \
    -stim_times 4 ${INITvLaN} 'BLOCK5(36,1)' -stim_label 4 init_vLaN \
    -stim_times 5 ${INITvRaL} 'BLOCK5(36,1)' -stim_label 5 init_vRaL \
    -stim_times 6 ${INITvRaR} 'BLOCK5(36,1)' -stim_label 6 init_vRaR \
    -stim_times 7 ${INITvRaS} 'BLOCK5(36,1)' -stim_label 7 init_vRaS \
    -stim_times 8 ${INITvRaN} 'BLOCK5(36,1)' -stim_label 8 init_vRaN \
    -stim_times 9 ${TOPUPvLaL} 'BLOCK5(12,1)' -stim_label 9 topup_vLaL \
    -stim_times 10 ${TOPUPvLaR} 'BLOCK5(12,1)' -stim_label 10 topup_vLaR \
    -stim_times 11 ${TOPUPvLaS} 'BLOCK5(12,1)' -stim_label 11 topup_vLaS \
    -stim_times 12 ${TOPUPvLaN} 'BLOCK5(12,1)' -stim_label 12 topup_vLaN \
    -stim_times 13 ${TOPUPvRaL} 'BLOCK5(12,1)' -stim_label 13 topup_vRaL \
    -stim_times 14 ${TOPUPvRaR} 'BLOCK5(12,1)' -stim_label 14 topup_vRaR \
    -stim_times 15 ${TOPUPvRaS} 'BLOCK5(12,1)' -stim_label 15 topup_vRaS \
    -stim_times 16 ${TOPUPvRaN} 'BLOCK5(12,1)' -stim_label 16 topup_vRaN \
    -stim_times 17 ${maeC}'{1}' 'GAM' -stim_label 17 maeC_R1 \
    -stim_times 18 ${maeC}'{2}' 'GAM' -stim_label 18 maeC_R2 \
    -stim_times 19 ${maeC}'{3}' 'GAM' -stim_label 19 maeC_R3 \
    -stim_times 20 ${maeC}'{4}' 'GAM' -stim_label 20 maeC_R4 \
    -stim_times 21 ${maeC}'{5}' 'GAM' -stim_label 21 maeC_R5 \
    -stim_times 22 ${maeC}'{6}' 'GAM' -stim_label 22 maeC_R6 \
    -stim_times 23 ${maeC}'{7}' 'GAM' -stim_label 23 maeC_R7 \
    -stim_times 24 ${maeC}'{8}' 'GAM' -stim_label 24 maeC_R8 \
    -stim_times 25 ${maeC}'{9}' 'GAM' -stim_label 25 maeC_R9 \
    -stim_times 26 ${maeC}'{10}' 'GAM' -stim_label 26 maeC_R10 \
    -stim_times 27 ${maeC}'{11}' 'GAM' -stim_label 27 maeC_R11 \
    -stim_times 28 ${maeC}'{12}' 'GAM' -stim_label 28 maeC_R12 \
    -stim_times 29 ${maeC}'{13}' 'GAM' -stim_label 29 maeC_R13 \
    -stim_times 30 ${maeC}'{14}' 'GAM' -stim_label 30 maeC_R14 \
    -stim_times 31 ${maeC}'{15}' 'GAM' -stim_label 31 maeC_R15 \
    -stim_times 32 ${maeC}'{16}' 'GAM' -stim_label 32 maeC_R16 \
    ... [and so on]
  • ${maeC} is ‘onset_C_test.txt’.

To be clear: you want to compute a separate beta for each run for the onset_C_test.txt conditions, where each row of that file holds the stimulus timings for one run? But for the other stimulus files, a single beta across all runs is fine?

If the answer to both questions is YES, then you have to split onset_C_test.txt into 16 separate files. There are two reasons:
[ol]
[li] The timing file for EACH -stim_times option must have 16 rows, one for each run.
[/li][li] Unlike most options that take .1D files, the “ragged” input format for -stim_times does not allow row {} or column [] selectors. Here, “ragged” means that rows are not required to have the same number of entries, unlike the older and more common .1D file format.
[/li][/ol]
Another minor point is that row (and column) selectors start at 0, not at 1; but since they are not allowed here anyway, that is not a direct problem with your command as shown.
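If you prefer to script the split yourself, here is a minimal bash sketch (the filenames mytiming.txt and split_run*.txt are made up for illustration): for each run k, it keeps row k of the multi-run timing file and replaces every other row with the ‘*’ placeholder that -stim_times expects for empty runs.

```shell
# Make a small 3-run example timing file: only run 2 has events.
printf '*\n11 12 13 14 15 16\n*\n' > mytiming.txt

nruns=3
for ((run=1; run<=nruns; run++)); do
    # keep row number 'run' as-is; every other row becomes a '*' placeholder
    awk -v r="$run" 'NR == r { print; next } { print "*" }' \
        mytiming.txt > split_run${run}.txt
done
```

Each output file still has one row per run, so it is a valid multi-run timing file on its own.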

Rick Reynolds’ program timing_tool.py can do this splitting for you, with a little preliminary effort on your part. For your particular application, you will first have to make a “partition” file, which contains the number “1” repeated n times in the first row (where n is the number of stimuli per row), the number “2” n times in the second row, the number “3” n times in the third row, and so on, up to the number “16” n times in the 16th row.

For a smaller example, suppose the following is your timing file qqq.txt (n=6 entries per row, 3 rows=3 runs):


1 2 3 4 5 6
11 12 13 14 15 16
21 22 23 24 25 26

To create a partition file qpp.txt, here is a C shell (tcsh) mini-script, using the system command “jot” – for bash, you’ll have to change the looping:


foreach jjj ( `jot 3 1 3` )
   jot -s " " 6 $jjj $jjj >> qpp.txt
end

In the above, the two "3"s set the number of rows (jot 3 1 3 counts from 1 to 3), while the “6” is the number of entries per row.
Here is what qpp.txt looks like:


1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 3 3 3
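Since jot is a BSD/macOS utility that may be absent on Linux, here is a rough bash equivalent using seq (a sketch producing the same qpp.txt):

```shell
# bash version of the partition-file loop: row j holds the number j
# repeated 6 times; 3 rows for 3 runs.
rm -f qpp.txt
for jjj in $(seq 1 3); do
    # echo joins the 6 copies of $jjj with single spaces
    echo $(for k in $(seq 1 6); do echo "$jjj"; done) >> qpp.txt
done
```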

Then the command


timing_tool.py -timing qqq.txt -partition qpp.txt qnew

will produce 3 files (one for each input row, which is one for each distinct value in qpp.txt), named qnew_1.txt, qnew_2.txt, and qnew_3.txt. Here is the middle one of these:


* *
11 12 13 14 15 16 
* *

As you can see, there are 3 lines in this file, but all except the second contain only asterisks, which are placeholders meaning “nothing to see here”: the file must have 3 lines because there are 3 runs, but this stimulus class occurs only in run #2, so placeholders are needed for the runs with no stimuli.
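Once onset_C_test.txt is split into 16 per-run files this way, each -stim_times option points at a plain filename with no {} selector. A sketch of the relevant fragment (the per-run filenames here are hypothetical):

```
# each per-run timing file already has 16 rows (15 of them '*'),
# so no row selector is needed:
-stim_times 17 onset_C_test_r01.txt 'GAM' -stim_label 17 maeC_R1 \
-stim_times 18 onset_C_test_r02.txt 'GAM' -stim_label 18 maeC_R2 \
```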

For a little more detail, see the timing_tool.py -help output.

You’re my lifesaver!!!
I clearly got what was the issue and solved the problem.
Thank you so much!

I’m glad your problem is fixed. As usual in life, that means you can now rush ahead to the NEXT problem.