about creating timing files for a video presentation and a timing file for a parametric regressor

AFNI version info (afni -ver):

Precompiled binary macos_10.12_local: May 8 2023 (Version AFNI_23.1.05 'Publius Helvius Pertinax')

Hi,
I am working with EPI data from 10-minute runs. Every run corresponds to an 8-minute video that is preceded and followed by a 1-minute blank. EPI volumes are acquired every 2 seconds. Every run is:

30 volumes (blank) + 240 volumes (video) + 30 volumes (blank) = 300 volumes, 10 minutes

I also have a continuous regressor that is a property of the video and has one measurement for every 2 seconds of video. Can you please help with the correct way to set up the timing 1D file for the video-type condition that is ON for 240 volumes in a 300-volume run? Also, what is the correct way to specify a timing file for the continuous regressor that has 240 numbers corresponding to the 240 volumes acquired during video presentation?

A session, for example, has 13 runs where one of 5 types of videos was shown in each run, e.g. {1,2,3,4,5,1,2,3,4,5,1,3,4}

  1. I want to make a stimulus timing file for each of the 5 video types, so five 1D files, each with 13 rows.

  2. I also want to make a stimulus timing file that specifies the parametric regressor vector; it would be nice if this could also be a single file with 13 rows.

Then I can pass these timing files to afni_proc.py via the -regress_stim_times
option. It is not clear to me how to specify the video onset and duration per run for the video-type condition, or how to specify the volume-wise regressor value and duration (2 seconds) in a way that still gives a single 1D file for the parametric regressor.

Best regards,
Harish

Hi Harish,

First, let me give just a mechanical answer. While it is simple, the response will be long and somewhat detailed. Then there are some choices that would be good to clarify afterwards.

So the full model might use 13 10-minute runs, where the only stimulus timing file that you would convolve is the one for the 8-minute video events. That file would have 1 event per run (and you might add a '*' so each row clearly reads as local times). For example:

videos.txt:
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *
   60 *

Then there would be 5 regressor files: reg_1.1D through reg_5.1D. Each of these files would be the full 3900 lines long (13 runs x 300 volumes). Within each of the 13 runs (i.e., for that run's 300 lines), it might have 30 zeros, 240 useful values, and 30 more zeros. Consider reg_2.1D for example, which is "active" for runs 2 and 7. It would have 3900 rows of single values that might look like the layout below (one way to build such a file is sketched after the layout):

0
... (run 1: 300 total zeros)
0
0
... (run 2: start: 30 zeros)
0
... (run 2: main: 240 USEFUL values)
0
... (run 2: end: 30 zeros)
0
... (runs 3,4,5,6 : 1200=300x4 total zeros)
... (run 7: start: 30 zeros)
... (run 7: main: 240 USEFUL values)
... (run 7: end: 30 zeros)
... (runs 8-13: 1800=300x6 total zeros)
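As a concrete sketch, here is one way such a file might be built with 1deval and cat (the per-run value files reg_vals_run02.1D and reg_vals_run07.1D are hypothetical names for your 240-value lists, one value per line):

   # reusable zero-padding files (TR = 2 s, so 30 zeros = 1 blank minute, 300 = 1 full run)
   1deval -num 30  -expr '0' > zeros30.1D
   1deval -num 300 -expr '0' > zeros300.1D

   # assemble reg_2.1D: run 1 all zeros, padded run 2 values, runs 3-6 all zeros,
   # padded run 7 values, runs 8-13 all zeros  (13 x 300 = 3900 lines total)
   cat zeros300.1D                                       \
       zeros30.1D reg_vals_run02.1D zeros30.1D           \
       zeros300.1D zeros300.1D zeros300.1D zeros300.1D   \
       zeros30.1D reg_vals_run07.1D zeros30.1D           \
       zeros300.1D zeros300.1D zeros300.1D zeros300.1D   \
       zeros300.1D zeros300.1D                           \
       > reg_2.1D

The other reg_*.1D files would follow the same pattern, with the 240-value blocks moved to whichever runs that video type was shown in.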

Then they could be passed to afni_proc.py as:

-regress_stim_times videos.txt reg_*.1D \
-regress_stim_types times  file file file file file \
-regress_stim_labels video v1 v2 v3 v4 v5 \
-regress_basis 'BLOCK(480,1)' \

Here the BLOCK(480,1) would only apply to videos.txt.
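For context, those options might sit in a larger afni_proc.py command along these lines (the subject ID, dataset names and block list here are just hypothetical placeholders, not a complete recommendation):

   afni_proc.py                                              \
       -subj_id sub01                                        \
       -dsets epi_run*+orig.HEAD                             \
       -blocks tshift volreg blur mask scale regress         \
       -regress_stim_times videos.txt reg_*.1D               \
       -regress_stim_types times file file file file file    \
       -regress_stim_labels video v1 v2 v3 v4 v5             \
       -regress_basis 'BLOCK(480,1)'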

===========================================================================
Things to ponder:

  1. There might be little to gain by including the pre- and post-video data. Any general shifts from that (as specified by videos.txt) would basically be swallowed by the slow drift terms. It might be cleaner to just process 13 runs of 240 volumes (see the trimming sketch after this list).

  2. You might want to model videos.txt with separate timing files/regressors per run. For that, you could "cheat" by calling this stim_type "IM" instead of "times". Depending on the answer to point 1, this regressor might even disappear.

  3. Do the 240-value files not need convolution? For example, if they were something like brightness measures, one might expect them to evoke BOLD responses, meaning those time series might beg for convolution with your basis function (see the waver sketch after this list).

  4. How might the transition from pre-video to video to post-video look in the 300-line (per run) versions of the regressor files? For example, are the 240 values demeaned or not before the 2x30 lines of zeros are added around them (see the demeaning sketch after this list)?

  5. If there is a question about point 4, maybe those pre- and post-video sections could be additionally modeled out, though the difference between that and videos.txt would just be the 2 transitions. Also, this brings us back to point 1: what is the gain of having the extra 2 one-minute periods per run?
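For point 1, a minimal sketch of trimming each run to the 240 video volumes before giving them to afni_proc.py (tcsh syntax; the EPI dataset names are hypothetical, and with TR = 2 s the video covers 0-based sub-bricks 30..269):

   # keep only the 240 video volumes of each 300-volume run
   foreach rr ( `count -digits 2 1 13` )
       3dTcat -prefix epi_run${rr}.video epi_run${rr}+orig'[30..269]'
   end

The regressor files would then be 240 lines per run (13 x 240 = 3120 lines total), with no zero padding needed within a run.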
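For point 3, if convolution does seem appropriate, one hedged option is AFNI's waver program, which convolves a per-TR amplitude series with a hemodynamic response (the input file name is hypothetical):

   # convolve the 240 per-TR values with a gamma-variate HRF, keeping 240 output points
   waver -GAM -TR 2.0 -input reg_vals_run02.1D -numout 240 > reg_vals_run02_conv.1D

The convolved series could then be zero-padded and concatenated across runs just like the raw values above.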
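For point 4, if demeaning the 240 values before zero padding seems appropriate, 1d_tool.py could be used for that (again with a hypothetical file name; verify that the output looks as expected):

   # remove the mean of the 240 values so the surrounding zeros are not a large step
   1d_tool.py -infile reg_vals_run02.1D -demean -write reg_vals_run02_dm.1D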

So there still might be some things to work out. And it might be better to chat this over than to work through it via messages. We can ponder that.

  • rick