On ortvec and glts

Hello,

I’d like to create an impulse response model. My functional data are not continuous in time (i.e. sparse sampling), so it is not appropriate to supply stimulus timing files directly. My timing files are referenced to the start of the experiment, but the functional data are not continuous. So if I used TENT or CSPLIN, the timing file would generate impulses in a temporal domain that does not match the functional data (because the fMRI acquisition is not continuous).

So I have my own impulse response matrix that I would like to provide where columns refer to an event type and rows refer to a specific volume.

For example, here is a sample of 6 rows:


1 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 1
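
A matrix like this can also be generated programmatically. Here is a small sketch (Python, for illustration only; the volume-to-column mapping is hypothetical, chosen to reproduce the 6 sample rows above):

```python
import numpy as np

# Sketch: build a per-volume impulse matrix for -ortvec.
# Rows = acquired volumes, columns = event-type/TR regressors.
n_vols = 6                                        # rows in the sample above
n_regs = 12                                       # columns in the sample above
onsets = {0: 0, 1: 1, 2: 2, 3: 9, 4: 10, 5: 11}   # volume -> column with a 1

fir = np.zeros((n_vols, n_regs), dtype=int)
for vol, col in onsets.items():
    fir[vol, col] = 1

# Write as a plain-text 1D file suitable for 3dDeconvolve -ortvec
np.savetxt("FIR.1D", fir, fmt="%d")
```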

Here is how I am getting the matrix into 3dDeconvolve.


3dDeconvolve                                          \
    -input $runs -jobs 24                             \
    -polort A                                         \
    -ortvec MoPar_demean.1D demean                    \
    -ortvec MoPar_derv.1D   derv                      \
    -ortvec FIR.1D          ir                        \
    -censor censor.1D                                 \
    -num_glt 1                                        \
    -gltsym 'SYM: ir[9..11] -ir[0..2]' -glt_label 1 sound-nosound

(FIR.1D is the impulse response matrix.)

Obviously there are more 3dDeconvolve options for statistics and such; I just wanted to show how I’m setting up the relevant parts.
I want to be able to submit some of the impulse response columns to a GLT test. In the above example, let’s say that I want to do a t-test between columns 9 through 11 minus columns 0 through 2. How would I specify this?

If I specify it in the above way I receive an error:


** ERROR: row #1 of matrix 'SYM: ir{9]}' is all zero!
** ERROR: -gltsym errors immediately above from file 'SYM: ir{9]}'
** FATAL ERROR: Can't continue after the above -gltsym problems!

My rationale for this specification is that this is how I’d do it with TENT or CSPLIN. Thank you for your time.

Dustin

Hi Dustin,

While those parameters go into the baseline, it does
seem possible to do this. The main difficulty is that
the ortvec labels are not actually known to 3dDeconvolve
in a way that can be referenced as symbolic GLTs using
-gltsym.

However you can construct the GLTs the old-fashioned
way, using -glt. To get the proper regressor order,
either add -bout to see labels in the output bucket,
or look at the X.xmat.1D header to figure it out.

Given that, try to create each full GLT matrix that
you want. It is tedious, but it should at least work.

  • rick

Hi Rick, thanks for the response.

As you can imagine, I was trying to sidestep the old-fashioned way! The only reason I thought this might work is due to this line in the help file:

*N.B.: The q-th column of ‘fff’ will get a label
like ‘lll[q]’ in the 3dDeconvolve results.

I will try as you say, thanks for the input!

Ok, this seemed to work and the results look as I’d expect. However, I had to use a -GOFORIT 4 to get it to run, which makes me nervous so I’d like to confirm with you.

The design matrix includes:
columns 0-23: polort
columns 24-35: motion and derivatives
columns 36-47: impulse response matrix, as formatted in my previous post

Relevant columns for my example:
36-38 are columns for baseline trials
42-47 are columns where participants hear a sound

I then created a -glt by making one row with columns 36-38 set to ‘-1’ and columns 42-47 set to ‘1’, so that I get a sound > no sound contrast (all other columns are ‘0’).
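
For reference, that one-row -glt file can be sketched like this (Python, for illustration only; the column indices come from the design summary above):

```python
import numpy as np

# Sketch: one contrast row over the 48 design columns described above.
# -1 for baseline (no sound) columns 36-38, +1 for sound columns 42-47.
n_cols = 48
row = np.zeros(n_cols, dtype=int)
row[36:39] = -1   # baseline tents
row[42:48] = 1    # sound tents

# -glt expects a matrix file; write the single row as plain text
np.savetxt("sound-nosound.1D", row[np.newaxis, :], fmt="%d")
```

Note that with three -1 entries and six +1 entries, this contrasts a sum of 6 betas against a sum of 3; fractional weights (e.g. -1/3 and +1/6) would compare the means instead.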

Here is my 3dDeconvolve command:


3dDeconvolve                                                        \
    -input $runs -jobs 24                                           \
    -polort A                                                       \
    -ortvec ${s}_MoPar_demean.1D demean                             \
    -ortvec ${s}_MoPar_derv.1D   derv                               \
    -ortvec ${s}_FIR.1D          ir                                 \
    -censor ${s}_censor.1D                                          \
    -num_glt 1                                                      \
    -glt 1 sound-nosound.1D -glt_label 1 s-bl                       \
    -overwrite -bout -tout                                          \
    -x1D ${s}.GLM.xmat.1D -x1D_uncensored ${s}.uncensor.GLM.xmat.1D \
    -bucket ${s}.GLM.stats -GOFORIT 4

Here is the 3dDeconvolve output:


++ 3dDeconvolve extending num_stimts from 0 to 24 due to -ortvec
++ 3dDeconvolve: AFNI version=AFNI_16.3.20 (Dec 31 2016) [64-bit]
++ Authored by: B. Douglas Ward, et al.
++ current memory malloc-ated = 666,928 bytes (about 667 thousand [kilo])
++ loading dataset S10_V180_1.smooth+tlrc.BRIK S10_V180_2.smooth+tlrc.BRIK S10_V180_3.smooth+tlrc.BRIK S10_V180_4.smooth+tlrc.BRIK S10_V180_5.smooth+tlrc.BRIK S10_V180_6.smooth+tlrc.BRIK
++ current memory malloc-ated = 1,174,677,253 bytes (about 1.2 billion [giga])
++ Auto-catenated input datasets treated as multiple imaging runs
++ Auto-catenated datasets start at:  0 180 360 540 720 900
++ STAT automask has 205353 voxels (out of 271633 = 75.6%)
++ Skipping check for initial transients
++ Imaging duration=360.0 s; Automatic polort=3
++ Number of time points: 1080 (no censoring)
 + Number of parameters:  48 [48 baseline ; 0 signal]
++ total shared memory needed = 107,566,668 bytes (about 108 million [mega])
++ current memory malloc-ated = 1,174,966,124 bytes (about 1.2 billion [giga])
++ mmap() memory allocated: 107,566,668 bytes (about 108 million [mega])
++ Memory required for output bricks = 107,566,668 bytes (about 108 million [mega])
++ Wrote matrix values to file S10.GLM.xmat.1D
++ ========= Things you can do with the matrix file =========
++ (a) Linear regression with ARMA(1,1) modeling of serial correlation:

3dREMLfit -matrix S10.GLM.xmat.1D \
 -input "S10_V180_1.smooth+tlrc.BRIK S10_V180_2.smooth+tlrc.BRIK S10_V180_3.smooth+tlrc.BRIK S10_V180_4.smooth+tlrc.BRIK S10_V180_5.smooth+tlrc.BRIK S10_V180_6.smooth+tlrc.BRIK" \
 -tout -Rbuck S10.GLM.stats_REML -Rvar S10.GLM.stats_REMLvar -verb

++ N.B.: 3dREMLfit command above written to file S10.REML_cmd
++ (b) Visualization/analysis of the matrix via ExamineXmat.R
++ (c) Synthesis of sub-model datasets using 3dSynthesize
++ ==========================================================
++ Wrote matrix values to file S10.uncensor.GLM.xmat.1D
++ ----- Signal+Baseline matrix condition [X] (1080x48):  3.84811  ++ VERY GOOD ++
*+ WARNING: !! in Signal+Baseline matrix:
 * Largest singular value=1.86247
 * 1 singular value is less than cutoff=1.86247e-07
 * Implies strong collinearity in the matrix columns!
++ Signal+Baseline matrix singular values:
             0      0.125775      0.152245      0.186387      0.210547
      0.297469      0.315213      0.524212      0.563959      0.710978
       0.80135      0.908348      0.927272      0.953512      0.969709
      0.982664       0.98997      0.995209      0.997447      0.999185
       0.99998             1             1       1.00023       1.00086
       1.00154       1.00536       1.00797        1.0121       1.01273
       1.01361       1.02203       1.03139       1.03458        1.0483
       1.06934       1.07048       1.07709       1.08865       1.09839
       1.12271       1.14064       1.22223       1.27599       1.42174
       1.52205       1.66472       1.86247
++ ----- Baseline-only matrix condition [X] (1080x48):  3.84811  ++ VERY GOOD ++
*+ WARNING: !! in Baseline-only matrix:
 * Largest singular value=1.86247
 * 1 singular value is less than cutoff=1.86247e-07
 * Implies strong collinearity in the matrix columns!
++ Baseline-only matrix singular values:
             0      0.125775      0.152245      0.186387      0.210547
      0.297469      0.315213      0.524212      0.563959      0.710978
       0.80135      0.908348      0.927272      0.953512      0.969709
      0.982664       0.98997      0.995209      0.997447      0.999185
       0.99998             1             1       1.00023       1.00086
       1.00154       1.00536       1.00797        1.0121       1.01273
       1.01361       1.02203       1.03139       1.03458        1.0483
       1.06934       1.07048       1.07709       1.08865       1.09839
       1.12271       1.14064       1.22223       1.27599       1.42174
       1.52205       1.66472       1.86247
++ ----- stim_base-only matrix condition [X] (1080x24):  2.28456  ++ VERY GOOD ++
*+ WARNING: !! in stim_base-only matrix:
 * Largest singular value=1.60031
 * 1 singular value is less than cutoff=1.60031e-07
 * Implies strong collinearity in the matrix columns!
++ stim_base-only matrix singular values:
             0      0.306619      0.324891      0.474359      0.558376
      0.600785      0.730422      0.826685      0.886871      0.987358
       1.00504       1.01798       1.02599       1.02889       1.03712
       1.06551        1.0745        1.0774       1.07927       1.13786
       1.37112       1.41741       1.52672       1.60031
++ ----- polort-only matrix condition [X] (1080x24):  1.01269  ++ VERY GOOD ++
*+ WARNING: +++++ !! Matrix inverse average error = 0.00520833  ** BEWARE **
++ Matrix setup time = 1.10 s
*+ WARNING: !! 3dDeconvolve -GOFORIT is set to 4: running despite 4 matrix warnings
*+ WARNING: !! See file 3dDeconvolve.err for all WARNING and ERROR messages !!
*+ WARNING: !! Please be sure you understand what you are doing !!
*+ WARNING: !! If in doubt, consult with someone or with the AFNI message board !!

Please let me know if any of those warnings are actually concerning. I have a hunch that the warnings appear because it thinks all -ortvec regressors will go into the baseline. There are no collinearity issues in my impulse response regressors; the range of correlations between the regressors I’m interested in is r = [-0.15, -0.03].

Thanks!

Dustin

Hi Dustin,

Would you send me those 2 xmat.1D files via email?
Click on my name for the address.

I cannot tell what is wrong just from this text.

  • rick

Hi Dustin,

Sorry for dropping this. In your X-matrix (GLM.xmat.1D), the first
six regressors of interest (columns 36…41) are the exact negatives
of the last 6 (columns 42…47).

It is unclear to me why the program is showing “VERY GOOD” for
the condition numbers that include those 12 regressors.

In any case, the model is currently over-specified.

Does that seem reasonable?

  • rick

Hi Rick,

Yes, that makes total sense. I’ll investigate this further. Our GLM analysis is just to see whether the paradigm is ‘working’. However, our main analysis of interest is to generate per-TR impulse responses for each trial for MVPA.

Thanks,
Dustin

Hi Rick,

Thanks for your help on this. The reason the design matrix was over-specified is that I was also labeling the baseline volumes. I will change those to 0.

I have another question about this analysis:

I would like to calculate trial-specific betas using the LSS method in 3dLSS. However, from what I understand, 3dLSS can only be used if you specify a single -stim_times_IM timing file in the design matrix. Do you know a way of creating a design matrix using -ortvec, as I did in my earlier example, and then taking this to 3dLSS?

Hi Rick, I wanted to follow up.

As a reminder, I have ISSS data where participants hear a sound during a silent period, followed by 3 image acquisitions. I have 6 runs with 180 time points per run (which are not continuous). Previously in this thread I was trying to find the main effect of sound > no sound. My issue with the over-specified design matrix was that I was coding the baseline volumes as ‘1’ in my matrix. When I removed those, there were no collinearity errors, I didn’t need -GOFORIT, and the results look as expected.

Now I want to get trial-wise beta values for MVPA. Rather than 1 beta per trial, however, I’d like 1 beta value per TR. So, for a given trial, I’d receive 3 beta values that correspond to the three volumes that follow a stimulus onset. Because the data are not continuous, I have been using -ortvec to insert a design matrix and -glt to code sound > no sound in the previous steps, as you suggested.

Here is a sample matrix for my sound > no sound:


1 0 0
0 1 0
0 0 1
0 0 0
0 0 0
0 0 0
1 0 0
0 1 0 
0 0 1 

Columns 0, 1, and 2 correspond to the 1st, 2nd, and 3rd volumes following a stimulus onset, respectively. Then, for sound > no sound (or baseline), I’ll enter ‘1 1 1’ into a -glt.

This works well. Now I want to get trial-level beta values (with each trial having 3 corresponding beta values for the 3 volumes that follow the stimulus). I have created a matrix like this:


1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
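
The trial-wise expansion above can be sketched as follows (Python, for illustration only; the onset volumes are hypothetical, chosen to reproduce the sample rows):

```python
import numpy as np

# Sketch: one column per trial per TR, as in the sample matrix above.
# With a 3-TR ISSS design, trial t occupies columns 3*t .. 3*t+2.
onset_vols = [0, 6]        # first acquired volume of each trial (hypothetical)
n_vols, n_tents = 9, 3

trialwise = np.zeros((n_vols, len(onset_vols) * n_tents), dtype=int)
for t, onset in enumerate(onset_vols):
    for k in range(n_tents):                   # k-th volume after onset
        trialwise[onset + k, t * n_tents + k] = 1
```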

However, I also have columns of all 0s that correspond to the baseline (silent) trials. It was easier to code them from the timing files, and I figured they wouldn’t affect anything beyond giving me a warning in 3dDeconvolve.

Here is my 3dDeconvolve command:


3dDeconvolve                                          \
    -input $runs -jobs 24                             \
    -polort A                                         \
    -ortvec MoPar_demean.1D demean                    \
    -ortvec MoPar_derv.1D   derv                      \
    -ortvec FIR.1D          ir                        \
    -censor censor.1D

(FIR.1D is the impulse response matrix.)

My full experiment consists of 6 runs, with 8 silent (baseline) trials, 4 catch trials (requiring a behavioral response, but of no interest in the analysis), and 48 trials of interest per run. We are trying to analyze all of the runs at once in 3dDeconvolve. We have all of the catch trials in one column of the design matrix, and we have one column per trial per TR for each of the trials of interest (similar to the above matrix). Admittedly this is a huge matrix, with 1080 rows (180 volumes per run * 6 runs) and 1009 columns. When I run 3dDeconvolve in this way, it gets hung up on this:


 **** SVD avg err=0.00696691; recomputing ... new avg error=2.11461e-16 **OK** :-)


 **** SVD avg err=0.00693284; recomputing ... new avg err=0.00736581; re-recomputing the hard way ... newer avg err=7.17349e-11 **OK** :-)


 **** SVD avg err=0.00539245; recomputing ... new avg error=2.32423e-16 **OK** :-)


 **** SVD avg err=0.00784686; recomputing ... new avg err=0.00665956; re-recomputing the hard way ... newer avg err=7.17525e-11 **OK** :-)


 **** SVD avg err=0.0069601; recomputing ... new avg error=2.24406e-16 **OK** :-)

It keeps looping through the above output after creating (and writing) the design matrix, but before the voxel loop. I don’t know whether there is a better way to generate trial-wise beta values in this manner, but ideally I’d like to calculate them using the same method as 3dLSS. Thank you for your help.

Dustin

I will have to look into this a bit.

The alternative would be to fill the gaps in the time series
and use -stim_times_IM to generate the series of betas.
Of course you seem to be also using TENT.

This seems really close to simply detrending the data
and extracting the time points of interest…

  • rick

Just to follow up, after a little more thought, this is not quite
similar to detrending and then picking out the time points.
To make that accurate, you would have to censor the time
points of interest, since they would not actually affect the
trend estimate. That would likely make a non-trivial
difference.

To put that into perspective, the regressors of interest
(for TR-locked TENTs with IM) are spike functions, which
behave like censoring with respect to the rest of the model,
meaning the data values at those time points would not
affect the rest of the model.
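
Rick’s point that a unit spike regressor behaves like censoring can be checked numerically: in ordinary least squares, adding an indicator column for time point t drives the residual at t to zero and leaves the remaining coefficients equal to those from a fit with t removed. A small sketch (Python, illustrative only; the data and baseline model are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
y = rng.standard_normal(n)                    # toy "time series"
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])  # intercept + trend

t = 7                                         # time point getting a spike
spike = np.zeros(n)
spike[t] = 1.0
X_spike = np.column_stack([X, spike])

# Fit with the spike regressor included
beta_spike, *_ = np.linalg.lstsq(X_spike, y, rcond=None)

# Fit the baseline-only model with time point t censored (row removed)
keep = np.arange(n) != t
beta_cens, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)

# The baseline betas agree, and the residual at t is exactly zero
resid = y - X_spike @ beta_spike
```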

  • rick

Thanks, Rick. I’ll look into detrending as you suggest.

I have also gone a different route by mean padding the time series to appear continuous. You suggested this here.

As a proof of concept, I want to calculate a sound > no sound contrast using the padded time series. Previously in this thread you showed me how to generate this contrast by specifying my own design matrix using -ortvec and my own -glt contrast. So I want to compare the two results, with the understanding that they should be identical.

  1. I created indices to insert the mean time series in the proper location

1deval -expr 'bool(mod(t+1,4))' -num 240 > pad_times.1D
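
For reference, the 1deval expression above produces a repeating 1 1 1 0 pattern: three acquired volumes, then one padded slot. A quick Python equivalent (illustrative only):

```python
# Equivalent of: 1deval -expr 'bool(mod(t+1,4))' -num 240
# 1 marks an acquired volume, 0 marks a slot to be mean-padded.
pad_times = [int((t + 1) % 4 != 0) for t in range(240)]
```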

  2. I created a mean volume and appended it to the end of the scan.

3dTstat                        \
     -prefix dummyMean         \
     -datum 'short'            \
     pb04.scale+tlrc
3dTcat                         \
     -prefix 1pad.ts           \
     pb04.scale+tlrc           \
     dummyMean+tlrc
3dTcat                         \
     -prefix pb05.dummyPad     \
     1pad.ts+tlrc'[1dcat pad_times.1D]'

  3. Then I pad the motion parameters (and derivatives) and create a censor file to censor the mean-padded volumes. Initially I created a censor file for volumes that exceed 0.5 mm FD, but let’s just say that this person did not move, so that I have a vector of all 0s.

1deval -expr 'bool(mod(t+1,4))' -num 240 > isss_pad.1D
1d_tool.py                                        \
     -infile MoPar.1D                          \
     -censor_fill_parent isss_pad.1D \
     -write MoPar_pad.1D
1d_tool.py                                        \
     -infile censor.1D                          \
     -censor_fill_parent isss_pad.1D \
     -write censor_pad.1D

  4. Then I regress…

3dDeconvolve                                                         \
    -input $runs -jobs 12                                            \
    -polort A                                                        \
    -ortvec MoPar_pad.1D      demean                                 \
    -ortvec MoPar_derv_pad.1D derv                                   \
    -censor censor_pad.1D                                            \
    -num_stimts 2                                                    \
    -stim_times 1 trials.1D 'BLOCK(0.5,1)'       -stim_label 1 trial \
    -stim_times 2 catch_trials.1D 'BLOCK(0.5,1)' -stim_label 2 cat   \
    -gltsym 'SYM: trial' -glt_label 1 sound                          \
    -overwrite -tout -bucket trial.stats                             \
    -x1D xmat.1D -x1D_uncensored uncensor.xmat.1D

I have attached 2 images. The one that looks like auditory cortex was created without padding the time series; there I specified the design matrix using -ortvec, as you suggested earlier in this thread. The image where the whole brain is deactivated shows the same sound > no sound contrast at the same voxelwise threshold (p < 0.01) and the same cluster correction (an arbitrary k = 50, just to clean it up), but using the mean-padding procedure outlined above.

Do you have any recommendations? Thanks for your help!

Dustin

meanpad.jpg

fir.jpg

Hi Dustin,

The step where you create pb05.dummyPad does not look
correct, since pad_times is just 0 and 1 values, rather
than actual indices. The time series data should be
just alternating values of 0 and something else.

Also, definitely attach the dummy volume at index 0,
not at the end. That 0 index will match the zero-
padded index list created next…

Consider reversing steps 2 and 3, working with 1D files
first. The messy part that I did not cover in the old
thread was zero-padding the indices 1…NTsmall. But
that can be done via censor_fill_parent. So create
an initial index list of length 180 (I think):

1deval -num 180 -expr t+1 > index_list.1D

Then zero-pad this list:

1d_tool.py -infile index_list.1D -censor_fill_parent pad_times.1D \
     -write index_list_pad.1D

Then put dummyMean at index 0 of 1pad.ts:

3dTcat -prefix 1pad.ts dummyMean+tlrc pb04.scale+tlrc

Then create the padded time series:

3dTcat -prefix pb05.dummyPad 1pad.ts+tlrc'[1dcat index_list_pad.1D]'

And see how that works. Look at the time series data
between the 2 cases. The shapes had better match,
except that the padded one has the extra zeros.
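
The index trick above can be checked on a toy series (Python sketch, not from the thread; the values are made up): placing the dummy mean at index 0 means every 0 in the padded index list selects the mean, while the nonzero indices reproduce the acquired volumes in order.

```python
import numpy as np

# Toy stand-in for the catenated dataset: index 0 holds the dummy mean,
# indices 1..N hold the acquired volumes.
n_small = 6
data = np.arange(10.0, 10.0 + n_small)          # stand-in acquired values
mean_val = data.mean()
source = np.concatenate([[mean_val], data])     # dummy mean at index 0

# Zero-padded index list: 1..N with a 0 in every 4th slot (the 1 1 1 0 pattern)
idx, k = [], 1
for t in range(8):                              # 6 acquired + 2 padded slots
    if (t + 1) % 4 != 0:
        idx.append(k)                           # next acquired volume
        k += 1
    else:
        idx.append(0)                           # padded slot -> dummy mean

padded = source[np.array(idx)]
```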

  • rick

Hi Rick,

Thanks for your response. I generated the padded time series exactly how you suggested and it looks right. However, for my 1st level analysis, the stats still look weird.

Here is my 3dDeconvolve command:


3dDeconvolve                                                        \
    -input $runs -jobs 24                                           \
    -polort A                                                       \
    -ortvec MoPar_demean_pad.1D demean                              \
    -ortvec MoPar_derv_pad.1D   derv                                \
    -censor censor_pad.1D                                           \
    -num_stimts 2                                                   \
    -stim_times 1 trials.1D 'TENT(2,6,3)'       -stim_label 1 trial \
    -stim_times 2 catch_trials.1D 'TENT(2,6,3)' -stim_label 2 catch \
    -gltsym 'SYM: trial[1..2]' -glt_label 1 sound                   \
    -overwrite -bucket trial.stats                                  \
    -x1D xmat.1D -x1D_uncensored uncensor.xmat.1D

I have checked that the motion parameters are padded properly; they seem to be. The censor file includes all of the padded volumes so that they are not included in the analysis; those seem accurate as well. I decided that TENT would be a better model so that I can choose which TRs to include in my t-test. As a reminder, the stimulus is presented during 2 seconds of silence, followed by three 2-second TRs.

The stats file has very pronounced striping throughout the brain. This was not the case in my previous model, where I did not mean-pad and just inserted a design matrix using -ortvec. I have attached 2 images. This is the sound > no sound contrast, thresholded at a ridiculously stringent value (p < 3e-39). Please refer to my previous post for an image that looks correct given our hypotheses.

Untitled.jpg

Untitled2.jpg