We have an experiment where the subjects rate two different stimuli. Let’s say conditions A and B, where each has 3 onsets of duration 10 s, and each onset has a pleasantness rating connected to it.
What we have done first is to run a non-modulated GLM where we just have the onsets and durations. This gives an X.stim.no.censor file where A and B each have 3 spikes/onsets, and they have a value of 5 (I guess an arbitrary value, since we don’t limit the stim file to values between 0 and 1?).
Now we want to see which regions correlate with the ratings, i.e. we want to apply amplitude modulation to each onset. We use [-stim_times_AM2 k tname Rmodel] in 3dDeconvolve. This is supposed to create 2 models/regressors:
[-stim_times_AM2 k tname Rmodel]
Similar, but generates 2 response models: one with the mean
amplitude and one with the differences from the mean.
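For reference, the timing files we feed to -stim_times_AM2 look something like this, with each onset “married” to its rating via the time*amplitude syntax (the onset times here are just placeholders):

    # condA_AM.1D : one row per run, each onset married to its rating
    10*2 60*4 110*1

    3dDeconvolve ...                                                   \
        -stim_times_AM2 1 condA_AM.1D 'BLOCK(10)' -stim_label 1 condA  \
        -stim_times_AM2 2 condB_AM.1D 'BLOCK(10)' -stim_label 2 condB  \
        ...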
This gives us, as expected, one regressor where each onset of duration 10 has the onset rating minus the mean rating as its value.
The other regressor is exactly the same as in the non-modulated stim file (i.e. all onsets have a value of 5). I thought that the second regressor should have the average rating as its value?
The problem we run into is that we cannot compare conditions A and B, since we only get the difference from the mean. If A is rated 9/10 and B is rated 2/10, the regressors might still look the same, since we only get the difference from the mean. We are also interested in regions that differ in pleasantness across conditions, not only in differences within each condition.
The unmodulated case peaks at 5 presumably because of using
BLOCK(10) as the basis function, rather than BLOCK(10,1),
for example. While BLOCK is designed to have a peak of 1,
convolving it with a 10 second box car would indeed sum up
to approximately 5. Compare BLOCK(10) and BLOCK(10,1).
The functions are identical, except for this scaling factor.
But there is no effect of the scaling of the beta weights
on the statistics of significance at the group level. If
all subject betas are similarly scaled, the statistics
should be unchanged (e.g. for the t-stat, that scales both
the numerator and the denominator equally).
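If you want to see this directly, here is a quick sketch (times.1D being any small timing file, e.g. one row of onsets) that builds both regressors without any data and plots them:

    # generate the two regressors on dummy data: 100 TRs of 2 s
    3dDeconvolve -nodata 100 2.0 -polort -1                       \
        -num_stimts 2                                             \
        -stim_times 1 times.1D 'BLOCK(10)'   -stim_label 1 B10    \
        -stim_times 2 times.1D 'BLOCK(10,1)' -stim_label 2 B10_1  \
        -x1D compare.xmat.1D -x1D_stop

    # same shape, but the peaks differ by the scaling factor (~5 vs 1)
    1dplot compare.xmat.1D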
Regarding the modulators, this seems to refer to Gang’s
favorite topic, centering.
Indeed, the regressor for the mean effect should be
identical between the original (non-modulated) and the
modulated cases. The modulated term is where individual
responses are scaled by the difference from the mean.
By default, that mean is from the modulators from that
one timing file. To equate the modulator centers across
conditions (and subjects? this question bleeds into the
group analysis), you should de-mean the modulators before
running 3dDeconvolve, and then run it with the environment
variable AFNI_3dDeconvolve_rawAM2 set to YES, e.g.
3dDeconvolve -DAFNI_3dDeconvolve_rawAM2=YES …
But then it will be up to you to manage the means across
conditions and subjects. If the means are removed on a
per-subject basis, then it might be good to include the
subject mean as a group-level covariate. Gang might have
something to say about this.
First off, we want to find the regions (where BOLD is modulated by / follows the ratings) where the modulated CondA and CondB are different. So at the very least we want to center the modulators across conditions in each subject. In subject 1 we have the modulators
A: 2,4,1 and B: 8,7,9. AVG = 5.17.
New Centering:
A: -3.17, -1.17, -4.17 B: 2.83, 1.83, 3.83
Would this be how to do it? Then run
3dDeconvolve -DAFNI_3dDeconvolve_rawAM2=YES … as part of the command.
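Concretely, I imagine the timing files and command would then look something like this (onset times made up):

    # condA_dm.1D : onsets married to the ratings de-meaned by the
    #               common mean of 5.17
    10*-3.17 60*-1.17 110*-4.17

    # condB_dm.1D
    30*2.83 80*1.83 130*3.83

    3dDeconvolve -DAFNI_3dDeconvolve_rawAM2=YES ...                    \
        -stim_times_AM2 1 condA_dm.1D 'BLOCK(10)' -stim_label 1 condA  \
        -stim_times_AM2 2 condB_dm.1D 'BLOCK(10)' -stim_label 2 condB  \
        ...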
Assuming that is correct: would it be the same way if you have 4 conditions that are modulated, which we do?
Also, when it comes to across-subject centering, how would that work? Will there be one large mean across all subjects and conditions? Is this a recommended approach? Or was your point that it is not necessary if you add the subject mean as a covariate?
It seems a little difficult comparing the modulation between
conditions A and B when the means differ so much. Your new
centering uses a common mean between the conditions, rather
than each condition getting its own mean.
Note that the mean applied to each modulator actually affects
the beta of the mean response, rather than the beta of the
modulated one. Extending to group analysis, centering would
still not affect the group results of the modulation betas,
but it would affect the group results of the unmodulated ones.
At any rate, it is hard to say what is appropriate here.
And it certainly gets messy with more modulated conditions.
Maybe Gang will have something more useful to add…
The means are not really that different. I just gave an example.
1) Can you re-center across conditions in a “normal” un-modulated analysis approach (e.g. -stim_times)?
2) We previously established that the mean response from the modulated approach gives exactly the same betas/results as the normal non-modulated -stim_times approach, i.e. Condition A vs Condition B (-stim_times) would be the same as Condition A vs Condition B (mean response from AM2). If A and B (mean response from AM2) were re-centered, this would no longer be true? Since you say a re-centering would affect the betas of Cond A and Cond B (mean response). When would this be a good idea?
3) Since you can compare Cond A and Cond B with the classic -stim_times approach, why would you want to re-center? And why with AM2, since the modulated regressors are not affected by it?
By centering across conditions without modulation, do you
mean to offset the entire mean of a regressor, or the mean of
the individual responses? If it is the former, that would not
affect the betas of interest, but would just affect the constant
polort terms. If it is the latter, then it would either change
the shape of the responses, or it would change the overall
expected betas, which could be evaluated at the group level.
So either way, at the single subject level, I would only expect
to apply such thinking via amplitude modulation.
You can test this, but yes, re-centering the means via AM2
should indeed affect the main A-B contrast.
The “why” aspect, included in part 3), seems better to leave
to others to respond to.
Let me try to understand the situation. You have two conditions, A and B, each of which has 3 trials. Each trial lasts for 10 seconds. In addition, you have behavioral data (pleasantness) that are associated with each trial. Is this accurate?
First of all, the number of trials per condition seems too few to achieve a robust estimate for the pleasantness effect.
The modulation is modeled with two regressors per condition. By default, 3dDeconvolve automatically removes the mean for each condition when constructing the regressors. Specifically, the first regressor corresponds to the condition effect (let’s call it b0) when the behavioral data is held at its mean, while the second one is associated with the pleasantness effect (let’s call it b1) for that condition.
It makes sense to compare b1 between the two conditions if you want to. It also makes sense to compare b0 between the two conditions, but the interpretation is that the difference is associated with the pleasantness being held at the respective average under each condition, which may or may not be what you want.
Let me stop here and see if we are on the same page.
Well, almost. This was just a simplified example for me to understand. The actual scenario is this:
4 Conditions:
A = Fast touch on Arm
B = Fast touch on Palm
C = Slow touch on Arm
D = Slow touch on Palm
Each condition consists of more than 5 onsets (don’t remember exactly).
The researchers are doing an un-modulated analysis (just onset:duration) of this, but since the subjects also rate the pleasantness of each onset, they also want to investigate which areas of the brain correlate with the rating, e.g.: are some areas activated more when the subject finds something more pleasant?
Is comparing the betas of e.g. A and C in a normal un-modulated design the same thing as comparing b0 from A and C using the modulated stim files? Or do they differ, since the pleasantness is not controlled for in the normal/un-modulated scenario? Like you wrote: “the interpretation is that the difference is associated with the pleasantness being held at the respective average under each condition”.
What would be the interpretation of comparing b0 across A and C when centering across A and C instead of centering within the conditions? If the centering matters, the variance sucked up by b1 from A and C would differ (assuming I understood point 1 correctly).
In the modulated case: what is the interpretation of comparing b1 across e.g. conditions A and C when it comes to centering? Since they by default are de-meaned within each condition, one would have a hard time finding anything (if the ratings differ equally but from different means), right? Does the interpretation change when centering across conditions?
So, to summarize: they want to find areas that correlate with the rating (i.e. what areas activate more, or less, when rating something high, or low) and then see if any of these areas (or others) also differ between e.g. locations of touch.
One thing needs clarification: Is the duration for each trial correlated (or confounded) with the pleasantness rating? If so, modulation analysis would be a little bit shaky.
Is comparing the betas of e.g. A and C in a normal un-modulated design the same thing as comparing b0 from A and C using
the modulated stim files? Or do they differ since the pleasantness is not controlled for in the normal/un-modulated scenario?
They would not be the same, but usually they should not differ too much either, unless the ratings are very skewed.
What would be the interpretation of comparing b0 across A and C when centering across A and C instead of centering within
the conditions? If the centering matters, the variance sucked up by b1 from A and C would differ (assuming I understood point 1 correctly).
Centering would only have impact on the interpretation of the first effect estimate (b0), and would not have any impact on the rating effect (b1). When comparing b0 across A and C when centering across A and C, you get the difference of b0 between A and C when the pleasantness is held at the common mean of the ratings between A and C.
In the modulated case: what is the interpretation of comparing b1 across e.g. conditions A and C when it comes to
centering? Since they by default are de-meaned within each condition, one would have a hard time finding anything
(if the ratings differ equally but from different means), right? Does the interpretation change when centering across conditions?
Each condition (A-D) has 3 onsets per run. They run 2 runs, so each condition has a total of 6 onsets. Each onset has a fixed duration of 10 s (not correlated to pleasantness).
Okay! Thanks.
Not sure I understand why b1 is not affected by a different centering. The b1 regressor is the rating minus the mean, right? If we have two conditions with 4 ratings each:
ratings1: 4 4 6 6 (mean=5)
ratings2: 2 2 4 4 (mean=3)
In the case of the default (within) centering, the b1 regressors would be (for rating - mean):
ratings1: 4-5 4-5 6-5 6-5 = -1 -1 1 1 (i.e. 4 bumps with amplitude 1)
ratings2: 2-3 2-3 4-3 4-3 = -1 -1 1 1 (i.e. 4 bumps with amplitude 1)
These two would be identical, even if rating 1 was overall more pleasant.
In the case of across-condition centering, the b1 regressors would be (for rating - mean):
Across-condition mean = (5 + 3) / 2 = 4
ratings1: 4-4 4-4 6-4 6-4 = 0 0 2 2 (i.e. 2 bumps with amplitude 2)
ratings2: 2-4 2-4 4-4 4-4 = -2 -2 0 0 (i.e. 2 bumps with amplitude -2)
Here you can separate them. Where do I misunderstand this? Thanks!
I’m of course assuming you are right. So if centering does not matter for b1, then when contrasting A and C you would only find differences if the ratings did not differ by the same amount from the within-condition mean?
So, if they want to find voxels that correlate with rating, they should simply compare b1 of e.g. A and C (they found nothing doing this, btw)?
What would be the difference / preferred way: doing this, or using the ratings as covariates in the group analysis? Thanks a bunch, Gang!
if they want to find voxels that correlate with rating, they should simply compare b1 of e.g. A and C (they found nothing doing this, btw)?
If you want to see if the correlation with pleasantness is different between the two conditions, you can compare the b1 for the two conditions.
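For example, with a paired test on the modulation betas (dataset names are hypothetical; with -stim_times_AM2, the modulation beta is the label#1_Coef sub-brick of the stats dataset):

    # paired t-test on the b1 (modulation) betas of A vs C,
    # one entry per subject
    3dttest++ -paired -prefix b1_AvsC           \
        -setA stats.s1+tlrc'[condA#1_Coef]'     \
              stats.s2+tlrc'[condA#1_Coef]'     \
        -setB stats.s1+tlrc'[condC#1_Coef]'     \
              stats.s2+tlrc'[condC#1_Coef]'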
What would be the difference / preferred way: doing this, or using the ratings as covariates in the group analysis?
The modulation analysis at the individual level is different from the group analysis with a covariate, and they address different questions. I’m not so sure what effect you are trying to compare at the group level: b0 or b1?
I’m not sure exactly what they want to do. They have these ratings and want to incorporate them in some way. Since the ratings differ, they expect some of the brain to correlate with rating. So comparing b1 makes sense, but running a group analysis with 3dttest++ comparing the b1 maps for e.g. arm and palm gives no significant voxels, even though the rating means are different for these areas of touch… Thanks!
Another question that is related to this project. They also took some blood samples during a similar experiment and want to incorporate those results. Let’s say they have measured some blood parameter “BLOOD” when being touched by two different categories of people (P1 and P2). The end goal is to see where the BOLD signal correlates with BLOOD, whether it has an effect, and whether it is different when touched by P1 vs P2. They want to use BLOOD as a covariate. The design is:
4 Conditions:
A = P1 touch on Arm
B = P1 touch on Palm
C = P2 touch on Arm
D = P2 touch on Palm
They only have two BLOOD values per person (one for being touched by P1 and one for P2): B_P1 and B_P2. They don’t have BLOOD for the individual areas of touch (arm/palm). So when using B_P1 and B_P2 as covariates, how should one proceed? Add the same value to arm and palm, or collapse them by adding the betas for A and B, and for C and D, and compare those with the covariates? Or use all of it in 3dMVM to better account for the variance due to location of touch. How would one do that in 3dMVM?
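My rough attempt, with made-up file names, and the BLOOD value just repeated over arm/palm:

    3dMVM -prefix Blood_MVM -jobs 2               \
          -bsVars 1                               \
          -wsVars 'Toucher*Location'              \
          -qVars  'BLOOD'                         \
          -dataTable                              \
          Subj Toucher Location BLOOD InputFile   \
          s1   P1      Arm      1.2   s1_A+tlrc   \
          s1   P1      Palm     1.2   s1_B+tlrc   \
          s1   P2      Arm      0.8   s1_C+tlrc   \
          s1   P2      Palm     0.8   s1_D+tlrc   \
          s2   P1      Arm      1.5   s2_A+tlrc   \
          ...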
Something like that? The problem is that the B_P1 value for arm and palm is the same. The value only reflects being touched by either P1 or P2. So should arm/palm be collapsed in some way?
Thank you so much, and sorry, I’m not super used to 3dMVM!
Since the ratings differ, they expect some of the brain to correlate with rating. So comparing b1 makes
sense but running a group analysis with 3dttest++ comparing the b1 maps for e.g. arm and palm gives no
significant voxels, even though the rating means are different for these areas of touch
A couple of possible reasons for the situation: 1) the ratings are discrete values, and might be too coarse to look for strong correlation; 2) power could be too weak: failure to survive the significance thresholding does not necessarily mean the nonexistence of the effect.
They only have two BLOOD values per person (one for being touched by P1 and one for P2): B_P1 and B_P2. They
don’t have BLOOD for the individual areas of touch (arm/palm).
If you have missing values of BLOOD, then there is no effective way to make them up. I would just forget about using BLOOD as a covariate, and perform a simple 2 x 2 ANOVA.
Well, the values are not missing. Blood samples were simply not matched with one blood sample per condition, but one sample per person touching them (just three samples over time for each toucher).
I guess you could collapse the arm/palm regressors to just being touched, and compare being touched by P1 and P2, using the two BLOOD samples BLOOD_P1 and BLOOD_P2 as covariates?
Blood samples were simply not matched with one blood sample per condition but one sample per person touching them
So BLOOD varies between P1 and P2, but remains the same between arm and palm? If so, you have to use 3dLME then by repeating the same BLOOD between arm and palm.
Also, does the average BLOOD across subjects differ substantially between P1 and P2? If so, you may have to center the BLOOD values for P1 and P2 separately before you put them into 3dLME.
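Something along these lines (input names are hypothetical; the BLOOD values shown are already centered within each of P1 and P2, and repeated between arm and palm):

    3dLME -prefix Blood_LME -jobs 2               \
          -model  'Toucher*Location+BLOOD'        \
          -qVars  'BLOOD'                         \
          -ranEff '~1'                            \
          -dataTable                              \
          Subj Toucher Location BLOOD  InputFile  \
          s1   P1      Arm       0.31  s1_A+tlrc  \
          s1   P1      Palm      0.31  s1_B+tlrc  \
          s1   P2      Arm      -0.52  s1_C+tlrc  \
          s1   P2      Palm     -0.52  s1_D+tlrc  \
          ...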