Dear AFNI experts,
I performed a group analysis, and the effect of factor A was significant in many voxels (30 peaks, found with automated search algorithms).
The group analysis was performed with 3dMVM after an individual-level GLM on 40 subjects; activity was estimated with the TENTzero function (10 non-zero betas over 12 seconds).
Given the many results, the regions of interest modulated by A express many 'types' of time courses (TENT activity).
My question is: to make the results more interpretable, is it possible to "cluster" the time courses (the TENT betas over time in all of the modulated voxels)? This would lead to N (number of clusters) maps of modulation-over-time, which would be more readable (for example, cluster 1 has increased activity in A1 in the first part of the trial; cluster 2 has increased activity in A1 in the second part of the trial and in A2 in the first part of the trial; and so on).
If this is possible, how would you suggest doing it?
I hope the question is clear.
Simone, do you mean that you want to extract the regression coefficients associated with the estimated HDR curve? If so, you can use something like
3dbucket -prefix myHDR "myInput+tlrc[a..b(c)]"
where 'a' and 'b' are the beginning and end sub-brick indices, while 'c' is the gap between two consecutive regression coefficients (e.g., 2 or 3). Then you can load the file myHDR+tlrc into AFNI and use the 'graph' button to visualize the HDR shape.
Thanks for the answer.
This is useful, but my question is slightly different.
I have already extracted and visualized the estimated HDR shape in many regions of interest (I have around 30 ROIs if I consider significant peaks).
The problem is: these results are hard to interpret, globally.
However, I noticed that some regions (e.g. left inferior frontal gyrus and left superior temporal gyrus, both language-related) have a similar shape and a similar modulation across levels of the factor A (for example, they are modulated in the first phase of the trial).
So my question: is it possible to extract 'representative' modulations among the significant regions? For example, using a clustering / pattern-recognition approach, it might be possible to extract one representative HDR shape for language-related regions, one representative HDR shape for attentional regions, and so on.
This would break down the ~3000 voxels / 30+ peaks into 4 or 5 modulation patterns, each associated with a specific HDR shape and a community of voxels.
However, I'm not sure how this may be performed.
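To make the idea concrete, here is a rough sketch of what I have in mind. Everything is illustrative: a plain-NumPy k-means stands in for whatever clustering method would actually be appropriate, and the synthetic data stand in for the real TENT betas, which I would first export to a text matrix (one voxel per row, one beta per column), e.g. with 3dmaskdump.

```python
# Rough sketch: cluster per-voxel HDR time courses with plain-NumPy k-means.
# Assumes the 10 TENT betas of each significant voxel have been exported to
# a matrix X (one voxel per row, one beta per column).
import numpy as np

def kmeans(X, k, n_iter=100):
    """Lloyd's algorithm with greedy farthest-point initialization.

    Returns (labels, centroids); each centroid is one 'representative'
    HDR shape for its community of voxels.
    """
    # deterministic farthest-point init: start from row 0, then repeatedly
    # pick the row farthest from all centroids chosen so far
    idx = [0]
    for _ in range(1, k):
        d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
        idx.append(int(d.min(axis=1).argmax()))
    centroids = X[idx].copy()
    for _ in range(n_iter):
        # assign each voxel to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean time course of its cluster
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic stand-in for the real betas: 50 voxels with an early peak and
# 50 with a late peak, over 10 TENT time points.
rng = np.random.default_rng(1)
t = np.arange(10)
early = np.exp(-((t - 2) ** 2) / 2.0)
late = np.exp(-((t - 7) ** 2) / 2.0)
X = np.vstack([early + 0.05 * rng.standard_normal((50, 10)),
               late + 0.05 * rng.standard_normal((50, 10))])
labels, centroids = kmeans(X, k=2)
```

Each centroid would then be one 'representative' HDR shape, and the labels (mapped back to voxel coordinates) would define the community of voxels sharing it.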
I hope it is clearer now.
Simone, unfortunately I don’t fully understand what you’re trying to achieve, and I hope someone else has a solution.