Heterogeneity of T1 T2 variance

KarenBurton (joined 09/13/2016)
Heterogeneity of T1 T2 variance

Hello

Does anyone have an ACE script for enabling the variance for twins 1 and 2 to be estimated separately (and separately for MZ and DZ twins) to deal with heterogeneity in their variance?

(or even just the adjustments that need to be made to the standard script?)

Thanks so very much

Karen

AdminRobK (joined 01/24/2014)
submodel of saturated model?

It sounds like what you want is some nested submodel of the "saturated" model. If I understand you correctly, you'd be estimating 7 parameters from the data: 1 phenotypic mean, and 3 unique elements of the covariance matrix for each zygosity group. Is that right?
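If it helps, here is a rough sketch of what that submodel might look like in OpenMx. The toy simulated data and the function and label names (`mkPairs`, `satGroup`, `MZ_v1`, etc.) are mine, not from any standard script; the means are equated via a shared label so the model has the 7 free parameters described above.

```r
library(OpenMx)   # sketch only, not a tested production script

## toy simulated pairs, just so the script runs end to end
set.seed(1)
mkPairs <- function(r, n = 500) {
  s <- rnorm(n)
  data.frame(t1 = sqrt(r) * s + sqrt(1 - r) * rnorm(n),
             t2 = sqrt(r) * s + sqrt(1 - r) * rnorm(n))
}
mzData <- mkPairs(0.8); dzData <- mkPairs(0.4)
selVars <- c("t1", "t2")

## one zygosity group: a single common mean (shared label), plus a free
## 2x2 covariance matrix, i.e. separate T1/T2 variances and a covariance
satGroup <- function(name, data) {
  mxModel(name,
    mxMatrix("Full", 1, 2, free = TRUE, values = 0,
             labels = c("mean", "mean"), name = "expMean"),
    mxMatrix("Symm", 2, 2, free = TRUE, values = c(1, .3, 1),
             labels = paste0(name, c("_v1", "_c12", "_v2")), name = "expCov"),
    mxData(data, type = "raw"),
    mxExpectationNormal(covariance = "expCov", means = "expMean",
                        dimnames = selVars),
    mxFitFunctionML())
}

sat <- mxModel("sat", satGroup("MZ", mzData), satGroup("DZ", dzData),
               mxFitFunctionMultigroup(c("MZ", "DZ")))
satFit <- mxRun(sat)

## submodel: equate the T1 and T2 variances within each zygosity group,
## then compare fit with a likelihood-ratio test
eq <- omxSetParameters(sat, labels = c("MZ_v2", "DZ_v2"),
                       newlabels = c("MZ_v1", "DZ_v1"), name = "satEq")
eqFit <- mxRun(eq)
mxCompare(satFit, eqFit)
```

A significant chi-square in the `mxCompare()` output would indicate that equating the T1 and T2 variances worsens fit.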

Unfortunately, the ACE biometric variance components won't be identified without the constraint that the phenotypic distribution is the same for T1 and T2, for both zygosity groups. So, I don't know how informative the model I described would be. In your dataset, is it arbitrary which twin in a pair is T1 versus T2? If so, I wouldn't worry about heterogeneous variances for T1 relative to T2 (except perhaps if it's indicative of a data-handling error).

I'm not sure how I would interpret markedly different variances for MZs compared to DZs, though.

KarenBurton (joined 09/13/2016)
T1 T2 heterogeneity: nested submodel of ACE model

I was under the impression that it is important to test whether the variance of twins 1 and 2 is homogeneous in order to justify using a single variance in the model, and that heterogeneity constitutes a violation of the model's assumptions. When such violations occur during the assumption testing, how do people usually deal with this?

While there is no arbitrary assignment of twin 1 and 2 in our dataset and no indication of data-handling, I still thought that if their variance differed there was a problem with the assumptions of the model?

AdminRobK (joined 01/24/2014)
I was under the impression
> I was under the impression that it is important to test whether the variance of twins 1 and 2 is homogeneous in order to justify using a single variance in the model, and that heterogeneity constitutes a violation of the model's assumptions. When such violations occur during the assumption testing, how do people usually deal with this?

That's how this stuff is typically taught. But in practice, it's usually arbitrary which twin is T1 or T2 in a given pair. So, if it doesn't matter who is T1 and who is T2, then it doesn't matter that the T1s have a different variance from the T2s! You could just randomly swap the "T1" and "T2" labels within each pair until you get variances that are more alike, and it wouldn't change anything substantive.
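A tiny base-R illustration of that point (the numbers and variable names are toy examples of my own): relabelling who is "twin 1" shifts the per-column variances, but leaves the pair-level quantities unchanged.

```r
## Toy pairs: swap the T1/T2 labels within some pairs and compare.
t1 <- c(1.0, 2.5, 0.3, 4.1, 2.2)
t2 <- c(1.4, 2.0, 0.9, 3.5, 1.1)
swap <- c(TRUE, FALSE, TRUE, TRUE, FALSE)   # an arbitrary relabelling
s1 <- ifelse(swap, t2, t1)
s2 <- ifelse(swap, t1, t2)

var(t1); var(s1)                 # the "T1 variance" shifts with the labels
all.equal(var(c(t1, t2)), var(c(s1, s2)))   # pooled variance: unchanged
all.equal(sum(t1 * t2), sum(s1 * s2))       # cross-twin products: unchanged
```

The column-wise variances are artifacts of the arbitrary labelling; the pooled variance and the within-pair relationship, which is what the biometric model uses, are not.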

> While there is no arbitrary assignment of twin 1 and 2 in our dataset and no indication of data-handling

But, it sounds as though, in your dataset, it DOES matter who is T1 vs. T2 in a pair. Do I understand correctly? And if so, what distinguishes T1 from T2? Also, did you mean "data-handling errors"? I'm talking about things like using -999 as a "user-missing value" in SPSS, and then forgetting to change those values to NA in R.

neale (joined 07/31/2009)
Different variances

In MZ and DZ same sex twins, we typically do not have any systematic order as to who is twin 1 and who is twin 2. Therefore it is not recommended to have different variances for twin 1 and twin 2. Indeed, such a specification would lose statistical power.

Some biometrical models make different predictions about MZ and DZ variances, particularly those for sibling interaction. Cooperation between the members of a twin pair would lead to var(MZ) > var(DZ), whereas competition would yield var(MZ) < var(DZ), assuming that there is some additive genetic variance. These models are described in the Neale & Cardon (1992) book, which could be cited, and can be read about a bit more easily online in this pdf.
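For reference, a sketch of the algebra behind that prediction, following the reciprocal sibling-interaction model with interaction parameter $s$ and the usual $a^2$, $c^2$, $e^2$ components (my notation; see Neale & Cardon, 1992, for the full treatment):

```latex
% Reciprocal interaction: P_1 = s P_2 + A_1 + C_1 + E_1 (and symmetrically for P_2).
% Writing V = a^2 + c^2 + e^2 and W = \bar{r} a^2 + c^2, with \bar{r} = 1 for MZ
% and \bar{r} = 1/2 for DZ pairs, solving the two equations gives
\operatorname{Var}(P) = \frac{(1 + s^2)\, V + 2 s\, W}{(1 - s^2)^2},
\qquad
\operatorname{Var}_{MZ}(P) - \operatorname{Var}_{DZ}(P) = \frac{s\, a^2}{(1 - s^2)^2}.
```

So cooperation ($s > 0$) inflates the MZ variance relative to the DZ variance, competition ($s < 0$) does the reverse, and the difference vanishes when there is no additive genetic variance ($a^2 = 0$).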

On the whole, it seems better to have a model for WHY variances may differ than to simply allow them to differ. Of course, about 5% of analyses will show significant variance differences at the 5% alpha (Type I error) level even when the variances are truly equal. So I wouldn't worry if a test of equal variances occasionally fails.
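A quick base-R sanity check of that last point (a simulation of my own; note that real twin data are paired, so in practice one would use a paired-variances test rather than the independent-samples F test used here for illustration):

```r
## Under the null of equal variances, a 5%-level test rejects in
## about 5% of datasets purely by chance.
set.seed(42)
pvals <- replicate(2000, var.test(rnorm(100), rnorm(100))$p.value)
mean(pvals < 0.05)   # close to 0.05
```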