
mxSE standard errors for nested models (AE, etc.).

mirusem (Joined: 11/19/2018 - 19:32)
mxSE standard errors for nested models (AE, etc.).

I noticed that the standard errors computed for, say, the AE model come out identical for the A and the E estimates. Is this expected? I just want to be sure. I know SEs in general aren't recommended for constructing CIs, but I'm using them for a different analysis, so I want to confirm this is the expected behaviour. I noticed this isn't the case for a full model (ACE, etc.).

Also, would it be fair to say that the SEs of parameters constrained to 0 are themselves 0? That's what it looks like when calculating the SE of C in the AE model, for example. As a related case (since this isn't technically part of the fitted model): if the ACE model's C estimate is positive (in the direct-variance approach), is it appropriate to say that the D estimate is 0 (logically, although D = -C in another line of thought) and that the D SE is 0?

Thanks as always!

AdminRobK (Joined: 01/24/2014 - 12:15)
SE is 0

Yes. If a parameter is fixed under a given model, then its estimate necessarily has a repeated-sampling variance of zero, and therefore a standard error of zero. That's because, over repeated sampling, the fixed parameter will always be "estimated" as whatever value to which it is fixed.

mirusem (Joined: 11/19/2018 - 19:32)
Got it

Got it--makes sense, that's definitely helpful. Appreciate it as always!

AdminNeale (Joined: 03/01/2013 - 14:09)
Not expected equality

I would not expect the SEs to be the same for E and A. Usually those for E are quite a bit smaller than those for A. Equality could happen, depending on things like the relative Ns for MZs and DZs, but I suspect there's some error in the code, especially if they match closely. So, how close are they, and can you share your code? mxGenerateData() is one way to avoid sharing confidential data while retaining enough of its properties that the issue remains visible.
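For instance, a minimal sketch of that approach might look like the following (assuming fitAE is your fitted multi-group twin model; the object name is just illustrative):

library(OpenMx)
# Simulate data with statistical properties similar to the original,
# so the confidential raw data never need to be posted.
simData <- mxGenerateData(fitAE)
# For a multi-group model this returns a list of data frames (e.g., MZ and DZ groups),
# which can be saved and shared along with the model code:
# saveRDS(simData, "simulated_twin_data.rds")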

mirusem (Joined: 11/19/2018 - 19:32)
General

Hi AdminNeale,

Sorry for the delay, I just saw this (and the other posts; I really appreciate everyone's responses). Is this also the case for nested submodels such as AE? When I run it for ACE specifically, the E SEs do usually come out lower than the A SEs, as you describe.

But for AE, the SEs come out the same for A and E (with C, being fixed, having an SE of 0). This is specifically when running mxSE(top.A_std, AE, run=TRUE) (where, in this instance, AE is my model, and top.A_std/top.C_std/top.E_std are all examined in the same way, as sketched below).
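That is, a minimal sketch of the three calls I mean (assuming AE is the fitted model and the standardized algebras are named top.A_std, top.C_std, and top.E_std; the names mirror my own model and may differ in yours):

seA = mxSE(top.A_std, AE, run=TRUE)
seC = mxSE(top.C_std, AE, run=TRUE)  # C is fixed, so this comes back as 0
seE = mxSE(top.E_std, AE, run=TRUE)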

mirusem (Joined: 11/19/2018 - 19:32)
Checked with genetics workshop example

Just getting back to this. I tested the same approach with the example/data from the genetics workshop, using the AE model from umx: https://ibg.colorado.edu/cdrom2021/Day02-hmaes/practical1/.
So that would be that example, followed by:
SE_A = mxSE(top.A_std, AEmodel_1, run=TRUE)
SE_E = mxSE(top.E_std, AEmodel_1, run=TRUE)
and so on. In doing that, I get the same pattern (the A and E SEs are the same).

AdminNeale (Joined: 03/01/2013 - 14:09)
A=1-E

Yes, this is likely real: in an AE model the standardized A and E are the only two variance components, so the standardized estimates satisfy E = 1 - A. The SE of 1 - A is the same as the SE of A, so the two SEs should indeed be equal.
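A rough way to see this (purely illustrative numbers, not taken from the thread): if E = 1 - A for the standardized components, the delta method gives Var(E) = Var(A), so the SEs must match.

var_A = 0.02                   # hypothetical sampling variance of standardized A
se_A  = sqrt(var_A)
se_E  = sqrt((-1)^2 * var_A)   # SE of E = 1 - A via the delta method
all.equal(se_A, se_E)          # TRUE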
