Hi all,

I'm running a nuclear twin family design model (an SEM model that contains a number of non-linear constraints). Two issues:

(1) The point estimates of the paths are fine - they agree with the values I simulated - but the standard errors are WAY off; e.g., the standard error for a path coefficient that cannot be dropped (e) is 30.3 while the point estimate is -0.54! Here are the other coefficients from summary():

summary(ASDE.Fit)

             name          matrix row col    Estimate  Std.Error
1      AddGenPath         MZNTF.a   1   1  0.63320641  1.9272561
2         DomPath         MZNTF.d   1   1 -0.31596212  0.4459057
3         EnvPath         MZNTF.e   1   1 -0.54266100 30.3050451
4        AMCopath        MZNTF.mu   1   1  0.21215070  0.8786654
5         SibPath         MZNTF.s   1   1  0.38618256  1.2540086
6         VarPhen       MZNTF.Vp1   1   1  0.98593865  0.1272815
7      CovPhenGen    MZNTF.delta1   1   1  0.69880636  0.3767022
8 LatentVarAddGen        MZNTF.q1   1   1  1.10359962  2.9363220
9            mean MZNTF.expMeanMz   1 Tw1 -0.01938465  2.4590259

(2) Less critically, why do the observed statistics change when I drop a parameter? According to OpenMx, the difference in degrees of freedom between two models that differ by *a single* fixed parameter is 4.

sessionInfo()

R version 2.10.1 (2009-12-14)
x86_64-apple-darwin9.8.0

locale:
[1] C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] OpenMx_0.2.5-1050

loaded via a namespace (and not attached):
[1] tools_2.10.1

I've attached the script I'm working on. The code is self-contained. Any help with the two questions above would be appreciated.

Matt

Attachment: NTF.ASDE_.R (8.93 KB)

I'm getting a syntax error on line 180 when I try to load the script.

Err, wait - I thought standard errors were not to be trusted when constraints are used in a model? This is outside my comfort level with the topic, so I'll defer to anyone else on the forums to follow up with a better explanation. There is a short explanation in the classic Mx manual on page 94.

Hi Matt

This is a known problem with Standard Errors based on the Hessian. It's important to use likelihood based (or bootstrap) confidence intervals instead when there are non-linear constraints. See this thread for some discussion http://openmx.psyc.virginia.edu/issue/2009/08/error-estimate-summary-wrong-so-hessian#comment-340 and see this comment http://openmx.psyc.virginia.edu/thread/153#comment-1054 for a quick and dirty revision of the objective function in order to compute likelihood based CI's. It would be easy to turn this hack into a function, with the 3.84 being computed automatically from the requested interval percentage (say via pchisq), and the model and parameter names being passed as the other arguments. Hint hint :). Oh yes, if you prefer bootstrap (often better if there are lots of parameters) there's an example here: http://openmx.psyc.virginia.edu/issue/2009/08/error-estimate-summary-wrong-so-hessian