I am trying to get confidence intervals for the free parameters in an ordinal ACE model, but it seems I have screwed something up: the confidence intervals come out quite wide. So I have some questions about confidence intervals in OpenMx.

How does OpenMx calculate confidence intervals? I didn't find much in the 'OpenMx Manual' or the 'OpenMx User Guide', but I found some information in the 'MxManual'. Does OpenMx use the same principles as classic Mx?

In case it's just a silly coding error, my code is below.

Any help would be appreciated!

univACEOrdModel <- mxModel("univACEOrd",
    mxModel("ACE",
        # Matrices a, c, and e to store the a, c, and e path coefficients
        mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="a11", name="a" ),
        mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="c11", name="c" ),
        mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="e11", name="e" ),
        # Matrices A, C, and E compute the variance components
        mxAlgebra( expression=a %*% t(a), name="A" ),
        mxAlgebra( expression=c %*% t(c), name="C" ),
        mxAlgebra( expression=e %*% t(e), name="E" ),
        # Algebra to compute the total variance and standard deviations (diagonal only)
        mxAlgebra( expression=A+C+E, name="V" ),
        mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I" ),
        mxAlgebra( expression=solve(sqrt(I*V)), name="sd" ),
        # Constrain the variance of the ordinal variable to 1
        mxConstraint( V == I, name="Var1" ),
        # Matrix & algebra for the expected means vector
        mxMatrix( type="Zero", nrow=1, ncol=nv, name="M" ),
        mxAlgebra( expression=cbind(M,M), name="expMean" ),
        # Matrix & algebra for the expected thresholds
        mxMatrix( type="Full", nrow=1, ncol=nv, free=TRUE, values=.8, label="thre", name="T" ),
        mxAlgebra( expression=cbind(T,T), dimnames=list('th1',selVars), name="expThre" ),
        # Algebra for the expected variance/covariance matrix in MZ twins
        mxAlgebra( expression=rbind( cbind(A+C+E, A+C),
                                     cbind(A+C,   A+C+E) ), name="expCovMZ" ),
        # Algebra for the expected variance/covariance matrix in DZ twins;
        # note the use of 0.5, promoted to a 1x1 matrix by %x%
        mxAlgebra( expression=rbind( cbind(A+C+E,     0.5%x%A+C),
                                     cbind(0.5%x%A+C, A+C+E) ), name="expCovDZ" )
    ),
    mxModel("MZ",
        mxData( observed=mzData, type="raw" ),
        mxFIMLObjective( covariance="ACE.expCovMZ", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThre" )
    ),
    mxModel("DZ",
        mxData( observed=dzData, type="raw" ),
        mxFIMLObjective( covariance="ACE.expCovDZ", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThre" )
    ),
    # -2 log-likelihood summed over the two groups
    mxAlgebra( expression=MZ.objective + DZ.objective, name="min2sumll" ),
    # Request 95% likelihood-based confidence intervals on the path coefficients
    mxCI( c("a11", "c11", "e11"), interval=0.95 ),
    mxAlgebraObjective("min2sumll")
)

univACEOrdFit <- mxRun(univACEOrdModel, intervals=TRUE)

Without seeing your data, I can't tell whether the wide CIs are reasonable; a summary of the fitted model would help considerably. You should be aware that binary data are much less precise than continuous data - they contain much less information, especially if the binary variable is a long way from a 50:50 split. If you look at this paper:

Neale, M.C., Eaves, L.J. & Kendler, K.S. (1994). The power of the classical twin study to resolve variation in threshold traits. Behavior Genetics 24: 239-258.

you can see that binary data can be perhaps an order of magnitude less informative than continuous data for, say, a 60:40 split.
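To get a rough sense of this loss of precision (a back-of-envelope delta-method illustration only, not the full power analysis in the paper), consider the standard error of a liability threshold t = qnorm(1 - p) estimated from a sample proportion; the function name and numbers here are purely illustrative:

```r
# Delta-method SE of an estimated liability threshold t = qnorm(1 - p),
# based on a sample proportion from n observations:
#   SE(t) ~= sqrt(p * (1 - p) / n) / dnorm(t)
# The farther the split is from 50:50, the larger this SE,
# and hence the wider the confidence interval on the threshold.
threshold_se <- function(p, n) {
  t <- qnorm(1 - p)
  sqrt(p * (1 - p) / n) / dnorm(t)
}

n <- 1000
threshold_se(0.50, n)   # balanced split: smallest SE
threshold_se(0.90, n)   # wider
threshold_se(0.99, n)   # wider still
```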

The likelihood-based confidence interval method is described in this paper:

Neale, M.C. & Miller, M.B. (1997). The use of likelihood-based confidence intervals in genetic models. Behavior Genetics 27: 113-120.
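In brief, a likelihood-based (profile) CI is found by moving a parameter away from its maximum-likelihood estimate until -2lnL rises by the relevant chi-square critical value (3.84 for a 95% interval, 1 df), re-optimizing the remaining free parameters at each trial value; that is the search mxRun(..., intervals=TRUE) performs for each mxCI() request. A minimal base-R sketch of the underlying idea, for a one-parameter model (a normal mean with known SD, so there is nothing else to re-optimize; the data and names are purely illustrative):

```r
# Profile-likelihood 95% CI for the mean of a normal sample (sd fixed at 1).
# The interval endpoints are the mu values at which -2*lnL exceeds its
# minimum by qchisq(0.95, df = 1), i.e. about 3.84.
set.seed(1)
x <- rnorm(200, mean = 0.5, sd = 1)

m2ll  <- function(mu) -2 * sum(dnorm(x, mean = mu, sd = 1, log = TRUE))
muHat <- mean(x)                                # MLE of the mean
crit  <- m2ll(muHat) + qchisq(0.95, df = 1)     # target -2lnL at the bounds

# Find the crossing point on each side of the MLE
lower <- uniroot(function(mu) m2ll(mu) - crit,
                 c(muHat - 2, muHat), tol = 1e-9)$root
upper <- uniroot(function(mu) m2ll(mu) - crit,
                 c(muHat, muHat + 2), tol = 1e-9)$root
c(lower = lower, estimate = muHat, upper = upper)
```

For this simple model the profile interval coincides with the familiar muHat +/- 1.96/sqrt(n); for bounded or constrained parameters such as a11, c11 and e11 the two can differ substantially, which is one reason likelihood-based CIs are preferred in genetic models.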