
small/NaN standard errors

lotgeels:

Hi all,

I'm fitting a univariate threshold model on binary data and it appears to work well (see below for the script), except that in the summary I get either very small or NaN standard errors, like this:

  name    matrix      row col   Estimate      Std.Error
1 ab11    ACE.aboys     1   1  0.5690466  2.121996e-314
2 cb11    ACE.cboys     1   1  0.7157107  6.365987e-314
3 eb11    ACE.eboys     1   1  0.4049003            NaN
4 ag11    ACE.agirls    1   1  0.2620615            NaN
5 cg11    ACE.cgirls    1   1  0.8921872            NaN
6 eg11    ACE.egirls    1   1  0.3678666  1.060998e-313
7 threall ACE.Tall      1   1 -0.5358325            NaN
8 thredzm ACE.Tdzm      1   1 -0.7374305            NaN

Another thing is that when I run the submodel which equates a, c, and e across sex, I get the following output in OpenMx:

  name    matrix    row col   Estimate  Std.Error
1 ab11    ACE.aboys   1   1  0.4833659        NaN
2 cb11    ACE.cboys   1   1  0.7886090        NaN
3 eb11    ACE.eboys   1   1  0.3800700        NaN
4 threall ACE.Tall    1   1 -0.5369859        NaN
5 thredzm ACE.Tdzm    1   1 -0.7300285        NaN

Observed statistics: 2807
Estimated parameters: 5
Degrees of freedom: 2802
-2 log likelihood: 2961.374

Whereas when I run the same model in classic Mx it says:

X 1 1 1 95.0  0.4816  0.2191  0.6461 0 1 1 1
Y 1 1 1 95.0  0.7899  0.6788  0.8761 0 1 0 1
Z 1 1 1 95.0  0.3796  0.3077  0.4583 0 1 0 0
B 1 1 1 95.0 -0.5125 -0.5789 -0.4464 0 0 0 0
R 1 1 1 95.0 -0.6879 -0.7839 -0.5934 0 0 0 1

-2 times log-likelihood of data >>> 2954.333
Degrees of freedom >>>>>>>>>>>>>>>> 2802

So there is some difference in both the parameter estimates (mostly the thresholds) and in the -2LL; should this worry me? I'm also curious about the OpenMx standard errors: why are they so small or NaN?

Thanks again!

Lot

ACE model specification (OpenMx):

univACEOrdModel <- mxModel("univACEOrd",
mxModel("ACE",
# Matrices aboys, cboys, and eboys to store a, c, and e path coefficients for boys
mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="ab11", name="aboys" ),
mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="cb11", name="cboys" ),
mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="eb11", name="eboys" ),
# Matrices Aboys, Cboys, and Eboys compute variance components for boys
mxAlgebra( expression=aboys %*% t(aboys), name="Aboys" ),
mxAlgebra( expression=cboys %*% t(cboys), name="Cboys" ),
mxAlgebra( expression=eboys %*% t(eboys), name="Eboys" ),
# Matrices agirls, cgirls, and egirls to store a, c, and e path coefficients for girls
mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="ag11", name="agirls" ),
mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="cg11", name="cgirls" ),
mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="eg11", name="egirls" ),
# Matrices Agirls, Cgirls, and Egirls compute variance components for girls
mxAlgebra( expression=agirls %*% t(agirls), name="Agirls" ),
mxAlgebra( expression=cgirls %*% t(cgirls), name="Cgirls" ),
mxAlgebra( expression=egirls %*% t(egirls), name="Egirls" ),
# Algebra to compute total variance for boys and girls and standard deviations (diagonal only)
mxAlgebra( expression=Aboys+Cboys+Eboys, name="Vboys" ),
mxAlgebra( expression=Agirls+Cgirls+Egirls, name="Vgirls" ),
mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
mxAlgebra( expression=solve(sqrt(I*Vboys)), name="sdboys"),
mxAlgebra( expression=solve(sqrt(I*Vgirls)), name="sdgirls"),
# Constraint on variance of ordinal variables
mxConstraint( alg1="Vboys", "=", alg2="I", name="Varboys"), # varianties sommeren tot 1 (identiteitsmatrix)
mxConstraint( alg1="Vgirls", "=", alg2="I", name="Vargirls"),
# Matrix & Algebra for expected means vector
mxMatrix( type="Zero", nrow=1, ncol=nv, name="M" ),
mxAlgebra( expression= cbind(M,M), name="expMean" ),
# threshold for the whole group:
mxMatrix( type="Full", nrow=1, ncol=nv, free=TRUE, values=-.6, label="threall", name="Tall" ),
mxAlgebra( expression= cbind(Tall,Tall), dimnames=list('th1',selVars), name="expThreall" ),
# threshold for the DZ males:
mxMatrix( type="Full", nrow=1, ncol=nv, free=TRUE, values=-.8, label="thredzm", name="Tdzm" ),
mxAlgebra( expression= cbind(Tdzm,Tdzm), dimnames=list('th1',selVars), name="expThredzm" ),
# Algebra for expected variance/covariance matrix in mzm
mxAlgebra( expression= rbind ( cbind(Aboys+Cboys+Eboys , Aboys+Cboys),
cbind(Aboys+Cboys , Aboys+Cboys+Eboys)), name="expCovmzm" ),
# Algebra for expected variance/covariance matrix in dzm; note use of 0.5, converted to a 1x1 matrix via %x%
mxAlgebra( expression= rbind ( cbind(Aboys+Cboys+Eboys , 0.5%x%Aboys+Cboys),
cbind(0.5%x%Aboys+Cboys , Aboys+Cboys+Eboys)), name="expCovdzm" ),
# Algebra for expected variance/covariance matrix in mzf
mxAlgebra( expression= rbind ( cbind(Agirls+Cgirls+Egirls , Agirls+Cgirls),
cbind(Agirls+Cgirls , Agirls+Cgirls+Egirls)), name="expCovmzf" ),
# Algebra for expected variance/covariance matrix in dzf
mxAlgebra( expression= rbind ( cbind(Agirls+Cgirls+Egirls , .5%x%Agirls+Cgirls),
cbind(.5%x%Agirls+Cgirls , Agirls+Cgirls+Egirls)), name="expCovdzf" ),
# Algebra for expected variance/covariance matrix in dosmf
mxAlgebra( expression= rbind ( cbind(Aboys+Cboys+Eboys , .5%x%(aboys%*%t(agirls))+ cboys%*%t(cgirls)),
cbind(.5%x%(agirls%*%t(aboys))+ cgirls%*%t(cboys) , Agirls+Cgirls+Egirls)), name="expCovdosmf" )
),
mxModel("mzm",
mxData( observed=mzmdata, type="raw" ),
mxFIMLObjective( covariance="ACE.expCovmzm", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThreall" )
),
mxModel("dzm",
mxData( observed=dzmdata, type="raw" ),
mxFIMLObjective( covariance="ACE.expCovdzm", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThredzm" )
),
mxModel("mzf",
mxData( observed=mzfdata, type="raw" ),
mxFIMLObjective( covariance="ACE.expCovmzf", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThreall" )
),
mxModel("dzf",
mxData( observed=dzfdata, type="raw" ),
mxFIMLObjective( covariance="ACE.expCovdzf", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThreall" )
),
mxModel("dosmf",
mxData( observed=dosmfdata, type="raw" ),
mxFIMLObjective( covariance="ACE.expCovdosmf", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThreall" )
),
mxAlgebra( expression=mzm.objective + dzm.objective + mzf.objective + dzf.objective + dosmf.objective, name="min2sumll" ),
mxAlgebraObjective("min2sumll")
)

univACEOrdFit <- mxRun(univACEOrdModel)
univACEOrdSumm <- summary(univACEOrdFit)
univACEOrdSumm
LL_ACE <- mxEval(objective, univACEOrdFit);LL_ACE

And the submodel:

ACEequalModel = univACEOrdModel

ACEequalModel$ACE$agirls <- mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="ab11", name="agirls" )
ACEequalModel$ACE$cgirls <- mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="cb11", name="cgirls" )
ACEequalModel$ACE$egirls <- mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="eb11", name="egirls" )

ACEequalFit = mxRun(ACEequalModel)
ACEequalSumm = summary(ACEequalFit)
ACEequalSumm
LL_ACEeq = mxEval(objective,ACEequalFit)
LL_ACEeq
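
If it is useful, the two fits can then be compared with a likelihood-ratio test computed from the -2lnL values above; a minimal sketch, assuming the summary objects expose degreesOfFreedom:

# Likelihood-ratio test of the equated submodel against the full model.
# Sketch only: uses the fits and summaries from the scripts above.
LRT  <- as.numeric(LL_ACEeq - LL_ACE)        # difference in -2 log-likelihood
dfD  <- ACEequalSumm$degreesOfFreedom - univACEOrdSumm$degreesOfFreedom
pval <- pchisq(LRT, df=dfD, lower.tail=FALSE)
c(chisq=LRT, df=dfD, p=pval)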

stevepowell99:

Hi, I have the same problem and am also wondering what the NaN SEs mean.

neale:

I suspect that the NaN standard errors are being generated because there are non-linear constraints in the model; SEs are known to be problematic in this case.

When ordinal data are being analyzed, it is possible to circumvent this problem by estimating the variances and means, while constraining the first two thresholds to equal 0 and 1. This reparameterization avoids non-linear constraints, but it does become necessary to re-standardize the estimates. Restandardization does of course mess up the standard errors.
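
For illustration, a rough sketch of that reparameterization for a single ordinal variable with three categories (both thresholds can then be fixed; with binary data there is only one threshold, so mean and variance cannot both be freed). The variable name and simulated data below are hypothetical, not taken from the scripts in this thread:

# Sketch: fix the first two thresholds at 0 and 1 and estimate the latent
# mean and variance freely, so no non-linear variance constraint is needed.
library(OpenMx)
set.seed(1)
exampleData <- data.frame(ord1 = mxFactor(sample(0:2, 500, replace=TRUE), levels=0:2))
oneVarOrd <- mxModel("oneVarOrd",
    mxMatrix( type="Full", nrow=2, ncol=1, free=FALSE, values=c(0,1),
              dimnames=list(c('th1','th2'), c('ord1')), name="Thr" ),
    mxMatrix( type="Full", nrow=1, ncol=1, free=TRUE, values=0, name="Mu" ),
    mxMatrix( type="Full", nrow=1, ncol=1, free=TRUE, values=1, lbound=.0001, name="Sig2" ),
    mxData( observed=exampleData, type="raw" ),
    mxFIMLObjective( covariance="Sig2", means="Mu", dimnames=c("ord1"), thresholds="Thr" )
)
oneVarOrdFit <- mxRun(oneVarOrd)
summary(oneVarOrdFit)
# Standardized estimates then have to be recovered afterwards, e.g. by
# rescaling by 1/sqrt(Sig2), which is why the SEs no longer apply directly.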

For now, perhaps the best thing to do would be to use the hack for likelihood-based confidence intervals, described in this post:
http://openmx.psyc.virginia.edu/thread/153#comment-1054
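
As an alternative sketch (not the hack from that thread): in OpenMx versions that provide mxCI(), profile-likelihood intervals can be requested directly, e.g. for the full model above:

# Sketch: likelihood-based confidence intervals via mxCI(), where available.
# The labels are free parameters from the script earlier in this thread.
ciModel <- mxModel(univACEOrdModel, mxCI(c("ab11", "cb11", "eb11", "ag11", "cg11", "eg11")))
ciFit   <- mxRun(ciModel, intervals=TRUE)
summary(ciFit)$CI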

lotgeels:

Ah thanks very much!

About the differences in estimates and likelihood: could they be due to the FIML estimator? The differences seem a bit large...

Lot

neale:

Hello Lot

Hmmm, I agree that the differences are a bit large. Did both scripts use the same starting values? Classic Mx and OpenMx use the same FIML estimator, and the two have been shown to agree much more closely in other examples.

In a case such as this, one would want to look at the script that ends up with the larger (more positive) -2lnL, i.e. the one with the lower likelihood. Probably its starting values are not as good as those of the classic Mx script, which found the solution more directly. Possibly there is a model specification error in one or the other of the scripts. One way to test this would be to fix the starting values in the OpenMx script so that they equal the solution from the classic Mx script; if that does not yield the same (or very close, say <.1 difference) -2lnL, then the model is probably not specified the same way in both scripts.
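
A minimal sketch of that check, assuming the classic Mx X/Y/Z/B/R rows above correspond to ab11/cb11/eb11/threall/thredzm:

# Start the OpenMx submodel at the classic Mx solution and re-check the -2lnL.
# Sketch only; the label/value pairing is an assumption based on the output above.
startModel <- omxSetParameters(ACEequalModel,
                labels = c("ab11", "cb11", "eb11", "threall", "thredzm"),
                values = c(0.4816, 0.7899, 0.3796, -0.5125, -0.6879))
startFit <- mxRun(startModel)
mxEval(objective, startFit)   # should come out near 2954.333 if both models are specified alike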