Confidence Intervals generated by OpenMx do not seem to look right to me.

AnneN.:

Hi all,
I ran a simple regression model in AMOS and repeated it in OpenMx in order to obtain confidence intervals for my regression coefficients. A certain coefficient came out significant (p < .001) in AMOS but not significant in OpenMx (i.e., its confidence interval contains zero), and I am not sure what to make of it. Here is the relevant information from the AMOS output:
Estimate   S.E.   C.R. (Critical Ratio)   P
.276       .077   3.589                   ***

The OpenMx output for that estimate and CI is:
Estimate   Std. Error   Std. Estimate   Std. SE
.2757      .0756        .3895           .1069

Confidence intervals:
lbound   estimate   ubound
.000     .276       .413
And if I didn't set the lower bound to 0, I would get a confidence interval of [-.413, .413].

Just calculating the CI by hand, .2757 +/- (1.96 * .0756), I get a CI of (.127524, .423876), which agrees with the significance in the AMOS output. This is very mysterious, because the rest of the CIs agree with the AMOS results and with my hand calculations, so I am very eager to hear your thoughts on this.
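
In R, that hand calculation is just:

.2757 + c(-1, 1) * qnorm(0.975) * .0756  # Wald 95% CI, approximately (.1275, .4239)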
Thank you for your time,
Anne

mhunter: mxCompare and mxCI

Hi Anne,

Judging by the output you produced, AMOS is doing a very crude significance test that does not in general give correct results.

Estimate is .276
S.E. is .077
C.R. is 3.589 ≈ .276/.077 = Estimate/S.E. (the ratio is presumably computed before the displayed values are rounded).
P is from a z-test with observed z=3.589, or a t-test with some degrees of freedom and observed t=3.589.
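
For instance, the two-sided p-value from that z-test can be reproduced in R using the numbers above:

2 * pnorm(-abs(3.589))  # about .0003, hence the "***" (p < .001)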

These kinds of significance tests often fail to give accurate results in SEM. The preferred methods for testing the significance of parameters are (in no specific order) (1) likelihood-based confidence intervals, and (2) likelihood ratio tests.

OpenMx computes likelihood-based confidence intervals whenever you use mxCI(). Likelihood-based confidence intervals measure how far a free parameter can move before producing a fixed change in model fit. They are not necessarily symmetric, and they are not based on the standard errors.
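
A minimal sketch, assuming your full model is called interestingModel (as in the snippet below) and the path of interest carries the label "aParameterName":

library(OpenMx)
# Request a 95% likelihood-based CI for the labeled parameter
ciModel <- mxModel(interestingModel, mxCI("aParameterName", interval=0.95))
ciRun <- mxRun(ciModel, intervals=TRUE)  # intervals=TRUE triggers the CI optimization
summary(ciRun)  # the likelihood-based CI appears in the summary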

To do a likelihood ratio test, set up a model that represents your null hypothesis and use mxCompare() to compare it to the model of interest. For example, set up a model in which the parameter is fixed to zero:

# Full model, with the parameter of interest freely estimated
interestingModel <- mxModel(...)  # model specification omitted here
# Null model: identical, but with the labeled parameter fixed to zero
nullModel <- omxSetParameters(model=interestingModel, labels="aParameterName",
                              free=FALSE, values=0)
interestingRun <- mxRun(interestingModel)
nullRun <- mxRun(nullModel)
mxCompare(interestingRun, nullRun)  # likelihood ratio test (base = full model)

AnneN.:

Hi Rob and Hunter,
Many thanks for your insights. Based on your suggestions, I first checked the codes for my CIs, and interestingly there were a couple of code 6s, but only for the other paths, not for the one that I thought was weird, which had a code of 0. I take this to mean that there is nothing wrong with the CI for the path that interests me.
Then I created the models that Hunter suggested. Here is the output:

> mxCompare(interestingRun, nullRun)
               base comparison ep minus2LL   df AIC diffLL diffdf      p
1 Interesting Model       <NA> 24     3245 1400 445     NA     NA     NA
2 Interesting Model Null Model 23     3249 1401 447   3.96      1 0.0465

This is interesting, because according to the likelihood ratio test the estimate for that path is significantly different from zero, whereas the CI tells me it may not be. How do I reconcile these? Is there a way to get a corrected CI for that path that is consistent with the likelihood-ratio-test result?
Thank you,
Anne

mhunter: Odd

That's odd to me. On a hunch: are you using the beta or the stable 1.4 version of OpenMx?

AnneN.:

Hi Hunter,
I think this is what you are looking for: 1.3.2-2301. Please let me know if you find out anything.
Sincerely,
Anne

mhunter: More Info

More information at this point would be good. Could you provide (1) the full summary of "interestingRun", (2) the full summary of "nullRun", and (3) the output of interestingRun@output$confidenceIntervalCodes for both the bounded and unbounded parameter of interest?

Thanks!
Mike Hunter

RobK: Hmm...

Anne, if you run the following lines (after changing the model names to what they actually are in your script), do both of them return TRUE?

# TRUE if every eigenvalue of the Hessian is positive, i.e. the Hessian is
# positive definite and the optimizer stopped at a minimum
all(eigen(interestingRun@output$calculatedHessian, symmetric=TRUE, only.values=TRUE)$values > 0)
all(eigen(nullRun@output$calculatedHessian, symmetric=TRUE, only.values=TRUE)$values > 0)

This is a check to make sure that the optimizer found a minimum on both runs. It most likely did, but it's still worth checking.

Also, I notice that the LRT statistic in your mxCompare() is 3.96, with p=0.0465. That's approximately what it should be if zero is the lower limit of the 95% confidence interval. Ideally, it should be 3.84 with p=0.0500, so the optimizer appears to be finding a lower confidence limit that's slightly below where it ought to be. What happens if you do mxCompare() with a null model that has this parameter fixed to the upper confidence limit of 0.413?

In any event, it looks like you can reject the null hypothesis that this parameter equals zero. Your inferences via the Wald test (with the standard errors) and the LRT agree about that conclusion, although they disagree about "how significantly" it differs from zero. You could try fixing the parameter to 0.0004 (which rounds to .000) in a null model and see how close the LRT is to 3.84. If it's very close, then you've found a reasonable lower limit to the CI.
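
A sketch of both checks, assuming the path carries the label "aParameterName" in interestingModel (substitute whatever label your script uses):

# Fix the parameter at the upper confidence limit, then test
upperModel <- omxSetParameters(interestingModel, labels="aParameterName",
                               free=FALSE, values=0.413)
upperRun <- mxRun(upperModel)
mxCompare(interestingRun, upperRun)  # diffLL near 3.84 supports 0.413 as the 95% limit

# Same idea for a candidate lower limit just above zero
lowerModel <- omxSetParameters(interestingModel, labels="aParameterName",
                               free=FALSE, values=0.0004)
lowerRun <- mxRun(lowerModel)
mxCompare(interestingRun, lowerRun)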

RobK:

I was going to post something very similar to what Hunter posted. Calculating significance and CIs from the standard errors (as you and AMOS are doing) and calculating CIs from the profile likelihood (as OpenMx is doing) are approximately equivalent at sufficiently large sample sizes. But we have to work with realistic sample sizes, and the likelihood-based CIs have superior theoretical properties, so as a rule I would trust what they are telling you.

However, their disadvantage is that they require additional numerical optimization. You can check the quality of the confidence limits that were found by querying myModelFit@output$confidenceIntervalCodes. Confidence limits with a code of 6 may need to be checked further. That check entails making a new model in which the parameter in question is fixed to the confidence limit being checked, and then comparing the deviance (-2 log likelihood) of the new model to that of the full model. For a 95% confidence interval, the change in deviance should be about 3.84, the .95 quantile of the chi-square distribution with 1 degree of freedom.
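
In code, the check might look like the following sketch (reusing the interestingModel/interestingRun names from above; "aParameterName" and the value 0.413 are placeholders for your parameter label and the limit being checked):

ciLimit <- 0.413  # the confidence limit under scrutiny
checkModel <- omxSetParameters(interestingModel, labels="aParameterName",
                               free=FALSE, values=ciLimit)
checkRun <- mxRun(checkModel)
# Change in deviance relative to the full model; qchisq(0.95, 1) is about 3.84
checkRun@output$Minus2LogLikelihood - interestingRun@output$Minus2LogLikelihood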