Likelihood-based confidence intervals
Mike Cheung
Likelihood-based confidence intervals
Attachments:
test4b.R (2.02 KB)
test4a.R (1.58 KB)
test4a.txt (6.79 KB)
test4b.txt (7.34 KB)
test4b output.txt (15.93 KB)
test4a output.txt (7.63 KB)

Hi, all.

I have run into some issues with the CIs in OpenMx_1.4-3532 and OpenMx_2.0.0-3575. I am fitting a random-effects meta-analysis with two effect sizes. One version of the model was generated by the metaSEM package (metaSEM.model in the attached file); another was specified directly in OpenMx (mymodel in the attached file). The second version worked fine and was only used for comparison. The two versions are supposed to be equivalent, and indeed the log-likelihoods and parameter estimates were the same, but there were some issues with the CIs.
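
For readers without the attachments: the CIs in question are the usual OpenMx likelihood-based intervals, requested with mxCI() and intervals=TRUE. A rough sketch of that pattern (not the attached test4a/test4b code verbatim; the parameter labels match the summary output quoted later in this thread):

library(OpenMx)

## Sketch only: request likelihood-based CIs for the free parameters.
mymodel <- mxModel(mymodel,
                   mxCI(c("Tau2_1_1", "Tau2_2_1", "Tau2_2_2",
                          "Intercept1", "Intercept2")))
mymodel.fit <- mxRun(mymodel, intervals=TRUE)
summary(mymodel.fit)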

The test4a files were run under OpenMx_1.4-3532. For metaSEM.model, the lbound and the ubound of the CIs came out the same.

The test4b files were run under OpenMx_2.0.0-3575. With CSOLNP as the optimizer, metaSEM.model worked fine. With NPSOL, it produced NA for both the lbound and the ubound, and when I reran the model it threw an error.

Although these two models are equivalent, the model generated by the metaSEM package is more complicated in terms of model specification and the number of matrices involved. This may partly explain the difference in behavior.

Any suggestions are welcome. Thanks in advance.

Mike

jpritikin
test4b

Regarding the first two models in test4b, I obtain the same -2LL but very different CIs. You comment in the file that "It seems fine." Is this expected?

confidence intervals:
lbound estimate ubound
Tau2_1_1 0.004626727 0.004727034 0.004668791 ** estimate outside of CI
Tau2_2_1 0.001193085 0.003934128 0.005295539
Tau2_2_2 0.008317209 0.008413367 0.008421468 ** estimate outside of CI
Intercept1 -0.017700266 0.001350473 0.028866220
Intercept2 0.035016073 0.068825948 0.096814068

-2 log likelihood: -161.9216

confidence intervals:
lbound estimate ubound
Tau2_1_1 0.002101854 0.004727078 0.009489479
Tau2_2_1 0.001193054 0.003934105 0.008349120
Tau2_2_2 0.004601975 0.008413355 0.015237170
Intercept1 -0.026791136 0.001350212 0.028866220
Intercept2 0.035016079 0.068826046 0.102484186

-2 log likelihood: -161.9216

Mike Cheung
Thanks for pointing this out

Thanks for pointing this out.

No, I don't expect them to be different. They should be the same in theory.

AdminHunter
test4b also seems to work

test4b also seems to work when you change the lower bound to 1e-6, -1e-10, or 0.

However, re-running the original model with intervals=FALSE still generates the error. I think this is a re-running problem.

## No CI reported.
metaSEM.fit <- mxRun(metaSEM.model, intervals=TRUE)
summary(metaSEM.fit)
# Runs okay, but doesn't actually give you upper and lower bounds on the CIs.
# That seems like a bug to me.
 
## Throws an error when the model is RERUN
metaSEM.fit <- mxRun(metaSEM.fit, intervals=TRUE)
## Also: still throws an error on RERUN, even with intervals=FALSE
metaSEM.fit <- mxRun(metaSEM.fit, intervals=FALSE)
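
For what it's worth, "changing the lower bound" above means something like the following sketch, using the Tau matrix from the attached metaSEM model (shown with 1e-6; -1e-10 and 0 were the other values tried):

## Sketch: relax the lower bounds on the Tau variances before running.
metaSEM.model$Tau$lbound[1,1] <- 1e-6
metaSEM.model$Tau$lbound[2,2] <- 1e-6
metaSEM.fit <- mxRun(metaSEM.model, intervals=TRUE)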

jpritikin
lbound

Zero definitely does not work, taking care to set the lbound only on the diagonal elements:

metaSEM.fit$Tau$lbound[1,1] <- 0
metaSEM.fit$Tau$lbound[2,2] <- 0

Both -1e-10 and -1e-3 fail, but -1e-2 works.

Mike Cheung
Thanks for looking into this

Thanks for looking into the matter.

Mike

mhunter
NPSOL Issue

I wanted to post an update on this issue in case anyone reads about it. After investigation, Joshua and I determined that this is a problem with one of our optimizers, NPSOL; it works fine with CSOLNP. The problem is triggered when a starting value for an optimization is at 0.01 or closer to zero and there is also a bound near zero. In that case, NPSOL moves the starting value onto the bound near zero, and things don't really get much better from there.

In this example it is triggered because the point estimate is near zero (.0047) with a lower bound near zero (1e-10). The estimates are fine, but when these are used as starting values in the confidence interval estimation, the NPSOL bound bug is triggered and it chokes.
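
For anyone who wants to poke at this without the meta-analysis model, here is a minimal sketch of the triggering condition only (it is not the original test4b model, and it may or may not trip the bug in any given build): a free variance whose estimate lands near zero, a lower bound of 1e-10, and a likelihood-based CI request.

library(OpenMx)

set.seed(1)
dat <- data.frame(y = rnorm(100, mean = 0, sd = 0.07))  # true variance is about 0.005

tiny <- mxModel("tinyVariance", type = "RAM", manifestVars = "y",
                ## Free variance with a start value and a lower bound near zero.
                mxPath(from = "y", arrows = 2, free = TRUE, values = 0.01,
                       lbound = 1e-10, labels = "sigma2"),
                ## Free mean.
                mxPath(from = "one", to = "y", arrows = 1, free = TRUE,
                       values = 0, labels = "mu"),
                mxCI("sigma2"),
                mxData(dat, type = "raw"))

tinyFit <- mxRun(tiny, intervals = TRUE)
summary(tinyFit)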

This is a strange corner case in NPSOL, and the OpenMx team has not found a way around it within our code. It is another reason why it's great that we now have CSOLNP as an optimizer. In the beta, CSOLNP is the default, and it can be set explicitly with

mxOption(NULL, "Default optimizer", "CSOLNP")
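
If you want to double-check which optimizer is actually in use, calling mxOption without a value should return the current setting (a quick check, nothing more):

mxOption(NULL, "Default optimizer")   # returns the current global default, e.g. "CSOLNP"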

Mike Cheung
Comparing OpenMx_2.0.0-3719 and OpenMx_2.0.0-3953

Hi, all.

I am comparing OpenMx_2.0.0-3719 against the latest beta version OpenMx_2.0.0-3953 in the git repository. I do not know whether or not this bug has already been reported.

OpenMx_2.0.0-3719 gives sensible CIs with CSOLNP, while OpenMx_2.0.0-3953 gives either an error with NPSOL or a segmentation fault with CSOLNP.

I am attaching the input, data, and output for your testing. Thanks.

Mike

AdminJosh
thanks for testing

Yeah, I agree that 3953 introduces a bug in CSOLNP. We have reverted that change. We will investigate the regression with NPSOL.

AdminJosh
NPSOL?

Wait, can you be more specific? Was NPSOL working in OpenMx_2.0.0-3719? Or are you only pointing out the regression in CSOLNP?

The bad change is reverted. Can you retest with SVN trunk?

Mike Cheung
CSOLNP works in 3956, NPSOL still errors

Thanks. CSOLNP works in OpenMx_2.0.0-3956, while NPSOL still generates an error:
Error in if (ci[1] == ci[3] || ci[1] > ci[2] || ci[2] > ci[3]) { :
missing value where TRUE/FALSE needed

NPSOL produces NA in the CIs in OpenMx_2.0.0-3719, while CSOLNP generates CIs similar to (though not all exactly the same as) those from OpenMx_2.0.0-3956.

AdminJosh
relief

That is a relief. As for the "missing value where TRUE/FALSE needed", that looks easy to fix. Can you try the attached patch?

Mike Cheung
Tried the patch

I've just tried the patch. It now returns NA in the CIs rather than an error. Thanks.

RobK
Confidence interval codes?

When it runs without an error but gives no confidence limits, are they all NA? If so, what are the confidence interval codes? When I checked them, they were all -1, indicating that optimization was aborted prematurely. This might be happening if the initial solution is at or very near the boundary of the parameter space. Then, the optimizer just finds NAs on the first iteration of every attempt to find confidence limits.
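
For concreteness, this is roughly how the codes can be pulled out (a sketch; it assumes the fitted model is called metaSEM.fit and that the codes are exposed in the output slot as confidenceIntervalCodes, as I believe they are in recent 2.x builds):

## Likelihood-based CIs and their status codes; non-zero codes such as -1
## mean the CI optimization did not finish cleanly.
metaSEM.fit$output$confidenceIntervals
metaSEM.fit$output$confidenceIntervalCodes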

jpritikin
NPSOL bug with lower bounds

With respect to test4b.R,

It is hard for me to believe, but if you clear the lower bounds on the Tau matrix before you re-run the metaSEM.fit model under NPSOL, then it works:

## No CI reported.
metaSEM.fit <- mxRun(metaSEM.model, intervals=TRUE)
summary(metaSEM.fit)

metaSEM.fit$Tau$lbound[,] <- NA # add this line!

Re-run the model (this previously threw an error; with the lbounds cleared it now works):

metaSEM.fit <- mxRun(metaSEM.fit, intervals=TRUE)

Maybe somebody with the NPSOL source code can look at lssetx_ and see if there is something obviously wrong with the bounds checking.