CIs on mxAlgebra

Mike Cheung
Attachment: CI.pdf (144.31 KB)

Hi, all.

I have an mxAlgebra that multiplies a parameter by a constant, say new_x = 2*x. When I construct a likelihood-based CI (LBCI) on both x and new_x, I expect the CI on new_x to equal the CI on x multiplied by 2. It turns out they are not exactly the same (see the "diff" column in the output below). When the constant is larger, say 5 or 10, the CIs on new_x even become NA.

Any ideas why this happens? Thanks.
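
For concreteness, here is a minimal sketch of the kind of setup I mean (illustrative only, not the attached script; the data and object names are made up):

library(OpenMx)

set.seed(1)
dat <- data.frame(x = rnorm(500))

two <- mxModel("Two",
  mxData(dat, type = "raw"),
  mxMatrix("Full", 1, 1, free = TRUE, values = 1, labels = "variance", name = "S"),
  mxMatrix("Full", 1, 1, free = TRUE, values = 0, labels = "mu", name = "M"),
  mxAlgebra(2 * S, name = "two_variance"),          # new_x = 2*x
  mxExpectationNormal(covariance = "S", means = "M", dimnames = "x"),
  mxFitFunctionML(),
  mxCI(c("variance", "Two.two_variance"))           # LBCIs on x and on 2*x
)
fit2 <- mxRun(two, intervals = TRUE)
summary(fit2)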

## Multiplied by 2
##            variance  Two.two_variance.1.1.  variance_x2           diff
## lbound    0.7693044               1.537972     1.538609  -0.0006373491
## estimate  1.0030992               2.006198     2.006198   0.0000000000
## ubound    1.3427832               2.688370     2.685566   0.0028032732

## Multiplied by 5
##            variance  Five.five_variance.1.1.  variance_x5          diff
## lbound    0.7693044                 3.840211     3.846522  -0.006311445
## estimate  1.0030992                 5.015496     5.015496   0.000000000
## ubound    1.3427832                       NA     6.713916            NA

## Multiplied by 10
##            variance  Ten.ten_variance.1.1.  variance_x10  diff
## lbound    0.7693044                     NA      7.693044    NA
## estimate  1.0030992               10.03099     10.030992     0
## ubound    1.3427832                     NA     13.427832    NA

Best,
Mike

AdminRobK

Hi, Mike. I reproduce what you report. If you pass argument verbose=TRUE to summary(), you can see a CI details table which explains the NA-valued confidence limits. They are all due to the change in -2logL from the MLE to the confidence limit differing too much from the expected value of about 3.841. For instance, the change in -2logL corresponding to the upper limit of 'ten_variance' is 4.1023.
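
The call itself is just the following (where 'fit' stands in for your fitted MxModel object):

summary(fit, verbose = TRUE)   # prints the CI details table, including the
                               # fit (-2logL) reached at each confidence limit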

However, by switching to SLSQP as the optimizer at the beginning of the script, I get no confidence limits reported as NA, and much smaller differences between the calculated and estimated confidence limits. OpenMx's default behavior is to use an inequality-constrained representation of the confidence-limit optimization problem with SLSQP, but a quadratic-penalty representation with NPSOL and CSOLNP. My guess is that multiplying the variance, as your script does, makes the quadratic-penalty representation ill-conditioned.
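
Switching optimizers is a one-line change at the top of the script:

mxOption(NULL, "Default optimizer", "SLSQP")   # set globally, before mxRun()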

Mike Cheung

Hi, Robert.

Thanks for the comments and suggestions.

I have compared the performance of the three optimizers. Attached is a real problem I have when using OpenMx to conduct a meta-analysis. I am interested in calculating the CI on Tau/(Tau + s2), where Tau is a parameter and s2 is a constant (0.08486598 in this example).
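
The essence of the setup is something like this (a minimal self-contained sketch with simulated data, not my attached file; all object names and data are illustrative):

library(OpenMx)

set.seed(1)
k   <- 20
s2  <- 0.08486598                                   # known sampling variance (constant)
dat <- data.frame(y = rnorm(k, 0.5, sqrt(0.3 + s2)), v = s2)

meta <- mxModel("meta",
  mxData(dat, type = "raw"),
  mxMatrix("Full", 1, 1, free = TRUE, values = 0.3, labels = "Tau2",
           lbound = 0, name = "T2"),
  mxMatrix("Full", 1, 1, free = TRUE, values = 0, labels = "beta0", name = "Mu"),
  mxMatrix("Full", 1, 1, free = FALSE, labels = "data.v", name = "V"),  # definition variable
  mxAlgebra(T2 + V, name = "Sigma"),                # total variance per study
  mxAlgebra(T2 / (T2 + 0.08486598), name = "I2"),   # Tau/(Tau + s2)
  mxExpectationNormal(covariance = "Sigma", means = "Mu", dimnames = "y"),
  mxFitFunctionML(),
  mxCI(c("Tau2", "meta.I2"))
)
fit <- mxRun(meta, intervals = TRUE)
summary(fit, verbose = TRUE)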

SLSQP does not work well on either the lbound or the ubound.

                lbound  estimate    ubound
Tau2_correct 0.2911837 0.6078023 0.8110563
Tau2_mxCI    0.2762502 0.6078023 0.6748704

CSOLNP works okay in the ubound but not the lbound.

                lbound  estimate    ubound
Tau2_correct 0.2908848 0.6078023 0.8115252
Tau2_mxCI    0.2748456 0.6078023 0.8112579

NPSOL works similarly to CSOLNP.

                lbound  estimate    ubound
Tau2_correct 0.2908847 0.6078023 0.8115252
Tau2_mxCI    0.2748456 0.6078023 0.8112579

If we look at the CIs on the mxAlgebra, it is hard to tell which ones, if any, are the correct CIs. Any suggestions? Thanks.

Best,
Mike

AdminRobK
validation

You can always validate a profile-likelihood confidence limit as follows. First, make a new model in which you constrain the reference quantity (the thing you supply as the reference argument to mxCI()) to the confidence limit. If the reference quantity is a free parameter, that can be done by fixing it to the limit; otherwise, you'll need an MxConstraint. Then, run the new model. Finally, compare the -2logL of the fitted new model to that of the original model. If the difference is sufficiently close to 3.841 (for a 95% interval), then you've validated the confidence limit.
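
In code, the recipe looks roughly like this (a hedged sketch; 'fit', the label "Tau2", and the limit value are placeholders for whatever your model actually uses):

## Validate, say, an upper confidence limit for the free parameter labeled "Tau2".
limit <- 0.8112579                                  # the confidence limit to be checked

chk <- omxSetParameters(fit, labels = "Tau2", free = FALSE, values = limit,
                        name = "check")
chk <- mxRun(chk)

## If the reference quantity were an algebra rather than a free parameter, you
## would instead add something like mxConstraint(I2 == 0.8112579) to the new model.

diff <- summary(chk)$Minus2LogLikelihood - summary(fit)$Minus2LogLikelihood
diff                                                # should be close to qchisq(.95, 1) = 3.841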

See the attached script. SLSQP and NPSOL both validate their lower limits. CSOLNP has trouble running the model to validate its lower limit, but I think its lower limit is still trustworthy, since it's approximately the same as the other two optimizers' lower limits. However, SLSQP's upper limit does not validate.

An OpenMx function to automatically attempt to validate confidence limits is a planned feature.

Mike Cheung

Thanks, Robert.

It's very helpful.

Mike

AdminRobK

You're welcome!

tbates
can you post the output of verbose?

Hi Mike,
Can you post the output from summary(..., verbose = TRUE)? That will give us a better look at the diagnostics.

Mike Cheung

Hi Timothy,

Here it is. Thanks.

Mike

jpritikin
verbose output

We need to see the verbose summary output for the model from your 05/20/2019 post. The trivial model you have posted here is not helpful; its output looks fine.

Mike Cheung

Please see the attached verbose output for the model from the 05/20/2019 post. Thanks.

jpritikin
local minimum

So if we line up the relevant info, it looks like SLSQP is getting stuck in a local minimum:

SLSQP   upper  0.67487035  31.64063  neale-miller-1997  success               0.17615598  0.7993611
CSOLNP  upper  0.81125790  31.65046  neale-miller-1997  success               0.36477390  0.5894397
NPSOL   upper  0.81125790  31.65046  neale-miller-1997  nonzero gradient/red  0.36477394  0.5894395

I don't really see anything wrong here, in terms of bugs. Gradient-based optimizers cannot always find the global minimum.

AdminNeale

It seems to me that the user could have more control over the number of automatic retries the optimizers use when trying to find a confidence limit. This will be added as a feature in the not-too-distant future, likely as an argument to mxCI().