Exclamation points indicating

likelycandidate:

I am fitting the standard ACE model to twin data (https://openmx.ssri.psu.edu/docs/OpenMx/2.7.9/GeneticEpi_Path.html).

It appears that the program is having difficulty estimating the standard errors (or confidence intervals). In one case I get a ! in the 'A' column for the estimate of one of the ACE components. From what I have read, I know that "An exclamation point in the 'A' column indicates that the gradient appears to be asymmetric and the standard error may not accurately reflect the variability of that parameter. As a precaution, it is recommended that you compare the SEs with likelihood-based or bootstrap confidence intervals."

In another case, when I print the confidence intervals for A/V, three exclamation marks (!!!) are also printed. I don't know what this means, but parallel modeling in Stata using acelong reveals a potential problem with the estimation of this same CI.
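For reference, I requested the intervals roughly like this (simplified from my script; 'twinModel' is a placeholder for my model object):

library(OpenMx)
twinModel <- mxModel(twinModel, mxCI("ACE.StdVarComp"))  # request a profile-likelihood CI on the standardized variance components
fit <- mxRun(twinModel, intervals=TRUE)                  # intervals=TRUE makes mxRun actually compute the CIs
summary(fit)                                             # prints the 'confidence intervals' table shown under Problem 2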

Any recommendations? Is there a way to avoid the problem by using a different specification? When the SEs are inconsistent with the CIs, is either one more reliable? Or are both suspect?

Thanks for the help. I am, obviously, a newbie.

Problem 1:

       name matrix row col Estimate Std.Error A
2         c    ace   2   1     X.XX      X.XX !

Problem 2:

confidence intervals:
                    lbound estimate ubound note
ACE.StdVarComp[1,1]   X.XX     X.XX   X.XX  !!!

AdminRobK:

CIs are flagged with a triple exclamation mark in three cases:
1. At least one of the confidence limits is equal to the point estimate.
2. At least one of the confidence limits is on the wrong side of the point estimate (e.g., an upper limit that is smaller than the point estimate).
3. The change in -2logL at the confidence limit differs too much from the target value (which is about 3.841 for a 95% CI).
Note that case #1 can occur with legitimate results when the point estimate is on a boundary, or when it's an "estimate" of something that can't change (such as a diagonal element in a correlation matrix).

You can see detailed CI output if you use summary() with argument verbose=TRUE. You should probably post that detailed summary output. If you do, please attach it to your post as a text file, for the sake of legibility. Importantly, you can subtract the 'fit' column in the CI details table from the fit of your model to see the change in -2logL. If the change is approximately 3.841 (assuming these are 95% CIs), then the confidence limit is probably OK.
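Concretely, something like this (a minimal sketch; 'fit' stands in for your fitted MxModel, and the 'CIdetail' element name is my recollection of the relevant summary slot, so check str(summary(fit, verbose=TRUE)) if it's absent):

detailed <- summary(fit, verbose=TRUE)   # prints the usual summary plus the CI details table
# Change in -2logL at each confidence limit: the details table's 'fit' column
# minus the model's own -2logL.
detailed$CIdetail$fit - detailed$Minus2LogLikelihood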

Unless the detailed CI output shows something seriously wrong, I would trust those profile-likelihood CIs over CIs constructed from the standard errors. Profile-likelihood CIs are not forced to be symmetric, they respect bounds and constraints, and they are invariant under reparameterization.

BTW, which optimizer are you using? If you're using CSOLNP, you might have better luck with SLSQP or NPSOL. You can switch optimizers with mxOption(), e.g. mxOption(NULL,"Default optimizer","SLSQP").
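For example (a minimal sketch; 'twinModel' is a placeholder for your model object):

mxOption(NULL, "Default optimizer")            # query the current default
mxOption(NULL, "Default optimizer", "SLSQP")   # switch globally before running
fit <- mxRun(twinModel, intervals=TRUE)        # re-fit (and re-request CIs) under the new optimizer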

AdminRobK:
Actually...
Quoting my earlier reply:

    Importantly, you can subtract the 'fit' column in the CI details table from the fit of your model to see the change in -2logL. If the change is approximately 3.841 (assuming these are 95% CIs), then the confidence limit is probably OK.

It turns out I was mistaken about this. Having a change in fit close to the target value is necessary but not sufficient for the confidence limit to be valid. If you're using CSOLNP or NPSOL, the optimizer can reach a change in fit that's around 3.841 even when the confidence limit is no good. The proper way to verify the confidence limit is to re-run the MxModel, with the reference quantity fixed at the confidence limit, which can be accomplished with omxSetParameters() if the reference quantity is a free parameter, or with an MxConstraint if the reference quantity is an MxAlgebra element. For a 95% CI, the confidence limit will be valid if the constrained model's -2logL is worse than the unconstrained model's by about 3.841.
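In code, the check looks roughly like this (a sketch with placeholder names and values: 'fit' is the fitted model, 0.12 stands in for the reported confidence limit, and 'a11' / 'ACE.StdVarComp' are example labels for the two cases):

# Case 1: the reference quantity is a free parameter (label 'a11' assumed).
fixed    <- omxSetParameters(fit, labels="a11", free=FALSE, values=0.12)
fixedFit <- mxRun(fixed)

# Case 2: the reference quantity is an MxAlgebra element -- pin it with an MxConstraint.
conModel <- mxModel(fit, mxConstraint(ACE.StdVarComp[1,1] == 0.12, name="pinCI"))
conFit   <- mxRun(conModel)

# Either way, compare -2logL of the constrained and unconstrained fits; for a 95% CI
# the limit is valid if the difference is near qchisq(0.95, df=1), about 3.841:
summary(fixedFit)$Minus2LogLikelihood - summary(fit)$Minus2LogLikelihood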