I ran two fits of the same model to the same data. The only difference is that one has no parameter bounds while the other does. Here are the confidence limits for the two:

```r
> summary(testNoBounds)$CI
                      lbound   estimate     ubound
testCI.parms[1,1]  0.2229001  0.6259667  1.0243803
testCI.parms[2,1] -0.4849764 -0.1423273  0.1882364

> summary(testBounds)$CI
                     lbound estimate    ubound
testCI.parms[1,1] 0.1880914 0.463186 0.5739613
testCI.parms[2,1] 0.0000040 0.000000 0.2158373
```

From my calculations (see attached), the lower CI for parms[1,1] should be the same.

Or did I do something stupid?

Attachment | Size
---|---
confLimitsExampl.R | 2.19 KB

Restrictions on one parameter can and will affect other parameters. Not only are the CIs different for parm[1,1], but the estimate is as well: the objective function value is 0.7 in the restricted model versus 0 for the unrestricted one. Because of the product of the parameters and the design matrix, all of the elements of Rpre are weighted sums of your two parameters. When you forced the second parameter (the C factor, I believe) to be positive (it is negative in the first model, and sitting on the zero bound in the second), you forced the first parameter (the A factor) to a poorer-fitting value to compensate. This worsens the fit of the model and changes the value and CI of parm[1,1].
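This coupling is easy to reproduce outside OpenMx. Below is a toy sketch (not the attached model) using a made-up quadratic objective with a cross-term between the two parameters; the names `a` and `c` just echo the A and C factors discussed above:

```r
# Toy objective (hypothetical): unconstrained minimum at a = 0.6, c = -0.14,
# with a cross-term that couples the two parameters.
f <- function(p) {
  a <- p[1]; c <- p[2]
  (a - 0.6)^2 + (c + 0.14)^2 + 0.5 * (a - 0.6) * (c + 0.14)
}

# Unbounded fit: recovers a = 0.6, c = -0.14, objective near 0.
free <- optim(c(0, 0), f, method = "L-BFGS-B")

# Bound c >= 0: c sits on the boundary, and because of the cross-term
# the estimate of a shifts away from 0.6 to compensate; the objective
# no longer reaches 0.
bounded <- optim(c(0.5, 0.5), f, method = "L-BFGS-B", lower = c(-Inf, 0))

free$par     # approximately (0.6, -0.14)
bounded$par  # c pinned at 0, a pulled off 0.6
```

The cross-term plays the role of the weighted sums in Rpre: once one parameter is pinned, the other must move to absorb the misfit.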

Pardon the delayed response, Ryne. I've been away.

What you said is true--the two do give different solutions. I suspect, however, that OpenMx may be giving an incorrect solution in the bounded case.

Here goes:

(1) In the unbounded solution (model testNoBounds), there are two observed statistics and two free parameters, so the fit is perfect and there are no degrees of freedom for the ostensible chi-square of 0. The CIs are calculated by fixing one parameter at various values, leaving the other free, and finding where the chi-square hits the critical value for the desired alpha. These CI chi-squares all have 1 df.
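The profiling procedure described here can be sketched generically. In the sketch below, `m2ll` is an assumed function returning a model's -2 log-likelihood given the two parameters (it stands in for an OpenMx fit; none of these names are OpenMx APIs):

```r
# Generic profile-likelihood CI sketch (not OpenMx internals).
# m2ll(theta1, theta2) is assumed to return the model's -2 log-likelihood.
profileCI <- function(m2ll, mle, which = 1, alpha = 0.05) {
  crit  <- qchisq(1 - alpha, df = 1)       # 3.84 for alpha = .05
  atMin <- m2ll(mle[1], mle[2])            # -2lnL at the full MLE
  # Profiled chi-square with parameter `which` fixed at x, the other
  # re-optimized, minus the critical value (root = CI endpoint):
  prof <- function(x) {
    obj <- function(y) if (which == 1) m2ll(x, y) else m2ll(y, x)
    optimize(obj, c(-10, 10))$objective - atMin - crit
  }
  # Find the crossing on each side of the MLE:
  c(lower = uniroot(prof, c(mle[which] - 5, mle[which]))$root,
    upper = uniroot(prof, c(mle[which], mle[which] + 5))$root)
}
```

Each such comparison fixes exactly one parameter relative to the full model, which is why the CI chi-squares all have 1 df in the unbounded case.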

(2) In the bounded solution (model testBounds), the second parameter hits a bound, so there is only one free parameter and only one degree of freedom. In calculating the LOWER CI, however, the first parameter is fixed and the second one is now free to vary, giving a chi-square with 1 - 1 = 0 df. Note also that the two models are not nested in this region (a free-parm-1, fixed-parm-2 model versus a fixed-parm-1, free-parm-2 model).

You're probably right that we should somehow alert the user (or simply fail to run the test) if the value in question is at a limit. This is trickier than it sounds, since the limits can be arbitrarily specified in a series of algebras.

Would a notice in that case (maybe an asterisk in the summary) be sufficient?

Your demonstration that the CIs are the same is wrong, however. Your example looks at the absolute chi-square value of the model at each value of Va, which assumes you are comparing this model to a model with a -2 log-likelihood of zero.

That's true in the case of the unbounded model--the model reaches a perfect fit (to within machine tolerance). But the bounded model can't reach that minimum, because the minimum lies outside the bounds. So it has a likelihood just over .69. The confidence interval likelihood needs to be compared to that non-zero likelihood, which results in a different chi-square value.

If you adjust the `cn[i,] <-` line to read

```r
cn[i,] <- c(res$value, Va, res$par,
            1 - pchisq(res$value - summary(testBounds)$Minus2LogLikelihood, 1))
```

it will more closely mirror what OpenMx is doing.
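The attached script isn't shown here, but the principle can be demonstrated with a self-contained toy: a one-parameter "model" whose bounded minimum -2lnL is not zero, so the absolute value and the difference give different p-values (all names and the objective below are made up for illustration):

```r
# Toy 1-parameter objective (hypothetical): unconstrained minimum at
# theta = -0.5 with -2lnL = 0.69 there, so a bound theta >= 0 leaves a
# non-zero minimum.
m2ll <- function(theta) (theta + 0.5)^2 + 0.69

fit       <- optimize(m2ll, c(0, 10))  # optimize over the bounded range
m2llAtMin <- fit$objective             # about 0.94, not 0

theta <- 0.8
# Wrong: treats the absolute -2lnL as the chi-square, implicitly
# comparing against a perfectly fitting (zero -2lnL) model.
pWrong <- 1 - pchisq(m2ll(theta), df = 1)
# What the corrected line does: the chi-square is the *difference*
# from the bounded model's own minimum.
pRight <- 1 - pchisq(m2ll(theta) - m2llAtMin, df = 1)
```

The two p-values diverge exactly by the amount the bounded minimum sits above zero, which is the discrepancy in the original demonstration.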

OUCH! I am wrong! Horrors! Will have to undergo weeks of therapy.

I disagree with your statement that "So it has a likelihood just over .69. The confidence interval likelihood needs to be compared to that non-zero likelihood, which results in a different chi-square value." As I stated above, the models are not nested, so the chi-square is not valid. See Self & Liang if you have problems with this.

Anyway, I very strongly agree with you on alerting the user about the problems with ANY constrained solution. OpenMx should be lauded for flagging parameters that come within epsilon of their boundary constraints in summary(MxModel). I suggest that you also flag any model containing an MxConstraint object.

Also, I suggest that a warning be added to summary(MxModel) and the MxModel@output object--preferably to the printed output from mxRun--stating that constraints may compromise the statistics at the solution. I also suggest that a reference to Self & Liang be included in such output.

Would add more but gotta get to that psychotherapist.

Ah. I think I see what you're saying. Let me echo this back to make sure I have it right, and please correct me if I've got it wrong.

We have two parameters, A and C. C is bounded at zero, and the optimum solution found by OpenMx falls right on the boundary.

The argument is that C is no longer free at the optimum because it's precisely at the boundary. But while we're calculating the confidence boundaries for A, C might not be at the boundary any more, and in that case C becomes free again. In that case, our original model has A free but C at the boundary (therefore not free). Our confidence interval model holds A fixed as part of the interval calculation but has C free (because it left the boundary). Since these two models are no longer nested, comparing them with a likelihood ratio test would no longer be valid.

It seems like this case will apply when a bound or constraint is active (that is, within epsilon of the estimate) in either the located minimum or in a given confidence interval calculation, but not in both.
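The mismatch condition described above could be checked mechanically. Here is a sketch of a hypothetical helper (not an OpenMx API) that flags when the set of active bounds differs between the main optimum and a CI solution:

```r
# Hypothetical helper: a bound is "active" when the estimate is within
# eps of it. Inf/-Inf bounds are never active.
activeBounds <- function(est, lower, upper, eps = 1e-6) {
  (est - lower < eps) | (upper - est < eps)
}

# TRUE when a bound is active at one solution but not the other --
# the non-nested situation described above.
boundMismatch <- function(estMin, estCI, lower, upper, eps = 1e-6) {
  any(xor(activeBounds(estMin, lower, upper, eps),
          activeBounds(estCI,  lower, upper, eps)))
}

# E.g. C on its zero bound at the optimum but off it in the CI run
# (values loosely echo the testBounds output above):
boundMismatch(estMin = c(0.463, 0),   estCI = c(0.188, 0.05),
              lower  = c(-Inf, 0),    upper = c(Inf, Inf))
# -> TRUE
```

The same idea extends to MxConstraints, though as noted above it is trickier there, since the limits can be arbitrary algebras rather than simple box bounds.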

That seems to make sense.

Is there a more correct way to handle that situation, or do we just have to warn users and rely on them to interpret the results intelligently? Or could you point me to a paper describing the best approach to reasonably estimating likelihood-based confidence intervals in these boundary cases?

I'm not sure offhand whether the summary reports when an MxConstraint is active at the minimum, but if not, that might be helpful to people in interpreting the output.

Is it worth also reporting when a likelihood-based confidence interval is computed that comes up against a constraint/parameter bound? Or specifically when there's a mismatch in the constraints/parameter bounds that are active at the optimum versus those active at each confidence interval bound? Any thoughts?