Error Code 6

dr.gw
Error Code 6

Hi,

I have written a script using the RAM formulation, so I am maximizing the likelihood via the mxExpectationRAM() function with raw data. I got a warning with status code 6 when I ran the model in OpenMx. The model is identified, which I checked with mxCheckIdentification().
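
For concreteness, here is a stripped-down, hypothetical sketch of the kind of setup I mean (not my actual script; it just uses the demoOneFactor example data that ships with OpenMx):

library(OpenMx)
data(demoOneFactor)                          # example data set shipped with OpenMx

manifests <- names(demoOneFactor)
# type = "RAM" sets up mxExpectationRAM() and mxFitFunctionML() automatically
model <- mxModel("OneFactor", type = "RAM",
                 manifestVars = manifests,
                 latentVars   = "G",
                 mxPath(from = "G", to = manifests, values = 0.8),          # loadings
                 mxPath(from = manifests, arrows = 2, values = 1),          # residual variances
                 mxPath(from = "G", arrows = 2, free = FALSE, values = 1),  # latent variance fixed at 1
                 mxPath(from = "one", to = manifests),                      # means (needed with raw data)
                 mxData(demoOneFactor, type = "raw"))

mxCheckIdentification(model)$status          # TRUE means the model is identified
fit <- mxRun(model)
fit$output$status$code                       # a 6 here is the Mx status Red (code Red) warning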

As suggested by the wiki page (https://openmx.ssri.psu.edu/wiki/errors), I re-ran the model from its solution [firstRun <- mxRun(myModel) followed by secondRun <- mxRun(firstRun)]. That made the code Red warning disappear, which is great. However, I wonder what happened under the hood: why did this step resolve the problem? I would appreciate it if anyone could enlighten me on this.

Thank you!

mhunter
Reset expectations

The exact language from as.statusCode for code 6 is:

6,‘nonzero gradient’: The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED). To search nearby, see mxTryHard.

The phrase "first-order optimality conditions" is the way computational optimization refers to the gradient of the fit function. In turn, the gradient of the fit function is the mathematical way to refer to the rate of change (i.e., slope) of the fit function in response to small changes in the free parameters. The optimizer wants the first-order derivative (gradient) to be zero at the optimal solution. So, "does not satisfy the first-order optimality conditions" is another way of saying "the fit function changes too much or in unexpected ways when we nudge the free parameters". The "in unexpected ways" part is what can be addressed by re-running the model from its previous solution.
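
In symbols: if F(theta) is the fit function (for raw data, minus twice the log-likelihood) and theta = (theta_1, ..., theta_k) are the free parameters, the first-order condition the optimizer wants at the solution is

\nabla F(\hat{\theta}) = \left( \frac{\partial F}{\partial \theta_1}, \ldots, \frac{\partial F}{\partial \theta_k} \right) \bigg|_{\theta = \hat{\theta}} = 0,

and code 6 is reported when some element of this gradient is still too far from zero at the reported estimates.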

Re-running a model from its previous solution says "start a whole new optimization, but use the previously found free parameter estimates as the starting values for this new optimization". In many cases, the previous free parameter estimates were pretty close to the optimal ones, but something went weird with what the optimizer "thought" the gradient should be compared to what the gradient was actually computed to be. Running a new optimization tells the optimizer to reset its "expectations". I'm putting quotes here because I'm anthropomorphizing the optimizer quite a bit to give a more intuitive way of understanding the optimizer and avoid a bunch of mathematical detail.
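
In code, using the model object from the original post (myModel), the restart and the mxTryHard() alternative mentioned in the status-code text look roughly like this:

# Fresh optimization starting from the previous solution: the estimates in
# firstRun become the starting values, and the optimizer rebuilds its view
# of the gradient/curvature from scratch.
firstRun  <- mxRun(myModel)
secondRun <- mxRun(firstRun)
secondRun$output$status$code   # 0 (or 1) here means the code Red warning is gone

# mxTryHard() refits the model repeatedly, jittering the starting values,
# until it obtains a clean status (or runs out of attempts).
bestRun <- mxTryHard(myModel)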

Hopefully this helps!
Mike

dr.gw
Thank you

Hi Mike,

Many thanks for your prompt reply. The answer is very clear.

Best regards,
Geng

AdminNeale
Precisely

In "Running a new optimization tells the optimizer to reset its 'expectations'", the "expectations" are the variance-covariance matrix of the parameter estimates, which is usually built up numerically during optimization. If that variance-covariance matrix is a bit inaccurate, then the solution (the parameter estimates) may be imprecise.

Perhaps the only way to avoid this in the future is to be thoughtful about starting values for the parameters. If you know the means and variances, a little consideration should let you set starting values so that the variances and means are about right and the covariances are close to or at zero. Variances may need to start a bit larger than the calculated values. Even if this covariance guess isn't correct, it still gives a strongly positive definite covariance matrix to start with, and it is less likely to yield data points with a likelihood close to zero (perhaps equal to zero in computer precision).

Basically, with a diagonal covariance matrix the likelihood is simply the product of the univariate likelihoods (so the log-likelihood is their sum), all of which can be arranged to be greater than zero with a judicious choice of starting values. Unless there are bonkers outliers in the data... then find them and think about trimming them, or investigating why those observations are so far out there.
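
For instance, here is a hypothetical sketch in that spirit (it uses the demoOneFactor example data rather than any particular model; the model and variable names are made up):

library(OpenMx)
data(demoOneFactor)
obsMeans <- colMeans(demoOneFactor)          # observed means
obsVars  <- diag(var(demoOneFactor))         # observed variances

manifests <- names(demoOneFactor)
start <- mxModel("OneFactorStart", type = "RAM",
                 manifestVars = manifests, latentVars = "G",
                 # small loadings, so the implied covariances start near zero
                 mxPath(from = "G", to = manifests, values = 0.2),
                 # residual variances start a bit above the observed variances
                 mxPath(from = manifests, arrows = 2, values = 1.2 * obsVars),
                 mxPath(from = "G", arrows = 2, free = FALSE, values = 1),
                 # means start at the observed means
                 mxPath(from = "one", to = manifests, values = obsMeans),
                 mxData(demoOneFactor, type = "raw"))
fit <- mxRun(start)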